uid | premise | hypothesis | label |
---|---|---|---|
id_6400 | The Pace of Evolutionary Change A heated debate has enlivened recent studies of evolution. Darwin's original thesis, and the viewpoint supported by evolutionary gradualists, is that species change continuously but slowly and in small increments. Such changes are all but invisible over the short time scale of modern observations, and, it is argued, they are usually obscured by innumerable gaps in the imperfect fossil record. Gradualism, with its stress on the slow pace of change, is a comforting position, repeated over and over again in generations of textbooks. By the early twentieth century, the question about the rate of evolution had been answered in favor of gradualism to most biologists' satisfaction. Sometimes a closed question must be reopened as new evidence or new arguments based on old evidence come to light. In 1972 paleontologist Stephen Jay Gould and Niles Eldredge challenged conventional wisdom with an opposing viewpoint, the punctuated equilibrium hypothesis, which posits that species give rise to new species in relatively sudden bursts, without a lengthy transition period. These episodes of rapid evolution are separated by relatively long static spans during which a species may hardly change at all. The punctuated equilibrium hypothesis attempts to explain a curious feature of the fossil record one that has been familiar to paleontologist for more than a century but has usually been ignored. Many species appear to remain unchanged in the fossil record for millions of years a situation that seems to be at odds with Darwin's model of continuous change. Intermediated fossil forms, predicted by gradualism, are typically lacking. In most localities a given species of clam or coral persists essentially unchanged throughout a thick formation of rock, only to be replaced suddenly by a new and different species. The evolution of North American horse, which was once presented as a classic textbook example of gradual evolution, is now providing equally compelling evidence for punctuated equilibrium. A convincing 50-million-year sequence of modern horse ancestors each slightly larger, with more complex teeth, a longer face, and a more prominent central toe seemed to provide strong support for Darwin's contention that species evolve gradually. But close examination of those fossil deposits now reveals a somewhat different story. Horses evolved in discrete steps, each of which persisted almost unchanged for millions of years and was eventually replaced by a distinctive newer model. The four-toed Eohippus preceded the three-toed Miohippus, for example, but North American fossil evidence suggests a jerky, uneven transition between the two. If evolution had been a continuous, gradual process, one might expect that almost every fossil specimen would be slightly different from every year. If it seems difficult to conceive how major changes could occur rapidly, consider this: an alteration of a single gene in files is enough to turn a normal fly with a single pair of wings into one that has two pairs of wings. The question about the rate of evolution must now be turned around: does evolution ever proceed gradually, or does it always occur in short bursts? Detailed field studies of thick rock formations containing fossils provide the best potential tests of the competing theories. Occasionally, a sequence of fossil-rich layers of rock permits a comprehensive look at one type of organism over a long period of time. 
For example, Peter Sheldon's studies of trilobites, a now extinct marine animal with a segmented body, offer a detailed glimpse into three million years of evolution in one marine environment. In that study, each of eight different trilobite species was observed to undergo a gradual change in the number of segments typically an increase of one or two segments over the whole time interval. No significant discontinuous were observed, leading Sheldon to conclude that environmental conditions were quite stable during the period he examined. Similar exhaustive studies are required for many different kinds of organisms from many different periods. Most researchers expect to find that both modes of transition from one species to another are at work in evolution. Slow, continuous change may be the norm during periods of environmental stability, while rapid evolution of new species occurs during periods of environment stress. But a lot more studies like Sheldon's are needed before we can say for sure. | Darwin's evolutionary thesis was rejected because small changes could not be observed in the evolutionary record. | contradiction |
id_6401 | The Pace of Evolutionary Change A heated debate has enlivened recent studies of evolution. Darwin's original thesis, and the viewpoint supported by evolutionary gradualists, is that species change continuously but slowly and in small increments. Such changes are all but invisible over the short time scale of modern observations, and, it is argued, they are usually obscured by innumerable gaps in the imperfect fossil record. Gradualism, with its stress on the slow pace of change, is a comforting position, repeated over and over again in generations of textbooks. By the early twentieth century, the question about the rate of evolution had been answered in favor of gradualism to most biologists' satisfaction. Sometimes a closed question must be reopened as new evidence or new arguments based on old evidence come to light. In 1972 paleontologist Stephen Jay Gould and Niles Eldredge challenged conventional wisdom with an opposing viewpoint, the punctuated equilibrium hypothesis, which posits that species give rise to new species in relatively sudden bursts, without a lengthy transition period. These episodes of rapid evolution are separated by relatively long static spans during which a species may hardly change at all. The punctuated equilibrium hypothesis attempts to explain a curious feature of the fossil record one that has been familiar to paleontologist for more than a century but has usually been ignored. Many species appear to remain unchanged in the fossil record for millions of years a situation that seems to be at odds with Darwin's model of continuous change. Intermediated fossil forms, predicted by gradualism, are typically lacking. In most localities a given species of clam or coral persists essentially unchanged throughout a thick formation of rock, only to be replaced suddenly by a new and different species. The evolution of North American horse, which was once presented as a classic textbook example of gradual evolution, is now providing equally compelling evidence for punctuated equilibrium. A convincing 50-million-year sequence of modern horse ancestors each slightly larger, with more complex teeth, a longer face, and a more prominent central toe seemed to provide strong support for Darwin's contention that species evolve gradually. But close examination of those fossil deposits now reveals a somewhat different story. Horses evolved in discrete steps, each of which persisted almost unchanged for millions of years and was eventually replaced by a distinctive newer model. The four-toed Eohippus preceded the three-toed Miohippus, for example, but North American fossil evidence suggests a jerky, uneven transition between the two. If evolution had been a continuous, gradual process, one might expect that almost every fossil specimen would be slightly different from every year. If it seems difficult to conceive how major changes could occur rapidly, consider this: an alteration of a single gene in files is enough to turn a normal fly with a single pair of wings into one that has two pairs of wings. The question about the rate of evolution must now be turned around: does evolution ever proceed gradually, or does it always occur in short bursts? Detailed field studies of thick rock formations containing fossils provide the best potential tests of the competing theories. Occasionally, a sequence of fossil-rich layers of rock permits a comprehensive look at one type of organism over a long period of time. 
For example, Peter Sheldon's studies of trilobites, a now extinct marine animal with a segmented body, offer a detailed glimpse into three million years of evolution in one marine environment. In that study, each of eight different trilobite species was observed to undergo a gradual change in the number of segments typically an increase of one or two segments over the whole time interval. No significant discontinuous were observed, leading Sheldon to conclude that environmental conditions were quite stable during the period he examined. Similar exhaustive studies are required for many different kinds of organisms from many different periods. Most researchers expect to find that both modes of transition from one species to another are at work in evolution. Slow, continuous change may be the norm during periods of environmental stability, while rapid evolution of new species occurs during periods of environment stress. But a lot more studies like Sheldon's are needed before we can say for sure. | By the early twentieth century, most biologists believed that gradualism explained evolutionary change. | entailment |
id_6402 | The Pace of Evolutionary Change A heated debate has enlivened recent studies of evolution. Darwin's original thesis, and the viewpoint supported by evolutionary gradualists, is that species change continuously but slowly and in small increments. Such changes are all but invisible over the short time scale of modern observations, and, it is argued, they are usually obscured by innumerable gaps in the imperfect fossil record. Gradualism, with its stress on the slow pace of change, is a comforting position, repeated over and over again in generations of textbooks. By the early twentieth century, the question about the rate of evolution had been answered in favor of gradualism to most biologists' satisfaction. Sometimes a closed question must be reopened as new evidence or new arguments based on old evidence come to light. In 1972 paleontologist Stephen Jay Gould and Niles Eldredge challenged conventional wisdom with an opposing viewpoint, the punctuated equilibrium hypothesis, which posits that species give rise to new species in relatively sudden bursts, without a lengthy transition period. These episodes of rapid evolution are separated by relatively long static spans during which a species may hardly change at all. The punctuated equilibrium hypothesis attempts to explain a curious feature of the fossil record one that has been familiar to paleontologist for more than a century but has usually been ignored. Many species appear to remain unchanged in the fossil record for millions of years a situation that seems to be at odds with Darwin's model of continuous change. Intermediated fossil forms, predicted by gradualism, are typically lacking. In most localities a given species of clam or coral persists essentially unchanged throughout a thick formation of rock, only to be replaced suddenly by a new and different species. The evolution of North American horse, which was once presented as a classic textbook example of gradual evolution, is now providing equally compelling evidence for punctuated equilibrium. A convincing 50-million-year sequence of modern horse ancestors each slightly larger, with more complex teeth, a longer face, and a more prominent central toe seemed to provide strong support for Darwin's contention that species evolve gradually. But close examination of those fossil deposits now reveals a somewhat different story. Horses evolved in discrete steps, each of which persisted almost unchanged for millions of years and was eventually replaced by a distinctive newer model. The four-toed Eohippus preceded the three-toed Miohippus, for example, but North American fossil evidence suggests a jerky, uneven transition between the two. If evolution had been a continuous, gradual process, one might expect that almost every fossil specimen would be slightly different from every year. If it seems difficult to conceive how major changes could occur rapidly, consider this: an alteration of a single gene in files is enough to turn a normal fly with a single pair of wings into one that has two pairs of wings. The question about the rate of evolution must now be turned around: does evolution ever proceed gradually, or does it always occur in short bursts? Detailed field studies of thick rock formations containing fossils provide the best potential tests of the competing theories. Occasionally, a sequence of fossil-rich layers of rock permits a comprehensive look at one type of organism over a long period of time. 
For example, Peter Sheldon's studies of trilobites, a now extinct marine animal with a segmented body, offer a detailed glimpse into three million years of evolution in one marine environment. In that study, each of eight different trilobite species was observed to undergo a gradual change in the number of segments typically an increase of one or two segments over the whole time interval. No significant discontinuous were observed, leading Sheldon to conclude that environmental conditions were quite stable during the period he examined. Similar exhaustive studies are required for many different kinds of organisms from many different periods. Most researchers expect to find that both modes of transition from one species to another are at work in evolution. Slow, continuous change may be the norm during periods of environmental stability, while rapid evolution of new species occurs during periods of environment stress. But a lot more studies like Sheldon's are needed before we can say for sure. | Gaps in the fossil record were used to explain why it is difficult to see continuous small changes in the evolution of species. | entailment |
id_6403 | The Pace of Evolutionary Change A heated debate has enlivened recent studies of evolution. Darwin's original thesis, and the viewpoint supported by evolutionary gradualists, is that species change continuously but slowly and in small increments. Such changes are all but invisible over the short time scale of modern observations, and, it is argued, they are usually obscured by innumerable gaps in the imperfect fossil record. Gradualism, with its stress on the slow pace of change, is a comforting position, repeated over and over again in generations of textbooks. By the early twentieth century, the question about the rate of evolution had been answered in favor of gradualism to most biologists' satisfaction. Sometimes a closed question must be reopened as new evidence or new arguments based on old evidence come to light. In 1972 paleontologist Stephen Jay Gould and Niles Eldredge challenged conventional wisdom with an opposing viewpoint, the punctuated equilibrium hypothesis, which posits that species give rise to new species in relatively sudden bursts, without a lengthy transition period. These episodes of rapid evolution are separated by relatively long static spans during which a species may hardly change at all. The punctuated equilibrium hypothesis attempts to explain a curious feature of the fossil record one that has been familiar to paleontologist for more than a century but has usually been ignored. Many species appear to remain unchanged in the fossil record for millions of years a situation that seems to be at odds with Darwin's model of continuous change. Intermediated fossil forms, predicted by gradualism, are typically lacking. In most localities a given species of clam or coral persists essentially unchanged throughout a thick formation of rock, only to be replaced suddenly by a new and different species. The evolution of North American horse, which was once presented as a classic textbook example of gradual evolution, is now providing equally compelling evidence for punctuated equilibrium. A convincing 50-million-year sequence of modern horse ancestors each slightly larger, with more complex teeth, a longer face, and a more prominent central toe seemed to provide strong support for Darwin's contention that species evolve gradually. But close examination of those fossil deposits now reveals a somewhat different story. Horses evolved in discrete steps, each of which persisted almost unchanged for millions of years and was eventually replaced by a distinctive newer model. The four-toed Eohippus preceded the three-toed Miohippus, for example, but North American fossil evidence suggests a jerky, uneven transition between the two. If evolution had been a continuous, gradual process, one might expect that almost every fossil specimen would be slightly different from every year. If it seems difficult to conceive how major changes could occur rapidly, consider this: an alteration of a single gene in files is enough to turn a normal fly with a single pair of wings into one that has two pairs of wings. The question about the rate of evolution must now be turned around: does evolution ever proceed gradually, or does it always occur in short bursts? Detailed field studies of thick rock formations containing fossils provide the best potential tests of the competing theories. Occasionally, a sequence of fossil-rich layers of rock permits a comprehensive look at one type of organism over a long period of time. 
For example, Peter Sheldon's studies of trilobites, a now extinct marine animal with a segmented body, offer a detailed glimpse into three million years of evolution in one marine environment. In that study, each of eight different trilobite species was observed to undergo a gradual change in the number of segments typically an increase of one or two segments over the whole time interval. No significant discontinuous were observed, leading Sheldon to conclude that environmental conditions were quite stable during the period he examined. Similar exhaustive studies are required for many different kinds of organisms from many different periods. Most researchers expect to find that both modes of transition from one species to another are at work in evolution. Slow, continuous change may be the norm during periods of environmental stability, while rapid evolution of new species occurs during periods of environment stress. But a lot more studies like Sheldon's are needed before we can say for sure. | Darwin saw evolutionary change as happening slowly and gradually. | entailment |
id_6404 | The Pacific yew is an evergreen tree that grows in the Pacific Northwest. The Pacific yew has a fleshy, poisonous fruit. Recently, taxol, a substance found in the bark of the Pacific yew, was discovered to be a promising new anticancer drug. | The Pacific yew was considered worthless until taxol was discovered. | neutral |
id_6405 | The Pacific yew is an evergreen tree that grows in the Pacific Northwest. The Pacific yew has a fleshy, poisonous fruit. Recently, taxol, a substance found in the bark of the Pacific yew, was discovered to be a promising new anticancer drug. | People should not eat the fruit of the Pacific yew. | entailment |
id_6406 | The Pacific yew is an evergreen tree that grows in the Pacific Northwest. The Pacific yew has a fleshy, poisonous fruit. Recently, taxol, a substance found in the bark of the Pacific yew, was discovered to be a promising new anticancer drug. | Taxol is poisonous when taken by healthy people. | neutral |
id_6407 | The Pacific yew is an evergreen tree that grows in the Pacific Northwest. The Pacific yew has a fleshy, poisonous fruit. Recently, taxol, a substance found in the bark of the Pacific yew, was discovered to be a promising new anticancer drug. | Taxol has cured people from various diseases. | neutral |
id_6408 | The Party of Regions took 32 per cent of the vote, the block of parties led by Yulia Tymoshenko polled 22 per cent, Our Ukraine Party secured 14 per cent and The Socialists trailed with 3 per cent. The result means that the next government of the Ukraine is likely to be a coalition. A union between the parties led by Tymoshenko and Our Ukraine Party seems least likely given that Tymoshenko was sacked from a ministerial post and split from the Our Ukraine Party to run her election campaign on the platform of anti-corruption. Many commentators describe the electrical punishment dished out to the President and leader of the Our Ukraine Party, Mr Yushchenko, as expected. The newly established and largely free press played its part in bringing about the result and the parliamentary elections were by common agreement the freest so far. | It can be inferred from the passage that the election failed to produce an outright winner. | entailment |
id_6409 | The Party of Regions took 32 per cent of the vote, the block of parties led by Yulia Tymoshenko polled 22 per cent, Our Ukraine Party secured 14 per cent and The Socialists trailed with 3 per cent. The result means that the next government of the Ukraine is likely to be a coalition. A union between the parties led by Tymoshenko and Our Ukraine Party seems least likely given that Tymoshenko was sacked from a ministerial post and split from the Our Ukraine Party to run her election campaign on the platform of anti-corruption. Many commentators describe the electrical punishment dished out to the President and leader of the Our Ukraine Party, Mr Yushchenko, as expected. The newly established and largely free press played its part in bringing about the result and the parliamentary elections were by common agreement the freest so far. | The leader of the block of parties that polled 22 per cent of the vote is a woman. | entailment |
id_6410 | The Party of Regions took 32 per cent of the vote, the block of parties led by Yulia Tymoshenko polled 22 per cent, Our Ukraine Party secured 14 per cent and The Socialists trailed with 3 per cent. The result means that the next government of the Ukraine is likely to be a coalition. A union between the parties led by Tymoshenko and Our Ukraine Party seems least likely given that Tymoshenko was sacked from a ministerial post and split from the Our Ukraine Party to run her election campaign on the platform of anti-corruption. Many commentators describe the electrical punishment dished out to the President and leader of the Our Ukraine Party, Mr Yushchenko, as expected. The newly established and largely free press played its part in bringing about the result and the parliamentary elections were by common agreement the freest so far. | Tymoshenko was sacked by Yushchenko. | neutral |
id_6411 | The Pearl Throughout history, pearls have held a unique presence within the wealthy and powerful. For instance, the pearl was the favoured gem of the wealthy during the Roman Empire. This gift from the sea had been brought back from the orient by the Roman conquests. Roman women wore pearls to bed so they could be reminded of their wealth immediately upon waking up. Before jewellers learned to cut gems, the pearl was of greater value than the diamond. In the Orient and Persia Empire, pearls were ground into powders to cure anything from heart disease to epilepsy, with possible aphrodisiac uses as well. Pearls were once considered an exclusive privilege for royalty. A law in 1612 drawn up by the Duke of Saxony prohibited the wearing of pearls by the nobility, professors, doctors or their wives in an effort to further distinguish royal appearance. American Indians also used freshwater pearls from the Mississippi River as decorations and jewellery. There are essentially three types of pearls: natural, cultured and imitation. A natural pearl (often called an Oriental pearl) forms when an irritant, such as a piece of sand, works its way into a particular species of oyster, mussel, or clam. As a defence mechanism, the mollusc secretes a fluid to coat the irritant. The layer upon layer of this coating is deposited on the irritant until a lustrous pearl is formed. The only difference between natural pearls and cultured pearls is that the irritant is a surgically implanted bead or piece of shell called Mother of Pearl. Often, these shells are ground oyster shells that are worth significant amounts of money in their own right as irritant-catalysts for quality pearls. The resulting core is, therefore, much larger than in a natural pearl. Yet, as long as there are enough layers of nacre (the secreted fluid covering the irritant) to result in a beautiful, gem-quality pearl, the size of the nucleus is of no consequence to beauty or durability. Pearls can come from either salt or freshwater sources. Typically, saltwater pearls tend to be higher quality, although there are several types of freshwater pearls that are considered high in quality as well. Freshwater pearls tend to be very irregular in shape, with a puffed rice appearance, the most prevalent. Nevertheless, it is each individual pearls merits that determines value more than the source of the pearl. Saltwater pearl oysters are usually cultivated in protected lagoons or volcanic atolls. However, most freshwater cultured pearls sold today come from China. Cultured pearls are the response of the shell to a tissue implant. A tiny piece of mantle tissue from a donor shell is transplanted into a recipient shell. This graft will form a pearl sac and the tissue will precipitate calcium carbonate into this pocket. There are a number of options for producing cultured pearls: use fresh water or seawater shells, transplant the graft into the mantle or into the gonad, add a spherical bead or do it non-beaded. The majority of saltwater cultured pearls are grown with beads. Regardless of the method used to acquire a pearl, the process usually takes several years. Mussels must reach a mature age, which can take up to 3 years, and then be implanted or naturally receive an irritant. Once the irritant is in place, it can take up to another 3 years for the pearl to reach its full size. Often, the irritant may be rejected, the pearl will terrifically misshapen, or the oyster may simply die from disease or countless other complications. 
By the end of a 5 to 10-year cycle, only 50% of the oysters will have survived. And of the pearls produced, only approximately 5% are of substantial quality for top jewellery makers. From the outset, a pearl farmer can figure on spending over $100 for every oyster that is farmed, of which many will produce nothing or die. Imitation pearls are a different story altogether. In most cases, a glass bead is dipped into a solution made from fish scales. This coating is thin and may eventually wear off. One can usually tell an imitation by biting on it. Fake pearls glide across your teeth, while the layers of nacre on real pearls feel gritty. The Island of Mallorca (in Spain) is known for its imitation pearl industry. Quality natural pearls are very rare jewels. The actual value of a natural pearl is determined in the same way as it would be for other precious gems. The valuation factors include size, shape, and colour, quality of surface, orient, and lustre. In general, cultured pearls are less valuable than natural pearls, whereas imitation pearls almost have no value. One way that jewellers can determine whether a pearl is cultured or natural is to have a gem lab perform an x-ray of the pearl. If the x-ray reveals a nucleus, the pearl is likely a bead-nucleated saltwater pearl. If no nucleus is present, but irregular and small dark inner spots indicating a cavity are visible, combined with concentric rings of organic substance, the pearl is likely a cultured freshwater. Cultured freshwater pearls can often be confused for natural pearls which present as homogeneous pictures that continuously darken toward the surface of the pearl. Natural pearls will often show larger cavities where organic matter has dried out and decomposed. Although imitation pearls look the part, they do not have the same weight or smoothness as real pearls, and their lustre will also dim greatly. Among cultured pearls, Akoya pearls from Japan are some of the most lustrous. A good quality necklace of 40 Akoya pearls measuring 7 mm in diameter sells for about $1,500, while a super- high-quality strand sells for about $4,500. Size, on the other hand, has to do with the age of the oyster that created the pearl (the more mature oysters produce larger pearls) and the location in which the pearl was cultured. The South Sea waters of Australia tend to produce the larger pearls; probably because the water along the coastline is supplied with rich nutrients from the ocean floor. Also, the type of mussel common to the area seems to possess a predilection for producing comparatively large pearls Historically, the worlds best pearls came from the Persian Gulf, especially around what is now Bahrain. The pearls of the Persian Gulf were naturally created and collected by breath-hold divers. The secret to the special lustre of Gulf pearls probably derived from the unique mixture of sweet and saltwater around the island. Unfortunately, the natural pearl industry of the Persian Gulf ended abruptly in the early 1930s with the discovery of large deposits of oil. Those who once dove for pearls sought prosperity in the economic boom ushered in by the oil industry. The water pollution resulting from spilled oil and indiscriminate over-fishing of oysters essentially ruined the once pristine pearl-producing waters of the Gulf. Today, pearl diving is practiced only as a hobby. Still, Bahrain remains one of the foremost trading centers for high-quality pearls. 
In fact, cultured pearls are banned from the Bahrain pearl market, in an effort to preserve the locations heritage. Nowadays, the largest stock of natural pearls probably resides in India. Ironically, much of Indias stock of natural pearls came originally from Bahrain. Unlike Bahrain, which has essentially lost its pearl resource, traditional pearl fishing is still practiced on a small scale in India. | Cultivated cultured pearls are generally valued the same as natural ones. | contradiction |
id_6412 | The Pearl Throughout history, pearls have held a unique presence within the wealthy and powerful. For instance, the pearl was the favoured gem of the wealthy during the Roman Empire. This gift from the sea had been brought back from the orient by the Roman conquests. Roman women wore pearls to bed so they could be reminded of their wealth immediately upon waking up. Before jewellers learned to cut gems, the pearl was of greater value than the diamond. In the Orient and Persia Empire, pearls were ground into powders to cure anything from heart disease to epilepsy, with possible aphrodisiac uses as well. Pearls were once considered an exclusive privilege for royalty. A law in 1612 drawn up by the Duke of Saxony prohibited the wearing of pearls by the nobility, professors, doctors or their wives in an effort to further distinguish royal appearance. American Indians also used freshwater pearls from the Mississippi River as decorations and jewellery. There are essentially three types of pearls: natural, cultured and imitation. A natural pearl (often called an Oriental pearl) forms when an irritant, such as a piece of sand, works its way into a particular species of oyster, mussel, or clam. As a defence mechanism, the mollusc secretes a fluid to coat the irritant. The layer upon layer of this coating is deposited on the irritant until a lustrous pearl is formed. The only difference between natural pearls and cultured pearls is that the irritant is a surgically implanted bead or piece of shell called Mother of Pearl. Often, these shells are ground oyster shells that are worth significant amounts of money in their own right as irritant-catalysts for quality pearls. The resulting core is, therefore, much larger than in a natural pearl. Yet, as long as there are enough layers of nacre (the secreted fluid covering the irritant) to result in a beautiful, gem-quality pearl, the size of the nucleus is of no consequence to beauty or durability. Pearls can come from either salt or freshwater sources. Typically, saltwater pearls tend to be higher quality, although there are several types of freshwater pearls that are considered high in quality as well. Freshwater pearls tend to be very irregular in shape, with a puffed rice appearance, the most prevalent. Nevertheless, it is each individual pearls merits that determines value more than the source of the pearl. Saltwater pearl oysters are usually cultivated in protected lagoons or volcanic atolls. However, most freshwater cultured pearls sold today come from China. Cultured pearls are the response of the shell to a tissue implant. A tiny piece of mantle tissue from a donor shell is transplanted into a recipient shell. This graft will form a pearl sac and the tissue will precipitate calcium carbonate into this pocket. There are a number of options for producing cultured pearls: use fresh water or seawater shells, transplant the graft into the mantle or into the gonad, add a spherical bead or do it non-beaded. The majority of saltwater cultured pearls are grown with beads. Regardless of the method used to acquire a pearl, the process usually takes several years. Mussels must reach a mature age, which can take up to 3 years, and then be implanted or naturally receive an irritant. Once the irritant is in place, it can take up to another 3 years for the pearl to reach its full size. Often, the irritant may be rejected, the pearl will terrifically misshapen, or the oyster may simply die from disease or countless other complications. 
By the end of a 5 to 10-year cycle, only 50% of the oysters will have survived. And of the pearls produced, only approximately 5% are of substantial quality for top jewellery makers. From the outset, a pearl farmer can figure on spending over $100 for every oyster that is farmed, of which many will produce nothing or die. Imitation pearls are a different story altogether. In most cases, a glass bead is dipped into a solution made from fish scales. This coating is thin and may eventually wear off. One can usually tell an imitation by biting on it. Fake pearls glide across your teeth, while the layers of nacre on real pearls feel gritty. The Island of Mallorca (in Spain) is known for its imitation pearl industry. Quality natural pearls are very rare jewels. The actual value of a natural pearl is determined in the same way as it would be for other precious gems. The valuation factors include size, shape, and colour, quality of surface, orient, and lustre. In general, cultured pearls are less valuable than natural pearls, whereas imitation pearls almost have no value. One way that jewellers can determine whether a pearl is cultured or natural is to have a gem lab perform an x-ray of the pearl. If the x-ray reveals a nucleus, the pearl is likely a bead-nucleated saltwater pearl. If no nucleus is present, but irregular and small dark inner spots indicating a cavity are visible, combined with concentric rings of organic substance, the pearl is likely a cultured freshwater. Cultured freshwater pearls can often be confused for natural pearls which present as homogeneous pictures that continuously darken toward the surface of the pearl. Natural pearls will often show larger cavities where organic matter has dried out and decomposed. Although imitation pearls look the part, they do not have the same weight or smoothness as real pearls, and their lustre will also dim greatly. Among cultured pearls, Akoya pearls from Japan are some of the most lustrous. A good quality necklace of 40 Akoya pearls measuring 7 mm in diameter sells for about $1,500, while a super- high-quality strand sells for about $4,500. Size, on the other hand, has to do with the age of the oyster that created the pearl (the more mature oysters produce larger pearls) and the location in which the pearl was cultured. The South Sea waters of Australia tend to produce the larger pearls; probably because the water along the coastline is supplied with rich nutrients from the ocean floor. Also, the type of mussel common to the area seems to possess a predilection for producing comparatively large pearls Historically, the worlds best pearls came from the Persian Gulf, especially around what is now Bahrain. The pearls of the Persian Gulf were naturally created and collected by breath-hold divers. The secret to the special lustre of Gulf pearls probably derived from the unique mixture of sweet and saltwater around the island. Unfortunately, the natural pearl industry of the Persian Gulf ended abruptly in the early 1930s with the discovery of large deposits of oil. Those who once dove for pearls sought prosperity in the economic boom ushered in by the oil industry. The water pollution resulting from spilled oil and indiscriminate over-fishing of oysters essentially ruined the once pristine pearl-producing waters of the Gulf. Today, pearl diving is practiced only as a hobby. Still, Bahrain remains one of the foremost trading centers for high-quality pearls. 
In fact, cultured pearls are banned from the Bahrain pearl market, in an effort to preserve the locations heritage. Nowadays, the largest stock of natural pearls probably resides in India. Ironically, much of Indias stock of natural pearls came originally from Bahrain. Unlike Bahrain, which has essentially lost its pearl resource, traditional pearl fishing is still practiced on a small scale in India. | Akoya pearls from Japan Glows more deeply than the South Sea pearls of Australia | neutral |
id_6413 | The Pearl Throughout history, pearls have held a unique presence within the wealthy and powerful. For instance, the pearl was the favoured gem of the wealthy during the Roman Empire. This gift from the sea had been brought back from the orient by the Roman conquests. Roman women wore pearls to bed so they could be reminded of their wealth immediately upon waking up. Before jewellers learned to cut gems, the pearl was of greater value than the diamond. In the Orient and Persia Empire, pearls were ground into powders to cure anything from heart disease to epilepsy, with possible aphrodisiac uses as well. Pearls were once considered an exclusive privilege for royalty. A law in 1612 drawn up by the Duke of Saxony prohibited the wearing of pearls by the nobility, professors, doctors or their wives in an effort to further distinguish royal appearance. American Indians also used freshwater pearls from the Mississippi River as decorations and jewellery. There are essentially three types of pearls: natural, cultured and imitation. A natural pearl (often called an Oriental pearl) forms when an irritant, such as a piece of sand, works its way into a particular species of oyster, mussel, or clam. As a defence mechanism, the mollusc secretes a fluid to coat the irritant. The layer upon layer of this coating is deposited on the irritant until a lustrous pearl is formed. The only difference between natural pearls and cultured pearls is that the irritant is a surgically implanted bead or piece of shell called Mother of Pearl. Often, these shells are ground oyster shells that are worth significant amounts of money in their own right as irritant-catalysts for quality pearls. The resulting core is, therefore, much larger than in a natural pearl. Yet, as long as there are enough layers of nacre (the secreted fluid covering the irritant) to result in a beautiful, gem-quality pearl, the size of the nucleus is of no consequence to beauty or durability. Pearls can come from either salt or freshwater sources. Typically, saltwater pearls tend to be higher quality, although there are several types of freshwater pearls that are considered high in quality as well. Freshwater pearls tend to be very irregular in shape, with a puffed rice appearance, the most prevalent. Nevertheless, it is each individual pearls merits that determines value more than the source of the pearl. Saltwater pearl oysters are usually cultivated in protected lagoons or volcanic atolls. However, most freshwater cultured pearls sold today come from China. Cultured pearls are the response of the shell to a tissue implant. A tiny piece of mantle tissue from a donor shell is transplanted into a recipient shell. This graft will form a pearl sac and the tissue will precipitate calcium carbonate into this pocket. There are a number of options for producing cultured pearls: use fresh water or seawater shells, transplant the graft into the mantle or into the gonad, add a spherical bead or do it non-beaded. The majority of saltwater cultured pearls are grown with beads. Regardless of the method used to acquire a pearl, the process usually takes several years. Mussels must reach a mature age, which can take up to 3 years, and then be implanted or naturally receive an irritant. Once the irritant is in place, it can take up to another 3 years for the pearl to reach its full size. Often, the irritant may be rejected, the pearl will terrifically misshapen, or the oyster may simply die from disease or countless other complications. 
By the end of a 5 to 10-year cycle, only 50% of the oysters will have survived. And of the pearls produced, only approximately 5% are of substantial quality for top jewellery makers. From the outset, a pearl farmer can figure on spending over $100 for every oyster that is farmed, of which many will produce nothing or die. Imitation pearls are a different story altogether. In most cases, a glass bead is dipped into a solution made from fish scales. This coating is thin and may eventually wear off. One can usually tell an imitation by biting on it. Fake pearls glide across your teeth, while the layers of nacre on real pearls feel gritty. The Island of Mallorca (in Spain) is known for its imitation pearl industry. Quality natural pearls are very rare jewels. The actual value of a natural pearl is determined in the same way as it would be for other precious gems. The valuation factors include size, shape, and colour, quality of surface, orient, and lustre. In general, cultured pearls are less valuable than natural pearls, whereas imitation pearls almost have no value. One way that jewellers can determine whether a pearl is cultured or natural is to have a gem lab perform an x-ray of the pearl. If the x-ray reveals a nucleus, the pearl is likely a bead-nucleated saltwater pearl. If no nucleus is present, but irregular and small dark inner spots indicating a cavity are visible, combined with concentric rings of organic substance, the pearl is likely a cultured freshwater. Cultured freshwater pearls can often be confused for natural pearls which present as homogeneous pictures that continuously darken toward the surface of the pearl. Natural pearls will often show larger cavities where organic matter has dried out and decomposed. Although imitation pearls look the part, they do not have the same weight or smoothness as real pearls, and their lustre will also dim greatly. Among cultured pearls, Akoya pearls from Japan are some of the most lustrous. A good quality necklace of 40 Akoya pearls measuring 7 mm in diameter sells for about $1,500, while a super- high-quality strand sells for about $4,500. Size, on the other hand, has to do with the age of the oyster that created the pearl (the more mature oysters produce larger pearls) and the location in which the pearl was cultured. The South Sea waters of Australia tend to produce the larger pearls; probably because the water along the coastline is supplied with rich nutrients from the ocean floor. Also, the type of mussel common to the area seems to possess a predilection for producing comparatively large pearls Historically, the worlds best pearls came from the Persian Gulf, especially around what is now Bahrain. The pearls of the Persian Gulf were naturally created and collected by breath-hold divers. The secret to the special lustre of Gulf pearls probably derived from the unique mixture of sweet and saltwater around the island. Unfortunately, the natural pearl industry of the Persian Gulf ended abruptly in the early 1930s with the discovery of large deposits of oil. Those who once dove for pearls sought prosperity in the economic boom ushered in by the oil industry. The water pollution resulting from spilled oil and indiscriminate over-fishing of oysters essentially ruined the once pristine pearl-producing waters of the Gulf. Today, pearl diving is practiced only as a hobby. Still, Bahrain remains one of the foremost trading centers for high-quality pearls. 
In fact, cultured pearls are banned from the Bahrain pearl market, in an effort to preserve the locations heritage. Nowadays, the largest stock of natural pearls probably resides in India. Ironically, much of Indias stock of natural pearls came originally from Bahrain. Unlike Bahrain, which has essentially lost its pearl resource, traditional pearl fishing is still practiced on a small scale in India. | The size of pearls produced in Japan is usually of a smaller size than those came from Australia. | entailment |
id_6414 | The Pearl Throughout history, pearls have held a unique presence within the wealthy and powerful. For instance, the pearl was the favoured gem of the wealthy during the Roman Empire. This gift from the sea had been brought back from the orient by the Roman conquests. Roman women wore pearls to bed so they could be reminded of their wealth immediately upon waking up. Before jewellers learned to cut gems, the pearl was of greater value than the diamond. In the Orient and Persia Empire, pearls were ground into powders to cure anything from heart disease to epilepsy, with possible aphrodisiac uses as well. Pearls were once considered an exclusive privilege for royalty. A law in 1612 drawn up by the Duke of Saxony prohibited the wearing of pearls by the nobility, professors, doctors or their wives in an effort to further distinguish royal appearance. American Indians also used freshwater pearls from the Mississippi River as decorations and jewellery. There are essentially three types of pearls: natural, cultured and imitation. A natural pearl (often called an Oriental pearl) forms when an irritant, such as a piece of sand, works its way into a particular species of oyster, mussel, or clam. As a defence mechanism, the mollusc secretes a fluid to coat the irritant. The layer upon layer of this coating is deposited on the irritant until a lustrous pearl is formed. The only difference between natural pearls and cultured pearls is that the irritant is a surgically implanted bead or piece of shell called Mother of Pearl. Often, these shells are ground oyster shells that are worth significant amounts of money in their own right as irritant-catalysts for quality pearls. The resulting core is, therefore, much larger than in a natural pearl. Yet, as long as there are enough layers of nacre (the secreted fluid covering the irritant) to result in a beautiful, gem-quality pearl, the size of the nucleus is of no consequence to beauty or durability. Pearls can come from either salt or freshwater sources. Typically, saltwater pearls tend to be higher quality, although there are several types of freshwater pearls that are considered high in quality as well. Freshwater pearls tend to be very irregular in shape, with a puffed rice appearance, the most prevalent. Nevertheless, it is each individual pearls merits that determines value more than the source of the pearl. Saltwater pearl oysters are usually cultivated in protected lagoons or volcanic atolls. However, most freshwater cultured pearls sold today come from China. Cultured pearls are the response of the shell to a tissue implant. A tiny piece of mantle tissue from a donor shell is transplanted into a recipient shell. This graft will form a pearl sac and the tissue will precipitate calcium carbonate into this pocket. There are a number of options for producing cultured pearls: use fresh water or seawater shells, transplant the graft into the mantle or into the gonad, add a spherical bead or do it non-beaded. The majority of saltwater cultured pearls are grown with beads. Regardless of the method used to acquire a pearl, the process usually takes several years. Mussels must reach a mature age, which can take up to 3 years, and then be implanted or naturally receive an irritant. Once the irritant is in place, it can take up to another 3 years for the pearl to reach its full size. Often, the irritant may be rejected, the pearl will terrifically misshapen, or the oyster may simply die from disease or countless other complications. 
By the end of a 5 to 10-year cycle, only 50% of the oysters will have survived. And of the pearls produced, only approximately 5% are of substantial quality for top jewellery makers. From the outset, a pearl farmer can figure on spending over $100 for every oyster that is farmed, of which many will produce nothing or die. Imitation pearls are a different story altogether. In most cases, a glass bead is dipped into a solution made from fish scales. This coating is thin and may eventually wear off. One can usually tell an imitation by biting on it. Fake pearls glide across your teeth, while the layers of nacre on real pearls feel gritty. The Island of Mallorca (in Spain) is known for its imitation pearl industry. Quality natural pearls are very rare jewels. The actual value of a natural pearl is determined in the same way as it would be for other precious gems. The valuation factors include size, shape, and colour, quality of surface, orient, and lustre. In general, cultured pearls are less valuable than natural pearls, whereas imitation pearls almost have no value. One way that jewellers can determine whether a pearl is cultured or natural is to have a gem lab perform an x-ray of the pearl. If the x-ray reveals a nucleus, the pearl is likely a bead-nucleated saltwater pearl. If no nucleus is present, but irregular and small dark inner spots indicating a cavity are visible, combined with concentric rings of organic substance, the pearl is likely a cultured freshwater. Cultured freshwater pearls can often be confused for natural pearls which present as homogeneous pictures that continuously darken toward the surface of the pearl. Natural pearls will often show larger cavities where organic matter has dried out and decomposed. Although imitation pearls look the part, they do not have the same weight or smoothness as real pearls, and their lustre will also dim greatly. Among cultured pearls, Akoya pearls from Japan are some of the most lustrous. A good quality necklace of 40 Akoya pearls measuring 7 mm in diameter sells for about $1,500, while a super- high-quality strand sells for about $4,500. Size, on the other hand, has to do with the age of the oyster that created the pearl (the more mature oysters produce larger pearls) and the location in which the pearl was cultured. The South Sea waters of Australia tend to produce the larger pearls; probably because the water along the coastline is supplied with rich nutrients from the ocean floor. Also, the type of mussel common to the area seems to possess a predilection for producing comparatively large pearls Historically, the worlds best pearls came from the Persian Gulf, especially around what is now Bahrain. The pearls of the Persian Gulf were naturally created and collected by breath-hold divers. The secret to the special lustre of Gulf pearls probably derived from the unique mixture of sweet and saltwater around the island. Unfortunately, the natural pearl industry of the Persian Gulf ended abruptly in the early 1930s with the discovery of large deposits of oil. Those who once dove for pearls sought prosperity in the economic boom ushered in by the oil industry. The water pollution resulting from spilled oil and indiscriminate over-fishing of oysters essentially ruined the once pristine pearl-producing waters of the Gulf. Today, pearl diving is practiced only as a hobby. Still, Bahrain remains one of the foremost trading centers for high-quality pearls. 
In fact, cultured pearls are banned from the Bahrain pearl market, in an effort to preserve the locations heritage. Nowadays, the largest stock of natural pearls probably resides in India. Ironically, much of Indias stock of natural pearls came originally from Bahrain. Unlike Bahrain, which has essentially lost its pearl resource, traditional pearl fishing is still practiced on a small scale in India. | Often cultured pearls center is significantly larger than in a natural pearl. | entailment |
id_6415 | The People of Corn Maize is Mexicos lifeblood the countrys history and identity are entwined with it. But this centuries-old relationship is now threatened by free trade. Laura Carlsen investigates the threat and profiles a growing activist movement. On a mountain top in southern Mexico, Indian families gather. They chant and sprinkle cornmeal in consecration, praying for the success of their new crops, the unity of their communities and the health of their families. In this village in Oaxaca people eat corn tamales, sow maize plots and teach children to care for the plant. The cultural rhythms of this community, its labours, rituals and celebrations will be defined as they have been for millennia by the lifecycle of corn. Indeed, if it werent for the domestication of teocintle (the ancestor of modern maize) 9,000 years ago mesoamerican civilization could never have developed. In the Mayan sacred book, the Popol Vuh, the gods create people out of cornmeal. The people of corn flourished and built one of the most remarkable cultures in human history. But in Mexico and Central America today maize has come under attack. As a result of the North American Free Trade Agreement (NAFTA) Mexico has been flooded with imported corn from north of the border in the US. The contamination of native varieties with genetically modified imported maize could have major consequences for Mexican campesinos (farmers), for local biodiversity and for the worlds genetic reserves. A decade ago Mexican bureaucrats and business people had it all figured out. NAFTA would drive uncompetitive maize farmers from the countryside to work in booming assembly factories across the country. Their standard of living would rise as the cost of providing services like electricity and water to scattered rural communities would fall. Best of all, cheap imported maize from the US the worlds most efficient and most heavily subsidized producer would be a benefit to Mexican consumers. Unfortunately, it didnt turn out that way. There werent quite enough of those factory jobs and the ones that did materialize continued to be along the US border, not further in Mexico. And despite a huge drop in the price farmers received for their corn, consumers often ended up paying more. The price of tortillas the countrys staple food rose nearly fivefold as the Government stopped domestic subsidies and giant agribusiness firms took over the market. Free trade defenders like Mexicos former Under-Secretary of Agriculture Luis Tellez suggest: Its not that NAFTA failed, its just that reality didnt turn out the way we planned it. Part of that reality was that the Government did nothing to help campesinos in the supposed transition. Nor did NAFTA recognize inequalities or create compensation funds to help the victims of free trade unlike what occurred with economic integration in the European Union. Basically, Mexico adopted a sink-or-swim policy for small farmers, opening the floodgates to tons of imported US corn. Maize imports tripled under NAFTA and producer prices fell by half. The drop in income immediately hit the most vulnerable and poorest members of rural society. While more than a third of the corn grown by small farmers is used to feed their families, the rest is sold on local markets. Without this critical cash, rural living standards plunged. Maize is at the heart of indigenous and campesino identity. 
Jose Carrillo de la Cruz, a Huichol Indian from northern Jalisco, describes that relationship: Corn is the force, the life and the strength of the Huichol. If there were a change, if someone from outside patented our corn, it would end our life and existence. The good news is that the free-trade threat to Mexicos culture and food security has sparked a lively resistance. In Defence of Corn, a movement to protect local maize varieties, is not a membership organization but a series of forums and actions led by campesinos themselves. Its a direct challenge to both free trade and the dictums of corporate science. The farmers tenacity and refusal to abandon the crop of their ancestors is impressive. But larger economic conditions continue to shape their lives. Rural poverty and hunger have soared under free trade and placed a heavier burden on women left to work the land. The battle for food sovereignty continues. Movement leaders insist that the Government reassess its free trade policies and develop a real rural development programme. | After NAFTA, a lot of corn from the USA has been sold in Mexico. | entailment |
id_6416 | The People of Corn. Maize is Mexico's lifeblood: the country's history and identity are entwined with it. But this centuries-old relationship is now threatened by free trade. Laura Carlsen investigates the threat and profiles a growing activist movement. On a mountain top in southern Mexico, Indian families gather. They chant and sprinkle cornmeal in consecration, praying for the success of their new crops, the unity of their communities and the health of their families. In this village in Oaxaca people eat corn tamales, sow maize plots and teach children to care for the plant. The cultural rhythms of this community, its labours, rituals and celebrations will be defined, as they have been for millennia, by the lifecycle of corn. Indeed, if it weren't for the domestication of teocintle (the ancestor of modern maize) 9,000 years ago, Mesoamerican civilization could never have developed. In the Mayan sacred book, the Popol Vuh, the gods create people out of cornmeal. The people of corn flourished and built one of the most remarkable cultures in human history. But in Mexico and Central America today maize has come under attack. As a result of the North American Free Trade Agreement (NAFTA), Mexico has been flooded with imported corn from north of the border in the US. The contamination of native varieties with genetically modified imported maize could have major consequences for Mexican campesinos (farmers), for local biodiversity and for the world's genetic reserves. A decade ago Mexican bureaucrats and business people had it all figured out. NAFTA would drive uncompetitive maize farmers from the countryside to work in booming assembly factories across the country. Their standard of living would rise as the cost of providing services like electricity and water to scattered rural communities would fall. Best of all, cheap imported maize from the US, the world's most efficient and most heavily subsidized producer, would be a benefit to Mexican consumers. Unfortunately, it didn't turn out that way. There weren't quite enough of those factory jobs, and the ones that did materialize continued to be along the US border, not further in Mexico. And despite a huge drop in the price farmers received for their corn, consumers often ended up paying more. The price of tortillas, the country's staple food, rose nearly fivefold as the Government stopped domestic subsidies and giant agribusiness firms took over the market. Free trade defenders like Mexico's former Under-Secretary of Agriculture Luis Tellez suggest: "It's not that NAFTA failed, it's just that reality didn't turn out the way we planned it." Part of that reality was that the Government did nothing to help campesinos in the supposed transition. Nor did NAFTA recognize inequalities or create compensation funds to help the victims of free trade, unlike what occurred with economic integration in the European Union. Basically, Mexico adopted a sink-or-swim policy for small farmers, opening the floodgates to tons of imported US corn. Maize imports tripled under NAFTA and producer prices fell by half. The drop in income immediately hit the most vulnerable and poorest members of rural society. While more than a third of the corn grown by small farmers is used to feed their families, the rest is sold on local markets. Without this critical cash, rural living standards plunged. Maize is at the heart of indigenous and campesino identity.
Jose Carrillo de la Cruz, a Huichol Indian from northern Jalisco, describes that relationship: "Corn is the force, the life and the strength of the Huichol. If there were a change, if someone from outside patented our corn, it would end our life and existence." The good news is that the free-trade threat to Mexico's culture and food security has sparked a lively resistance. In Defence of Corn, a movement to protect local maize varieties, is not a membership organization but a series of forums and actions led by campesinos themselves. It's a direct challenge to both free trade and the dictums of corporate science. The farmers' tenacity and refusal to abandon the crop of their ancestors is impressive. But larger economic conditions continue to shape their lives. Rural poverty and hunger have soared under free trade and placed a heavier burden on women left to work the land. The battle for food sovereignty continues. Movement leaders insist that the Government reassess its free trade policies and develop a real rural development programme. | The Mexican farmers were paid a lot less for their corn after NAFTA. | entailment
id_6417 | The People of Corn. Maize is Mexico's lifeblood: the country's history and identity are entwined with it. But this centuries-old relationship is now threatened by free trade. Laura Carlsen investigates the threat and profiles a growing activist movement. On a mountain top in southern Mexico, Indian families gather. They chant and sprinkle cornmeal in consecration, praying for the success of their new crops, the unity of their communities and the health of their families. In this village in Oaxaca people eat corn tamales, sow maize plots and teach children to care for the plant. The cultural rhythms of this community, its labours, rituals and celebrations will be defined, as they have been for millennia, by the lifecycle of corn. Indeed, if it weren't for the domestication of teocintle (the ancestor of modern maize) 9,000 years ago, Mesoamerican civilization could never have developed. In the Mayan sacred book, the Popol Vuh, the gods create people out of cornmeal. The people of corn flourished and built one of the most remarkable cultures in human history. But in Mexico and Central America today maize has come under attack. As a result of the North American Free Trade Agreement (NAFTA), Mexico has been flooded with imported corn from north of the border in the US. The contamination of native varieties with genetically modified imported maize could have major consequences for Mexican campesinos (farmers), for local biodiversity and for the world's genetic reserves. A decade ago Mexican bureaucrats and business people had it all figured out. NAFTA would drive uncompetitive maize farmers from the countryside to work in booming assembly factories across the country. Their standard of living would rise as the cost of providing services like electricity and water to scattered rural communities would fall. Best of all, cheap imported maize from the US, the world's most efficient and most heavily subsidized producer, would be a benefit to Mexican consumers. Unfortunately, it didn't turn out that way. There weren't quite enough of those factory jobs, and the ones that did materialize continued to be along the US border, not further in Mexico. And despite a huge drop in the price farmers received for their corn, consumers often ended up paying more. The price of tortillas, the country's staple food, rose nearly fivefold as the Government stopped domestic subsidies and giant agribusiness firms took over the market. Free trade defenders like Mexico's former Under-Secretary of Agriculture Luis Tellez suggest: "It's not that NAFTA failed, it's just that reality didn't turn out the way we planned it." Part of that reality was that the Government did nothing to help campesinos in the supposed transition. Nor did NAFTA recognize inequalities or create compensation funds to help the victims of free trade, unlike what occurred with economic integration in the European Union. Basically, Mexico adopted a sink-or-swim policy for small farmers, opening the floodgates to tons of imported US corn. Maize imports tripled under NAFTA and producer prices fell by half. The drop in income immediately hit the most vulnerable and poorest members of rural society. While more than a third of the corn grown by small farmers is used to feed their families, the rest is sold on local markets. Without this critical cash, rural living standards plunged. Maize is at the heart of indigenous and campesino identity.
Jose Carrillo de la Cruz, a Huichol Indian from northern Jalisco, describes that relationship: "Corn is the force, the life and the strength of the Huichol. If there were a change, if someone from outside patented our corn, it would end our life and existence." The good news is that the free-trade threat to Mexico's culture and food security has sparked a lively resistance. In Defence of Corn, a movement to protect local maize varieties, is not a membership organization but a series of forums and actions led by campesinos themselves. It's a direct challenge to both free trade and the dictums of corporate science. The farmers' tenacity and refusal to abandon the crop of their ancestors is impressive. But larger economic conditions continue to shape their lives. Rural poverty and hunger have soared under free trade and placed a heavier burden on women left to work the land. The battle for food sovereignty continues. Movement leaders insist that the Government reassess its free trade policies and develop a real rural development programme. | Many Mexican farmers wanted to leave Mexico after the Free Trade Agreement. | neutral
id_6418 | The People of Corn. Maize is Mexico's lifeblood: the country's history and identity are entwined with it. But this centuries-old relationship is now threatened by free trade. Laura Carlsen investigates the threat and profiles a growing activist movement. On a mountain top in southern Mexico, Indian families gather. They chant and sprinkle cornmeal in consecration, praying for the success of their new crops, the unity of their communities and the health of their families. In this village in Oaxaca people eat corn tamales, sow maize plots and teach children to care for the plant. The cultural rhythms of this community, its labours, rituals and celebrations will be defined, as they have been for millennia, by the lifecycle of corn. Indeed, if it weren't for the domestication of teocintle (the ancestor of modern maize) 9,000 years ago, Mesoamerican civilization could never have developed. In the Mayan sacred book, the Popol Vuh, the gods create people out of cornmeal. The people of corn flourished and built one of the most remarkable cultures in human history. But in Mexico and Central America today maize has come under attack. As a result of the North American Free Trade Agreement (NAFTA), Mexico has been flooded with imported corn from north of the border in the US. The contamination of native varieties with genetically modified imported maize could have major consequences for Mexican campesinos (farmers), for local biodiversity and for the world's genetic reserves. A decade ago Mexican bureaucrats and business people had it all figured out. NAFTA would drive uncompetitive maize farmers from the countryside to work in booming assembly factories across the country. Their standard of living would rise as the cost of providing services like electricity and water to scattered rural communities would fall. Best of all, cheap imported maize from the US, the world's most efficient and most heavily subsidized producer, would be a benefit to Mexican consumers. Unfortunately, it didn't turn out that way. There weren't quite enough of those factory jobs, and the ones that did materialize continued to be along the US border, not further in Mexico. And despite a huge drop in the price farmers received for their corn, consumers often ended up paying more. The price of tortillas, the country's staple food, rose nearly fivefold as the Government stopped domestic subsidies and giant agribusiness firms took over the market. Free trade defenders like Mexico's former Under-Secretary of Agriculture Luis Tellez suggest: "It's not that NAFTA failed, it's just that reality didn't turn out the way we planned it." Part of that reality was that the Government did nothing to help campesinos in the supposed transition. Nor did NAFTA recognize inequalities or create compensation funds to help the victims of free trade, unlike what occurred with economic integration in the European Union. Basically, Mexico adopted a sink-or-swim policy for small farmers, opening the floodgates to tons of imported US corn. Maize imports tripled under NAFTA and producer prices fell by half. The drop in income immediately hit the most vulnerable and poorest members of rural society. While more than a third of the corn grown by small farmers is used to feed their families, the rest is sold on local markets. Without this critical cash, rural living standards plunged. Maize is at the heart of indigenous and campesino identity.
Jose Carrillo de la Cruz, a Huichol Indian from northern Jalisco, describes that relationship: "Corn is the force, the life and the strength of the Huichol. If there were a change, if someone from outside patented our corn, it would end our life and existence." The good news is that the free-trade threat to Mexico's culture and food security has sparked a lively resistance. In Defence of Corn, a movement to protect local maize varieties, is not a membership organization but a series of forums and actions led by campesinos themselves. It's a direct challenge to both free trade and the dictums of corporate science. The farmers' tenacity and refusal to abandon the crop of their ancestors is impressive. But larger economic conditions continue to shape their lives. Rural poverty and hunger have soared under free trade and placed a heavier burden on women left to work the land. The battle for food sovereignty continues. Movement leaders insist that the Government reassess its free trade policies and develop a real rural development programme. | The Mexican farmers were not able to do anything to help themselves after the Trade Agreement. | contradiction
id_6419 | The People of Corn. Maize is Mexico's lifeblood: the country's history and identity are entwined with it. But this centuries-old relationship is now threatened by free trade. Laura Carlsen investigates the threat and profiles a growing activist movement. On a mountain top in southern Mexico, Indian families gather. They chant and sprinkle cornmeal in consecration, praying for the success of their new crops, the unity of their communities and the health of their families. In this village in Oaxaca people eat corn tamales, sow maize plots and teach children to care for the plant. The cultural rhythms of this community, its labours, rituals and celebrations will be defined, as they have been for millennia, by the lifecycle of corn. Indeed, if it weren't for the domestication of teocintle (the ancestor of modern maize) 9,000 years ago, Mesoamerican civilization could never have developed. In the Mayan sacred book, the Popol Vuh, the gods create people out of cornmeal. The people of corn flourished and built one of the most remarkable cultures in human history. But in Mexico and Central America today maize has come under attack. As a result of the North American Free Trade Agreement (NAFTA), Mexico has been flooded with imported corn from north of the border in the US. The contamination of native varieties with genetically modified imported maize could have major consequences for Mexican campesinos (farmers), for local biodiversity and for the world's genetic reserves. A decade ago Mexican bureaucrats and business people had it all figured out. NAFTA would drive uncompetitive maize farmers from the countryside to work in booming assembly factories across the country. Their standard of living would rise as the cost of providing services like electricity and water to scattered rural communities would fall. Best of all, cheap imported maize from the US, the world's most efficient and most heavily subsidized producer, would be a benefit to Mexican consumers. Unfortunately, it didn't turn out that way. There weren't quite enough of those factory jobs, and the ones that did materialize continued to be along the US border, not further in Mexico. And despite a huge drop in the price farmers received for their corn, consumers often ended up paying more. The price of tortillas, the country's staple food, rose nearly fivefold as the Government stopped domestic subsidies and giant agribusiness firms took over the market. Free trade defenders like Mexico's former Under-Secretary of Agriculture Luis Tellez suggest: "It's not that NAFTA failed, it's just that reality didn't turn out the way we planned it." Part of that reality was that the Government did nothing to help campesinos in the supposed transition. Nor did NAFTA recognize inequalities or create compensation funds to help the victims of free trade, unlike what occurred with economic integration in the European Union. Basically, Mexico adopted a sink-or-swim policy for small farmers, opening the floodgates to tons of imported US corn. Maize imports tripled under NAFTA and producer prices fell by half. The drop in income immediately hit the most vulnerable and poorest members of rural society. While more than a third of the corn grown by small farmers is used to feed their families, the rest is sold on local markets. Without this critical cash, rural living standards plunged. Maize is at the heart of indigenous and campesino identity.
Jose Carrillo de la Cruz, a Huichol Indian from northern Jalisco, describes that relationship: "Corn is the force, the life and the strength of the Huichol. If there were a change, if someone from outside patented our corn, it would end our life and existence." The good news is that the free-trade threat to Mexico's culture and food security has sparked a lively resistance. In Defence of Corn, a movement to protect local maize varieties, is not a membership organization but a series of forums and actions led by campesinos themselves. It's a direct challenge to both free trade and the dictums of corporate science. The farmers' tenacity and refusal to abandon the crop of their ancestors is impressive. But larger economic conditions continue to shape their lives. Rural poverty and hunger have soared under free trade and placed a heavier burden on women left to work the land. The battle for food sovereignty continues. Movement leaders insist that the Government reassess its free trade policies and develop a real rural development programme. | Following NAFTA, Mexican business people tried to stop maize farmers from working in factories throughout the country. | neutral
id_6420 | The Philippines is part of the so-called "coral triangle," which spans eastern Indonesia, parts of Malaysia, Papua New Guinea, Timor Leste and the Solomon Islands. It covers an area that is equivalent to half of the entire United States. Although there are 1,000 marine protected areas (MPAs) within the country, only 20 percent are functioning, the update said. MPAs are carefully selected areas where human development and exploitation of natural resources are regulated to protect species and habitats. In the Philippines, coral reefs are important economic assets, contributing more than US$1 billion annually to the economy. "Many local, coastal communities do not understand or know what a coral reef actually is, how its ecosystem interacts with them, and why it is so important for their villages to preserve and conserve it," Southeast Asian Centre of Excellence (SEA CoE) said in a statement. Unknowingly, coral reefs, touted to be the tropical rainforest of the sea, attract a diverse array of organisms in the ocean. They provide a source of food and shelter for a large variety of species including fish, shellfish, fungi, sponges, sea anemones, sea urchins, turtles and snails. A single reef can support as many as 3,000 species of marine life. As fishing grounds, they are thought to be 10 to 100 times as productive per unit area as the open sea. In the Philippines, an estimated 10-15 per cent of the total fisheries come from coral reefs. Not only do coral reefs serve as home to marine fish species, they also supply compounds for medicines. The Aids drug AZT is based on chemicals extracted from a reef sponge, while more than half of all new cancer drug research focuses on marine organisms. Unfortunately, these beautiful coral reefs are now at serious risk from degradation. According to scientists, 70 percent of the world's coral reefs may be lost by 2050. In the Philippines, coral reefs have been slowly dying over the past 30 years. The World Atlas of Coral Reefs, compiled by the United Nations Environment Program (UNEP), reported that 97 percent of reefs in the Philippines are under threat from destructive fishing techniques, including cyanide poisoning, over-fishing, or from deforestation and urbanization that result in harmful sediment spilling into the sea. Last year, Reef Check, an international organization assessing the health of reefs in 82 countries, stated that only five percent of the country's coral reefs are in "excellent condition." These are the Tubbataha Reef Marine Park in Palawan, Apo Island in Negros Oriental, Apo Reef in Puerto Galera, Mindoro, and Verde Island Passage off Batangas. About 80-90 per cent of the incomes of small island communities come from fisheries. "Coral reef fish yields range from 20 to 25 metric tons per square kilometre per year for healthy reefs," said Angel C. Alcala, former environment secretary. Alcala is known for his work in Apo Island, one of the world-renowned community-run fish sanctuaries in the country. It even earned him the prestigious Ramon Magsaysay Award. Rapid population growth and the increasing human pressure on coastal resources have also resulted in the massive degradation of the coral reefs. Robert Ginsburg, a specialist on coral reefs working with the Rosenstiel School of Marine and Atmospheric Science at the University of Miami, said human beings have a lot to do with the rapid destruction of reefs.
"In areas where people are using the reefs or where there is a large population, there are significant declines in coral reefs, " he pointed out. "Life in the Philippines is never far from the sea, " wrote Joan Castro and Leona D'Agnes in a new report. "Every Filipino lives within 45 miles of the coast, and every day, more than 4,500 new residents are born. " Estimates show that if the present rapid population growth and declining trend in fish production continue, only 10 kilograms of fish will be available per Filipino per year by 2010, as opposed to 28.5 kilograms per year in 2003. | Available fish resources in the Philippines are expected to reduce by more than 50% over a period of seven years. | entailment |
id_6421 | The Philippines is part of the so-called "coral triangle," which spans eastern Indonesia, parts of Malaysia, Papua New Guinea, Timor Leste and the Solomon Islands. It covers an area that is equivalent to half of the entire United States. Although there are 1,000 marine protected areas (MPAs) within the country, only 20 percent are functioning, the update said. MPAs are carefully selected areas where human development and exploitation of natural resources are regulated to protect species and habitats. In the Philippines, coral reefs are important economic assets, contributing more than US$1 billion annually to the economy. "Many local, coastal communities do not understand or know what a coral reef actually is, how its ecosystem interacts with them, and why it is so important for their villages to preserve and conserve it," Southeast Asian Centre of Excellence (SEA CoE) said in a statement. Unknowingly, coral reefs, touted to be the tropical rainforest of the sea, attract a diverse array of organisms in the ocean. They provide a source of food and shelter for a large variety of species including fish, shellfish, fungi, sponges, sea anemones, sea urchins, turtles and snails. A single reef can support as many as 3,000 species of marine life. As fishing grounds, they are thought to be 10 to 100 times as productive per unit area as the open sea. In the Philippines, an estimated 10-15 per cent of the total fisheries come from coral reefs. Not only do coral reefs serve as home to marine fish species, they also supply compounds for medicines. The Aids drug AZT is based on chemicals extracted from a reef sponge, while more than half of all new cancer drug research focuses on marine organisms. Unfortunately, these beautiful coral reefs are now at serious risk from degradation. According to scientists, 70 percent of the world's coral reefs may be lost by 2050. In the Philippines, coral reefs have been slowly dying over the past 30 years. The World Atlas of Coral Reefs, compiled by the United Nations Environment Program (UNEP), reported that 97 percent of reefs in the Philippines are under threat from destructive fishing techniques, including cyanide poisoning, over-fishing, or from deforestation and urbanization that result in harmful sediment spilling into the sea. Last year, Reef Check, an international organization assessing the health of reefs in 82 countries, stated that only five percent of the country's coral reefs are in "excellent condition." These are the Tubbataha Reef Marine Park in Palawan, Apo Island in Negros Oriental, Apo Reef in Puerto Galera, Mindoro, and Verde Island Passage off Batangas. About 80-90 per cent of the incomes of small island communities come from fisheries. "Coral reef fish yields range from 20 to 25 metric tons per square kilometre per year for healthy reefs," said Angel C. Alcala, former environment secretary. Alcala is known for his work in Apo Island, one of the world-renowned community-run fish sanctuaries in the country. It even earned him the prestigious Ramon Magsaysay Award. Rapid population growth and the increasing human pressure on coastal resources have also resulted in the massive degradation of the coral reefs. Robert Ginsburg, a specialist on coral reefs working with the Rosenstiel School of Marine and Atmospheric Science at the University of Miami, said human beings have a lot to do with the rapid destruction of reefs.
"In areas where people are using the reefs or where there is a large population, there are significant declines in coral reefs, " he pointed out. "Life in the Philippines is never far from the sea, " wrote Joan Castro and Leona D'Agnes in a new report. "Every Filipino lives within 45 miles of the coast, and every day, more than 4,500 new residents are born. " Estimates show that if the present rapid population growth and declining trend in fish production continue, only 10 kilograms of fish will be available per Filipino per year by 2010, as opposed to 28.5 kilograms per year in 2003. | Humans are one reason why coral reefs are decreasing in size. | entailment |
id_6422 | The Philippines is part of the so-called "coral triangle," which spans eastern Indonesia, parts of Malaysia, Papua New Guinea, Timor Leste and the Solomon Islands. It covers an area that is equivalent to half of the entire United States. Although there are 1,000 marine protected areas (MPAs) within the country, only 20 percent are functioning, the update said. MPAs are carefully selected areas where human development and exploitation of natural resources are regulated to protect species and habitats. In the Philippines, coral reefs are important economic assets, contributing more than US$1 billion annually to the economy. "Many local, coastal communities do not understand or know what a coral reef actually is, how its ecosystem interacts with them, and why it is so important for their villages to preserve and conserve it," Southeast Asian Centre of Excellence (SEA CoE) said in a statement. Unknowingly, coral reefs, touted to be the tropical rainforest of the sea, attract a diverse array of organisms in the ocean. They provide a source of food and shelter for a large variety of species including fish, shellfish, fungi, sponges, sea anemones, sea urchins, turtles and snails. A single reef can support as many as 3,000 species of marine life. As fishing grounds, they are thought to be 10 to 100 times as productive per unit area as the open sea. In the Philippines, an estimated 10-15 per cent of the total fisheries come from coral reefs. Not only do coral reefs serve as home to marine fish species, they also supply compounds for medicines. The Aids drug AZT is based on chemicals extracted from a reef sponge, while more than half of all new cancer drug research focuses on marine organisms. Unfortunately, these beautiful coral reefs are now at serious risk from degradation. According to scientists, 70 percent of the world's coral reefs may be lost by 2050. In the Philippines, coral reefs have been slowly dying over the past 30 years. The World Atlas of Coral Reefs, compiled by the United Nations Environment Program (UNEP), reported that 97 percent of reefs in the Philippines are under threat from destructive fishing techniques, including cyanide poisoning, over-fishing, or from deforestation and urbanization that result in harmful sediment spilling into the sea. Last year, Reef Check, an international organization assessing the health of reefs in 82 countries, stated that only five percent of the country's coral reefs are in "excellent condition." These are the Tubbataha Reef Marine Park in Palawan, Apo Island in Negros Oriental, Apo Reef in Puerto Galera, Mindoro, and Verde Island Passage off Batangas. About 80-90 per cent of the incomes of small island communities come from fisheries. "Coral reef fish yields range from 20 to 25 metric tons per square kilometre per year for healthy reefs," said Angel C. Alcala, former environment secretary. Alcala is known for his work in Apo Island, one of the world-renowned community-run fish sanctuaries in the country. It even earned him the prestigious Ramon Magsaysay Award. Rapid population growth and the increasing human pressure on coastal resources have also resulted in the massive degradation of the coral reefs. Robert Ginsburg, a specialist on coral reefs working with the Rosenstiel School of Marine and Atmospheric Science at the University of Miami, said human beings have a lot to do with the rapid destruction of reefs.
"In areas where people are using the reefs or where there is a large population, there are significant declines in coral reefs, " he pointed out. "Life in the Philippines is never far from the sea, " wrote Joan Castro and Leona D'Agnes in a new report. "Every Filipino lives within 45 miles of the coast, and every day, more than 4,500 new residents are born. " Estimates show that if the present rapid population growth and declining trend in fish production continue, only 10 kilograms of fish will be available per Filipino per year by 2010, as opposed to 28.5 kilograms per year in 2003. | Coral reefs make better fishing areas than the open sea. | entailment |
id_6423 | The Philippines is part of the so-called "coral triangle," which spans eastern Indonesia, parts of Malaysia, Papua New Guinea, Timor Leste and the Solomon Islands. It covers an area that is equivalent to half of the entire United States. Although there are 1,000 marine protected areas (MPAs) within the country, only 20 percent are functioning, the update said. MPAs are carefully selected areas where human development and exploitation of natural resources are regulated to protect species and habitats. In the Philippines, coral reefs are important economic assets, contributing more than US$1 billion annually to the economy. "Many local, coastal communities do not understand or know what a coral reef actually is, how its ecosystem interacts with them, and why it is so important for their villages to preserve and conserve it," Southeast Asian Centre of Excellence (SEA CoE) said in a statement. Unknowingly, coral reefs, touted to be the tropical rainforest of the sea, attract a diverse array of organisms in the ocean. They provide a source of food and shelter for a large variety of species including fish, shellfish, fungi, sponges, sea anemones, sea urchins, turtles and snails. A single reef can support as many as 3,000 species of marine life. As fishing grounds, they are thought to be 10 to 100 times as productive per unit area as the open sea. In the Philippines, an estimated 10-15 per cent of the total fisheries come from coral reefs. Not only do coral reefs serve as home to marine fish species, they also supply compounds for medicines. The Aids drug AZT is based on chemicals extracted from a reef sponge, while more than half of all new cancer drug research focuses on marine organisms. Unfortunately, these beautiful coral reefs are now at serious risk from degradation. According to scientists, 70 percent of the world's coral reefs may be lost by 2050. In the Philippines, coral reefs have been slowly dying over the past 30 years. The World Atlas of Coral Reefs, compiled by the United Nations Environment Program (UNEP), reported that 97 percent of reefs in the Philippines are under threat from destructive fishing techniques, including cyanide poisoning, over-fishing, or from deforestation and urbanization that result in harmful sediment spilling into the sea. Last year, Reef Check, an international organization assessing the health of reefs in 82 countries, stated that only five percent of the country's coral reefs are in "excellent condition." These are the Tubbataha Reef Marine Park in Palawan, Apo Island in Negros Oriental, Apo Reef in Puerto Galera, Mindoro, and Verde Island Passage off Batangas. About 80-90 per cent of the incomes of small island communities come from fisheries. "Coral reef fish yields range from 20 to 25 metric tons per square kilometre per year for healthy reefs," said Angel C. Alcala, former environment secretary. Alcala is known for his work in Apo Island, one of the world-renowned community-run fish sanctuaries in the country. It even earned him the prestigious Ramon Magsaysay Award. Rapid population growth and the increasing human pressure on coastal resources have also resulted in the massive degradation of the coral reefs. Robert Ginsburg, a specialist on coral reefs working with the Rosenstiel School of Marine and Atmospheric Science at the University of Miami, said human beings have a lot to do with the rapid destruction of reefs.
"In areas where people are using the reefs or where there is a large population, there are significant declines in coral reefs, " he pointed out. "Life in the Philippines is never far from the sea, " wrote Joan Castro and Leona D'Agnes in a new report. "Every Filipino lives within 45 miles of the coast, and every day, more than 4,500 new residents are born. " Estimates show that if the present rapid population growth and declining trend in fish production continue, only 10 kilograms of fish will be available per Filipino per year by 2010, as opposed to 28.5 kilograms per year in 2003. | All of the coral reefs in the Philippines will be destroyed by 2050. | neutral |
id_6424 | The Philippines is part of the so-called "coral triangle," which spans eastern Indonesia, parts of Malaysia, Papua New Guinea, Timor Leste and the Solomon Islands. It covers an area that is equivalent to half of the entire United States. Although there are 1,000 marine protected areas (MPAs) within the country, only 20 percent are functioning, the update said. MPAs are carefully selected areas where human development and exploitation of natural resources are regulated to protect species and habitats. In the Philippines, coral reefs are important economic assets, contributing more than US$1 billion annually to the economy. "Many local, coastal communities do not understand or know what a coral reef actually is, how its ecosystem interacts with them, and why it is so important for their villages to preserve and conserve it," Southeast Asian Centre of Excellence (SEA CoE) said in a statement. Unknowingly, coral reefs, touted to be the tropical rainforest of the sea, attract a diverse array of organisms in the ocean. They provide a source of food and shelter for a large variety of species including fish, shellfish, fungi, sponges, sea anemones, sea urchins, turtles and snails. A single reef can support as many as 3,000 species of marine life. As fishing grounds, they are thought to be 10 to 100 times as productive per unit area as the open sea. In the Philippines, an estimated 10-15 per cent of the total fisheries come from coral reefs. Not only do coral reefs serve as home to marine fish species, they also supply compounds for medicines. The Aids drug AZT is based on chemicals extracted from a reef sponge, while more than half of all new cancer drug research focuses on marine organisms. Unfortunately, these beautiful coral reefs are now at serious risk from degradation. According to scientists, 70 percent of the world's coral reefs may be lost by 2050. In the Philippines, coral reefs have been slowly dying over the past 30 years. The World Atlas of Coral Reefs, compiled by the United Nations Environment Program (UNEP), reported that 97 percent of reefs in the Philippines are under threat from destructive fishing techniques, including cyanide poisoning, over-fishing, or from deforestation and urbanization that result in harmful sediment spilling into the sea. Last year, Reef Check, an international organization assessing the health of reefs in 82 countries, stated that only five percent of the country's coral reefs are in "excellent condition." These are the Tubbataha Reef Marine Park in Palawan, Apo Island in Negros Oriental, Apo Reef in Puerto Galera, Mindoro, and Verde Island Passage off Batangas. About 80-90 per cent of the incomes of small island communities come from fisheries. "Coral reef fish yields range from 20 to 25 metric tons per square kilometre per year for healthy reefs," said Angel C. Alcala, former environment secretary. Alcala is known for his work in Apo Island, one of the world-renowned community-run fish sanctuaries in the country. It even earned him the prestigious Ramon Magsaysay Award. Rapid population growth and the increasing human pressure on coastal resources have also resulted in the massive degradation of the coral reefs. Robert Ginsburg, a specialist on coral reefs working with the Rosenstiel School of Marine and Atmospheric Science at the University of Miami, said human beings have a lot to do with the rapid destruction of reefs.
"In areas where people are using the reefs or where there is a large population, there are significant declines in coral reefs, " he pointed out. "Life in the Philippines is never far from the sea, " wrote Joan Castro and Leona D'Agnes in a new report. "Every Filipino lives within 45 miles of the coast, and every day, more than 4,500 new residents are born. " Estimates show that if the present rapid population growth and declining trend in fish production continue, only 10 kilograms of fish will be available per Filipino per year by 2010, as opposed to 28.5 kilograms per year in 2003. | The natural resources in twenty percent of the marine protected areas are still exploited. | contradiction |
id_6425 | The Phoenicians: an almost forgotten people The Phoenicians inhabited the region of modern Lebanon and Syria from about 3000 BC. They became the greatest traders of the pre-classical world, and were the first people to establish a large colonial network. Both of these activities were based on seafaring, an ability the Phoenicians developed from the example of their maritime predecessors, the Minoans of Crete. An Egyptian narrative of about 1080 BC, the Story of Wen-Amen, provides an insight into the scale of their trading activity. One of the characters is Wereket-El, a Phoenician merchant living at Tanis in Egypts Nile delta. As many as 50 ships carry out his business, plying back and forth between the Nile and the Phoenician port of Sidon. The most prosperous period for Phoenicia was the 10th century BC, when the surrounding region was stable. Hiram, the king of the Phoenician city of Tyre, was an ally and business partner of Solomon, King of Israel. For Solomons temple in Jerusalem, Hiram provided craftsmen with particular skills that were needed for this major construction project. He also supplied materials particularly timber, including cedar from the forests of Lebanon. And the two kings went into trade in partnership. They sent out Phoenician vessels on long expeditions (of up to three years for the return trip) to bring back gold, sandalwood, ivory, monkeys and peacocks from Ophir. This is an unidentified place, probably on the east coast of Africa or the west coast of India. Phoenicia was famous for its luxury goods. The cedar wood was not only exported as top-quality timber for architecture and shipbuilding. It was also carved by the Phoenicians, and the same skill was adapted to even more precious work in ivory. The rare and expensive dye for cloth, Tyrian purple, complemented another famous local product, fine linen. The metalworkers of the region, particularly those working in gold, were famous. Tyre and Sidon were also known for their glass. These were the main products which the Phoenicians exported. In addition, as traders and middlemen, they took a commission on a much greater range of precious goods that they transported from elsewhere. The extensive trade of Phoenicia required much book-keeping and correspondence, and it was in the field of writing that the Phoenicians made their most lasting contribution to world history. The scripts in use in the world up to the second millennium BC (in Egypt, Mesopotamia or China) all required the writer to learn a large number of separate characters each of them expressing either a whole word or an element of its meaning. By contrast, the Phoenicians, in about 1500 BC, developed an entirely new approach to writing. The marks made (with a pointed tool called a stylus, on damp clay) now attempted to capture the sound of a word. This required an alphabet of individual letters. The trading and seafaring skills of the Phoenicians resulted in a network of colonies, spreading westwards through the Mediterranean. The first was probably Citium, in Cyprus, established in the 9th century BC. But the main expansion came from the 8th century BC onwards, when pressure from Assyria to the east disrupted the patterns of trade on the Phoenician coast. Trading colonies were developed on the string of islands in the centre of the Mediterranean Crete, Sicily, Malta, Sardinia, Ibiza and also on the coast of north Africa. 
The African colonies clustered in particular around the great promontory which, with Sicily opposite, forms the narrowest channel on the main Mediterranean sea route. This is the site of Carthage. Carthage was the largest of the towns founded by the Phoenicians on the north African coast, and it rapidly assumed a leading position among the neighbouring colonies. The traditional date of its founding is 814 BC, but archaeological evidence suggests that it was probably settled a little over a century later. The subsequent spread and growth of Phoenician colonies in the western Mediterranean, and even out to the Atlantic coasts of Africa and Spain, was as much the achievement of Carthage as of the original Phoenician trading cities such as Tyre and Sidon. But no doubt links were maintained with the homeland, and new colonists continued to travel west. From the 8th century BC, many of the coastal cities of Phoenicia came under the control of a succession of imperial powers, each of them defeated and replaced in the region by the next: first the Assyrians, then the Babylonians, Persians and Macedonian Greeks. In 64 BC, the area of Phoenicia became part of the Roman province of Syria. The Phoenicians as an identifiable people then faded from history, merging into the populations of modern Lebanon and northern Syria. | Problems with Assyria led to the establishment of a number of Phoenician colonies. | entailment |
id_6426 | The Phoenicians: an almost forgotten people The Phoenicians inhabited the region of modern Lebanon and Syria from about 3000 BC. They became the greatest traders of the pre-classical world, and were the first people to establish a large colonial network. Both of these activities were based on seafaring, an ability the Phoenicians developed from the example of their maritime predecessors, the Minoans of Crete. An Egyptian narrative of about 1080 BC, the Story of Wen-Amen, provides an insight into the scale of their trading activity. One of the characters is Wereket-El, a Phoenician merchant living at Tanis in Egypts Nile delta. As many as 50 ships carry out his business, plying back and forth between the Nile and the Phoenician port of Sidon. The most prosperous period for Phoenicia was the 10th century BC, when the surrounding region was stable. Hiram, the king of the Phoenician city of Tyre, was an ally and business partner of Solomon, King of Israel. For Solomons temple in Jerusalem, Hiram provided craftsmen with particular skills that were needed for this major construction project. He also supplied materials particularly timber, including cedar from the forests of Lebanon. And the two kings went into trade in partnership. They sent out Phoenician vessels on long expeditions (of up to three years for the return trip) to bring back gold, sandalwood, ivory, monkeys and peacocks from Ophir. This is an unidentified place, probably on the east coast of Africa or the west coast of India. Phoenicia was famous for its luxury goods. The cedar wood was not only exported as top-quality timber for architecture and shipbuilding. It was also carved by the Phoenicians, and the same skill was adapted to even more precious work in ivory. The rare and expensive dye for cloth, Tyrian purple, complemented another famous local product, fine linen. The metalworkers of the region, particularly those working in gold, were famous. Tyre and Sidon were also known for their glass. These were the main products which the Phoenicians exported. In addition, as traders and middlemen, they took a commission on a much greater range of precious goods that they transported from elsewhere. The extensive trade of Phoenicia required much book-keeping and correspondence, and it was in the field of writing that the Phoenicians made their most lasting contribution to world history. The scripts in use in the world up to the second millennium BC (in Egypt, Mesopotamia or China) all required the writer to learn a large number of separate characters each of them expressing either a whole word or an element of its meaning. By contrast, the Phoenicians, in about 1500 BC, developed an entirely new approach to writing. The marks made (with a pointed tool called a stylus, on damp clay) now attempted to capture the sound of a word. This required an alphabet of individual letters. The trading and seafaring skills of the Phoenicians resulted in a network of colonies, spreading westwards through the Mediterranean. The first was probably Citium, in Cyprus, established in the 9th century BC. But the main expansion came from the 8th century BC onwards, when pressure from Assyria to the east disrupted the patterns of trade on the Phoenician coast. Trading colonies were developed on the string of islands in the centre of the Mediterranean Crete, Sicily, Malta, Sardinia, Ibiza and also on the coast of north Africa. 
The African colonies clustered in particular around the great promontory which, with Sicily opposite, forms the narrowest channel on the main Mediterranean sea route. This is the site of Carthage. Carthage was the largest of the towns founded by the Phoenicians on the north African coast, and it rapidly assumed a leading position among the neighbouring colonies. The traditional date of its founding is 814 BC, but archaeological evidence suggests that it was probably settled a little over a century later. The subsequent spread and growth of Phoenician colonies in the western Mediterranean, and even out to the Atlantic coasts of Africa and Spain, was as much the achievement of Carthage as of the original Phoenician trading cities such as Tyre and Sidon. But no doubt links were maintained with the homeland, and new colonists continued to travel west. From the 8th century BC, many of the coastal cities of Phoenicia came under the control of a succession of imperial powers, each of them defeated and replaced in the region by the next: first the Assyrians, then the Babylonians, Persians and Macedonian Greeks. In 64 BC, the area of Phoenicia became part of the Roman province of Syria. The Phoenicians as an identifiable people then faded from history, merging into the populations of modern Lebanon and northern Syria. | Phoenicians reached the Atlantic ocean. | entailment |
id_6427 | The Phoenicians: an almost forgotten people The Phoenicians inhabited the region of modern Lebanon and Syria from about 3000 BC. They became the greatest traders of the pre-classical world, and were the first people to establish a large colonial network. Both of these activities were based on seafaring, an ability the Phoenicians developed from the example of their maritime predecessors, the Minoans of Crete. An Egyptian narrative of about 1080 BC, the Story of Wen-Amen, provides an insight into the scale of their trading activity. One of the characters is Wereket-El, a Phoenician merchant living at Tanis in Egypts Nile delta. As many as 50 ships carry out his business, plying back and forth between the Nile and the Phoenician port of Sidon. The most prosperous period for Phoenicia was the 10th century BC, when the surrounding region was stable. Hiram, the king of the Phoenician city of Tyre, was an ally and business partner of Solomon, King of Israel. For Solomons temple in Jerusalem, Hiram provided craftsmen with particular skills that were needed for this major construction project. He also supplied materials particularly timber, including cedar from the forests of Lebanon. And the two kings went into trade in partnership. They sent out Phoenician vessels on long expeditions (of up to three years for the return trip) to bring back gold, sandalwood, ivory, monkeys and peacocks from Ophir. This is an unidentified place, probably on the east coast of Africa or the west coast of India. Phoenicia was famous for its luxury goods. The cedar wood was not only exported as top-quality timber for architecture and shipbuilding. It was also carved by the Phoenicians, and the same skill was adapted to even more precious work in ivory. The rare and expensive dye for cloth, Tyrian purple, complemented another famous local product, fine linen. The metalworkers of the region, particularly those working in gold, were famous. Tyre and Sidon were also known for their glass. These were the main products which the Phoenicians exported. In addition, as traders and middlemen, they took a commission on a much greater range of precious goods that they transported from elsewhere. The extensive trade of Phoenicia required much book-keeping and correspondence, and it was in the field of writing that the Phoenicians made their most lasting contribution to world history. The scripts in use in the world up to the second millennium BC (in Egypt, Mesopotamia or China) all required the writer to learn a large number of separate characters each of them expressing either a whole word or an element of its meaning. By contrast, the Phoenicians, in about 1500 BC, developed an entirely new approach to writing. The marks made (with a pointed tool called a stylus, on damp clay) now attempted to capture the sound of a word. This required an alphabet of individual letters. The trading and seafaring skills of the Phoenicians resulted in a network of colonies, spreading westwards through the Mediterranean. The first was probably Citium, in Cyprus, established in the 9th century BC. But the main expansion came from the 8th century BC onwards, when pressure from Assyria to the east disrupted the patterns of trade on the Phoenician coast. Trading colonies were developed on the string of islands in the centre of the Mediterranean Crete, Sicily, Malta, Sardinia, Ibiza and also on the coast of north Africa. 
The African colonies clustered in particular around the great promontory which, with Sicily opposite, forms the narrowest channel on the main Mediterranean sea route. This is the site of Carthage. Carthage was the largest of the towns founded by the Phoenicians on the north African coast, and it rapidly assumed a leading position among the neighbouring colonies. The traditional date of its founding is 814 BC, but archaeological evidence suggests that it was probably settled a little over a century later. The subsequent spread and growth of Phoenician colonies in the western Mediterranean, and even out to the Atlantic coasts of Africa and Spain, was as much the achievement of Carthage as of the original Phoenician trading cities such as Tyre and Sidon. But no doubt links were maintained with the homeland, and new colonists continued to travel west. From the 8th century BC, many of the coastal cities of Phoenicia came under the control of a succession of imperial powers, each of them defeated and replaced in the region by the next: first the Assyrians, then the Babylonians, Persians and Macedonian Greeks. In 64 BC, the area of Phoenicia became part of the Roman province of Syria. The Phoenicians as an identifiable people then faded from history, merging into the populations of modern Lebanon and northern Syria. | Parts of Phoenicia were conquered by a series of empires. | entailment |
id_6428 | The Phoenicians: an almost forgotten people The Phoenicians inhabited the region of modern Lebanon and Syria from about 3000 BC. They became the greatest traders of the pre-classical world, and were the first people to establish a large colonial network. Both of these activities were based on seafaring, an ability the Phoenicians developed from the example of their maritime predecessors, the Minoans of Crete. An Egyptian narrative of about 1080 BC, the Story of Wen-Amen, provides an insight into the scale of their trading activity. One of the characters is Wereket-El, a Phoenician merchant living at Tanis in Egypts Nile delta. As many as 50 ships carry out his business, plying back and forth between the Nile and the Phoenician port of Sidon. The most prosperous period for Phoenicia was the 10th century BC, when the surrounding region was stable. Hiram, the king of the Phoenician city of Tyre, was an ally and business partner of Solomon, King of Israel. For Solomons temple in Jerusalem, Hiram provided craftsmen with particular skills that were needed for this major construction project. He also supplied materials particularly timber, including cedar from the forests of Lebanon. And the two kings went into trade in partnership. They sent out Phoenician vessels on long expeditions (of up to three years for the return trip) to bring back gold, sandalwood, ivory, monkeys and peacocks from Ophir. This is an unidentified place, probably on the east coast of Africa or the west coast of India. Phoenicia was famous for its luxury goods. The cedar wood was not only exported as top-quality timber for architecture and shipbuilding. It was also carved by the Phoenicians, and the same skill was adapted to even more precious work in ivory. The rare and expensive dye for cloth, Tyrian purple, complemented another famous local product, fine linen. The metalworkers of the region, particularly those working in gold, were famous. Tyre and Sidon were also known for their glass. These were the main products which the Phoenicians exported. In addition, as traders and middlemen, they took a commission on a much greater range of precious goods that they transported from elsewhere. The extensive trade of Phoenicia required much book-keeping and correspondence, and it was in the field of writing that the Phoenicians made their most lasting contribution to world history. The scripts in use in the world up to the second millennium BC (in Egypt, Mesopotamia or China) all required the writer to learn a large number of separate characters each of them expressing either a whole word or an element of its meaning. By contrast, the Phoenicians, in about 1500 BC, developed an entirely new approach to writing. The marks made (with a pointed tool called a stylus, on damp clay) now attempted to capture the sound of a word. This required an alphabet of individual letters. The trading and seafaring skills of the Phoenicians resulted in a network of colonies, spreading westwards through the Mediterranean. The first was probably Citium, in Cyprus, established in the 9th century BC. But the main expansion came from the 8th century BC onwards, when pressure from Assyria to the east disrupted the patterns of trade on the Phoenician coast. Trading colonies were developed on the string of islands in the centre of the Mediterranean Crete, Sicily, Malta, Sardinia, Ibiza and also on the coast of north Africa. 
The African colonies clustered in particular around the great promontory which, with Sicily opposite, forms the narrowest channel on the main Mediterranean sea route. This is the site of Carthage. Carthage was the largest of the towns founded by the Phoenicians on the north African coast, and it rapidly assumed a leading position among the neighbouring colonies. The traditional date of its founding is 814 BC, but archaeological evidence suggests that it was probably settled a little over a century later. The subsequent spread and growth of Phoenician colonies in the western Mediterranean, and even out to the Atlantic coasts of Africa and Spain, was as much the achievement of Carthage as of the original Phoenician trading cities such as Tyre and Sidon. But no doubt links were maintained with the homeland, and new colonists continued to travel west. From the 8th century BC, many of the coastal cities of Phoenicia came under the control of a succession of imperial powers, each of them defeated and replaced in the region by the next: first the Assyrians, then the Babylonians, Persians and Macedonian Greeks. In 64 BC, the area of Phoenicia became part of the Roman province of Syria. The Phoenicians as an identifiable people then faded from history, merging into the populations of modern Lebanon and northern Syria. | Carthage was an enemy town which the Phoenicians won in battle. | contradiction |
id_6429 | The Phoenicians: an almost forgotten people. The Phoenicians inhabited the region of modern Lebanon and Syria from about 3000 BC. They became the greatest traders of the pre-classical world, and were the first people to establish a large colonial network. Both of these activities were based on seafaring, an ability the Phoenicians developed from the example of their maritime predecessors, the Minoans of Crete. An Egyptian narrative of about 1080 BC, the Story of Wen-Amen, provides an insight into the scale of their trading activity. One of the characters is Wereket-El, a Phoenician merchant living at Tanis in Egypt's Nile delta. As many as 50 ships carry out his business, plying back and forth between the Nile and the Phoenician port of Sidon. The most prosperous period for Phoenicia was the 10th century BC, when the surrounding region was stable. Hiram, the king of the Phoenician city of Tyre, was an ally and business partner of Solomon, King of Israel. For Solomon's temple in Jerusalem, Hiram provided craftsmen with particular skills that were needed for this major construction project. He also supplied materials, particularly timber, including cedar from the forests of Lebanon. And the two kings went into trade in partnership. They sent out Phoenician vessels on long expeditions (of up to three years for the return trip) to bring back gold, sandalwood, ivory, monkeys and peacocks from Ophir. This is an unidentified place, probably on the east coast of Africa or the west coast of India. Phoenicia was famous for its luxury goods. The cedar wood was not only exported as top-quality timber for architecture and shipbuilding. It was also carved by the Phoenicians, and the same skill was adapted to even more precious work in ivory. The rare and expensive dye for cloth, Tyrian purple, complemented another famous local product, fine linen. The metalworkers of the region, particularly those working in gold, were famous. Tyre and Sidon were also known for their glass. These were the main products which the Phoenicians exported. In addition, as traders and middlemen, they took a commission on a much greater range of precious goods that they transported from elsewhere. The extensive trade of Phoenicia required much book-keeping and correspondence, and it was in the field of writing that the Phoenicians made their most lasting contribution to world history. The scripts in use in the world up to the second millennium BC (in Egypt, Mesopotamia or China) all required the writer to learn a large number of separate characters, each of them expressing either a whole word or an element of its meaning. By contrast, the Phoenicians, in about 1500 BC, developed an entirely new approach to writing. The marks made (with a pointed tool called a stylus, on damp clay) now attempted to capture the sound of a word. This required an alphabet of individual letters. The trading and seafaring skills of the Phoenicians resulted in a network of colonies, spreading westwards through the Mediterranean. The first was probably Citium, in Cyprus, established in the 9th century BC. But the main expansion came from the 8th century BC onwards, when pressure from Assyria to the east disrupted the patterns of trade on the Phoenician coast. Trading colonies were developed on the string of islands in the centre of the Mediterranean (Crete, Sicily, Malta, Sardinia, Ibiza) and also on the coast of north Africa.
The African colonies clustered in particular around the great promontory which, with Sicily opposite, forms the narrowest channel on the main Mediterranean sea route. This is the site of Carthage. Carthage was the largest of the towns founded by the Phoenicians on the north African coast, and it rapidly assumed a leading position among the neighbouring colonies. The traditional date of its founding is 814 BC, but archaeological evidence suggests that it was probably settled a little over a century later. The subsequent spread and growth of Phoenician colonies in the western Mediterranean, and even out to the Atlantic coasts of Africa and Spain, was as much the achievement of Carthage as of the original Phoenician trading cities such as Tyre and Sidon. But no doubt links were maintained with the homeland, and new colonists continued to travel west. From the 8th century BC, many of the coastal cities of Phoenicia came under the control of a succession of imperial powers, each of them defeated and replaced in the region by the next: first the Assyrians, then the Babylonians, Persians and Macedonian Greeks. In 64 BC, the area of Phoenicia became part of the Roman province of Syria. The Phoenicians as an identifiable people then faded from history, merging into the populations of modern Lebanon and northern Syria. | The Phoenicians welcomed Roman control of the area. | neutral |
id_6430 | The Placebo Effect. Want to devise a new form of alternative medicine? No problem. Here is the recipe. Be warm, sympathetic, reassuring and enthusiastic. Your treatment should involve physical contact, and each session with your patients should last at least half an hour. Encourage your patients to take an active part in their treatment and understand how their disorders relate to the rest of their lives. Tell them that their own bodies possess the true power to heal. Make them pay you out of their own pockets. Describe your treatment in familiar words, but embroidered with a hint of mysticism: energy fields, energy flows, energy blocks, meridians, forces, auras, rhythms and the like. Refer to the knowledge of an earlier age: wisdom carelessly swept aside by the rise and rise of blind, mechanistic science. Oh, come off it, you are saying. Something invented off the top of your head could not possibly work, could it? Well yes, it could and often well enough to earn you a living. A good living if you are sufficiently convincing, or, better still, really believes in your therapy. Many illnesses get better on their own, so if you are lucky and administer your treatment at just the right time you will get the credit. But that's only part of it. Some of the improvement really would be down to you. Your healing power would be the outcome of a paradoxical force that conventional medicine recognizes but remains oddly ambivalent about: the placebo effect. Placebos are treatments that have no direct effect on the body, yet still work because the patient has faith in their power to heal. Most often the term refers to a dummy pill, but it applies just as much to any device or procedure, from a sticking plaster to a crystal to an operation. The existence of the placebo effect implies that even quackery may confer real benefits, which is why any mention of placebo is a touchy subject for many practitioners of complementary and alternative medicine, who are likely to regard it as tantamount to a charge of charlatanism. In fact, the placebo effect is a powerful part of all medical care, orthodox or otherwise, though its role is often neglected or misunderstood. One of the great strengths of CAM (Complementary and Alternative Medicine) may be its practitioners' skill in deploying the placebo effect to accomplish real healing. Complementary practitioners are miles better at producing non-specific effects and good therapeutic relationships, says Edzard Ernst, professor of CAM at Exeter University. The question is whether CAM could be integrated into conventional medicine, as some would like, without losing much of this power. At one level, it should come as no surprise that our state of mind can influence our physiology: anger opens the superficial blood vessels of the face; sadness pumps the tear glands. But exactly how placebos work their medical magic is still largely unknown. Most of the scant research done so far has focused on the control of pain, because it's one of the commonest complaints and lends itself to experimental study. Here, attention has turned to the endorphins, morphine-like neurochemicals known to help control pain. But exactly how placebos work their medical magic is still largely unknown. Any of the neurochemicals involved in transmitting pain impulses or modulating them might also be involved in generating the placebo response, says Don Price, an oral surgeon at the University of Florida who studies the placebo effect in dental pain. But endorphins are still out in front. 
That case has been strengthened by the recent work of Fabrizio Benedetti of the University of Turin, who showed that the placebo effect can be abolished by a drug, naloxone, which blocks the effects of endorphins. Benedetti induced pain in human volunteers by inflating a blood-pressure cuff on the forearm. He did this several times a day for several days, using morphine each time to control the pain. On the final day, without saying anything, he replaced the morphine with a saline solution. This still relieved the subjects' pain: a placebo effect. But when he added naloxone to the saline the pain relief disappeared. Here was direct proof that placebo analgesia is mediated, at least in part, by these natural opiates. Still, no one knows how belief triggers endorphin release, or why most people can't achieve placebo pain relief simply by willing it. Though scientists don't know exactly how placebos work, they have accumulated a fair bit of knowledge about how to trigger the effect. A London rheumatologist found, for example, that red dummy capsules made more effective painkillers than blue, green or yellow ones. Research on American students revealed that blue pills make better sedatives than pink, a color more suitable for stimulants. Even branding can make a difference: if Aspro or Tylenol is what you like to take for a headache, their chemically identical generic equivalents may be less effective. It matters, too, how the treatment is delivered. Decades ago, when the major tranquillizer chlorpromazine was being introduced, a doctor in Kansas categorized his colleagues according to whether they were keen on it, openly skeptical of its benefits, or took a let's try and see attitude. His conclusion: the more enthusiastic the doctor, the better the drug performed. And this year Ernst surveyed published studies that compared doctors' bedside manners. The studies turned up one consistent finding: Physicians who adopt a warm, friendly and reassuring manner, he reported, are more effective than those whose consultations are formal and do not offer reassurance. Warm, friendly and reassuring is precisely CAM'S strong suits, of course. Many of the ingredients of that opening recipe the physical contact, the generous swathes of time, the strong hints of supernormal healing power are just the kind of thing likely to impress patients. It's hardly surprising, then, that complementary practitioners are generally best at mobilizing the placebo effect, says Arthur Kleinman, professor of social anthropology at Harvard University. | A London based researcher discovered that red pills should be taken off the market. | neutral |
id_6431 | The Placebo Effect. Want to devise a new form of alternative medicine? No problem. Here is the recipe. Be warm, sympathetic, reassuring and enthusiastic. Your treatment should involve physical contact, and each session with your patients should last at least half an hour. Encourage your patients to take an active part in their treatment and understand how their disorders relate to the rest of their lives. Tell them that their own bodies possess the true power to heal. Make them pay you out of their own pockets. Describe your treatment in familiar words, but embroidered with a hint of mysticism: energy fields, energy flows, energy blocks, meridians, forces, auras, rhythms and the like. Refer to the knowledge of an earlier age: wisdom carelessly swept aside by the rise and rise of blind, mechanistic science. Oh, come off it, you are saying. Something invented off the top of your head could not possibly work, could it? Well yes, it could and often well enough to earn you a living. A good living if you are sufficiently convincing, or, better still, really believes in your therapy. Many illnesses get better on their own, so if you are lucky and administer your treatment at just the right time you will get the credit. But that's only part of it. Some of the improvement really would be down to you. Your healing power would be the outcome of a paradoxical force that conventional medicine recognizes but remains oddly ambivalent about: the placebo effect. Placebos are treatments that have no direct effect on the body, yet still work because the patient has faith in their power to heal. Most often the term refers to a dummy pill, but it applies just as much to any device or procedure, from a sticking plaster to a crystal to an operation. The existence of the placebo effect implies that even quackery may confer real benefits, which is why any mention of placebo is a touchy subject for many practitioners of complementary and alternative medicine, who are likely to regard it as tantamount to a charge of charlatanism. In fact, the placebo effect is a powerful part of all medical care, orthodox or otherwise, though its role is often neglected or misunderstood. One of the great strengths of CAM (Complementary and Alternative Medicine) may be its practitioners' skill in deploying the placebo effect to accomplish real healing. Complementary practitioners are miles better at producing non-specific effects and good therapeutic relationships, says Edzard Ernst, professor of CAM at Exeter University. The question is whether CAM could be integrated into conventional medicine, as some would like, without losing much of this power. At one level, it should come as no surprise that our state of mind can influence our physiology: anger opens the superficial blood vessels of the face; sadness pumps the tear glands. But exactly how placebos work their medical magic is still largely unknown. Most of the scant research done so far has focused on the control of pain, because it's one of the commonest complaints and lends itself to experimental study. Here, attention has turned to the endorphins, morphine-like neurochemicals known to help control pain. But exactly how placebos work their medical magic is still largely unknown. Any of the neurochemicals involved in transmitting pain impulses or modulating them might also be involved in generating the placebo response, says Don Price, an oral surgeon at the University of Florida who studies the placebo effect in dental pain. But endorphins are still out in front. 
That case has been strengthened by the recent work of Fabrizio Benedetti of the University of Turin, who showed that the placebo effect can be abolished by a drug, naloxone, which blocks the effects of endorphins. Benedetti induced pain in human volunteers by inflating a blood-pressure cuff on the forearm. He did this several times a day for several days, using morphine each time to control the pain. On the final day, without saying anything, he replaced the morphine with a saline solution. This still relieved the subjects' pain: a placebo effect. But when he added naloxone to the saline the pain relief disappeared. Here was direct proof that placebo analgesia is mediated, at least in part, by these natural opiates. Still, no one knows how belief triggers endorphin release, or why most people can't achieve placebo pain relief simply by willing it. Though scientists don't know exactly how placebos work, they have accumulated a fair bit of knowledge about how to trigger the effect. A London rheumatologist found, for example, that red dummy capsules made more effective painkillers than blue, green or yellow ones. Research on American students revealed that blue pills make better sedatives than pink, a color more suitable for stimulants. Even branding can make a difference: if Aspro or Tylenol is what you like to take for a headache, their chemically identical generic equivalents may be less effective. It matters, too, how the treatment is delivered. Decades ago, when the major tranquillizer chlorpromazine was being introduced, a doctor in Kansas categorized his colleagues according to whether they were keen on it, openly skeptical of its benefits, or took a let's try and see attitude. His conclusion: the more enthusiastic the doctor, the better the drug performed. And this year Ernst surveyed published studies that compared doctors' bedside manners. The studies turned up one consistent finding: Physicians who adopt a warm, friendly and reassuring manner, he reported, are more effective than those whose consultations are formal and do not offer reassurance. Warm, friendly and reassuring is precisely CAM'S strong suits, of course. Many of the ingredients of that opening recipe the physical contact, the generous swathes of time, the strong hints of supernormal healing power are just the kind of thing likely to impress patients. It's hardly surprising, then, that complementary practitioners are generally best at mobilizing the placebo effect, says Arthur Kleinman, professor of social anthropology at Harvard University. | Medical doctors have a range of views of the newly introduced drug of chlorpromazine. | entailment |
id_6432 | The Placebo Effect. Want to devise a new form of alternative medicine? No problem. Here is the recipe. Be warm, sympathetic, reassuring and enthusiastic. Your treatment should involve physical contact, and each session with your patients should last at least half an hour. Encourage your patients to take an active part in their treatment and understand how their disorders relate to the rest of their lives. Tell them that their own bodies possess the true power to heal. Make them pay you out of their own pockets. Describe your treatment in familiar words, but embroidered with a hint of mysticism: energy fields, energy flows, energy blocks, meridians, forces, auras, rhythms and the like. Refer to the knowledge of an earlier age: wisdom carelessly swept aside by the rise and rise of blind, mechanistic science. Oh, come off it, you are saying. Something invented off the top of your head could not possibly work, could it? Well yes, it could and often well enough to earn you a living. A good living if you are sufficiently convincing, or, better still, really believes in your therapy. Many illnesses get better on their own, so if you are lucky and administer your treatment at just the right time you will get the credit. But that's only part of it. Some of the improvement really would be down to you. Your healing power would be the outcome of a paradoxical force that conventional medicine recognizes but remains oddly ambivalent about: the placebo effect. Placebos are treatments that have no direct effect on the body, yet still work because the patient has faith in their power to heal. Most often the term refers to a dummy pill, but it applies just as much to any device or procedure, from a sticking plaster to a crystal to an operation. The existence of the placebo effect implies that even quackery may confer real benefits, which is why any mention of placebo is a touchy subject for many practitioners of complementary and alternative medicine, who are likely to regard it as tantamount to a charge of charlatanism. In fact, the placebo effect is a powerful part of all medical care, orthodox or otherwise, though its role is often neglected or misunderstood. One of the great strengths of CAM (Complementary and Alternative Medicine) may be its practitioners' skill in deploying the placebo effect to accomplish real healing. Complementary practitioners are miles better at producing non-specific effects and good therapeutic relationships, says Edzard Ernst, professor of CAM at Exeter University. The question is whether CAM could be integrated into conventional medicine, as some would like, without losing much of this power. At one level, it should come as no surprise that our state of mind can influence our physiology: anger opens the superficial blood vessels of the face; sadness pumps the tear glands. But exactly how placebos work their medical magic is still largely unknown. Most of the scant research done so far has focused on the control of pain, because it's one of the commonest complaints and lends itself to experimental study. Here, attention has turned to the endorphins, morphine-like neurochemicals known to help control pain. But exactly how placebos work their medical magic is still largely unknown. Any of the neurochemicals involved in transmitting pain impulses or modulating them might also be involved in generating the placebo response, says Don Price, an oral surgeon at the University of Florida who studies the placebo effect in dental pain. But endorphins are still out in front. 
That case has been strengthened by the recent work of Fabrizio Benedetti of the University of Turin, who showed that the placebo effect can be abolished by a drug, naloxone, which blocks the effects of endorphins. Benedetti induced pain in human volunteers by inflating a blood-pressure cuff on the forearm. He did this several times a day for several days, using morphine each time to control the pain. On the final day, without saying anything, he replaced the morphine with a saline solution. This still relieved the subjects' pain: a placebo effect. But when he added naloxone to the saline the pain relief disappeared. Here was direct proof that placebo analgesia is mediated, at least in part, by these natural opiates. Still, no one knows how belief triggers endorphin release, or why most people can't achieve placebo pain relief simply by willing it. Though scientists don't know exactly how placebos work, they have accumulated a fair bit of knowledge about how to trigger the effect. A London rheumatologist found, for example, that red dummy capsules made more effective painkillers than blue, green or yellow ones. Research on American students revealed that blue pills make better sedatives than pink, a color more suitable for stimulants. Even branding can make a difference: if Aspro or Tylenol is what you like to take for a headache, their chemically identical generic equivalents may be less effective. It matters, too, how the treatment is delivered. Decades ago, when the major tranquillizer chlorpromazine was being introduced, a doctor in Kansas categorized his colleagues according to whether they were keen on it, openly skeptical of its benefits, or took a let's try and see attitude. His conclusion: the more enthusiastic the doctor, the better the drug performed. And this year Ernst surveyed published studies that compared doctors' bedside manners. The studies turned up one consistent finding: Physicians who adopt a warm, friendly and reassuring manner, he reported, are more effective than those whose consultations are formal and do not offer reassurance. Warm, friendly and reassuring is precisely CAM'S strong suits, of course. Many of the ingredients of that opening recipe the physical contact, the generous swathes of time, the strong hints of supernormal healing power are just the kind of thing likely to impress patients. It's hardly surprising, then, that complementary practitioners are generally best at mobilizing the placebo effect, says Arthur Kleinman, professor of social anthropology at Harvard University. | People's preference on brands would also have effect on their healing. | entailment |
id_6433 | The Placebo Effect. Want to devise a new form of alternative medicine? No problem. Here is the recipe. Be warm, sympathetic, reassuring and enthusiastic. Your treatment should involve physical contact, and each session with your patients should last at least half an hour. Encourage your patients to take an active part in their treatment and understand how their disorders relate to the rest of their lives. Tell them that their own bodies possess the true power to heal. Make them pay you out of their own pockets. Describe your treatment in familiar words, but embroidered with a hint of mysticism: energy fields, energy flows, energy blocks, meridians, forces, auras, rhythms and the like. Refer to the knowledge of an earlier age: wisdom carelessly swept aside by the rise and rise of blind, mechanistic science. Oh, come off it, you are saying. Something invented off the top of your head could not possibly work, could it? Well yes, it could and often well enough to earn you a living. A good living if you are sufficiently convincing, or, better still, really believes in your therapy. Many illnesses get better on their own, so if you are lucky and administer your treatment at just the right time you will get the credit. But that's only part of it. Some of the improvement really would be down to you. Your healing power would be the outcome of a paradoxical force that conventional medicine recognizes but remains oddly ambivalent about: the placebo effect. Placebos are treatments that have no direct effect on the body, yet still work because the patient has faith in their power to heal. Most often the term refers to a dummy pill, but it applies just as much to any device or procedure, from a sticking plaster to a crystal to an operation. The existence of the placebo effect implies that even quackery may confer real benefits, which is why any mention of placebo is a touchy subject for many practitioners of complementary and alternative medicine, who are likely to regard it as tantamount to a charge of charlatanism. In fact, the placebo effect is a powerful part of all medical care, orthodox or otherwise, though its role is often neglected or misunderstood. One of the great strengths of CAM (Complementary and Alternative Medicine) may be its practitioners' skill in deploying the placebo effect to accomplish real healing. Complementary practitioners are miles better at producing non-specific effects and good therapeutic relationships, says Edzard Ernst, professor of CAM at Exeter University. The question is whether CAM could be integrated into conventional medicine, as some would like, without losing much of this power. At one level, it should come as no surprise that our state of mind can influence our physiology: anger opens the superficial blood vessels of the face; sadness pumps the tear glands. But exactly how placebos work their medical magic is still largely unknown. Most of the scant research done so far has focused on the control of pain, because it's one of the commonest complaints and lends itself to experimental study. Here, attention has turned to the endorphins, morphine-like neurochemicals known to help control pain. But exactly how placebos work their medical magic is still largely unknown. Any of the neurochemicals involved in transmitting pain impulses or modulating them might also be involved in generating the placebo response, says Don Price, an oral surgeon at the University of Florida who studies the placebo effect in dental pain. But endorphins are still out in front. 
That case has been strengthened by the recent work of Fabrizio Benedetti of the University of Turin, who showed that the placebo effect can be abolished by a drug, naloxone, which blocks the effects of endorphins. Benedetti induced pain in human volunteers by inflating a blood-pressure cuff on the forearm. He did this several times a day for several days, using morphine each time to control the pain. On the final day, without saying anything, he replaced the morphine with a saline solution. This still relieved the subjects' pain: a placebo effect. But when he added naloxone to the saline the pain relief disappeared. Here was direct proof that placebo analgesia is mediated, at least in part, by these natural opiates. Still, no one knows how belief triggers endorphin release, or why most people can't achieve placebo pain relief simply by willing it. Though scientists don't know exactly how placebos work, they have accumulated a fair bit of knowledge about how to trigger the effect. A London rheumatologist found, for example, that red dummy capsules made more effective painkillers than blue, green or yellow ones. Research on American students revealed that blue pills make better sedatives than pink, a color more suitable for stimulants. Even branding can make a difference: if Aspro or Tylenol is what you like to take for a headache, their chemically identical generic equivalents may be less effective. It matters, too, how the treatment is delivered. Decades ago, when the major tranquillizer chlorpromazine was being introduced, a doctor in Kansas categorized his colleagues according to whether they were keen on it, openly skeptical of its benefits, or took a let's try and see attitude. His conclusion: the more enthusiastic the doctor, the better the drug performed. And this year Ernst surveyed published studies that compared doctors' bedside manners. The studies turned up one consistent finding: Physicians who adopt a warm, friendly and reassuring manner, he reported, are more effective than those whose consultations are formal and do not offer reassurance. Warm, friendly and reassuring is precisely CAM'S strong suits, of course. Many of the ingredients of that opening recipe the physical contact, the generous swathes of time, the strong hints of supernormal healing power are just the kind of thing likely to impress patients. It's hardly surprising, then, that complementary practitioners are generally best at mobilizing the placebo effect, says Arthur Kleinman, professor of social anthropology at Harvard University. | There is enough information for scientists to fully understand the placebo effect | contradiction |
id_6434 | The Placebo Effect. Want to devise a new form of alternative medicine? No problem. Here is the recipe. Be warm, sympathetic, reassuring and enthusiastic. Your treatment should involve physical contact, and each session with your patients should last at least half an hour. Encourage your patients to take an active part in their treatment and understand how their disorders relate to the rest of their lives. Tell them that their own bodies possess the true power to heal. Make them pay you out of their own pockets. Describe your treatment in familiar words, but embroidered with a hint of mysticism: energy fields, energy flows, energy blocks, meridians, forces, auras, rhythms and the like. Refer to the knowledge of an earlier age: wisdom carelessly swept aside by the rise and rise of blind, mechanistic science. Oh, come off it, you are saying. Something invented off the top of your head could not possibly work, could it? Well yes, it could and often well enough to earn you a living. A good living if you are sufficiently convincing, or, better still, really believes in your therapy. Many illnesses get better on their own, so if you are lucky and administer your treatment at just the right time you will get the credit. But that's only part of it. Some of the improvement really would be down to you. Your healing power would be the outcome of a paradoxical force that conventional medicine recognizes but remains oddly ambivalent about: the placebo effect. Placebos are treatments that have no direct effect on the body, yet still work because the patient has faith in their power to heal. Most often the term refers to a dummy pill, but it applies just as much to any device or procedure, from a sticking plaster to a crystal to an operation. The existence of the placebo effect implies that even quackery may confer real benefits, which is why any mention of placebo is a touchy subject for many practitioners of complementary and alternative medicine, who are likely to regard it as tantamount to a charge of charlatanism. In fact, the placebo effect is a powerful part of all medical care, orthodox or otherwise, though its role is often neglected or misunderstood. One of the great strengths of CAM (Complementary and Alternative Medicine) may be its practitioners' skill in deploying the placebo effect to accomplish real healing. Complementary practitioners are miles better at producing non-specific effects and good therapeutic relationships, says Edzard Ernst, professor of CAM at Exeter University. The question is whether CAM could be integrated into conventional medicine, as some would like, without losing much of this power. At one level, it should come as no surprise that our state of mind can influence our physiology: anger opens the superficial blood vessels of the face; sadness pumps the tear glands. But exactly how placebos work their medical magic is still largely unknown. Most of the scant research done so far has focused on the control of pain, because it's one of the commonest complaints and lends itself to experimental study. Here, attention has turned to the endorphins, morphine-like neurochemicals known to help control pain. But exactly how placebos work their medical magic is still largely unknown. Any of the neurochemicals involved in transmitting pain impulses or modulating them might also be involved in generating the placebo response, says Don Price, an oral surgeon at the University of Florida who studies the placebo effect in dental pain. But endorphins are still out in front. 
That case has been strengthened by the recent work of Fabrizio Benedetti of the University of Turin, who showed that the placebo effect can be abolished by a drug, naloxone, which blocks the effects of endorphins. Benedetti induced pain in human volunteers by inflating a blood-pressure cuff on the forearm. He did this several times a day for several days, using morphine each time to control the pain. On the final day, without saying anything, he replaced the morphine with a saline solution. This still relieved the subjects' pain: a placebo effect. But when he added naloxone to the saline the pain relief disappeared. Here was direct proof that placebo analgesia is mediated, at least in part, by these natural opiates. Still, no one knows how belief triggers endorphin release, or why most people can't achieve placebo pain relief simply by willing it. Though scientists don't know exactly how placebos work, they have accumulated a fair bit of knowledge about how to trigger the effect. A London rheumatologist found, for example, that red dummy capsules made more effective painkillers than blue, green or yellow ones. Research on American students revealed that blue pills make better sedatives than pink, a color more suitable for stimulants. Even branding can make a difference: if Aspro or Tylenol is what you like to take for a headache, their chemically identical generic equivalents may be less effective. It matters, too, how the treatment is delivered. Decades ago, when the major tranquillizer chlorpromazine was being introduced, a doctor in Kansas categorized his colleagues according to whether they were keen on it, openly skeptical of its benefits, or took a let's try and see attitude. His conclusion: the more enthusiastic the doctor, the better the drug performed. And this year Ernst surveyed published studies that compared doctors' bedside manners. The studies turned up one consistent finding: Physicians who adopt a warm, friendly and reassuring manner, he reported, are more effective than those whose consultations are formal and do not offer reassurance. Warm, friendly and reassuring is precisely CAM'S strong suits, of course. Many of the ingredients of that opening recipe the physical contact, the generous swathes of time, the strong hints of supernormal healing power are just the kind of thing likely to impress patients. It's hardly surprising, then, that complementary practitioners are generally best at mobilizing the placebo effect, says Arthur Kleinman, professor of social anthropology at Harvard University. | Alternative practitioners are seldom known for applying placebo effect. | contradiction |
id_6435 | The Power of Nothing. Want to devise a new form of alternative medicine? No problem. Here is the recipe. Be warm, sympathetic, reassuring and enthusiastic. Your treatment should involve physical contact, and each session with your patients should last at least half an hour. Encourage your patients to take an active part in their treatment and understand how their disorders relate to the rest of their lives. Tell them that their own bodies possess the true power to heal. Make them pay you out of their own pockets. Describe your treatment in familiar words, but embroidered with a hint of mysticism: energy fields, energy flows, energy blocks, meridians, forces, auras, rhythms and the like. Refer to the knowledge of an earlier age: wisdom carelessly swept aside by the rise and rise of blind, mechanistic science. Oh, come off it, you are saying. Something invented off the top of your head could not possibly work, could it? Well yes, it could and often well enough to earn you a living. A good living if you are sufficiently convincing, or better still, really believe in your therapy. Many illnesses get better on their own, so if you are lucky and administer your treatment at just the right time you will get the credit. But that's only part of it. Some of the improvements really would be down to you. Your healing power would be the outcome of a paradoxical force that conventional medicine recognizes but remains oddly ambivalent about: the placebo effect. Placebos are treatments that have no direct effect on the body, yet still work because the patient has faith in their power to heal. Most often the term refers to a dummy pill, but it applies just as much to any device or procedure, from a sticking plaster to a crystal to an operation. The existence of the placebo effect implies that even quackery may confer real benefits, which is why any mention of placebo is a touchy subject for many practitioners of complementary and alternative medicine, who are likely to regard it as tantamount to a charge of charlatanism. In fact, the placebo effect is a powerful part of all medical care, orthodox or otherwise, though its role is often neglected or misunderstood. One of the great strengths of CAM may be its practitioners' skill in deploying the placebo effect to accomplish real healing. Complementary practitioners are miles better at producing non-specific effects and good therapeutic relationships, says Edzard Ernst, professor of CAM at Exeter University. The question is whether CAM could be integrated into conventional medicine, as some would like, without losing much of this power. At one level, it should come as no surprise that our state of mind can influence our physiology: anger opens the superficial blood vessels of the face; sadness pumps the tear glands. But exactly how placebos work their medical magic is still largely unknown. Most of the scant research done so far has focused on the control of pain, because it's one of the commonest complaints and lends itself to experimental study. Here, attention has turned to the endorphins, morphine-like neurochemicals known to help control pain. But exactly how placebos work their medical magic is still largely unknown. Most of the scant research to date has focused on the control of pain, because it's one of the commonest complaints and lends itself to experimental study. Here, attention has turned to the endorphins, natural counterparts of morphine that are known to help control pain.
Any of the neurochemicals involved in transmitting pain impulses or modulating them might also be involved in generating the placebo response, says Don Price, an oral surgeon at the University of Florida who studies the placebo effect in dental pain. But endorphins are still out in front. That case has been strengthened by the recent work of Fabrizio Benedetti of the University of Turin, who showed that the placebo effect can be abolished by a drug, naloxone, which blocks the effects of endorphins. Benedetti induced pain in human volunteers by inflating a blood-pressure cuff on the forearm. He did this several times a day for several days, using morphine each time to control the pain. On the final day, without saying anything, he replaced the morphine with a saline solution. This still relieved the subjects' pain: a placebo effect. But when he added naloxone to the saline the pain relief disappeared. Here was direct proof that placebo analgesia is mediated, at least in part, by these natural opiates. Still, no one knows how belief triggers endorphin release, or why most people can't achieve placebo pain relief simply by willing it. Though scientists don't know exactly how placebos work, they have accumulated a fair bit of knowledge about how to trigger the effect. A London rheumatologist found, for example, that red dummy capsules made more effective painkillers than blue, green or yellow ones. Research on American students revealed that blue pills make better sedatives than pink, a color more suitable for stimulants. Even branding can make a difference: if Aspro or Tylenol are what you like to take for a headache, their chemically identical generic equivalents may be less effective. It matters, too, how the treatment is delivered. Decades ago, when the major tranquilizer chlorpromazine was being introduced, a doctor in Kansas categorized his colleagues according to whether they were keen on it, openly skeptical of its benefits or took a let's try and see attitude. His conclusion: the more enthusiastic the doctor, the better the drug performed. And this year Ernst surveyed published studies that compared doctors' bedside manners. The studies turned up one consistent finding: Physicians who adopt a warm, friendly and reassuring manner, he reported, are more effective than those whose consultations are formal and do not offer reassurance. Warm, friendly and reassuring are precisely CAM's strong suits, of course. Many of the ingredients of that opening recipe (the physical contact, the generous swathes of time, the strong hints of supernormal healing power) are just the kind of thing likely to impress patients. It's hardly surprising, then, that complementary practitioners are generally best at mobilizing the placebo effect, says Arthur Kleinman, professor of social anthropology at Harvard University. | Alternative practitioners are seldom known for applying the placebo effect. | contradiction
id_6436 | The Power of Nothing. Want to devise a new form of alternative medicine? No problem. Here is the recipe. Be warm, sympathetic, reassuring and enthusiastic. Your treatment should involve physical contact, and each session with your patients should last at least half an hour. Encourage your patients to take an active part in their treatment and understand how their disorders relate to the rest of their lives. Tell them that their own bodies possess the true power to heal. Make them pay you out of their own pockets. Describe your treatment in familiar words, but embroidered with a hint of mysticism: energy fields, energy flows, energy blocks, meridians, forces, auras, rhythms and the like. Refer to the knowledge of an earlier age: wisdom carelessly swept aside by the rise and rise of blind, mechanistic science. Oh, come off it, you are saying. Something invented off the top of your head could not possibly work, could it? Well yes, it could and often well enough to earn you a living. A good living if you are sufficiently convincing, or better still, really believe in your therapy. Many illnesses get better on their own, so if you are lucky and administer your treatment at just the right time you will get the credit. But that's only part of it. Some of the improvements really would be down to you. Your healing power would be the outcome of a paradoxical force that conventional medicine recognizes but remains oddly ambivalent about: the placebo effect. Placebos are treatments that have no direct effect on the body, yet still work because the patient has faith in their power to heal. Most often the term refers to a dummy pill, but it applies just as much to any device or procedure, from a sticking plaster to a crystal to an operation. The existence of the placebo effect implies that even quackery may confer real benefits, which is why any mention of placebo is a touchy subject for many practitioners of complementary and alternative medicine, who are likely to regard it as tantamount to a charge of charlatanism. In fact, the placebo effect is a powerful part of all medical care, orthodox or otherwise, though its role is often neglected or misunderstood. One of the great strengths of CAM may be its practitioners' skill in deploying the placebo effect to accomplish real healing. Complementary practitioners are miles better at producing non-specific effects and good therapeutic relationships, says Edzard Ernst, professor of CAM at Exeter University. The question is whether CAM could be integrated into conventional medicine, as some would like, without losing much of this power. At one level, it should come as no surprise that our state of mind can influence our physiology: anger opens the superficial blood vessels of the face; sadness pumps the tear glands. But exactly how placebos work their medical magic is still largely unknown. Most of the scant research done so far has focused on the control of pain, because it's one of the commonest complaints and lends itself to experimental study. Here, attention has turned to the endorphins, morphine-like neurochemicals known to help control pain. But exactly how placebos work their medical magic is still largely unknown. Most of the scant research to date has focused on the control of pain, because it's one of the commonest complaints and lends itself to experimental study. Here, attention has turned to the endorphins, natural counterparts of morphine that are known to help control pain.
Any of the neurochemicals involved in transmitting pain impulses or modulating them might also be involved in generating the placebo response, says Don Price, an oral surgeon at the University of Florida who studies the placebo effect in dental pain. But endorphins are still out in front. That case has been strengthened by the recent work of Fabrizio Benedetti of the University of Turin, who showed that the placebo effect can be abolished by a drug, naloxone, which blocks the effects of endorphins. Benedetti induced pain in human volunteers by inflating a blood-pressure cuff on the forearm. He did this several times a day for several days, using morphine each time to control the pain. On the final day, without saying anything, he replaced the morphine with a saline solution. This still relieved the subjects' pain: a placebo effect. But when he added naloxone to the saline the pain relief disappeared. Here was direct proof that placebo analgesia is mediated, at least in part, by these natural opiates. Still, no one knows how belief triggers endorphin release, or why most people can't achieve placebo pain relief simply by willing it. Though scientists don't know exactly how placebos work, they have accumulated a fair bit of knowledge about how to trigger the effect. A London rheumatologist found, for example, that red dummy capsules made more effective painkillers than blue, green or yellow ones. Research on American students revealed that blue pills make better sedatives than pink, a color more suitable for stimulants. Even branding can make a difference: if Aspro or Tylenol are what you like to take for a headache, their chemically identical generic equivalents may be less effective. It matters, too, how the treatment is delivered. Decades ago, when the major tranquilizer chlorpromazine was being introduced, a doctor in Kansas categorized his colleagues according to whether they were keen on it, openly skeptical of its benefits or took a let's try and see attitude. His conclusion: the more enthusiastic the doctor, the better the drug performed. And this year Ernst surveyed published studies that compared doctors' bedside manners. The studies turned up one consistent finding: Physicians who adopt a warm, friendly and reassuring manner, he reported, are more effective than those whose consultations are formal and do not offer reassurance. Warm, friendly and reassuring are precisely CAM's strong suits, of course. Many of the ingredients of that opening recipe (the physical contact, the generous swathes of time, the strong hints of supernormal healing power) are just the kind of thing likely to impress patients. It's hardly surprising, then, that complementary practitioners are generally best at mobilizing the placebo effect, says Arthur Kleinman, professor of social anthropology at Harvard University. | There is enough information for scientists to fully understand the placebo effect. | contradiction
id_6437 | The Power of Nothing. Want to devise a new form of alternative medicine? No problem. Here is the recipe. Be warm, sympathetic, reassuring and enthusiastic. Your treatment should involve physical contact, and each session with your patients should last at least half an hour. Encourage your patients to take an active part in their treatment and understand how their disorders relate to the rest of their lives. Tell them that their own bodies possess the true power to heal. Make them pay you out of their own pockets. Describe your treatment in familiar words, but embroidered with a hint of mysticism: energy fields, energy flows, energy blocks, meridians, forces, auras, rhythms and the like. Refer to the knowledge of an earlier age: wisdom carelessly swept aside by the rise and rise of blind, mechanistic science. Oh, come off it, you are saying. Something invented off the top of your head could not possibly work, could it? Well yes, it could and often well enough to earn you a living. A good living if you are sufficiently convincing, or better still, really believe in your therapy. Many illnesses get better on their own, so if you are lucky and administer your treatment at just the right time you will get the credit. But that's only part of it. Some of the improvements really would be down to you. Your healing power would be the outcome of a paradoxical force that conventional medicine recognizes but remains oddly ambivalent about: the placebo effect. Placebos are treatments that have no direct effect on the body, yet still work because the patient has faith in their power to heal. Most often the term refers to a dummy pill, but it applies just as much to any device or procedure, from a sticking plaster to a crystal to an operation. The existence of the placebo effect implies that even quackery may confer real benefits, which is why any mention of placebo is a touchy subject for many practitioners of complementary and alternative medicine, who are likely to regard it as tantamount to a charge of charlatanism. In fact, the placebo effect is a powerful part of all medical care, orthodox or otherwise, though its role is often neglected or misunderstood. One of the great strengths of CAM may be its practitioners' skill in deploying the placebo effect to accomplish real healing. Complementary practitioners are miles better at producing non-specific effects and good therapeutic relationships, says Edzard Ernst, professor of CAM at Exeter University. The question is whether CAM could be integrated into conventional medicine, as some would like, without losing much of this power. At one level, it should come as no surprise that our state of mind can influence our physiology: anger opens the superficial blood vessels of the face; sadness pumps the tear glands. But exactly how placebos work their medical magic is still largely unknown. Most of the scant research done so far has focused on the control of pain, because it's one of the commonest complaints and lends itself to experimental study. Here, attention has turned to the endorphins, morphine-like neurochemicals known to help control pain. But exactly how placebos work their medical magic is still largely unknown. Most of the scant research to date has focused on the control of pain, because it's one of the commonest complaints and lends itself to experimental study. Here, attention has turned to the endorphins, natural counterparts of morphine that are known to help control pain.
Any of the neurochemicals involved in transmitting pain impulses or modulating them might also be involved in generating the placebo response, says Don Price, an oral surgeon at the University of Florida who studies the placebo effect in dental pain. But endorphins are still out in front. That case has been strengthened by the recent work of Fabrizio Benedetti of the University of Turin, who showed that the placebo effect can be abolished by a drug, naloxone, which blocks the effects of endorphins. Benedetti induced pain in human volunteers by inflating a blood-pressure cuff on the forearm. He did this several times a day for several days, using morphine each time to control the pain. On the final day, without saying anything, he replaced the morphine with a saline solution. This still relieved the subjects' pain: a placebo effect. But when he added naloxone to the saline the pain relief disappeared. Here was direct proof that placebo analgesia is mediated, at least in part, by these natural opiates. Still, no one knows how belief triggers endorphin release, or why most people can't achieve placebo pain relief simply by willing it. Though scientists don't know exactly how placebos work, they have accumulated a fair bit of knowledge about how to trigger the effect. A London rheumatologist found, for example, that red dummy capsules made more effective painkillers than blue, green or yellow ones. Research on American students revealed that blue pills make better sedatives than pink, a color more suitable for stimulants. Even branding can make a difference: if Aspro or Tylenol are what you like to take for a headache, their chemically identical generic equivalents may be less effective. It matters, too, how the treatment is delivered. Decades ago, when the major tranquilizer chlorpromazine was being introduced, a doctor in Kansas categorized his colleagues according to whether they were keen on it, openly skeptical of its benefits or took a let's try and see attitude. His conclusion: the more enthusiastic the doctor, the better the drug performed. And this year Ernst surveyed published studies that compared doctors' bedside manners. The studies turned up one consistent finding: Physicians who adopt a warm, friendly and reassuring manner, he reported, are more effective than those whose consultations are formal and do not offer reassurance. Warm, friendly and reassuring are precisely CAM's strong suits, of course. Many of the ingredients of that opening recipe (the physical contact, the generous swathes of time, the strong hints of supernormal healing power) are just the kind of thing likely to impress patients. It's hardly surprising, then, that complementary practitioners are generally best at mobilizing the placebo effect, says Arthur Kleinman, professor of social anthropology at Harvard University. | A London based researcher discovered that red pills should be taken off the market. | neutral
id_6438 | The Power of Nothing. Want to devise a new form of alternative medicine? No problem. Here is the recipe. Be warm, sympathetic, reassuring and enthusiastic. Your treatment should involve physical contact, and each session with your patients should last at least half an hour. Encourage your patients to take an active part in their treatment and understand how their disorders relate to the rest of their lives. Tell them that their own bodies possess the true power to heal. Make them pay you out of their own pockets. Describe your treatment in familiar words, but embroidered with a hint of mysticism: energy fields, energy flows, energy blocks, meridians, forces, auras, rhythms and the like. Refer to the knowledge of an earlier age: wisdom carelessly swept aside by the rise and rise of blind, mechanistic science. Oh, come off it, you are saying. Something invented off the top of your head could not possibly work, could it? Well yes, it could and often well enough to earn you a living. A good living if you are sufficiently convincing, or better still, really believe in your therapy. Many illnesses get better on their own, so if you are lucky and administer your treatment at just the right time you will get the credit. But that's only part of it. Some of the improvements really would be down to you. Your healing power would be the outcome of a paradoxical force that conventional medicine recognizes but remains oddly ambivalent about: the placebo effect. Placebos are treatments that have no direct effect on the body, yet still work because the patient has faith in their power to heal. Most often the term refers to a dummy pill, but it applies just as much to any device or procedure, from a sticking plaster to a crystal to an operation. The existence of the placebo effect implies that even quackery may confer real benefits, which is why any mention of placebo is a touchy subject for many practitioners of complementary and alternative medicine, who are likely to regard it as tantamount to a charge of charlatanism. In fact, the placebo effect is a powerful part of all medical care, orthodox or otherwise, though its role is often neglected or misunderstood. One of the great strengths of CAM may be its practitioners' skill in deploying the placebo effect to accomplish real healing. Complementary practitioners are miles better at producing non-specific effects and good therapeutic relationships, says Edzard Ernst, professor of CAM at Exeter University. The question is whether CAM could be integrated into conventional medicine, as some would like, without losing much of this power. At one level, it should come as no surprise that our state of mind can influence our physiology: anger opens the superficial blood vessels of the face; sadness pumps the tear glands. But exactly how placebos work their medical magic is still largely unknown. Most of the scant research done so far has focused on the control of pain, because it's one of the commonest complaints and lends itself to experimental study. Here, attention has turned to the endorphins, morphine-like neurochemicals known to help control pain. But exactly how placebos work their medical magic is still largely unknown. Most of the scant research to date has focused on the control of pain, because it's one of the commonest complaints and lends itself to experimental study. Here, attention has turned to the endorphins, natural counterparts of morphine that are known to help control pain.
'Any of the neurochemicals involved in transmitting pain impulses or modulating them might also be involved in generating the placebo response,' says Don Price, an oral surgeon at the University of Florida who studies the placebo effect in dental pain. But endorphins are still out in front. That case has been strengthened by the recent work of Fabrizio Benedetti of the University of Turin, who showed that the placebo effect can be abolished by a drug, naloxone, which blocks the effects of endorphins. Benedetti induced pain in human volunteers by inflating a blood-pressure cuff on the forearm. He did this several times a day for several days, using morphine each time to control the pain. On the final day, without saying anything, he replaced the morphine with a saline solution. This still relieved the subjects' pain: a placebo effect. But when he added naloxone to the saline the pain relief disappeared. Here was direct proof that placebo analgesia is mediated, at least in part, by these natural opiates. Still, no one knows how belief triggers endorphin release, or why most people can't achieve placebo pain relief simply by willing it. Though scientists don't know exactly how placebos work, they have accumulated a fair bit of knowledge about how to trigger the effect. A London rheumatologist found, for example, that red dummy capsules made more effective painkillers than blue, green or yellow ones. Research on American students revealed that blue pills make better sedatives than pink, a color more suitable for stimulants. Even branding can make a difference: if Aspro or Tylenol are what you like to take for a headache, their chemically identical generic equivalents may be less effective. It matters, too, how the treatment is delivered. Decades ago, when the major tranquilizer chlorpromazine was being introduced, a doctor in Kansas categorized his colleagues according to whether they were keen on it, openly skeptical of its benefits or took a 'let's try and see' attitude. His conclusion: the more enthusiastic the doctor, the better the drug performed. And this year, Ernst surveyed published studies that compared doctors' bedside manners. The studies turned up one consistent finding: 'Physicians who adopt a warm, friendly and reassuring manner,' he reported, 'are more effective than those whose consultations are formal and do not offer reassurance.' Warm, friendly and reassuring are precisely CAM's strong suits, of course. Many of the ingredients of that opening recipe - the physical contact, the generous swathes of time, the strong hints of supernormal healing power - are just the kind of thing likely to impress patients. 'It's hardly surprising, then, that complementary practitioners are generally best at mobilizing the placebo effect,' says Arthur Kleinman, professor of social anthropology at Harvard University. | People's preferences for brands would also have an effect on their healing. | entailment
id_6439 | The Power of Nothing Want to devise a new form of alternative medicine? No problem. Here is the recipe. Be warm, sympathetic, reassuring and enthusiastic. Your treatment should involve physical contact, and each session with your patients should last at least half an hour. Encourage your patients to take an active part in their treatment and to understand how their disorders relate to the rest of their lives. Tell them that their own bodies possess the true power to heal. Make them pay you out of their own pockets. Describe your treatment in familiar words, but embroidered with a hint of mysticism: energy fields, energy flows, energy blocks, meridians, forces, auras, rhythms and the like. Refer to the knowledge of an earlier age: wisdom carelessly swept aside by the rise and rise of blind, mechanistic science. 'Oh, come off it,' you are saying. Something invented off the top of your head could not possibly work, could it? Well yes, it could - and often well enough to earn you a living. A good living if you are sufficiently convincing, or better still, really believe in your therapy. Many illnesses get better on their own, so if you are lucky and administer your treatment at just the right time you will get the credit. But that's only part of it. Some of the improvements really would be down to you. Your healing power would be the outcome of a paradoxical force that conventional medicine recognizes but remains oddly ambivalent about: the placebo effect. Placebos are treatments that have no direct effect on the body, yet still work because the patient has faith in their power to heal. Most often the term refers to a dummy pill, but it applies just as much to any device or procedure, from a sticking plaster to a crystal to an operation. The existence of the placebo effect implies that even quackery may confer real benefits, which is why any mention of placebo is a touchy subject for many practitioners of complementary and alternative medicine, who are likely to regard it as tantamount to a charge of charlatanism. In fact, the placebo effect is a powerful part of all medical care, orthodox or otherwise, though its role is often neglected or misunderstood. One of the great strengths of CAM may be its practitioners' skill in deploying the placebo effect to accomplish real healing. 'Complementary practitioners are miles better at producing non-specific effects and good therapeutic relationships,' says Edzard Ernst, professor of CAM at Exeter University. The question is whether CAM could be integrated into conventional medicine, as some would like, without losing much of this power. At one level, it should come as no surprise that our state of mind can influence our physiology: anger opens the superficial blood vessels of the face; sadness pumps the tear glands. But exactly how placebos work their medical magic is still largely unknown. Most of the scant research to date has focused on the control of pain, because it's one of the commonest complaints and lends itself to experimental study. Here, attention has turned to the endorphins, natural counterparts of morphine that are known to help control pain.
'Any of the neurochemicals involved in transmitting pain impulses or modulating them might also be involved in generating the placebo response,' says Don Price, an oral surgeon at the University of Florida who studies the placebo effect in dental pain. But endorphins are still out in front. That case has been strengthened by the recent work of Fabrizio Benedetti of the University of Turin, who showed that the placebo effect can be abolished by a drug, naloxone, which blocks the effects of endorphins. Benedetti induced pain in human volunteers by inflating a blood-pressure cuff on the forearm. He did this several times a day for several days, using morphine each time to control the pain. On the final day, without saying anything, he replaced the morphine with a saline solution. This still relieved the subjects' pain: a placebo effect. But when he added naloxone to the saline the pain relief disappeared. Here was direct proof that placebo analgesia is mediated, at least in part, by these natural opiates. Still, no one knows how belief triggers endorphin release, or why most people can't achieve placebo pain relief simply by willing it. Though scientists don't know exactly how placebos work, they have accumulated a fair bit of knowledge about how to trigger the effect. A London rheumatologist found, for example, that red dummy capsules made more effective painkillers than blue, green or yellow ones. Research on American students revealed that blue pills make better sedatives than pink, a color more suitable for stimulants. Even branding can make a difference: if Aspro or Tylenol are what you like to take for a headache, their chemically identical generic equivalents may be less effective. It matters, too, how the treatment is delivered. Decades ago, when the major tranquilizer chlorpromazine was being introduced, a doctor in Kansas categorized his colleagues according to whether they were keen on it, openly skeptical of its benefits or took a 'let's try and see' attitude. His conclusion: the more enthusiastic the doctor, the better the drug performed. And this year, Ernst surveyed published studies that compared doctors' bedside manners. The studies turned up one consistent finding: 'Physicians who adopt a warm, friendly and reassuring manner,' he reported, 'are more effective than those whose consultations are formal and do not offer reassurance.' Warm, friendly and reassuring are precisely CAM's strong suits, of course. Many of the ingredients of that opening recipe - the physical contact, the generous swathes of time, the strong hints of supernormal healing power - are just the kind of thing likely to impress patients. 'It's hardly surprising, then, that complementary practitioners are generally best at mobilizing the placebo effect,' says Arthur Kleinman, professor of social anthropology at Harvard University. | Medical doctors have a range of views of the newly introduced drug chlorpromazine. | entailment
id_6440 | The Problem of Scarce Resources The problem of how health-care resources should be allocated or apportioned, so that they are distributed in both the most just and most efficient way, is not a new one. Every health system in an economically developed society is faced with the need to decide (either formally or informally) what proportion of the community's total resources should be spent on health-care; how resources are to be apportioned; what diseases and disabilities and which forms of treatment are to be given priority; which members of the community are to be given special consideration in respect of their health needs; and which forms of treatment are the most cost-effective. What is new is that, from the 1950s onwards, there have been certain general changes in outlook about the finitude of resources as a whole and of health-care resources in particular, as well as more specific changes regarding the clientele of health-care resources and the cost to the community of those resources. Thus, in the 1950s and 1960s, there emerged an awareness in Western societies that resources for the provision of fossil fuel energy were finite and exhaustible and that the capacity of nature or the environment to sustain economic development and population was also finite. In other words, we became aware of the obvious fact that there were 'limits to growth'. The new consciousness that there were also severe limits to health-care resources was part of this general revelation of the obvious. Looking back, it now seems quite incredible that in the national health systems that emerged in many countries in the years immediately after the 1939-45 World War, it was assumed without question that all the basic health needs of any community could be satisfied, at least in principle; the 'invisible hand' of economic progress would provide. However, at exactly the same time as this new realisation of the finite character of health-care resources was sinking in, an awareness of a contrary kind was developing in Western societies: that people have a basic right to health-care as a necessary condition of a proper human life. Like education, political and legal processes and institutions, public order, communication, transport and money supply, health-care came to be seen as one of the fundamental social facilities necessary for people to exercise their other rights as autonomous human beings. People are not in a position to exercise personal liberty and to be self-determining if they are poverty-stricken, or deprived of basic education, or do not live within a context of law and order. In the same way, basic health-care is a condition of the exercise of autonomy. Although the language of 'rights' sometimes leads to confusion, by the late 1970s it was recognised in most societies that people have a right to health-care (though there has been considerable resistance in the United States to the idea that there is a formal right to health-care). It is also accepted that this right generates an obligation or duty for the state to ensure that adequate health-care resources are provided out of the public purse. The state has no obligation to provide a health-care system itself, but to ensure that such a system is provided. Put another way, basic health-care is now recognised as a 'public good', rather than a 'private good' that one is expected to buy for oneself. 
As the 1976 declaration of the World Health Organisation put it: 'The enjoyment of the highest attainable standard of health is one of the fundamental rights of every human being without distinction of race, religion, political belief, economic or social condition. ' As has just been remarked, in a liberal society basic health is seen as one of the indispensable conditions for the exercise of personal autonomy. Just at the time when it became obvious that health-care resources could not possibly meet the demands being made upon them, people were demanding that their fundamental right to health-care be satisfied by the state. The second set of more specific changes that have led to the present concern about the distribution of health-care resources stems from the dramatic rise in health costs in most OECD1 countries, accompanied by large-scale demographic and social changes which have meant, to take one example, that elderly people are now major (and relatively very expensive) consumers of health-care resources. Thus in OECD countries as a whole, health costs increased from 3.8% of GDP2 in 1960 to 7% of GDP in 1980, and it has been predicted that the proportion of health costs to GDP will continue to increase. (In the US the current figure is about 12% of GDP, and in Australia about 7.8% of GDP. ) As a consequence, during the 1980s a kind of doomsday scenario (analogous to similar doomsday extrapolations about energy needs and fossil fuels or about population increases) was projected by health administrators, economists and politicians. In this scenario, ever-rising health costs were matched against static or declining resources. | Personal liberty and independence have never been regarded as directly linked to health-care. | contradiction |
id_6441 | The Problem of Scarce Resources The problem of how health-care resources should be allocated or apportioned, so that they are distributed in both the most just and most efficient way, is not a new one. Every health system in an economically developed society is faced with the need to decide (either formally or informally) what proportion of the community's total resources should be spent on health-care; how resources are to be apportioned; what diseases and disabilities and which forms of treatment are to be given priority; which members of the community are to be given special consideration in respect of their health needs; and which forms of treatment are the most cost-effective. What is new is that, from the 1950s onwards, there have been certain general changes in outlook about the finitude of resources as a whole and of health-care resources in particular, as well as more specific changes regarding the clientele of health-care resources and the cost to the community of those resources. Thus, in the 1950s and 1960s, there emerged an awareness in Western societies that resources for the provision of fossil fuel energy were finite and exhaustible and that the capacity of nature or the environment to sustain economic development and population was also finite. In other words, we became aware of the obvious fact that there were 'limits to growth'. The new consciousness that there were also severe limits to health-care resources was part of this general revelation of the obvious. Looking back, it now seems quite incredible that in the national health systems that emerged in many countries in the years immediately after the 1939-45 World War, it was assumed without question that all the basic health needs of any community could be satisfied, at least in principle; the 'invisible hand' of economic progress would provide. However, at exactly the same time as this new realisation of the finite character of health-care resources was sinking in, an awareness of a contrary kind was developing in Western societies: that people have a basic right to health-care as a necessary condition of a proper human life. Like education, political and legal processes and institutions, public order, communication, transport and money supply, health-care came to be seen as one of the fundamental social facilities necessary for people to exercise their other rights as autonomous human beings. People are not in a position to exercise personal liberty and to be self-determining if they are poverty-stricken, or deprived of basic education, or do not live within a context of law and order. In the same way, basic health-care is a condition of the exercise of autonomy. Although the language of 'rights' sometimes leads to confusion, by the late 1970s it was recognised in most societies that people have a right to health-care (though there has been considerable resistance in the United States to the idea that there is a formal right to health-care). It is also accepted that this right generates an obligation or duty for the state to ensure that adequate health-care resources are provided out of the public purse. The state has no obligation to provide a health-care system itself, but to ensure that such a system is provided. Put another way, basic health-care is now recognised as a 'public good', rather than a 'private good' that one is expected to buy for oneself. 
As the 1976 declaration of the World Health Organisation put it: 'The enjoyment of the highest attainable standard of health is one of the fundamental rights of every human being without distinction of race, religion, political belief, economic or social condition. ' As has just been remarked, in a liberal society basic health is seen as one of the indispensable conditions for the exercise of personal autonomy. Just at the time when it became obvious that health-care resources could not possibly meet the demands being made upon them, people were demanding that their fundamental right to health-care be satisfied by the state. The second set of more specific changes that have led to the present concern about the distribution of health-care resources stems from the dramatic rise in health costs in most OECD1 countries, accompanied by large-scale demographic and social changes which have meant, to take one example, that elderly people are now major (and relatively very expensive) consumers of health-care resources. Thus in OECD countries as a whole, health costs increased from 3.8% of GDP2 in 1960 to 7% of GDP in 1980, and it has been predicted that the proportion of health costs to GDP will continue to increase. (In the US the current figure is about 12% of GDP, and in Australia about 7.8% of GDP. ) As a consequence, during the 1980s a kind of doomsday scenario (analogous to similar doomsday extrapolations about energy needs and fossil fuels or about population increases) was projected by health administrators, economists and politicians. In this scenario, ever-rising health costs were matched against static or declining resources. | OECD governments have consistently underestimated the level of health-care provision needed. | neutral |
id_6442 | The Problem of Scarce Resources The problem of how health-care resources should be allocated or apportioned, so that they are distributed in both the most just and most efficient way, is not a new one. Every health system in an economically developed society is faced with the need to decide (either formally or informally) what proportion of the community's total resources should be spent on health-care; how resources are to be apportioned; what diseases and disabilities and which forms of treatment are to be given priority; which members of the community are to be given special consideration in respect of their health needs; and which forms of treatment are the most cost-effective. What is new is that, from the 1950s onwards, there have been certain general changes in outlook about the finitude of resources as a whole and of health-care resources in particular, as well as more specific changes regarding the clientele of health-care resources and the cost to the community of those resources. Thus, in the 1950s and 1960s, there emerged an awareness in Western societies that resources for the provision of fossil fuel energy were finite and exhaustible and that the capacity of nature or the environment to sustain economic development and population was also finite. In other words, we became aware of the obvious fact that there were 'limits to growth'. The new consciousness that there were also severe limits to health-care resources was part of this general revelation of the obvious. Looking back, it now seems quite incredible that in the national health systems that emerged in many countries in the years immediately after the 1939-45 World War, it was assumed without question that all the basic health needs of any community could be satisfied, at least in principle; the 'invisible hand' of economic progress would provide. However, at exactly the same time as this new realisation of the finite character of health-care resources was sinking in, an awareness of a contrary kind was developing in Western societies: that people have a basic right to health-care as a necessary condition of a proper human life. Like education, political and legal processes and institutions, public order, communication, transport and money supply, health-care came to be seen as one of the fundamental social facilities necessary for people to exercise their other rights as autonomous human beings. People are not in a position to exercise personal liberty and to be self-determining if they are poverty-stricken, or deprived of basic education, or do not live within a context of law and order. In the same way, basic health-care is a condition of the exercise of autonomy. Although the language of 'rights' sometimes leads to confusion, by the late 1970s it was recognised in most societies that people have a right to health-care (though there has been considerable resistance in the United States to the idea that there is a formal right to health-care). It is also accepted that this right generates an obligation or duty for the state to ensure that adequate health-care resources are provided out of the public purse. The state has no obligation to provide a health-care system itself, but to ensure that such a system is provided. Put another way, basic health-care is now recognised as a 'public good', rather than a 'private good' that one is expected to buy for oneself. 
As the 1976 declaration of the World Health Organisation put it: 'The enjoyment of the highest attainable standard of health is one of the fundamental rights of every human being without distinction of race, religion, political belief, economic or social condition. ' As has just been remarked, in a liberal society basic health is seen as one of the indispensable conditions for the exercise of personal autonomy. Just at the time when it became obvious that health-care resources could not possibly meet the demands being made upon them, people were demanding that their fundamental right to health-care be satisfied by the state. The second set of more specific changes that have led to the present concern about the distribution of health-care resources stems from the dramatic rise in health costs in most OECD1 countries, accompanied by large-scale demographic and social changes which have meant, to take one example, that elderly people are now major (and relatively very expensive) consumers of health-care resources. Thus in OECD countries as a whole, health costs increased from 3.8% of GDP2 in 1960 to 7% of GDP in 1980, and it has been predicted that the proportion of health costs to GDP will continue to increase. (In the US the current figure is about 12% of GDP, and in Australia about 7.8% of GDP. ) As a consequence, during the 1980s a kind of doomsday scenario (analogous to similar doomsday extrapolations about energy needs and fossil fuels or about population increases) was projected by health administrators, economists and politicians. In this scenario, ever-rising health costs were matched against static or declining resources. | In OECD countries population changes have had an impact on health-care costs in recent years. | entailment |
id_6443 | The Problem of Scarce Resources The problem of how health-care resources should be allocated or apportioned, so that they are distributed in both the most just and most efficient way, is not a new one. Every health system in an economically developed society is faced with the need to decide (either formally or informally) what proportion of the community's total resources should be spent on health-care; how resources are to be apportioned; what diseases and disabilities and which forms of treatment are to be given priority; which members of the community are to be given special consideration in respect of their health needs; and which forms of treatment are the most cost-effective. What is new is that, from the 1950s onwards, there have been certain general changes in outlook about the finitude of resources as a whole and of health-care resources in particular, as well as more specific changes regarding the clientele of health-care resources and the cost to the community of those resources. Thus, in the 1950s and 1960s, there emerged an awareness in Western societies that resources for the provision of fossil fuel energy were finite and exhaustible and that the capacity of nature or the environment to sustain economic development and population was also finite. In other words, we became aware of the obvious fact that there were 'limits to growth'. The new consciousness that there were also severe limits to health-care resources was part of this general revelation of the obvious. Looking back, it now seems quite incredible that in the national health systems that emerged in many countries in the years immediately after the 1939-45 World War, it was assumed without question that all the basic health needs of any community could be satisfied, at least in principle; the 'invisible hand' of economic progress would provide. However, at exactly the same time as this new realisation of the finite character of health-care resources was sinking in, an awareness of a contrary kind was developing in Western societies: that people have a basic right to health-care as a necessary condition of a proper human life. Like education, political and legal processes and institutions, public order, communication, transport and money supply, health-care came to be seen as one of the fundamental social facilities necessary for people to exercise their other rights as autonomous human beings. People are not in a position to exercise personal liberty and to be self-determining if they are poverty-stricken, or deprived of basic education, or do not live within a context of law and order. In the same way, basic health-care is a condition of the exercise of autonomy. Although the language of 'rights' sometimes leads to confusion, by the late 1970s it was recognised in most societies that people have a right to health-care (though there has been considerable resistance in the United States to the idea that there is a formal right to health-care). It is also accepted that this right generates an obligation or duty for the state to ensure that adequate health-care resources are provided out of the public purse. The state has no obligation to provide a health-care system itself, but to ensure that such a system is provided. Put another way, basic health-care is now recognised as a 'public good', rather than a 'private good' that one is expected to buy for oneself. 
As the 1976 declaration of the World Health Organisation put it: 'The enjoyment of the highest attainable standard of health is one of the fundamental rights of every human being without distinction of race, religion, political belief, economic or social condition. ' As has just been remarked, in a liberal society basic health is seen as one of the indispensable conditions for the exercise of personal autonomy. Just at the time when it became obvious that health-care resources could not possibly meet the demands being made upon them, people were demanding that their fundamental right to health-care be satisfied by the state. The second set of more specific changes that have led to the present concern about the distribution of health-care resources stems from the dramatic rise in health costs in most OECD1 countries, accompanied by large-scale demographic and social changes which have meant, to take one example, that elderly people are now major (and relatively very expensive) consumers of health-care resources. Thus in OECD countries as a whole, health costs increased from 3.8% of GDP2 in 1960 to 7% of GDP in 1980, and it has been predicted that the proportion of health costs to GDP will continue to increase. (In the US the current figure is about 12% of GDP, and in Australia about 7.8% of GDP. ) As a consequence, during the 1980s a kind of doomsday scenario (analogous to similar doomsday extrapolations about energy needs and fossil fuels or about population increases) was projected by health administrators, economists and politicians. In this scenario, ever-rising health costs were matched against static or declining resources. | Health-care came to be seen as a right at about the same time that the limits of health-care resources became evident. | entailment |
id_6444 | The Problem of Scarce Resources The problem of how health-care resources should be allocated or apportioned, so that they are distributed in both the most just and most efficient way, is not a new one. Every health system in an economically developed society is faced with the need to decide (either formally or informally) what proportion of the community's total resources should be spent on health-care; how resources are to be apportioned; what diseases and disabilities and which forms of treatment are to be given priority; which members of the community are to be given special consideration in respect of their health needs; and which forms of treatment are the most cost-effective. What is new is that, from the 1950s onwards, there have been certain general changes in outlook about the finitude of resources as a whole and of health-care resources in particular, as well as more specific changes regarding the clientele of health-care resources and the cost to the community of those resources. Thus, in the 1950s and 1960s, there emerged an awareness in Western societies that resources for the provision of fossil fuel energy were finite and exhaustible and that the capacity of nature or the environment to sustain economic development and population was also finite. In other words, we became aware of the obvious fact that there were 'limits to growth'. The new consciousness that there were also severe limits to health-care resources was part of this general revelation of the obvious. Looking back, it now seems quite incredible that in the national health systems that emerged in many countries in the years immediately after the 1939-45 World War, it was assumed without question that all the basic health needs of any community could be satisfied, at least in principle; the 'invisible hand' of economic progress would provide. However, at exactly the same time as this new realisation of the finite character of health-care resources was sinking in, an awareness of a contrary kind was developing in Western societies: that people have a basic right to health-care as a necessary condition of a proper human life. Like education, political and legal processes and institutions, public order, communication, transport and money supply, health-care came to be seen as one of the fundamental social facilities necessary for people to exercise their other rights as autonomous human beings. People are not in a position to exercise personal liberty and to be self-determining if they are poverty-stricken, or deprived of basic education, or do not live within a context of law and order. In the same way, basic health-care is a condition of the exercise of autonomy. Although the language of 'rights' sometimes leads to confusion, by the late 1970s it was recognised in most societies that people have a right to health-care (though there has been considerable resistance in the United States to the idea that there is a formal right to health-care). It is also accepted that this right generates an obligation or duty for the state to ensure that adequate health-care resources are provided out of the public purse. The state has no obligation to provide a health-care system itself, but to ensure that such a system is provided. Put another way, basic health-care is now recognised as a 'public good', rather than a 'private good' that one is expected to buy for oneself. 
As the 1976 declaration of the World Health Organisation put it: 'The enjoyment of the highest attainable standard of health is one of the fundamental rights of every human being without distinction of race, religion, political belief, economic or social condition. ' As has just been remarked, in a liberal society basic health is seen as one of the indispensable conditions for the exercise of personal autonomy. Just at the time when it became obvious that health-care resources could not possibly meet the demands being made upon them, people were demanding that their fundamental right to health-care be satisfied by the state. The second set of more specific changes that have led to the present concern about the distribution of health-care resources stems from the dramatic rise in health costs in most OECD1 countries, accompanied by large-scale demographic and social changes which have meant, to take one example, that elderly people are now major (and relatively very expensive) consumers of health-care resources. Thus in OECD countries as a whole, health costs increased from 3.8% of GDP2 in 1960 to 7% of GDP in 1980, and it has been predicted that the proportion of health costs to GDP will continue to increase. (In the US the current figure is about 12% of GDP, and in Australia about 7.8% of GDP. ) As a consequence, during the 1980s a kind of doomsday scenario (analogous to similar doomsday extrapolations about energy needs and fossil fuels or about population increases) was projected by health administrators, economists and politicians. In this scenario, ever-rising health costs were matched against static or declining resources. | In most economically developed countries the elderly will have to make special provision for their health-care in the future. | neutral |
id_6445 | The Railways has earmarked two berths - one lower and one middle - in sleeper classes under the handicapped quota for physically challenged people travelling on concession. | Handicapped people need some privilege in trains. | entailment
id_6446 | The Railways has earmarked two berths - one lower and one middle - in sleeper classes under the handicapped quota for physically challenged people travelling on concession. | Handicapped people will now not need any attendant while travelling in the trains. | contradiction
id_6447 | The Railways has earmarked two berths - one lower and one middle - in sleeper classes under the handicapped quota for physically challenged people travelling on concession. | A good initiative by the railway for the handicapped people. | contradiction
id_6448 | The Railways has earmarked two berths - one lower and one middle - in sleeper classes under the handicapped quota for physically challenged people travelling on concession. | Physically handicapped people can have a hassle-free journey in trains. | contradiction
id_6449 | The Return of Artificial Intelligence. It is becoming acceptable again to talk of computers performing. human tasks such as problem-solving and pattern-recognition. After years in the wilderness, the term 'artificial intelligence' (AI) seems poised to make a comeback. AI was big in the 1980s but vanished in the 1990s. It re-entered public consciousness with the release of AI, a movie about a robot boy. This has ignited public debate about AI, but the term is also being used once more within the computer industry. Researchers, executives and marketing people are now using the expression without irony or inverted commas. And it is not always hype. The term is being applied, with some justification, to products that depend on technology that was originally developed by AI researchers. Admittedly, the rehabilitation of the term has a long way to go, and some firms still prefer to avoid using it. But the fact that others are starting to use it again suggests that AI has moved on from being seen as an over-ambitious and under-achieving field of research. The field was launched, and the term 'artificial intelligence' coined, at a conference in 1956, by a group of researchers that included Marvin Minsky, John McCarthy, Herbert Simon and Alan Newell, all of whom went on to become leading figures in the field. The expression provided an attractive but informative name for a research programme that encompassed such previously disparate fields as operations research, cybernetics, logic and computer science. The goal they shared was an attempt to capture or mimic human abilities using machines. That said, different groups of researchers attacked different problems, from speech recognition to chess playing, in different ways; AI unified the field in name only. But it was a term that captured the public imagination. Most researchers agree that AI peaked around 1985. A public reared on science-fiction movies and excited by the growing power of computers had high expectations. For years, AI researchers had implied that a breakthrough was just around the corner. Marvin Minsky said in 1967 that within a generation the problem of creating 'artificial intelligence' would be substantially solved. Prototypes of medical-diagnosis programs and speech recognition software appeared to be making progress. It proved to be a false dawn. Thinking computers and household robots failed to materialise, and a backlash ensued. 'There was undue optimism in the early 1980s, ' says David Leake, a researcher at Indiana University. 'Then when people realised these were hard problems, there was retrenchment. By the late 1980s, the term AI was being avoided by many researchers, who opted instead to align themselves with specific sub-disciplines such as neural networks, agent technology, case-based reasoning, and so on. ' Ironically, in some ways AI was a victim of its own success. Whenever an apparently mundane problem was solved, such as building a system that could land an aircraft unattended, the problem was deemed not to have been AI in the first place. 'If it works, it can't be AI, ' as Dr Leake characterises it. The effect of repeatedly moving the goal-posts in this way was that AI came to refer to 'blue-sky' research that was still years away from commercialisation. Researchers joked that AI stood for 'almost implemented'. Meanwhile, the technologies that made it onto the market, such as speech recognition, language translation and decision-support software, were no longer regarded as AI. 
Yet all three once fell well within the umbrella of AI research. But the tide may now be turning, according to Dr Leake. HNC Software of San Diego, backed by a government agency, reckon that their new approach to artificial intelligence is the most powerful and promising approach ever discovered. HNC claim that their system, based on a cluster of 30 processors, could be used to spot camouflaged vehicles on a battlefield or extract a voice signal from a noisy background - tasks humans can do well, but computers cannot. 'Whether or not their technology lives up to the claims made for it, the fact that HNC are emphasising the use of AI is itself an interesting development,' says Dr Leake. Another factor that may boost the prospects for AI in the near future is that investors are now looking for firms using clever technology, rather than just a clever business model, to differentiate themselves. In particular, the problem of information overload, exacerbated by the growth of e-mail and the explosion in the number of web pages, means there are plenty of opportunities for new technologies to help filter and categorise information - classic AI problems. That may mean that more artificial intelligence companies will start to emerge to meet this challenge. The 1969 film, 2001: A Space Odyssey, featured an intelligent computer called HAL 9000. As well as understanding and speaking English, HAL could play chess and even learned to lipread. HAL thus encapsulated the optimism of the 1960s that intelligent computers would be widespread by 2001. But 2001 has been and gone, and there is still no sign of a HAL-like computer. Individual systems can play chess or transcribe speech, but a general theory of machine intelligence still remains elusive. It may be, however, that the comparison with HAL no longer seems quite so important, and AI can now be judged by what it can do, rather than by how well it matches up to a 30-year-old science-fiction film. 'People are beginning to realise that there are impressive things that these systems can do,' says Dr Leake hopefully. | In 1985, AI was at its lowest point. | contradiction
id_6450 | The Return of Artificial Intelligence. It is becoming acceptable again to talk of computers performing. human tasks such as problem-solving and pattern-recognition. After years in the wilderness, the term 'artificial intelligence' (AI) seems poised to make a comeback. AI was big in the 1980s but vanished in the 1990s. It re-entered public consciousness with the release of AI, a movie about a robot boy. This has ignited public debate about AI, but the term is also being used once more within the computer industry. Researchers, executives and marketing people are now using the expression without irony or inverted commas. And it is not always hype. The term is being applied, with some justification, to products that depend on technology that was originally developed by AI researchers. Admittedly, the rehabilitation of the term has a long way to go, and some firms still prefer to avoid using it. But the fact that others are starting to use it again suggests that AI has moved on from being seen as an over-ambitious and under-achieving field of research. The field was launched, and the term 'artificial intelligence' coined, at a conference in 1956, by a group of researchers that included Marvin Minsky, John McCarthy, Herbert Simon and Alan Newell, all of whom went on to become leading figures in the field. The expression provided an attractive but informative name for a research programme that encompassed such previously disparate fields as operations research, cybernetics, logic and computer science. The goal they shared was an attempt to capture or mimic human abilities using machines. That said, different groups of researchers attacked different problems, from speech recognition to chess playing, in different ways; AI unified the field in name only. But it was a term that captured the public imagination. Most researchers agree that AI peaked around 1985. A public reared on science-fiction movies and excited by the growing power of computers had high expectations. For years, AI researchers had implied that a breakthrough was just around the corner. Marvin Minsky said in 1967 that within a generation the problem of creating 'artificial intelligence' would be substantially solved. Prototypes of medical-diagnosis programs and speech recognition software appeared to be making progress. It proved to be a false dawn. Thinking computers and household robots failed to materialise, and a backlash ensued. 'There was undue optimism in the early 1980s, ' says David Leake, a researcher at Indiana University. 'Then when people realised these were hard problems, there was retrenchment. By the late 1980s, the term AI was being avoided by many researchers, who opted instead to align themselves with specific sub-disciplines such as neural networks, agent technology, case-based reasoning, and so on. ' Ironically, in some ways AI was a victim of its own success. Whenever an apparently mundane problem was solved, such as building a system that could land an aircraft unattended, the problem was deemed not to have been AI in the first place. 'If it works, it can't be AI, ' as Dr Leake characterises it. The effect of repeatedly moving the goal-posts in this way was that AI came to refer to 'blue-sky' research that was still years away from commercialisation. Researchers joked that AI stood for 'almost implemented'. Meanwhile, the technologies that made it onto the market, such as speech recognition, language translation and decision-support software, were no longer regarded as AI. 
Yet all three once fell well within the umbrella of AI research. But the tide may now be turning, according to Dr Leake. HNC Software of San Diego, backed by a government agency, reckon that their new approach to artificial intelligence is the most powerful and promising approach ever discovered. HNC claim that their system, based on a cluster of 30 processors, could be used to spot camouflaged vehicles on a battlefield or extract a voice signal from a noisy background - tasks humans can do well, but computers cannot. 'Whether or not their technology lives up to the claims made for it, the fact that HNC are emphasising the use of AI is itself an interesting development,' says Dr Leake. Another factor that may boost the prospects for AI in the near future is that investors are now looking for firms using clever technology, rather than just a clever business model, to differentiate themselves. In particular, the problem of information overload, exacerbated by the growth of e-mail and the explosion in the number of web pages, means there are plenty of opportunities for new technologies to help filter and categorise information - classic AI problems. That may mean that more artificial intelligence companies will start to emerge to meet this challenge. The 1969 film, 2001: A Space Odyssey, featured an intelligent computer called HAL 9000. As well as understanding and speaking English, HAL could play chess and even learned to lipread. HAL thus encapsulated the optimism of the 1960s that intelligent computers would be widespread by 2001. But 2001 has been and gone, and there is still no sign of a HAL-like computer. Individual systems can play chess or transcribe speech, but a general theory of machine intelligence still remains elusive. It may be, however, that the comparison with HAL no longer seems quite so important, and AI can now be judged by what it can do, rather than by how well it matches up to a 30-year-old science-fiction film. 'People are beginning to realise that there are impressive things that these systems can do,' says Dr Leake hopefully. | Research into agent technology was more costly than research into neural networks. | neutral
id_6451 | The Return of Artificial Intelligence. It is becoming acceptable again to talk of computers performing. human tasks such as problem-solving and pattern-recognition. After years in the wilderness, the term 'artificial intelligence' (AI) seems poised to make a comeback. AI was big in the 1980s but vanished in the 1990s. It re-entered public consciousness with the release of AI, a movie about a robot boy. This has ignited public debate about AI, but the term is also being used once more within the computer industry. Researchers, executives and marketing people are now using the expression without irony or inverted commas. And it is not always hype. The term is being applied, with some justification, to products that depend on technology that was originally developed by AI researchers. Admittedly, the rehabilitation of the term has a long way to go, and some firms still prefer to avoid using it. But the fact that others are starting to use it again suggests that AI has moved on from being seen as an over-ambitious and under-achieving field of research. The field was launched, and the term 'artificial intelligence' coined, at a conference in 1956, by a group of researchers that included Marvin Minsky, John McCarthy, Herbert Simon and Alan Newell, all of whom went on to become leading figures in the field. The expression provided an attractive but informative name for a research programme that encompassed such previously disparate fields as operations research, cybernetics, logic and computer science. The goal they shared was an attempt to capture or mimic human abilities using machines. That said, different groups of researchers attacked different problems, from speech recognition to chess playing, in different ways; AI unified the field in name only. But it was a term that captured the public imagination. Most researchers agree that AI peaked around 1985. A public reared on science-fiction movies and excited by the growing power of computers had high expectations. For years, AI researchers had implied that a breakthrough was just around the corner. Marvin Minsky said in 1967 that within a generation the problem of creating 'artificial intelligence' would be substantially solved. Prototypes of medical-diagnosis programs and speech recognition software appeared to be making progress. It proved to be a false dawn. Thinking computers and household robots failed to materialise, and a backlash ensued. 'There was undue optimism in the early 1980s, ' says David Leake, a researcher at Indiana University. 'Then when people realised these were hard problems, there was retrenchment. By the late 1980s, the term AI was being avoided by many researchers, who opted instead to align themselves with specific sub-disciplines such as neural networks, agent technology, case-based reasoning, and so on. ' Ironically, in some ways AI was a victim of its own success. Whenever an apparently mundane problem was solved, such as building a system that could land an aircraft unattended, the problem was deemed not to have been AI in the first place. 'If it works, it can't be AI, ' as Dr Leake characterises it. The effect of repeatedly moving the goal-posts in this way was that AI came to refer to 'blue-sky' research that was still years away from commercialisation. Researchers joked that AI stood for 'almost implemented'. Meanwhile, the technologies that made it onto the market, such as speech recognition, language translation and decision-support software, were no longer regarded as AI. 
Yet all three once fell well within the umbrella of AI research. But the tide may now be turning, according to Dr Leake. HNC Software of San Diego, backed by a government agency, reckon that their new approach to artificial intelligence is the most powerful and promising approach ever discovered. HNC claim that their system, based on a cluster of 30 processors, could be used to spot camouflaged vehicles on a battlefield or extract a voice signal from a noisy background - tasks humans can do well, but computers cannot. 'Whether or not their technology lives up to the claims made for it, the fact that HNC are emphasising the use of AI is itself an interesting development,' says Dr Leake. Another factor that may boost the prospects for AI in the near future is that investors are now looking for firms using clever technology, rather than just a clever business model, to differentiate themselves. In particular, the problem of information overload, exacerbated by the growth of e-mail and the explosion in the number of web pages, means there are plenty of opportunities for new technologies to help filter and categorise information - classic AI problems. That may mean that more artificial intelligence companies will start to emerge to meet this challenge. The 1969 film, 2001: A Space Odyssey, featured an intelligent computer called HAL 9000. As well as understanding and speaking English, HAL could play chess and even learned to lipread. HAL thus encapsulated the optimism of the 1960s that intelligent computers would be widespread by 2001. But 2001 has been and gone, and there is still no sign of a HAL-like computer. Individual systems can play chess or transcribe speech, but a general theory of machine intelligence still remains elusive. It may be, however, that the comparison with HAL no longer seems quite so important, and AI can now be judged by what it can do, rather than by how well it matches up to a 30-year-old science-fiction film. 'People are beginning to realise that there are impressive things that these systems can do,' says Dr Leake hopefully. | The researchers who launched the field of AI had worked together on other projects in the past. | neutral
id_6452 | The Return of Artificial Intelligence. It is becoming acceptable again to talk of computers performing. human tasks such as problem-solving and pattern-recognition. After years in the wilderness, the term 'artificial intelligence' (AI) seems poised to make a comeback. AI was big in the 1980s but vanished in the 1990s. It re-entered public consciousness with the release of AI, a movie about a robot boy. This has ignited public debate about AI, but the term is also being used once more within the computer industry. Researchers, executives and marketing people are now using the expression without irony or inverted commas. And it is not always hype. The term is being applied, with some justification, to products that depend on technology that was originally developed by AI researchers. Admittedly, the rehabilitation of the term has a long way to go, and some firms still prefer to avoid using it. But the fact that others are starting to use it again suggests that AI has moved on from being seen as an over-ambitious and under-achieving field of research. The field was launched, and the term 'artificial intelligence' coined, at a conference in 1956, by a group of researchers that included Marvin Minsky, John McCarthy, Herbert Simon and Alan Newell, all of whom went on to become leading figures in the field. The expression provided an attractive but informative name for a research programme that encompassed such previously disparate fields as operations research, cybernetics, logic and computer science. The goal they shared was an attempt to capture or mimic human abilities using machines. That said, different groups of researchers attacked different problems, from speech recognition to chess playing, in different ways; AI unified the field in name only. But it was a term that captured the public imagination. Most researchers agree that AI peaked around 1985. A public reared on science-fiction movies and excited by the growing power of computers had high expectations. For years, AI researchers had implied that a breakthrough was just around the corner. Marvin Minsky said in 1967 that within a generation the problem of creating 'artificial intelligence' would be substantially solved. Prototypes of medical-diagnosis programs and speech recognition software appeared to be making progress. It proved to be a false dawn. Thinking computers and household robots failed to materialise, and a backlash ensued. 'There was undue optimism in the early 1980s, ' says David Leake, a researcher at Indiana University. 'Then when people realised these were hard problems, there was retrenchment. By the late 1980s, the term AI was being avoided by many researchers, who opted instead to align themselves with specific sub-disciplines such as neural networks, agent technology, case-based reasoning, and so on. ' Ironically, in some ways AI was a victim of its own success. Whenever an apparently mundane problem was solved, such as building a system that could land an aircraft unattended, the problem was deemed not to have been AI in the first place. 'If it works, it can't be AI, ' as Dr Leake characterises it. The effect of repeatedly moving the goal-posts in this way was that AI came to refer to 'blue-sky' research that was still years away from commercialisation. Researchers joked that AI stood for 'almost implemented'. Meanwhile, the technologies that made it onto the market, such as speech recognition, language translation and decision-support software, were no longer regarded as AI. 
Yet all three once fell well within the umbrella of AI research. But the tide may now be turning, according to Dr Leake. HNC Software of San Diego, backed by a government agency, reckon that their new approach to artificial intelligence is the most powerful and promising approach ever discovered. HNC claim that their system, based on a cluster of 30 processors, could be used to spot camouflaged vehicles on a battlefield or extract a voice signal from a noisy background tasks humans can do well, but computers cannot. 'Whether or not their technology lives up to the claims made for it, the fact that HNC are emphasising the use of AI is itself an interesting development, ' says Dr Leake. Another factor that may boost the prospects for AI in the near future is that investors are now looking for firms using clever technology, rather than just a clever business model, to differentiate themselves. In particular, the problem of information overload, exacerbated by the growth of e-mail and the explosion in the number of web pages, means there are plenty of opportunities for new technologies to help filter and categorise information classic AI problems. That may mean that more artificial intelligence companies will start to emerge to meet this challenge. The 1969 film, 2001: A Space Odyssey, featured an intelligent computer called HAL 9000. As well as understanding and speaking English, HAL could play chess and even learned to lipread. HAL thus encapsulated the optimism of the 1960s that intelligent computers would be widespread by 2001. But 2001 has been and gone, and there is still no sign of a HAL-like computer. Individual systems can play chess or transcribe speech, but a general theory of machine intelligence still remains elusive. It may be, however, that the comparison with HAL no longer seems quite so important, and AI can now be judged by what it can do, rather than by how well it matches up to a 30-year-old science-fiction film. 'People are beginning to realise that there are impressive things that these systems can do. ' says Dr Leake hopefully. | The film 2001: A Space Odyssey reflected contemporary ideas about the potential of AI computers. | entailment |
id_6453 | The Return of Artificial Intelligence. It is becoming acceptable again to talk of computers performing. human tasks such as problem-solving and pattern-recognition. After years in the wilderness, the term 'artificial intelligence' (AI) seems poised to make a comeback. AI was big in the 1980s but vanished in the 1990s. It re-entered public consciousness with the release of AI, a movie about a robot boy. This has ignited public debate about AI, but the term is also being used once more within the computer industry. Researchers, executives and marketing people are now using the expression without irony or inverted commas. And it is not always hype. The term is being applied, with some justification, to products that depend on technology that was originally developed by AI researchers. Admittedly, the rehabilitation of the term has a long way to go, and some firms still prefer to avoid using it. But the fact that others are starting to use it again suggests that AI has moved on from being seen as an over-ambitious and under-achieving field of research. The field was launched, and the term 'artificial intelligence' coined, at a conference in 1956, by a group of researchers that included Marvin Minsky, John McCarthy, Herbert Simon and Alan Newell, all of whom went on to become leading figures in the field. The expression provided an attractive but informative name for a research programme that encompassed such previously disparate fields as operations research, cybernetics, logic and computer science. The goal they shared was an attempt to capture or mimic human abilities using machines. That said, different groups of researchers attacked different problems, from speech recognition to chess playing, in different ways; AI unified the field in name only. But it was a term that captured the public imagination. Most researchers agree that AI peaked around 1985. A public reared on science-fiction movies and excited by the growing power of computers had high expectations. For years, AI researchers had implied that a breakthrough was just around the corner. Marvin Minsky said in 1967 that within a generation the problem of creating 'artificial intelligence' would be substantially solved. Prototypes of medical-diagnosis programs and speech recognition software appeared to be making progress. It proved to be a false dawn. Thinking computers and household robots failed to materialise, and a backlash ensued. 'There was undue optimism in the early 1980s, ' says David Leake, a researcher at Indiana University. 'Then when people realised these were hard problems, there was retrenchment. By the late 1980s, the term AI was being avoided by many researchers, who opted instead to align themselves with specific sub-disciplines such as neural networks, agent technology, case-based reasoning, and so on. ' Ironically, in some ways AI was a victim of its own success. Whenever an apparently mundane problem was solved, such as building a system that could land an aircraft unattended, the problem was deemed not to have been AI in the first place. 'If it works, it can't be AI, ' as Dr Leake characterises it. The effect of repeatedly moving the goal-posts in this way was that AI came to refer to 'blue-sky' research that was still years away from commercialisation. Researchers joked that AI stood for 'almost implemented'. Meanwhile, the technologies that made it onto the market, such as speech recognition, language translation and decision-support software, were no longer regarded as AI. 
Yet all three once fell well within the umbrella of AI research. But the tide may now be turning, according to Dr Leake. HNC Software of San Diego, backed by a government agency, reckon that their new approach to artificial intelligence is the most powerful and promising approach ever discovered. HNC claim that their system, based on a cluster of 30 processors, could be used to spot camouflaged vehicles on a battlefield or extract a voice signal from a noisy background tasks humans can do well, but computers cannot. 'Whether or not their technology lives up to the claims made for it, the fact that HNC are emphasising the use of AI is itself an interesting development, ' says Dr Leake. Another factor that may boost the prospects for AI in the near future is that investors are now looking for firms using clever technology, rather than just a clever business model, to differentiate themselves. In particular, the problem of information overload, exacerbated by the growth of e-mail and the explosion in the number of web pages, means there are plenty of opportunities for new technologies to help filter and categorise information classic AI problems. That may mean that more artificial intelligence companies will start to emerge to meet this challenge. The 1969 film, 2001: A Space Odyssey, featured an intelligent computer called HAL 9000. As well as understanding and speaking English, HAL could play chess and even learned to lipread. HAL thus encapsulated the optimism of the 1960s that intelligent computers would be widespread by 2001. But 2001 has been and gone, and there is still no sign of a HAL-like computer. Individual systems can play chess or transcribe speech, but a general theory of machine intelligence still remains elusive. It may be, however, that the comparison with HAL no longer seems quite so important, and AI can now be judged by what it can do, rather than by how well it matches up to a 30-year-old science-fiction film. 'People are beginning to realise that there are impressive things that these systems can do. ' says Dr Leake hopefully. | The problems waiting to be solved by AI have not changed since 1967. | contradiction |
id_6454 | The Return of Artificial Intelligence. It is becoming acceptable again to talk of computers performing. human tasks such as problem-solving and pattern-recognition. After years in the wilderness, the term 'artificial intelligence' (AI) seems poised to make a comeback. AI was big in the 1980s but vanished in the 1990s. It re-entered public consciousness with the release of AI, a movie about a robot boy. This has ignited public debate about AI, but the term is also being used once more within the computer industry. Researchers, executives and marketing people are now using the expression without irony or inverted commas. And it is not always hype. The term is being applied, with some justification, to products that depend on technology that was originally developed by AI researchers. Admittedly, the rehabilitation of the term has a long way to go, and some firms still prefer to avoid using it. But the fact that others are starting to use it again suggests that AI has moved on from being seen as an over-ambitious and under-achieving field of research. The field was launched, and the term 'artificial intelligence' coined, at a conference in 1956, by a group of researchers that included Marvin Minsky, John McCarthy, Herbert Simon and Alan Newell, all of whom went on to become leading figures in the field. The expression provided an attractive but informative name for a research programme that encompassed such previously disparate fields as operations research, cybernetics, logic and computer science. The goal they shared was an attempt to capture or mimic human abilities using machines. That said, different groups of researchers attacked different problems, from speech recognition to chess playing, in different ways; AI unified the field in name only. But it was a term that captured the public imagination. Most researchers agree that AI peaked around 1985. A public reared on science-fiction movies and excited by the growing power of computers had high expectations. For years, AI researchers had implied that a breakthrough was just around the corner. Marvin Minsky said in 1967 that within a generation the problem of creating 'artificial intelligence' would be substantially solved. Prototypes of medical-diagnosis programs and speech recognition software appeared to be making progress. It proved to be a false dawn. Thinking computers and household robots failed to materialise, and a backlash ensued. 'There was undue optimism in the early 1980s, ' says David Leake, a researcher at Indiana University. 'Then when people realised these were hard problems, there was retrenchment. By the late 1980s, the term AI was being avoided by many researchers, who opted instead to align themselves with specific sub-disciplines such as neural networks, agent technology, case-based reasoning, and so on. ' Ironically, in some ways AI was a victim of its own success. Whenever an apparently mundane problem was solved, such as building a system that could land an aircraft unattended, the problem was deemed not to have been AI in the first place. 'If it works, it can't be AI, ' as Dr Leake characterises it. The effect of repeatedly moving the goal-posts in this way was that AI came to refer to 'blue-sky' research that was still years away from commercialisation. Researchers joked that AI stood for 'almost implemented'. Meanwhile, the technologies that made it onto the market, such as speech recognition, language translation and decision-support software, were no longer regarded as AI. 
Yet all three once fell well within the umbrella of AI research. But the tide may now be turning, according to Dr Leake. HNC Software of San Diego, backed by a government agency, reckon that their new approach to artificial intelligence is the most powerful and promising approach ever discovered. HNC claim that their system, based on a cluster of 30 processors, could be used to spot camouflaged vehicles on a battlefield or extract a voice signal from a noisy background tasks humans can do well, but computers cannot. 'Whether or not their technology lives up to the claims made for it, the fact that HNC are emphasising the use of AI is itself an interesting development, ' says Dr Leake. Another factor that may boost the prospects for AI in the near future is that investors are now looking for firms using clever technology, rather than just a clever business model, to differentiate themselves. In particular, the problem of information overload, exacerbated by the growth of e-mail and the explosion in the number of web pages, means there are plenty of opportunities for new technologies to help filter and categorise information classic AI problems. That may mean that more artificial intelligence companies will start to emerge to meet this challenge. The 1969 film, 2001: A Space Odyssey, featured an intelligent computer called HAL 9000. As well as understanding and speaking English, HAL could play chess and even learned to lipread. HAL thus encapsulated the optimism of the 1960s that intelligent computers would be widespread by 2001. But 2001 has been and gone, and there is still no sign of a HAL-like computer. Individual systems can play chess or transcribe speech, but a general theory of machine intelligence still remains elusive. It may be, however, that the comparison with HAL no longer seems quite so important, and AI can now be judged by what it can do, rather than by how well it matches up to a 30-year-old science-fiction film. 'People are beginning to realise that there are impressive things that these systems can do. ' says Dr Leake hopefully. | Applications of AI have already had a degree of success. | entailment |
id_6455 | The Ring of Fire is an area of frequent seismic and volcanic activity that encircles the Pacific basin. Approximately 90% of the worlds earthquakes occur in this zone, including the largest ever recorded Chiles 1960 Valdivia earthquake. There are an estimated 452 volcanoes 75% of the worlds total located in this 40,000 km belt. On its Eastern side, the Ring of Fire stretches along South and Central America up to Canada and Alaska, and includes Californias well-known San Andreas fault zone. To the west of the Pacific, it extends from Russia down to Japan, the Philippines, Indonesia and New Zealand. The Ring of Fire finishes in Antarctica, which is home to Mount Erebus, the worlds southern-most active volcano. The volcanic eruptions and earthquakes that characterise the Ring of Fire can be explained by plate tectonics, a unifying geological theory first expounded in the 1960s. The Earths surface is comprised of tectonic plates that change size and shift over time. Earthquakes are caused when plates that are pushing against each other suddenly slip. Volcanoes occur only when two adjacent plates converge and one plate slides under the other, a process known as subduction. As it is pushed deeper into the Earth, the subducted plate encounters high temperatures and eventually molten rock rises to the surface and erupts. | Molten rock rises during a volcanic eruption. | entailment |
id_6456 | The Ring of Fire is an area of frequent seismic and volcanic activity that encircles the Pacific basin. Approximately 90% of the worlds earthquakes occur in this zone, including the largest ever recorded Chiles 1960 Valdivia earthquake. There are an estimated 452 volcanoes 75% of the worlds total located in this 40,000 km belt. On its Eastern side, the Ring of Fire stretches along South and Central America up to Canada and Alaska, and includes Californias well-known San Andreas fault zone. To the west of the Pacific, it extends from Russia down to Japan, the Philippines, Indonesia and New Zealand. The Ring of Fire finishes in Antarctica, which is home to Mount Erebus, the worlds southern-most active volcano. The volcanic eruptions and earthquakes that characterise the Ring of Fire can be explained by plate tectonics, a unifying geological theory first expounded in the 1960s. The Earths surface is comprised of tectonic plates that change size and shift over time. Earthquakes are caused when plates that are pushing against each other suddenly slip. Volcanoes occur only when two adjacent plates converge and one plate slides under the other, a process known as subduction. As it is pushed deeper into the Earth, the subducted plate encounters high temperatures and eventually molten rock rises to the surface and erupts. | Mexico is located along the eastern side of the Ring of Fire. | neutral |
id_6457 | The Ring of Fire is an area of frequent seismic and volcanic activity that encircles the Pacific basin. Approximately 90% of the worlds earthquakes occur in this zone, including the largest ever recorded Chiles 1960 Valdivia earthquake. There are an estimated 452 volcanoes 75% of the worlds total located in this 40,000 km belt. On its Eastern side, the Ring of Fire stretches along South and Central America up to Canada and Alaska, and includes Californias well-known San Andreas fault zone. To the west of the Pacific, it extends from Russia down to Japan, the Philippines, Indonesia and New Zealand. The Ring of Fire finishes in Antarctica, which is home to Mount Erebus, the worlds southern-most active volcano. The volcanic eruptions and earthquakes that characterise the Ring of Fire can be explained by plate tectonics, a unifying geological theory first expounded in the 1960s. The Earths surface is comprised of tectonic plates that change size and shift over time. Earthquakes are caused when plates that are pushing against each other suddenly slip. Volcanoes occur only when two adjacent plates converge and one plate slides under the other, a process known as subduction. As it is pushed deeper into the Earth, the subducted plate encounters high temperatures and eventually molten rock rises to the surface and erupts. | Subduction occurs whenever two tectonic plates move in opposite directions. | neutral |
id_6458 | The Ring of Fire is an area of frequent seismic and volcanic activity that encircles the Pacific basin. Approximately 90% of the worlds earthquakes occur in this zone, including the largest ever recorded Chiles 1960 Valdivia earthquake. There are an estimated 452 volcanoes 75% of the worlds total located in this 40,000 km belt. On its Eastern side, the Ring of Fire stretches along South and Central America up to Canada and Alaska, and includes Californias well-known San Andreas fault zone. To the west of the Pacific, it extends from Russia down to Japan, the Philippines, Indonesia and New Zealand. The Ring of Fire finishes in Antarctica, which is home to Mount Erebus, the worlds southern-most active volcano. The volcanic eruptions and earthquakes that characterise the Ring of Fire can be explained by plate tectonics, a unifying geological theory first expounded in the 1960s. The Earths surface is comprised of tectonic plates that change size and shift over time. Earthquakes are caused when plates that are pushing against each other suddenly slip. Volcanoes occur only when two adjacent plates converge and one plate slides under the other, a process known as subduction. As it is pushed deeper into the Earth, the subducted plate encounters high temperatures and eventually molten rock rises to the surface and erupts. | There are no volcanoes further south than Mount Erebus. | neutral |
id_6459 | The Ring of Fire is an area of frequent seismic and volcanic activity that encircles the Pacific basin. Approximately 90% of the worlds earthquakes occur in this zone, including the largest ever recorded Chiles 1960 Valdivia earthquake. There are an estimated 452 volcanoes 75% of the worlds total located in this 40,000 km belt. On its Eastern side, the Ring of Fire stretches along South and Central America up to Canada and Alaska, and includes Californias well-known San Andreas fault zone. To the west of the Pacific, it extends from Russia down to Japan, the Philippines, Indonesia and New Zealand. The Ring of Fire finishes in Antarctica, which is home to Mount Erebus, the worlds southern-most active volcano. The volcanic eruptions and earthquakes that characterise the Ring of Fire can be explained by plate tectonics, a unifying geological theory first expounded in the 1960s. The Earths surface is comprised of tectonic plates that change size and shift over time. Earthquakes are caused when plates that are pushing against each other suddenly slip. Volcanoes occur only when two adjacent plates converge and one plate slides under the other, a process known as subduction. As it is pushed deeper into the Earth, the subducted plate encounters high temperatures and eventually molten rock rises to the surface and erupts. | The worlds most severe earthquakes and volcanic eruptions occur within the Ring of Fire. | neutral |
id_6460 | The Risks of Cigarette Smoke Discovered in the early 1800s and named nicotianine, the oily essence now called nicotine is the main active insredient of tobacco. Nicotine, however, is only a small component of cigarette smoke, which contains more than 4,700 chemical compounds, including 43 cancer-causing substances. In recent times, scientific research has been providing evidence that years of cigarette smoking vastly increases the risk of developing fatal medical conditions. In addition to being responsible for more than 85 per cent of lung cancers, smoking is associated with cancers of, amongst others, the mouth, stomach and kidneys, and is thought to cause about 14 per cent of leukemia and cervical cancers. In 1990, smoking caused more than 84,000 deaths, mainly resulting from such problems as pneumonia, bronchitis and influenza. Smoking, it is believed, is responsible for 30 per cent of all deaths from cancer and clearly represents the most important preventable cause of cancer in countries like the United States today. Passive smoking, the breathing in of the side-stream smoke from the burning of tobacco between puffs or of the smoke exhaled by a smoker, also causes a serious health risk. A report published in 1992 by the US Environmental Protection Agency (EPA) emphasized the health dangers, especially from side-stream smoke. This type of smoke contains more, smaller particles and is therefore more likely to be deposited deep in the lungs. On the basis of this report, the EPA has classified environmental tobacco smoke in the highest risk category for causing cancer. As an illustration of the health risks, in the case of a married couple where one partner is a smoker and one a non-smoker, the latter is believed to have a 30 per cent higher risk of death from heart disease because of passive smoking. The risk of lung cancer also increases over the years of exposure and the figure jumps to 80 per cent if the spouse has been smoking four packs a day for 20 years. It has been calculated that 17 per cent of cases of lung cancer can be attributed to high levels of exposure to second- hand tobacco smoke during childhood and adolescence. A more recent study by researchers at the University of California at San Francisco (UCSF) has shown that second-hand cigarette smoke does more harm to non-smokers than to smokers. Leaving aside the philosophical question of whether anyone should have to breathe someone elses cigarette smoke, the report suggests that the smoke experienced by many people in their daily lives is enough to produce substantial adverse effects on a persons heart and lungs. The report, published in the Journal of the American Medical Association (AMA), was based on the researchers own earlier research but also includes a review of studies over the past few years. The American Medical Association represents about half of all US doctors and is a strong opponent of smoking. The study suggests that people who smoke cigarettes are continually damaging their cardiovascular system, which adapts in order to compensate for the effects of smoking. It further states that people who do not smoke do not have the benefit of their system adapting to the smoke inhalation. Consequently, the effects of passive smoking are far greater on non-smokers than on smokers. This report emphasizes that cancer is not caused by a single element in cigarette smoke; harmful effects to health are caused by many components. 
Carbon monoxide, for example, competes with oxygen in red blood cells and interferes with the blood's ability to deliver life-giving oxygen to the heart. Nicotine and other toxins in cigarette smoke activate small blood cells called platelets, which increases the likelihood of blood clots, thereby affecting blood circulation throughout the body. The researchers criticize the practice of some scientific consultants who work with the tobacco industry for assuming that cigarette smoke has the same impact on smokers as it does on non-smokers. They argue that those scientists are underestimating the damage done by passive smoking and, in support of their recent findings, cite some previous research which points to passive smoking as the cause for between 30,000 and 60,000 deaths from heart attacks each year in the United States. This means that passive smoking is the third most preventable cause of death after active smoking and alcohol-related diseases. The study argues that the type of action needed against passive smoking should be similar to that being taken against illegal drugs and AIDS (SIDA). The UCSF researchers maintain that the simplest and most cost-effective action is to establish smoke-free work places, schools and public places. | Thirty per cent of deaths in the United States are caused by smoking-related diseases. | contradiction
id_6461 | The Risks of Cigarette Smoke Discovered in the early 1800s and named nicotianine, the oily essence now called nicotine is the main active insredient of tobacco. Nicotine, however, is only a small component of cigarette smoke, which contains more than 4,700 chemical compounds, including 43 cancer-causing substances. In recent times, scientific research has been providing evidence that years of cigarette smoking vastly increases the risk of developing fatal medical conditions. In addition to being responsible for more than 85 per cent of lung cancers, smoking is associated with cancers of, amongst others, the mouth, stomach and kidneys, and is thought to cause about 14 per cent of leukemia and cervical cancers. In 1990, smoking caused more than 84,000 deaths, mainly resulting from such problems as pneumonia, bronchitis and influenza. Smoking, it is believed, is responsible for 30 per cent of all deaths from cancer and clearly represents the most important preventable cause of cancer in countries like the United States today. Passive smoking, the breathing in of the side-stream smoke from the burning of tobacco between puffs or of the smoke exhaled by a smoker, also causes a serious health risk. A report published in 1992 by the US Environmental Protection Agency (EPA) emphasized the health dangers, especially from side-stream smoke. This type of smoke contains more, smaller particles and is therefore more likely to be deposited deep in the lungs. On the basis of this report, the EPA has classified environmental tobacco smoke in the highest risk category for causing cancer. As an illustration of the health risks, in the case of a married couple where one partner is a smoker and one a non-smoker, the latter is believed to have a 30 per cent higher risk of death from heart disease because of passive smoking. The risk of lung cancer also increases over the years of exposure and the figure jumps to 80 per cent if the spouse has been smoking four packs a day for 20 years. It has been calculated that 17 per cent of cases of lung cancer can be attributed to high levels of exposure to second- hand tobacco smoke during childhood and adolescence. A more recent study by researchers at the University of California at San Francisco (UCSF) has shown that second-hand cigarette smoke does more harm to non-smokers than to smokers. Leaving aside the philosophical question of whether anyone should have to breathe someone elses cigarette smoke, the report suggests that the smoke experienced by many people in their daily lives is enough to produce substantial adverse effects on a persons heart and lungs. The report, published in the Journal of the American Medical Association (AMA), was based on the researchers own earlier research but also includes a review of studies over the past few years. The American Medical Association represents about half of all US doctors and is a strong opponent of smoking. The study suggests that people who smoke cigarettes are continually damaging their cardiovascular system, which adapts in order to compensate for the effects of smoking. It further states that people who do not smoke do not have the benefit of their system adapting to the smoke inhalation. Consequently, the effects of passive smoking are far greater on non-smokers than on smokers. This report emphasizes that cancer is not caused by a single element in cigarette smoke; harmful effects to health are caused by many components. 
Carbon monoxide, for example, competes with oxygen in red blood cells and interferes with the blood's ability to deliver life-giving oxygen to the heart. Nicotine and other toxins in cigarette smoke activate small blood cells called platelets, which increases the likelihood of blood clots, thereby affecting blood circulation throughout the body. The researchers criticize the practice of some scientific consultants who work with the tobacco industry for assuming that cigarette smoke has the same impact on smokers as it does on non-smokers. They argue that those scientists are underestimating the damage done by passive smoking and, in support of their recent findings, cite some previous research which points to passive smoking as the cause for between 30,000 and 60,000 deaths from heart attacks each year in the United States. This means that passive smoking is the third most preventable cause of death after active smoking and alcohol-related diseases. The study argues that the type of action needed against passive smoking should be similar to that being taken against illegal drugs and AIDS (SIDA). The UCSF researchers maintain that the simplest and most cost-effective action is to establish smoke-free work places, schools and public places. | If one partner in a marriage smokes, the other is likely to take up smoking. | neutral
id_6462 | The Risks of Cigarette Smoke Discovered in the early 1800s and named nicotianine, the oily essence now called nicotine is the main active insredient of tobacco. Nicotine, however, is only a small component of cigarette smoke, which contains more than 4,700 chemical compounds, including 43 cancer-causing substances. In recent times, scientific research has been providing evidence that years of cigarette smoking vastly increases the risk of developing fatal medical conditions. In addition to being responsible for more than 85 per cent of lung cancers, smoking is associated with cancers of, amongst others, the mouth, stomach and kidneys, and is thought to cause about 14 per cent of leukemia and cervical cancers. In 1990, smoking caused more than 84,000 deaths, mainly resulting from such problems as pneumonia, bronchitis and influenza. Smoking, it is believed, is responsible for 30 per cent of all deaths from cancer and clearly represents the most important preventable cause of cancer in countries like the United States today. Passive smoking, the breathing in of the side-stream smoke from the burning of tobacco between puffs or of the smoke exhaled by a smoker, also causes a serious health risk. A report published in 1992 by the US Environmental Protection Agency (EPA) emphasized the health dangers, especially from side-stream smoke. This type of smoke contains more, smaller particles and is therefore more likely to be deposited deep in the lungs. On the basis of this report, the EPA has classified environmental tobacco smoke in the highest risk category for causing cancer. As an illustration of the health risks, in the case of a married couple where one partner is a smoker and one a non-smoker, the latter is believed to have a 30 per cent higher risk of death from heart disease because of passive smoking. The risk of lung cancer also increases over the years of exposure and the figure jumps to 80 per cent if the spouse has been smoking four packs a day for 20 years. It has been calculated that 17 per cent of cases of lung cancer can be attributed to high levels of exposure to second- hand tobacco smoke during childhood and adolescence. A more recent study by researchers at the University of California at San Francisco (UCSF) has shown that second-hand cigarette smoke does more harm to non-smokers than to smokers. Leaving aside the philosophical question of whether anyone should have to breathe someone elses cigarette smoke, the report suggests that the smoke experienced by many people in their daily lives is enough to produce substantial adverse effects on a persons heart and lungs. The report, published in the Journal of the American Medical Association (AMA), was based on the researchers own earlier research but also includes a review of studies over the past few years. The American Medical Association represents about half of all US doctors and is a strong opponent of smoking. The study suggests that people who smoke cigarettes are continually damaging their cardiovascular system, which adapts in order to compensate for the effects of smoking. It further states that people who do not smoke do not have the benefit of their system adapting to the smoke inhalation. Consequently, the effects of passive smoking are far greater on non-smokers than on smokers. This report emphasizes that cancer is not caused by a single element in cigarette smoke; harmful effects to health are caused by many components. 
Carbon monoxide, for example, competes with oxygen in red blood cells and interferes with the blood's ability to deliver life-giving oxygen to the heart. Nicotine and other toxins in cigarette smoke activate small blood cells called platelets, which increases the likelihood of blood clots, thereby affecting blood circulation throughout the body. The researchers criticize the practice of some scientific consultants who work with the tobacco industry for assuming that cigarette smoke has the same impact on smokers as it does on non-smokers. They argue that those scientists are underestimating the damage done by passive smoking and, in support of their recent findings, cite some previous research which points to passive smoking as the cause for between 30,000 and 60,000 deaths from heart attacks each year in the United States. This means that passive smoking is the third most preventable cause of death after active smoking and alcohol-related diseases. The study argues that the type of action needed against passive smoking should be similar to that being taken against illegal drugs and AIDS (SIDA). The UCSF researchers maintain that the simplest and most cost-effective action is to establish smoke-free work places, schools and public places. | Opponents of smoking financed the UCSF study. | neutral
id_6463 | The Risks of Cigarette Smoke Discovered in the early 1800s and named nicotianine, the oily essence now called nicotine is the main active insredient of tobacco. Nicotine, however, is only a small component of cigarette smoke, which contains more than 4,700 chemical compounds, including 43 cancer-causing substances. In recent times, scientific research has been providing evidence that years of cigarette smoking vastly increases the risk of developing fatal medical conditions. In addition to being responsible for more than 85 per cent of lung cancers, smoking is associated with cancers of, amongst others, the mouth, stomach and kidneys, and is thought to cause about 14 per cent of leukemia and cervical cancers. In 1990, smoking caused more than 84,000 deaths, mainly resulting from such problems as pneumonia, bronchitis and influenza. Smoking, it is believed, is responsible for 30 per cent of all deaths from cancer and clearly represents the most important preventable cause of cancer in countries like the United States today. Passive smoking, the breathing in of the side-stream smoke from the burning of tobacco between puffs or of the smoke exhaled by a smoker, also causes a serious health risk. A report published in 1992 by the US Environmental Protection Agency (EPA) emphasized the health dangers, especially from side-stream smoke. This type of smoke contains more, smaller particles and is therefore more likely to be deposited deep in the lungs. On the basis of this report, the EPA has classified environmental tobacco smoke in the highest risk category for causing cancer. As an illustration of the health risks, in the case of a married couple where one partner is a smoker and one a non-smoker, the latter is believed to have a 30 per cent higher risk of death from heart disease because of passive smoking. The risk of lung cancer also increases over the years of exposure and the figure jumps to 80 per cent if the spouse has been smoking four packs a day for 20 years. It has been calculated that 17 per cent of cases of lung cancer can be attributed to high levels of exposure to second- hand tobacco smoke during childhood and adolescence. A more recent study by researchers at the University of California at San Francisco (UCSF) has shown that second-hand cigarette smoke does more harm to non-smokers than to smokers. Leaving aside the philosophical question of whether anyone should have to breathe someone elses cigarette smoke, the report suggests that the smoke experienced by many people in their daily lives is enough to produce substantial adverse effects on a persons heart and lungs. The report, published in the Journal of the American Medical Association (AMA), was based on the researchers own earlier research but also includes a review of studies over the past few years. The American Medical Association represents about half of all US doctors and is a strong opponent of smoking. The study suggests that people who smoke cigarettes are continually damaging their cardiovascular system, which adapts in order to compensate for the effects of smoking. It further states that people who do not smoke do not have the benefit of their system adapting to the smoke inhalation. Consequently, the effects of passive smoking are far greater on non-smokers than on smokers. This report emphasizes that cancer is not caused by a single element in cigarette smoke; harmful effects to health are caused by many components. 
Carbon monoxide, for example, competes with oxygen in red blood cells and interferes with the blood's ability to deliver life-giving oxygen to the heart. Nicotine and other toxins in cigarette smoke activate small blood cells called platelets, which increases the likelihood of blood clots, thereby affecting blood circulation throughout the body. The researchers criticize the practice of some scientific consultants who work with the tobacco industry for assuming that cigarette smoke has the same impact on smokers as it does on non-smokers. They argue that those scientists are underestimating the damage done by passive smoking and, in support of their recent findings, cite some previous research which points to passive smoking as the cause for between 30,000 and 60,000 deaths from heart attacks each year in the United States. This means that passive smoking is the third most preventable cause of death after active smoking and alcohol-related diseases. The study argues that the type of action needed against passive smoking should be similar to that being taken against illegal drugs and AIDS (SIDA). The UCSF researchers maintain that the simplest and most cost-effective action is to establish smoke-free work places, schools and public places. | Teenagers whose parents smoke are at risk of getting lung cancer at some time during their lives. | entailment
id_6464 | The Risks of Cigarette Smoke. Discovered in the early 1800s and named nicotlanine, the only essence now called nicotine is the main active ingredient of tobacco. Nicotine, however, is only a small component of cigarette smoke, which contains more than 4,700 chemical compounds, including 43 cancer-causing substances. In recent times, scientific research has been providing evidence that years of cigarette smoking vastly increases the risk of developing fatal medical conditions. In addition to being responsible for more than 85 per cent of lung cancers, smoking is associated with cancers of, amongst others, the mouth, stomach and kidneys, and is thought to cause about 14 per cent of leukemia and cervical cancers. In 1990, smoking caused more than 84,000 deaths, mainly resulting from such problems as pneumonia, bronchitis and influenza. Smoking, it is believed, is responsible for 30 per cent of all deaths from cancer and clearly represents the most important preventable cause of cancer in countries like the United States today. Passive smoking, the breathing in of the side-stream smoke from the burning of tobacco between puffs or of the smoke exhaled by a smoker, also causes a serious health risk. A report published in 1992 by the US Environmental Protection Agency (ERA) emphasized the health dangers, especially from side-stream smoke. This type of smoke contains more, smaller particles and is therefore more likely to be deposited deep in the lungs. On the basis of this report, the EPA has classified environmental tobacco smoke in the highest risk category for causing cancer. As an illustration of the health risks, in the case of a married couple where one partner is a smoker and one a non-smoker, the latter is believed to have a 30 per cent higher risk of death from heart disease because of passive smoking. The risk of lung cancer also increases over the years of exposure and the figure jumps to 80 per cent if the spouse has been smoking four packs a day for 20 years. It has been calculated that 17 per cent of cases of lung cancer can be attributed to high levels of exposure to second-hand tobacco smoke during childhood and adolescence. A more recent study by researchers at the University of California at San Francisco (UCSF) has shown that second-hand cigarette smoke does more harm to non-smokers than to smokers. Leaving aside the philosophical question of whether anyone should have to breathe someone else's cigarette smoke, the report suggests that the smoke experienced by many people in their daily lives is enough to produce substantial adverse effects on a person's heart and lungs. The report, published in the Journal of the American Medical Association (AMA), was based on the researchers' own earlier research but also includes a review of studies over the past few years. The American Medical Association represents about half of all US doctors and is a strong opponent of smoking. The study suggests that people who smoke cigarettes are continually damaging their cardiovascular system, which adapts in order to compensate for the effects of smoking. It further states that people who do not smoke do not have the benefit of their system adapting to the smoke inhalation. Consequently, the effects of passive smoking are far greater on non-smokers than on smokers. This report emphasizes that cancer is not caused by a single element in cigarette smoke; harmful effects to health are caused by many components. 
Carbon monoxide, for example, competes with oxygen in red blood cells and interferes with the bloods ability to deliver life-giving oxygen to the heart. Nicotine and other toxins in cigarette smoke activate small blood cells called platelets, which increases the likelihood of blood dots, thereby affecting blood circulation throughout the body. The researchers criticize the practice of some scientific consultants who work with the tobacco industry for assuming that cigarette smoke has the same impact on smokers as it does on non-smokers. They argue that those scientists are underestimating the damage done by passive smoking and, in support of their recent findings, cite some previous research which points to passive smoking as the cause for between 30,000 and 60,000 deaths from heart attacks each year in the United States. This means that passive smoking is the third most preventable cause of death after active smoking and alcohol-related diseases. The study argues that the type of action needed against passive smoking should be similar to that being taken against illegal drugs and AIDS (SIDA). The UCSF researchers maintain that the simplest and most cost-effective action is to establish smoke-free work places, schools and public places. | Thirty per cent of deaths in the United States are caused by smoking-related diseases. | contradiction |
id_6465 | The Risks of Cigarette Smoke. Discovered in the early 1800s and named nicotlanine, the only essence now called nicotine is the main active ingredient of tobacco. Nicotine, however, is only a small component of cigarette smoke, which contains more than 4,700 chemical compounds, including 43 cancer-causing substances. In recent times, scientific research has been providing evidence that years of cigarette smoking vastly increases the risk of developing fatal medical conditions. In addition to being responsible for more than 85 per cent of lung cancers, smoking is associated with cancers of, amongst others, the mouth, stomach and kidneys, and is thought to cause about 14 per cent of leukemia and cervical cancers. In 1990, smoking caused more than 84,000 deaths, mainly resulting from such problems as pneumonia, bronchitis and influenza. Smoking, it is believed, is responsible for 30 per cent of all deaths from cancer and clearly represents the most important preventable cause of cancer in countries like the United States today. Passive smoking, the breathing in of the side-stream smoke from the burning of tobacco between puffs or of the smoke exhaled by a smoker, also causes a serious health risk. A report published in 1992 by the US Environmental Protection Agency (ERA) emphasized the health dangers, especially from side-stream smoke. This type of smoke contains more, smaller particles and is therefore more likely to be deposited deep in the lungs. On the basis of this report, the EPA has classified environmental tobacco smoke in the highest risk category for causing cancer. As an illustration of the health risks, in the case of a married couple where one partner is a smoker and one a non-smoker, the latter is believed to have a 30 per cent higher risk of death from heart disease because of passive smoking. The risk of lung cancer also increases over the years of exposure and the figure jumps to 80 per cent if the spouse has been smoking four packs a day for 20 years. It has been calculated that 17 per cent of cases of lung cancer can be attributed to high levels of exposure to second-hand tobacco smoke during childhood and adolescence. A more recent study by researchers at the University of California at San Francisco (UCSF) has shown that second-hand cigarette smoke does more harm to non-smokers than to smokers. Leaving aside the philosophical question of whether anyone should have to breathe someone else's cigarette smoke, the report suggests that the smoke experienced by many people in their daily lives is enough to produce substantial adverse effects on a person's heart and lungs. The report, published in the Journal of the American Medical Association (AMA), was based on the researchers' own earlier research but also includes a review of studies over the past few years. The American Medical Association represents about half of all US doctors and is a strong opponent of smoking. The study suggests that people who smoke cigarettes are continually damaging their cardiovascular system, which adapts in order to compensate for the effects of smoking. It further states that people who do not smoke do not have the benefit of their system adapting to the smoke inhalation. Consequently, the effects of passive smoking are far greater on non-smokers than on smokers. This report emphasizes that cancer is not caused by a single element in cigarette smoke; harmful effects to health are caused by many components. 
Carbon monoxide, for example, competes with oxygen in red blood cells and interferes with the bloods ability to deliver life-giving oxygen to the heart. Nicotine and other toxins in cigarette smoke activate small blood cells called platelets, which increases the likelihood of blood dots, thereby affecting blood circulation throughout the body. The researchers criticize the practice of some scientific consultants who work with the tobacco industry for assuming that cigarette smoke has the same impact on smokers as it does on non-smokers. They argue that those scientists are underestimating the damage done by passive smoking and, in support of their recent findings, cite some previous research which points to passive smoking as the cause for between 30,000 and 60,000 deaths from heart attacks each year in the United States. This means that passive smoking is the third most preventable cause of death after active smoking and alcohol-related diseases. The study argues that the type of action needed against passive smoking should be similar to that being taken against illegal drugs and AIDS (SIDA). The UCSF researchers maintain that the simplest and most cost-effective action is to establish smoke-free work places, schools and public places. | Opponents of smoking financed the UCSF study. | neutral |
id_6466 | The Risks of Cigarette Smoke. Discovered in the early 1800s and named nicotlanine, the only essence now called nicotine is the main active ingredient of tobacco. Nicotine, however, is only a small component of cigarette smoke, which contains more than 4,700 chemical compounds, including 43 cancer-causing substances. In recent times, scientific research has been providing evidence that years of cigarette smoking vastly increases the risk of developing fatal medical conditions. In addition to being responsible for more than 85 per cent of lung cancers, smoking is associated with cancers of, amongst others, the mouth, stomach and kidneys, and is thought to cause about 14 per cent of leukemia and cervical cancers. In 1990, smoking caused more than 84,000 deaths, mainly resulting from such problems as pneumonia, bronchitis and influenza. Smoking, it is believed, is responsible for 30 per cent of all deaths from cancer and clearly represents the most important preventable cause of cancer in countries like the United States today. Passive smoking, the breathing in of the side-stream smoke from the burning of tobacco between puffs or of the smoke exhaled by a smoker, also causes a serious health risk. A report published in 1992 by the US Environmental Protection Agency (ERA) emphasized the health dangers, especially from side-stream smoke. This type of smoke contains more, smaller particles and is therefore more likely to be deposited deep in the lungs. On the basis of this report, the EPA has classified environmental tobacco smoke in the highest risk category for causing cancer. As an illustration of the health risks, in the case of a married couple where one partner is a smoker and one a non-smoker, the latter is believed to have a 30 per cent higher risk of death from heart disease because of passive smoking. The risk of lung cancer also increases over the years of exposure and the figure jumps to 80 per cent if the spouse has been smoking four packs a day for 20 years. It has been calculated that 17 per cent of cases of lung cancer can be attributed to high levels of exposure to second-hand tobacco smoke during childhood and adolescence. A more recent study by researchers at the University of California at San Francisco (UCSF) has shown that second-hand cigarette smoke does more harm to non-smokers than to smokers. Leaving aside the philosophical question of whether anyone should have to breathe someone else's cigarette smoke, the report suggests that the smoke experienced by many people in their daily lives is enough to produce substantial adverse effects on a person's heart and lungs. The report, published in the Journal of the American Medical Association (AMA), was based on the researchers' own earlier research but also includes a review of studies over the past few years. The American Medical Association represents about half of all US doctors and is a strong opponent of smoking. The study suggests that people who smoke cigarettes are continually damaging their cardiovascular system, which adapts in order to compensate for the effects of smoking. It further states that people who do not smoke do not have the benefit of their system adapting to the smoke inhalation. Consequently, the effects of passive smoking are far greater on non-smokers than on smokers. This report emphasizes that cancer is not caused by a single element in cigarette smoke; harmful effects to health are caused by many components. 
Carbon monoxide, for example, competes with oxygen in red blood cells and interferes with the bloods ability to deliver life-giving oxygen to the heart. Nicotine and other toxins in cigarette smoke activate small blood cells called platelets, which increases the likelihood of blood dots, thereby affecting blood circulation throughout the body. The researchers criticize the practice of some scientific consultants who work with the tobacco industry for assuming that cigarette smoke has the same impact on smokers as it does on non-smokers. They argue that those scientists are underestimating the damage done by passive smoking and, in support of their recent findings, cite some previous research which points to passive smoking as the cause for between 30,000 and 60,000 deaths from heart attacks each year in the United States. This means that passive smoking is the third most preventable cause of death after active smoking and alcohol-related diseases. The study argues that the type of action needed against passive smoking should be similar to that being taken against illegal drugs and AIDS (SIDA). The UCSF researchers maintain that the simplest and most cost-effective action is to establish smoke-free work places, schools and public places. | Teenagers whose parents smoke are at risk of getting lung cancer at some time during their lives. | entailment |
id_6467 | The Risks of Cigarette Smoke. Discovered in the early 1800s and named nicotlanine, the only essence now called nicotine is the main active ingredient of tobacco. Nicotine, however, is only a small component of cigarette smoke, which contains more than 4,700 chemical compounds, including 43 cancer-causing substances. In recent times, scientific research has been providing evidence that years of cigarette smoking vastly increases the risk of developing fatal medical conditions. In addition to being responsible for more than 85 per cent of lung cancers, smoking is associated with cancers of, amongst others, the mouth, stomach and kidneys, and is thought to cause about 14 per cent of leukemia and cervical cancers. In 1990, smoking caused more than 84,000 deaths, mainly resulting from such problems as pneumonia, bronchitis and influenza. Smoking, it is believed, is responsible for 30 per cent of all deaths from cancer and clearly represents the most important preventable cause of cancer in countries like the United States today. Passive smoking, the breathing in of the side-stream smoke from the burning of tobacco between puffs or of the smoke exhaled by a smoker, also causes a serious health risk. A report published in 1992 by the US Environmental Protection Agency (ERA) emphasized the health dangers, especially from side-stream smoke. This type of smoke contains more, smaller particles and is therefore more likely to be deposited deep in the lungs. On the basis of this report, the EPA has classified environmental tobacco smoke in the highest risk category for causing cancer. As an illustration of the health risks, in the case of a married couple where one partner is a smoker and one a non-smoker, the latter is believed to have a 30 per cent higher risk of death from heart disease because of passive smoking. The risk of lung cancer also increases over the years of exposure and the figure jumps to 80 per cent if the spouse has been smoking four packs a day for 20 years. It has been calculated that 17 per cent of cases of lung cancer can be attributed to high levels of exposure to second-hand tobacco smoke during childhood and adolescence. A more recent study by researchers at the University of California at San Francisco (UCSF) has shown that second-hand cigarette smoke does more harm to non-smokers than to smokers. Leaving aside the philosophical question of whether anyone should have to breathe someone else's cigarette smoke, the report suggests that the smoke experienced by many people in their daily lives is enough to produce substantial adverse effects on a person's heart and lungs. The report, published in the Journal of the American Medical Association (AMA), was based on the researchers' own earlier research but also includes a review of studies over the past few years. The American Medical Association represents about half of all US doctors and is a strong opponent of smoking. The study suggests that people who smoke cigarettes are continually damaging their cardiovascular system, which adapts in order to compensate for the effects of smoking. It further states that people who do not smoke do not have the benefit of their system adapting to the smoke inhalation. Consequently, the effects of passive smoking are far greater on non-smokers than on smokers. This report emphasizes that cancer is not caused by a single element in cigarette smoke; harmful effects to health are caused by many components. 
Carbon monoxide, for example, competes with oxygen in red blood cells and interferes with the blood's ability to deliver life-giving oxygen to the heart. Nicotine and other toxins in cigarette smoke activate small blood cells called platelets, which increases the likelihood of blood clots, thereby affecting blood circulation throughout the body. The researchers criticize the practice of some scientific consultants who work with the tobacco industry for assuming that cigarette smoke has the same impact on smokers as it does on non-smokers. They argue that those scientists are underestimating the damage done by passive smoking and, in support of their recent findings, cite some previous research which points to passive smoking as the cause of between 30,000 and 60,000 deaths from heart attacks each year in the United States. This means that passive smoking is the third most preventable cause of death after active smoking and alcohol-related diseases. The study argues that the type of action needed against passive smoking should be similar to that being taken against illegal drugs and AIDS (SIDA). The UCSF researchers maintain that the simplest and most cost-effective action is to establish smoke-free work places, schools and public places. | If one partner in a marriage smokes, the other is likely to take up smoking. | neutral
id_6468 | The Roaring Twenties is a term used to describe the 1920s, particularly the decade's cultural edge and distinctive economic prosperity across Europe and North America. The Roaring Twenties saw the large-scale sale and use of cars, telephones, cinema and electricity, along with huge changes in lifestyle and culture. International media began to focus on celebrities, particularly movie stars and sporting heroes. Similarly, many brand new cinemas and sports stadiums were constructed. However, the period of economic and cultural prosperity ended with the Wall Street Crash of 1929, sending the world into the Great Depression. | The Roaring Twenties era ended in 1930. | contradiction
id_6469 | The Roaring Twenties is a term used to describe the 1920s, particularly the decade's cultural edge and distinctive economic prosperity across Europe and North America. The Roaring Twenties saw the large-scale sale and use of cars, telephones, cinema and electricity, along with huge changes in lifestyle and culture. International media began to focus on celebrities, particularly movie stars and sporting heroes. Similarly, many brand new cinemas and sports stadiums were constructed. However, the period of economic and cultural prosperity ended with the Wall Street Crash of 1929, sending the world into the Great Depression. | The construction of new cinemas and stadiums helped increase media focus on celebrities. | neutral
id_6470 | The Roaring Twenties is a term used to describe the 1920s, particularly the decade's cultural edge and distinctive economic prosperity across Europe and North America. The Roaring Twenties saw the large-scale sale and use of cars, telephones, cinema and electricity, along with huge changes in lifestyle and culture. International media began to focus on celebrities, particularly movie stars and sporting heroes. Similarly, many brand new cinemas and sports stadiums were constructed. However, the period of economic and cultural prosperity ended with the Wall Street Crash of 1929, sending the world into the Great Depression. | Movie stars became more culturally significant during the Roaring Twenties. | neutral
id_6471 | The Roaring Twenties is a term used to describe the 1920s, particularly the decade's cultural edge and distinctive economic prosperity across Europe and North America. The Roaring Twenties saw the large-scale sale and use of cars, telephones, cinema and electricity, along with huge changes in lifestyle and culture. International media began to focus on celebrities, particularly movie stars and sporting heroes. Similarly, many brand new cinemas and sports stadiums were constructed. However, the period of economic and cultural prosperity ended with the Wall Street Crash of 1929, sending the world into the Great Depression. | Mass production of cars began in the 1920s. | neutral
id_6472 | The Romantic Poets. One of the most evocative eras in the history of poetry must surely be that of the Romantic Movement. During the late eighteenth and early nineteenth centuries a group of poets created a new mood in literary objectives, casting off their predecessors' styles in favour of a gripping and forceful art which endures with us to this day. Five poets emerged as the main constituents of this movement: William Wordsworth, Samuel Taylor Coleridge, George Gordon Byron, Percy Bysshe Shelley and John Keats. The strength of their works lies undoubtedly in the power of their imagination. Indeed, imagination was the most critical attribute of the Romantic poets. Each poet had the ability to portray remarkable images and visions, although differing to a certain degree in their intensity and presentation. Nature, mythology and emotion were of great importance and were used to explore the feelings of the poet himself. The lives of the poets often overlapped and tragedy was typical in most of them. Byron was born in London in 1788. The family moved to Aberdeen soon after, where Byron was brought up until he inherited the family seat of Newstead Abbey in Nottinghamshire from his great uncle. He graduated from Cambridge University in 1808 and left England the following year to embark on a tour of the Mediterranean. During this tour, he developed a passion for Greece which would later lead to his death in 1824. He left for Switzerland in 1816 where he was introduced to Shelley. Shelley was born to a wealthy family in 1792. He was educated at Eton and then went on to Oxford. Shelley was not happy in England, where his colourful lifestyle and unorthodox beliefs made him unpopular with the establishment. In 1818 he left for Italy, where he was reunited with Byron. However, the friendship was tragically brought to an end in July 1822, when Shelley was drowned in a boating accident off the Italian coast. In somewhat dramatic form, Shelley's body was cremated on the beach, witnessed by a small group of friends, including Byron. Historically, Shelley and Byron are considered to have been the most outspoken and radical of the Romantic poets. By contrast, Wordsworth appears to have been of a pleasant and acceptable personality, even receiving the status of Poet Laureate in 1843. He was born in 1770 in Cockermouth, Cumbria. By the time he entered his early teens, both his parents had died. As he grew older, Wordsworth developed a passion for writing. In 1798 Wordsworth published a collection of poems with Coleridge, whom he had met a few years earlier, when he settled in Somerset with his sister Dorothy. He married in 1802 and, as time passed, he deserted his former political views and became increasingly acceptable to popular society. Indeed, at the time of his death in the spring of 1850, he had become one of the most sought-after poets of his time. Wordsworth shared some of the years at Dove Cottage in Somerset with his friend and poetical contemporary, Coleridge. Coleridge was born in Devon in 1772. He was a bright young scholar but never achieved the same prolific output as his fellow Romantic poets. In 1804 he left for a position in Malta for three years. On his return he separated from his wife and went to live with the Wordsworths, where he produced a regular periodical. With failing health, he later moved to London. In 1816 he went to stay with a doctor and his family. He remained with them until his death in 1834.
During these latter years, his poetry was abandoned for other forms of writing equally outstanding in their own right. Perhaps the most tragic of the Romantic poets was Keats. Keats was born in London in 1795. Similar to Wordsworth, both his parents had died by his early teens. He studied as a surgeon, qualifying in 1816. However, poetry was his great passion and he decided to devote himself to writing. For much of his adult life Keats was in poor health and fell gravely ill in early 1820. He knew he was dying and in the September of that year he left for Rome hoping that the more agreeable climate might ease his suffering. Keats died of consumption in February 1821 at the age of twenty-five. It is sad that such tragedy often accompanies those of outstanding artistic genius. We can only wonder at the possible outcome had they all lived to an old age. Perhaps even Byron and Shelley would have mellowed with the years, like Wordsworth. However, the contribution to poetry by all five writers is immeasurable. They introduced the concepts of individualism and imagination, allowing us to explore our own visions of beauty without retribution. We are not now required to restrain our thoughts and poetry to that of the socially acceptable. | The Romantic Movement lasted for more than a century. | contradiction |
id_6473 | The Romantic Poets. One of the most evocative eras in the history of poetry must surely be that of the Romantic Movement. During the late eighteenth and early nineteenth centuries a group of poets created a new mood in literary objectives, casting off their predecessors' styles in favour of a gripping and forceful art which endures with us to this day. Five poets emerged as the main constituents of this movement: William Wordsworth, Samuel Taylor Coleridge, George Gordon Byron, Percy Bysshe Shelley and John Keats. The strength of their works lies undoubtedly in the power of their imagination. Indeed, imagination was the most critical attribute of the Romantic poets. Each poet had the ability to portray remarkable images and visions, although differing to a certain degree in their intensity and presentation. Nature, mythology and emotion were of great importance and were used to explore the feelings of the poet himself. The lives of the poets often overlapped and tragedy was typical in most of them. Byron was born in London in 1788. The family moved to Aberdeen soon after, where Byron was brought up until he inherited the family seat of Newstead Abbey in Nottinghamshire from his great uncle. He graduated from Cambridge University in 1808 and left England the following year to embark on a tour of the Mediterranean. During this tour, he developed a passion for Greece which would later lead to his death in 1824. He left for Switzerland in 1816 where he was introduced to Shelley. Shelley was born to a wealthy family in 1792. He was educated at Eton and then went on to Oxford. Shelley was not happy in England, where his colourful lifestyle and unorthodox beliefs made him unpopular with the establishment. In 1818 he left for Italy, where he was reunited with Byron. However, the friendship was tragically brought to an end in July 1822, when Shelley was drowned in a boating accident off the Italian coast. In somewhat dramatic form, Shelley's body was cremated on the beach, witnessed by a small group of friends, including Byron. Historically, Shelley and Byron are considered to have been the most outspoken and radical of the Romantic poets. By contrast, Wordsworth appears to have been of a pleasant and acceptable personality, even receiving the status of Poet Laureate in 1843. He was born in 1770 in Cockermouth, Cumbria. By the time he entered his early teens, both his parents had died. As he grew older, Wordsworth developed a passion for writing. In 1798 Wordsworth published a collection of poems with Coleridge, whom he had met a few years earlier, when he settled in Somerset with his sister Dorothy. He married in 1802 and, as time passed, he deserted his former political views and became increasingly acceptable to popular society. Indeed, at the time of his death in the spring of 1850, he had become one of the most sought-after poets of his time. Wordsworth shared some of the years at Dove Cottage in Somerset with his friend and poetical contemporary, Coleridge. Coleridge was born in Devon in 1772. He was a bright young scholar but never achieved the same prolific output as his fellow Romantic poets. In 1804 he left for a position in Malta for three years. On his return he separated from his wife and went to live with the Wordsworths, where he produced a regular periodical. With failing health, he later moved to London. In 1816 he went to stay with a doctor and his family. He remained with them until his death in 1834.
During these latter years, his poetry was abandoned for other forms of writing equally outstanding in their own right. Perhaps the most tragic of the Romantic poets was Keats. Keats was born in London in 1795. Similar to Wordsworth, both his parents had died by his early teens. He studied as a surgeon, qualifying in 1816. However, poetry was his great passion and he decided to devote himself to writing. For much of his adult life Keats was in poor health and fell gravely ill in early 1820. He knew he was dying and in the September of that year he left for Rome hoping that the more agreeable climate might ease his suffering. Keats died of consumption in February 1821 at the age of twenty-five. It is sad that such tragedy often accompanies those of outstanding artistic genius. We can only wonder at the possible outcome had they all lived to an old age. Perhaps even Byron and Shelley would have mellowed with the years, like Wordsworth. However, the contribution to poetry by all five writers is immeasurable. They introduced the concepts of individualism and imagination, allowing us to explore our own visions of beauty without retribution. We are not now required to restrain our thoughts and poetry to that of the socially acceptable. | The Romantic poets adopted a style dissimilar to that of poets who had come before them. | entailment |
id_6474 | The Romantic Poets. One of the most evocative eras in the history of poetry must surely be that of the Romantic Movement. During the late eighteenth and early nineteenth centuries a group of poets created a new mood in literary objectives, casting off their predecessors' styles in favour of a gripping and forceful art which endures with us to this day. Five poets emerged as the main constituents of this movement: William Wordsworth, Samuel Taylor Coleridge, George Gordon Byron, Percy Bysshe Shelley and John Keats. The strength of their works lies undoubtedly in the power of their imagination. Indeed, imagination was the most critical attribute of the Romantic poets. Each poet had the ability to portray remarkable images and visions, although differing to a certain degree in their intensity and presentation. Nature, mythology and emotion were of great importance and were used to explore the feelings of the poet himself. The lives of the poets often overlapped and tragedy was typical in most of them. Byron was born in London in 1788. The family moved to Aberdeen soon after, where Byron was brought up until he inherited the family seat of Newstead Abbey in Nottinghamshire from his great uncle. He graduated from Cambridge University in 1808 and left England the following year to embark on a tour of the Mediterranean. During this tour, he developed a passion for Greece which would later lead to his death in 1824. He left for Switzerland in 1816 where he was introduced to Shelley. Shelley was born to a wealthy family in 1792. He was educated at Eton and then went on to Oxford. Shelley was not happy in England, where his colourful lifestyle and unorthodox beliefs made him unpopular with the establishment. In 1818 he left for Italy, where he was reunited with Byron. However, the friendship was tragically brought to an end in July 1822, when Shelley was drowned in a boating accident off the Italian coast. In somewhat dramatic form, Shelley's body was cremated on the beach, witnessed by a small group of friends, including Byron. Historically, Shelley and Byron are considered to have been the most outspoken and radical of the Romantic poets. By contrast, Wordsworth appears to have been of a pleasant and acceptable personality, even receiving the status of Poet Laureate in 1843. He was born in 1770 in Cockermouth, Cumbria. By the time he entered his early teens, both his parents had died. As he grew older, Wordsworth developed a passion for writing. In 1798 Wordsworth published a collection of poems with Coleridge, whom he had met a few years earlier, when he settled in Somerset with his sister Dorothy. He married in 1802 and, as time passed, he deserted his former political views and became increasingly acceptable to popular society. Indeed, at the time of his death in the spring of 1850, he had become one of the most sought-after poets of his time. Wordsworth shared some of the years at Dove Cottage in Somerset with his friend and poetical contemporary, Coleridge. Coleridge was born in Devon in 1772. He was a bright young scholar but never achieved the same prolific output as his fellow Romantic poets. In 1804 he left for a position in Malta for three years. On his return he separated from his wife and went to live with the Wordsworths, where he produced a regular periodical. With failing health, he later moved to London. In 1816 he went to stay with a doctor and his family. He remained with them until his death in 1834.
During these latter years, his poetry was abandoned for other forms of writing equally outstanding in their own right. Perhaps the most tragic of the Romantic poets was Keats. Keats was born in London in 1795. Similar to Wordsworth, both his parents had died by his early teens. He studied as a surgeon, qualifying in 1816. However, poetry was his great passion and he decided to devote himself to writing. For much of his adult life Keats was in poor health and fell gravely ill in early 1820. He knew he was dying and in the September of that year he left for Rome hoping that the more agreeable climate might ease his suffering. Keats died of consumption in February 1821 at the age of twenty-five. It is sad that such tragedy often accompanies those of outstanding artistic genius. We can only wonder at the possible outcome had they all lived to an old age. Perhaps even Byron and Shelley would have mellowed with the years, like Wordsworth. However, the contribution to poetry by all five writers is immeasurable. They introduced the concepts of individualism and imagination, allowing us to explore our own visions of beauty without retribution. We are not now required to restrain our thoughts and poetry to that of the socially acceptable. | The Romantics were gifted with a strong sense of imagination. | entailment |
id_6475 | The Romantic Poets. One of the most evocative eras in the history of poetry must surely be that of the Romantic Movement. During the late eighteenth and early nineteenth centuries a group of poets created a new mood in literary objectives, casting off their predecessors' styles in favour of a gripping and forceful art which endures with us to this day. Five poets emerged as the main constituents of this movement: William Wordsworth, Samuel Taylor Coleridge, George Gordon Byron, Percy Bysshe Shelley and John Keats. The strength of their works lies undoubtedly in the power of their imagination. Indeed, imagination was the most critical attribute of the Romantic poets. Each poet had the ability to portray remarkable images and visions, although differing to a certain degree in their intensity and presentation. Nature, mythology and emotion were of great importance and were used to explore the feelings of the poet himself. The lives of the poets often overlapped and tragedy was typical in most of them. Byron was born in London in 1788. The family moved to Aberdeen soon after, where Byron was brought up until he inherited the family seat of Newstead Abbey in Nottinghamshire from his great uncle. He graduated from Cambridge University in 1808 and left England the following year to embark on a tour of the Mediterranean. During this tour, he developed a passion for Greece which would later lead to his death in 1824. He left for Switzerland in 1816 where he was introduced to Shelley. Shelley was born to a wealthy family in 1792. He was educated at Eton and then went on to Oxford. Shelley was not happy in England, where his colourful lifestyle and unorthodox beliefs made him unpopular with the establishment. In 1818 he left for Italy, where he was reunited with Byron. However, the friendship was tragically brought to an end in July 1822, when Shelley was drowned in a boating accident off the Italian coast. In somewhat dramatic form, Shelley's body was cremated on the beach, witnessed by a small group of friends, including Byron. Historically, Shelley and Byron are considered to have been the most outspoken and radical of the Romantic poets. By contrast, Wordsworth appears to have been of a pleasant and acceptable personality, even receiving the status of Poet Laureate in 1843. He was born in 1770 in Cockermouth, Cumbria. By the time he entered his early teens, both his parents had died. As he grew older, Wordsworth developed a passion for writing. In 1798 Wordsworth published a collection of poems with Coleridge, whom he had met a few years earlier, when he settled in Somerset with his sister Dorothy. He married in 1802 and, as time passed, he deserted his former political views and became increasingly acceptable to popular society. Indeed, at the time of his death in the spring of 1850, he had become one of the most sought-after poets of his time. Wordsworth shared some of the years at Dove Cottage in Somerset with his friend and poetical contemporary, Coleridge. Coleridge was born in Devon in 1772. He was a bright young scholar but never achieved the same prolific output as his fellow Romantic poets. In 1804 he left for a position in Malta for three years. On his return he separated from his wife and went to live with the Wordsworths, where he produced a regular periodical. With failing health, he later moved to London. In 1816 he went to stay with a doctor and his family. He remained with them until his death in 1834.
During these latter years, his poetry was abandoned for other forms of writing equally outstanding in their own right. Perhaps the most tragic of the Romantic poets was Keats. Keats was born in London in 1795. Similar to Wordsworth, both his parents had died by his early teens. He studied as a surgeon, qualifying in 1816. However, poetry was his great passion and he decided to devote himself to writing. For much of his adult life Keats was in poor health and fell gravely ill in early 1820. He knew he was dying and in the September of that year he left for Rome hoping that the more agreeable climate might ease his suffering. Keats died of consumption in February 1821 at the age of twenty-five. It is sad that such tragedy often accompanies those of outstanding artistic genius. We can only wonder at the possible outcome had they all lived to an old age. Perhaps even Byron and Shelley would have mellowed with the years, like Wordsworth. However, the contribution to poetry by all five writers is immeasurable. They introduced the concepts of individualism and imagination, allowing us to explore our own visions of beauty without retribution. We are not now required to restrain our thoughts and poetry to that of the socially acceptable. | Much of the Romantics poetry was inspired by the natural world. | entailment |
id_6476 | The Romantic Poets. One of the most evocative eras in the history of poetry must surely be that of the Romantic Movement. During the late eighteenth and early nineteenth centuries a group of poets created a new mood in literary objectives, casting off their predecessors' styles in favour of a gripping and forceful art which endures with us to this day. Five poets emerged as the main constituents of this movement: William Wordsworth, Samuel Taylor Coleridge, George Gordon Byron, Percy Bysshe Shelley and John Keats. The strength of their works lies undoubtedly in the power of their imagination. Indeed, imagination was the most critical attribute of the Romantic poets. Each poet had the ability to portray remarkable images and visions, although differing to a certain degree in their intensity and presentation. Nature, mythology and emotion were of great importance and were used to explore the feelings of the poet himself. The lives of the poets often overlapped and tragedy was typical in most of them. Byron was born in London in 1788. The family moved to Aberdeen soon after, where Byron was brought up until he inherited the family seat of Newstead Abbey in Nottinghamshire from his great uncle. He graduated from Cambridge University in 1808 and left England the following year to embark on a tour of the Mediterranean. During this tour, he developed a passion for Greece which would later lead to his death in 1824. He left for Switzerland in 1816 where he was introduced to Shelley. Shelley was born to a wealthy family in 1792. He was educated at Eton and then went on to Oxford. Shelley was not happy in England, where his colourful lifestyle and unorthodox beliefs made him unpopular with the establishment. In 1818 he left for Italy, where he was reunited with Byron. However, the friendship was tragically brought to an end in July 1822, when Shelley was drowned in a boating accident off the Italian coast. In somewhat dramatic form, Shelley's body was cremated on the beach, witnessed by a small group of friends, including Byron. Historically, Shelley and Byron are considered to have been the most outspoken and radical of the Romantic poets. By contrast, Wordsworth appears to have been of a pleasant and acceptable personality, even receiving the status of Poet Laureate in 1843. He was born in 1770 in Cockermouth, Cumbria. By the time he entered his early teens, both his parents had died. As he grew older, Wordsworth developed a passion for writing. In 1798 Wordsworth published a collection of poems with Coleridge, whom he had met a few years earlier, when he settled in Somerset with his sister Dorothy. He married in 1802 and, as time passed, he deserted his former political views and became increasingly acceptable to popular society. Indeed, at the time of his death in the spring of 1850, he had become one of the most sought-after poets of his time. Wordsworth shared some of the years at Dove Cottage in Somerset with his friend and poetical contemporary, Coleridge. Coleridge was born in Devon in 1772. He was a bright young scholar but never achieved the same prolific output as his fellow Romantic poets. In 1804 he left for a position in Malta for three years. On his return he separated from his wife and went to live with the Wordsworths, where he produced a regular periodical. With failing health, he later moved to London. In 1816 he went to stay with a doctor and his family. He remained with them until his death in 1834.
During these latter years, his poetry was abandoned for other forms of writing equally outstanding in their own right. Perhaps the most tragic of the Romantic poets was Keats. Keats was born in London in 1795. Similar to Wordsworth, both his parents had died by his early teens. He studied as a surgeon, qualifying in 1816. However, poetry was his great passion and he decided to devote himself to writing. For much of his adult life Keats was in poor health and fell gravely ill in early 1820. He knew he was dying and in the September of that year he left for Rome hoping that the more agreeable climate might ease his suffering. Keats died of consumption in February 1821 at the age of twenty-five. It is sad that such tragedy often accompanies those of outstanding artistic genius. We can only wonder at the possible outcome had they all lived to an old age. Perhaps even Byron and Shelley would have mellowed with the years, like Wordsworth. However, the contribution to poetry by all five writers is immeasurable. They introduced the concepts of individualism and imagination, allowing us to explore our own visions of beauty without retribution. We are not now required to restrain our thoughts and poetry to that of the socially acceptable. | The Romantics had no respect for any style of poetry apart from their own. | neutral |
id_6477 | The Romantic Poets. One of the most evocative eras in the history of poetry must surely be that of the Romantic Movement. During the late eighteenth and early nineteenth centuries a group of poets created a new mood in literary objectives, casting off their predecessors' styles in favour of a gripping and forceful art which endures with us to this day. Five poets emerged as the main constituents of this movement: William Wordsworth, Samuel Taylor Coleridge, George Gordon Byron, Percy Bysshe Shelley and John Keats. The strength of their works lies undoubtedly in the power of their imagination. Indeed, imagination was the most critical attribute of the Romantic poets. Each poet had the ability to portray remarkable images and visions, although differing to a certain degree in their intensity and presentation. Nature, mythology and emotion were of great importance and were used to explore the feelings of the poet himself. The lives of the poets often overlapped and tragedy was typical in most of them. Byron was born in London in 1788. The family moved to Aberdeen soon after, where Byron was brought up until he inherited the family seat of Newstead Abbey in Nottinghamshire from his great uncle. He graduated from Cambridge University in 1808 and left England the following year to embark on a tour of the Mediterranean. During this tour, he developed a passion for Greece which would later lead to his death in 1824. He left for Switzerland in 1816 where he was introduced to Shelley. Shelley was born to a wealthy family in 1792. He was educated at Eton and then went on to Oxford. Shelley was not happy in England, where his colourful lifestyle and unorthodox beliefs made him unpopular with the establishment. In 1818 he left for Italy, where he was reunited with Byron. However, the friendship was tragically brought to an end in July 1822, when Shelley was drowned in a boating accident off the Italian coast. In somewhat dramatic form, Shelley's body was cremated on the beach, witnessed by a small group of friends, including Byron. Historically, Shelley and Byron are considered to have been the most outspoken and radical of the Romantic poets. By contrast, Wordsworth appears to have been of a pleasant and acceptable personality, even receiving the status of Poet Laureate in 1843. He was born in 1770 in Cockermouth, Cumbria. By the time he entered his early teens, both his parents had died. As he grew older, Wordsworth developed a passion for writing. In 1798 Wordsworth published a collection of poems with Coleridge, whom he had met a few years earlier, when he settled in Somerset with his sister Dorothy. He married in 1802 and, as time passed, he deserted his former political views and became increasingly acceptable to popular society. Indeed, at the time of his death in the spring of 1850, he had become one of the most sought-after poets of his time. Wordsworth shared some of the years at Dove Cottage in Somerset with his friend and poetical contemporary, Coleridge. Coleridge was born in Devon in 1772. He was a bright young scholar but never achieved the same prolific output as his fellow Romantic poets. In 1804 he left for a position in Malta for three years. On his return he separated from his wife and went to live with the Wordsworths, where he produced a regular periodical. With failing health, he later moved to London. In 1816 he went to stay with a doctor and his family. He remained with them until his death in 1834.
During these latter years, his poetry was abandoned for other forms of writing equally outstanding in their own right. Perhaps the most tragic of the Romantic poets was Keats. Keats was born in London in 1795. Similar to Wordsworth, both his parents had died by his early teens. He studied as a surgeon, qualifying in 1816. However, poetry was his great passion and he decided to devote himself to writing. For much of his adult life Keats was in poor health and fell gravely ill in early 1820. He knew he was dying and in the September of that year he left for Rome hoping that the more agreeable climate might ease his suffering. Keats died of consumption in February 1821 at the age of twenty-five. It is sad that such tragedy often accompanies those of outstanding artistic genius. We can only wonder at the possible outcome had they all lived to an old age. Perhaps even Byron and Shelley would have mellowed with the years, like Wordsworth. However, the contribution to poetry by all five writers is immeasurable. They introduced the concepts of individualism and imagination, allowing us to explore our own visions of beauty without retribution. We are not now required to restrain our thoughts and poetry to that of the socially acceptable. | Unfortunately, the works of the Romantics had no lasting impression on art. | contradiction |
id_6478 | The Royal Flying Doctor Service of Australia. The Interior of Australia is a sparsely populated and extreme environment and the delivery of basic services has always been a problem. One of the most important of these services is medical care. Before the inception of the Royal Flying Doctor Service (RFDS), serious illness or accidents in the Inland often meant death. The RFDS was the first comprehensive aerial organization in the world and to this day remains unique for the range of emergency services that it provides. The story of the Flying Doctor Service is forever linked with its founder, the Very Reverend John Flynn. It is a story of achievement that gave courage to the pioneers of the outback. In 1911 the Reverend John Flynn took up his appointment at Beltana Mission in the north of South Australia. He began his missionary work at a time when only two doctors served an area of some 300 000 sq kms in the Northern Territory. In 1903 the first powered flight had taken place and by 1918 the aeroplane was beginning to prove itself as a means of transport. Radio, then very much in its infancy, was also displaying its remarkable capability to link people thousands of miles apart. Flynn saw the potential in these developments. The Service began in 1928 but it was not until 1942 that it was actually named the Flying Doctor Service, and the Royal prefix was added in 1955. In 1928 the dream of a flying doctor was at last a reality but Flynn and his supporters still faced many problems in the months and years to come. The first year's service was regarded as experimental, but the experiment succeeded and almost miraculously the service survived the Great Depression of the late 1920s and early 1930s. By 1932 the Australian Inland Mission (AIM) had a network of ten little hospitals across the coverage area. A succession of doctors and pilots followed and operations continued to expand over the next few years. The Service suffered severe financial difficulties in its early years. Flynn and his associates had to launch public appeals for donations. While some government financial aid was made available on occasions in the early days, regular government subsidies only became an established practice later on. Even today the Service continues to rely chiefly on money from trusts, donations and public appeals for its annual budget, and raising money remains an integral part of the working day for the Service and its volunteers. In 1922 Flynn began a campaign for funding to buy some aircraft for the AIM. The first flight, on 17th May 1928, was made using a De Havilland model DH50 aircraft. This plane, named Victory, went on to fly 110 000 miles in the service of the Flying Doctor until 1934 when it was replaced with a DH83 Fox Moth. In 1928 flying was still in its early days. Pilots had no navigational aids, no radio and only a compass and inadequate maps, if any. They navigated by landmarks such as fences, rivers, dirt roads or just wheel tracks and telegraph lines. They also flew in an open cockpit, fully exposed to the weather. Flights were normally made during daylight hours although night flights were attempted in cases of extreme urgency. Fuel supplies were also carried on flights until fuel dumps were established at certain strategic locations. Nowadays twin-engine craft, speed, pressurization, the ability to fly higher and further with more space for crew and medical personnel have all improved patient care and safety.
There are hardly any places now that the RFDS cannot reach though safe landing at the remote areas is another issue altogether. Many outstations now have some sort of airstrip lighting but even now car headlights are sometimes used. Landings are therefore still often made in hazardous circumstances on remote fields or roads and it is pilots who continue to be responsible for determining if the flight can be safely undertaken. In the early 1900s basic telephone and telegraphic links existed only near larger towns. Radio communication was practically unknown and neighbours could be hundreds of miles away. What was needed was a simple, portable, cheap, and reliable two-way radio, with its own power source and with a range of 500 kms. In 1928 Alf Traeger, an Adelaide engineer, invented the Pedal Radio and over the next 10 years these were distributed around the stations and the operators were trained in Morse Code. Over the years radio developed with new technology and of course now telephones have taken its place. Whereas a few years ago, all calls for medical assistance were received by radio, today this represents only about 2% of all such calls. Over the years, the RFDS has developed to take along medical specialists, dentists and various health related professionals. Sister Myra Blanche was the first nurse employed by the RFDS in 1945 undertaking home nursing, immunizations, advising on prevention of illnesses and, on occasion, filling in for the doctor. However, flight nurses as we know them were not used by the Service on a regular basis until the 1960s. Today, based on the judgement of the doctor authorizing the flight, up nurse and pilot on board. | Telephones have now completely replaced radios for reporting emergencies to the RFDS. | contradiction |
id_6479 | The Royal Flying Doctor Service of Australia. The Interior of Australia is a sparsely populated and extreme environment and the delivery of basic services has always been a problem. One of the most important of these services is medical care. Before the inception of the Royal Flying Doctor Service (RFDS), serious illness or accidents in the Inland often meant death. The RFDS was the first comprehensive aerial organization in the world and to this day remains unique for the range of emergency services that it provides. The story of the Flying Doctor Service is forever linked with its founder, the Very Reverend John Flynn. It is a story of achievement that gave courage to the pioneers of the outback. In 1911 the Reverend John Flynn took up his appointment at Beltana Mission in the north of South Australia. He began his missionary work at a time when only two doctors served an area of some 300 000 sq kms in the Northern Territory. In 1903 the first powered flight had taken place and by 1918 the aeroplane was beginning to prove itself as a means of transport. Radio, then very much in its infancy, was also displaying its remarkable capability to link people thousands of miles apart. Flynn saw the potential in these developments. The Service began in 1928 but it was not until 1942 that it was actually named the Flying Doctor Service, and the Royal prefix was added in 1955. In 1928 the dream of a flying doctor was at last a reality but Flynn and his supporters still faced many problems in the months and years to come. The first year's service was regarded as experimental, but the experiment succeeded and almost miraculously the service survived the Great Depression of the late 1920s and early 1930s. By 1932 the Australian Inland Mission (AIM) had a network of ten little hospitals across the coverage area. A succession of doctors and pilots followed and operations continued to expand over the next few years. The Service suffered severe financial difficulties in its early years. Flynn and his associates had to launch public appeals for donations. While some government financial aid was made available on occasions in the early days, regular government subsidies only became an established practice later on. Even today the Service continues to rely chiefly on money from trusts, donations and public appeals for its annual budget, and raising money remains an integral part of the working day for the Service and its volunteers. In 1922 Flynn began a campaign for funding to buy some aircraft for the AIM. The first flight, on 17th May 1928, was made using a De Havilland model DH50 aircraft. This plane, named Victory, went on to fly 110 000 miles in the service of the Flying Doctor until 1934 when it was replaced with a DH83 Fox Moth. In 1928 flying was still in its early days. Pilots had no navigational aids, no radio and only a compass and inadequate maps, if any. They navigated by landmarks such as fences, rivers, dirt roads or just wheel tracks and telegraph lines. They also flew in an open cockpit, fully exposed to the weather. Flights were normally made during daylight hours although night flights were attempted in cases of extreme urgency. Fuel supplies were also carried on flights until fuel dumps were established at certain strategic locations. Nowadays twin-engine craft, speed, pressurization, the ability to fly higher and further with more space for crew and medical personnel have all improved patient care and safety.
There are hardly any places now that the RFDS cannot reach though safe landing at the remote areas is another issue altogether. Many outstations now have some sort of airstrip lighting but even now car headlights are sometimes used. Landings are therefore still often made in hazardous circumstances on remote fields or roads and it is pilots who continue to be responsible for determining if the flight can be safely undertaken. In the early 1900s basic telephone and telegraphic links existed only near larger towns. Radio communication was practically unknown and neighbours could be hundreds of miles away. What was needed was a simple, portable, cheap, and reliable two-way radio, with its own power source and with a range of 500 kms. In 1928 Alf Traeger, an Adelaide engineer, invented the Pedal Radio and over the next 10 years these were distributed around the stations and the operators were trained in Morse Code. Over the years radio developed with new technology and of course now telephones have taken its place. Whereas a few years ago, all calls for medical assistance were received by radio, today this represents only about 2% of all such calls. Over the years, the RFDS has developed to take along medical specialists, dentists and various health related professionals. Sister Myra Blanche was the first nurse employed by the RFDS in 1945 undertaking home nursing, immunizations, advising on prevention of illnesses and, on occasion, filling in for the doctor. However, flight nurses as we know them were not used by the Service on a regular basis until the 1960s. Today, based on the judgement of the doctor authorizing the flight, up nurse and pilot on board. | Quite a few RFDS flights today dont even have a doctor on board. | neutral |
id_6480 | The Royal Flying Doctor Service of Australia. The Interior of Australia is a sparsely populated and extreme environment and the delivery of basic services has always been a problem. One of the most important of these services is medical care. Before the inception of the Royal Flying Doctor Service (RFDS), serious illness or accidents in the Inland often meant death. The RFDS was the first comprehensive aerial organization in the world and to this day remains unique for the range of emergency services that it provides. The story of the Flying Doctor Service is forever linked with its founder, the Very Reverend John Flynn. It is a story of achievement that gave courage to the pioneers of the outback. In 1911 the Reverend John Flynn took up his appointment at Beltana Mission in the north of South Australia. He began his missionary work at a time when only two doctors served an area of some 300 000 sq kms in the Northern Territory. In 1903 the first powered flight had taken place and by 1918 the aeroplane was beginning to prove itself as a means of transport. Radio, then very much in its infancy, was also displaying its remarkable capability to link people thousands of miles apart. Flynn saw the potential in these developments. The Service began in 1928 but it was not until 1942 that it was actually named the Flying Doctor Service, and the Royal prefix was added in 1955. In 1928 the dream of a flying doctor was at last a reality but Flynn and his supporters still faced many problems in the months and years to come. The first year's service was regarded as experimental, but the experiment succeeded and almost miraculously the service survived the Great Depression of the late 1920s and early 1930s. By 1932 the Australian Inland Mission (AIM) had a network of ten little hospitals across the coverage area. A succession of doctors and pilots followed and operations continued to expand over the next few years. The Service suffered severe financial difficulties in its early years. Flynn and his associates had to launch public appeals for donations. While some government financial aid was made available on occasions in the early days, regular government subsidies only became an established practice later on. Even today the Service continues to rely chiefly on money from trusts, donations and public appeals for its annual budget, and raising money remains an integral part of the working day for the Service and its volunteers. In 1922 Flynn began a campaign for funding to buy some aircraft for the AIM. The first flight, on 17th May 1928, was made using a De Havilland model DH50 aircraft. This plane, named Victory, went on to fly 110 000 miles in the service of the Flying Doctor until 1934 when it was replaced with a DH83 Fox Moth. In 1928 flying was still in its early days. Pilots had no navigational aids, no radio and only a compass and inadequate maps, if any. They navigated by landmarks such as fences, rivers, dirt roads or just wheel tracks and telegraph lines. They also flew in an open cockpit, fully exposed to the weather. Flights were normally made during daylight hours although night flights were attempted in cases of extreme urgency. Fuel supplies were also carried on flights until fuel dumps were established at certain strategic locations. Nowadays twin-engine craft, speed, pressurization, the ability to fly higher and further with more space for crew and medical personnel have all improved patient care and safety.
There are hardly any places now that the RFDS cannot reach though safe landing at the remote areas is another issue altogether. Many outstations now have some sort of airstrip lighting but even now car headlights are sometimes used. Landings are therefore still often made in hazardous circumstances on remote fields or roads and it is pilots who continue to be responsible for determining if the flight can be safely undertaken. In the early 1900s basic telephone and telegraphic links existed only near larger towns. Radio communication was practically unknown and neighbours could be hundreds of miles away. What was needed was a simple, portable, cheap, and reliable two-way radio, with its own power source and with a range of 500 kms. In 1928 Alf Traeger, an Adelaide engineer, invented the Pedal Radio and over the next 10 years these were distributed around the stations and the operators were trained in Morse Code. Over the years radio developed with new technology and of course now telephones have taken its place. Whereas a few years ago, all calls for medical assistance were received by radio, today this represents only about 2% of all such calls. Over the years, the RFDS has developed to take along medical specialists, dentists and various health related professionals. Sister Myra Blanche was the first nurse employed by the RFDS in 1945 undertaking home nursing, immunizations, advising on prevention of illnesses and, on occasion, filling in for the doctor. However, flight nurses as we know them were not used by the Service on a regular basis until the 1960s. Today, based on the judgement of the doctor authorizing the flight, up nurse and pilot on board. | Today some landing areas still do not have proper lighting. | entailment |
id_6481 | The Royal Flying Doctor Service of Australia. The Interior of Australia is a sparsely populated and extreme environment and the delivery of basic services has always been a problem. One of the most important of these services is medical care. Before the inception of the Royal Flying Doctor Service (RFDS), serious illness or accidents in the Inland often meant death. The RFDS was the first comprehensive aerial organization in the world and to this day remains unique for the range of emergency services that it provides. The story of the Flying Doctor Service is forever linked with its founder, the Very Reverend John Flynn. It is a story of achievement that gave courage to the pioneers of the outback. In 1911 the Reverend John Flynn took up his appointment at Beltana Mission in the north of South Australia. He began his missionary work at a time when only two doctors served an area of some 300 000 sq kms in the Northern Territory. In 1903 the first powered flight had taken place and by 1918 the aeroplane was beginning to prove itself as a means of transport. Radio, then very much in its infancy, was also displaying its remarkable capability to link people thousands of miles apart. Flynn saw the potential in these developments. The Service began in 1928 but it was not until 1942 that it was actually named the Flying Doctor Service, and the Royal prefix was added in 1955. In 1928 the dream of a flying doctor was at last a reality but Flynn and his supporters still faced many problems in the months and years to come. The first year's service was regarded as experimental, but the experiment succeeded and almost miraculously the service survived the Great Depression of the late 1920s and early 1930s. By 1932 the Australian Inland Mission (AIM) had a network of ten little hospitals across the coverage area. A succession of doctors and pilots followed and operations continued to expand over the next few years. The Service suffered severe financial difficulties in its early years. Flynn and his associates had to launch public appeals for donations. While some government financial aid was made available on occasions in the early days, regular government subsidies only became an established practice later on. Even today the Service continues to rely chiefly on money from trusts, donations and public appeals for its annual budget, and raising money remains an integral part of the working day for the Service and its volunteers. In 1922 Flynn began a campaign for funding to buy some aircraft for the AIM. The first flight, on 17th May 1928, was made using a De Havilland model DH50 aircraft. This plane, named Victory, went on to fly 110 000 miles in the service of the Flying Doctor until 1934 when it was replaced with a DH83 Fox Moth. In 1928 flying was still in its early days. Pilots had no navigational aids, no radio and only a compass and inadequate maps, if any. They navigated by landmarks such as fences, rivers, dirt roads or just wheel tracks and telegraph lines. They also flew in an open cockpit, fully exposed to the weather. Flights were normally made during daylight hours although night flights were attempted in cases of extreme urgency. Fuel supplies were also carried on flights until fuel dumps were established at certain strategic locations. Nowadays twin-engine craft, speed, pressurization, the ability to fly higher and further with more space for crew and medical personnel have all improved patient care and safety.
There are hardly any places now that the RFDS cannot reach though safe landing at the remote areas is another issue altogether. Many outstations now have some sort of airstrip lighting but even now car headlights are sometimes used. Landings are therefore still often made in hazardous circumstances on remote fields or roads and it is pilots who continue to be responsible for determining if the flight can be safely undertaken. In the early 1900s basic telephone and telegraphic links existed only near larger towns. Radio communication was practically unknown and neighbours could be hundreds of miles away. What was needed was a simple, portable, cheap, and reliable two-way radio, with its own power source and with a range of 500 kms. In 1928 Alf Traeger, an Adelaide engineer, invented the Pedal Radio and over the next 10 years these were distributed around the stations and the operators were trained in Morse Code. Over the years radio developed with new technology and of course now telephones have taken its place. Whereas a few years ago, all calls for medical assistance were received by radio, today this represents only about 2% of all such calls. Over the years, the RFDS has developed to take along medical specialists, dentists and various health related professionals. Sister Myra Blanche was the first nurse employed by the RFDS in 1945 undertaking home nursing, immunizations, advising on prevention of illnesses and, on occasion, filling in for the doctor. However, flight nurses as we know them were not used by the Service on a regular basis until the 1960s. Today, based on the judgement of the doctor authorizing the flight, up nurse and pilot on board. | The RFDS today gets most of its operational money from charities. | entailment |
id_6482 | The Royal Flying Doctor Service of Australia. The Interior of Australia is a sparsely populated and extreme environment and the delivery of basic services has always been a problem. One of the most important of these services is medical care. Before the inception of the Royal Flying Doctor Service (RFDS), serious illness or accidents in the Inland often meant death. The RFDS was the first comprehensive aerial organization in the world and to this day remains unique for the range of emergency services that it provides. The story of the Flying Doctor Service is forever linked with its founder, the Very Reverend John Flynn. It is a story of achievement that gave courage to the pioneers of the outback. In 1911 the Reverend John Flynn took up his appointment at Beltana Mission in the north of South Australia. He began his missionary work at a time when only two doctors served an area of some 300 000 sq kms in the Northern Territory. In 1903 the first powered flight had taken place and by 1918 the aeroplane was beginning to prove itself as a means of transport. Radio, then very much in its infancy, was also displaying its remarkable capability to link people thousands of miles apart. Flynn saw the potential in these developments. The Service began in 1928 but it was not until 1942 that it was actually named the Flying Doctor Service, and the Royal prefix was added in 1955. In 1928 the dream of a flying doctor was at last a reality but Flynn and his supporters still faced many problems in the months and years to come. The first year's service was regarded as experimental, but the experiment succeeded and almost miraculously the service survived the Great Depression of the late 1920s and early 1930s. By 1932 the Australian Inland Mission (AIM) had a network of ten little hospitals across the coverage area. A succession of doctors and pilots followed and operations continued to expand over the next few years. The Service suffered severe financial difficulties in its early years. Flynn and his associates had to launch public appeals for donations. While some government financial aid was made available on occasions in the early days, regular government subsidies only became an established practice later on. Even today the Service continues to rely chiefly on money from trusts, donations and public appeals for its annual budget, and raising money remains an integral part of the working day for the Service and its volunteers. In 1922 Flynn began a campaign for funding to buy some aircraft for the AIM. The first flight, on 17th May 1928, was made using a De Havilland model DH50 aircraft. This plane, named Victory, went on to fly 110 000 miles in the service of the Flying Doctor until 1934 when it was replaced with a DH83 Fox Moth. In 1928 flying was still in its early days. Pilots had no navigational aids, no radio and only a compass and inadequate maps, if any. They navigated by landmarks such as fences, rivers, dirt roads or just wheel tracks and telegraph lines. They also flew in an open cockpit, fully exposed to the weather. Flights were normally made during daylight hours although night flights were attempted in cases of extreme urgency. Fuel supplies were also carried on flights until fuel dumps were established at certain strategic locations. Nowadays twin-engine craft, speed, pressurization, the ability to fly higher and further with more space for crew and medical personnel have all improved patient care and safety.
There are hardly any places now that the RFDS cannot reach though safe landing at the remote areas is another issue altogether. Many outstations now have some sort of airstrip lighting but even now car headlights are sometimes used. Landings are therefore still often made in hazardous circumstances on remote fields or roads and it is pilots who continue to be responsible for determining if the flight can be safely undertaken. In the early 1900s basic telephone and telegraphic links existed only near larger towns. Radio communication was practically unknown and neighbours could be hundreds of miles away. What was needed was a simple, portable, cheap, and reliable two-way radio, with its own power source and with a range of 500 kms. In 1928 Alf Traeger, an Adelaide engineer, invented the Pedal Radio and over the next 10 years these were distributed around the stations and the operators were trained in Morse Code. Over the years radio developed with new technology and of course now telephones have taken its place. Whereas a few years ago, all calls for medical assistance were received by radio, today this represents only about 2% of all such calls. Over the years, the RFDS has developed to take along medical specialists, dentists and various health related professionals. Sister Myra Blanche was the first nurse employed by the RFDS in 1945 undertaking home nursing, immunizations, advising on prevention of illnesses and, on occasion, filling in for the doctor. However, flight nurses as we know them were not used by the Service on a regular basis until the 1960s. Today, based on the judgement of the doctor authorizing the flight, up nurse and pilot on board. | Test flights before 1928 proved that John Flynns ideas were possible. | neutral |
id_6483 | The Royal Flying Doctor Service of Australia The Interior of Australia is a sparsely populated and extreme environment and the delivery of basic services has always been a problem. One of the most important of these services is medical care. Before the inception of the Royal Flying Doctor Service (RFDS), serious illness or accidents in the Inland often meant death. The RFDS was the first comprehensive aerial organization in the world and to this day remains unique for the range of emergency services that it provides. The story of the Flying Doctor Service is forever linked with its founder, the Very Reverend John Flynn. It is a story of achievement that gave courage to the pioneers of the outback. In 1911 the Reverend John Flynn took up his appointment at Beltana Mission in the north of South Australia. He began his missionary work at a time when only two doctors served an area of some 300 000 sq kms in the Northern Territory. In 1903 the first powered flight had taken place and by 1918 the aeroplane was beginning to improve itself as a means of transport. Radio, then very much in its infancy, was also displaying its remarkable capability to link people thousands of miles apart. Flynn saw the potential in these developments. The Service began in 1928 but it was not until 1942 that it was actually named the Flying Doctor Service and the Royal prefix was added in 1955. In 1928 the dream of a flying doctor was at last a reality but Flynn and his supporters still faced many problems in the months and years to come. The first years service was regarded as experimental, but the experiment succeeded and almost miraculously the service survived the Great Depression of the late 1920s and early 1930s. By 1932 the Australian Inland Mission (AIM) had a network of ten little hospitals across the coverage area. A succession of doctors and pilots followed and operations continued to expand over the next few years. The Service suffered severe financial difficulties in its early years. Flynn and his associates had to launch public appeals for donations. While some government financial aid was made available on occasions in the early days, regular government subsidies only became an established practice later on. Even today the Service continues to rely chiefly on money from trusts, donations and public appeals for its annual budget and raising money remains an integral part of the working day for the Service and its volunteers. In 1922 Flynn began a campaign for funding to buy some aircraft for the AIM. The first flight, on 17th May 1928, was made using a De Havilland model DH50 aircraft. This plane, named Victory, went on to fly 110 000 miles in the service of the Flying Doctor until 1934 when it was replaced with a DH83 Fox Moth. In 1928 flying was still in its early days. Pilots had no navigational aids, no radio and only a compass and inadequate maps, if any. They navigated by landmarks such as fences, rivers, dirt roads of just wheel tracks and telegraph lines. They also flew in an open cockpit, fully exposed to the weather. Flights were normally made during daylight hours although night flights were attempted in cases of extreme urgency. Fuel supplies were also carried on flights until fuel dumps were established at certain strategic locations. Nowadays twin engine craft, speed, pressurization, the ability to fly higher and further with more space for crew and medical personnel have all improved patient care and safety problems. There are hardly any places now that the RFDS cannot reach though safe landing at the remote areas is another issue altogether. Many outstations now have some sort of airstrip lighting but even now car headlights are sometimes used. Landings are therefore still often made in hazardous circumstances on remote fields or roads and it is pilots who continue to be responsible for determining if the flight can be safely undertaken. In the early 1900s basic telephone and telegraphic links existed only near larger towns. Radio communication was practically unknown and neighbours could be hundreds of miles away. What was needed was a simple, portable, cheap, and reliable two-way radio, with its own power source and with a range of 500 kms. In 1928 Alf Traeger, an Adelaide engineer, invented the Pedal Radio and over the next 10 years these were distributed around the stations and the operators were trained in Morse Code. Over the years radio developed with new technology and of course now telephones have taken its place. Whereas a few years ago, all calls for medical assistance were received by radio, today this represents only about 2% of all such calls. Over the years, the RFDS has developed to take along medical specialists, dentists and various health related professionals. Sister Myra Blanche was the first nurse employed by the RFDS in 1945 undertaking home nursing, immunizations, advising on prevention of illnesses and, on occasion, filling in for the doctor. However, flight nurses as we know them were not used by the Service on a regular basis until the 1960s. Today, based on the judgement of the doctor authorizing the flight, up nurse and pilot on board. | In the early years RFDS fliers had only compasses to help them find their way. | contradiction
id_6484 | The Rufous Hare-Wallaby The Rufous Hare-Wallaby is a species of Australian kangaroo, usually known by its Aboriginal name, mala. At one time, there may have been as many as ten million of these little animals across the arid and semi-arid landscape of Australia, but their populations, like those of so many other small endemic species, were devastated when cats and foxes were introduced indeed, during the 1950s it was thought that the mala was extinct. But in 1964, a small colony was found 450 miles northwest of Alice Springs in the Tanami Desert. And 12 years later, a second small colony was found nearby. Very extensive surveys were made throughout historical mala range but no other traces were found. Throughout the 1970s and 1980s, scientists from the Parks and Wildlife Commission of the Northern Territory monitored these two populations. At first it seemed that they were holding their own. Then in late 1987, every one of the individuals of the second and smaller of the wild colonies was killed. From examination of the tracks in the sand, it seemed that just one single fox had been responsible. And then, in October 1991, a wild-fire destroyed the entire area occupied by the remaining colony. Thus the mala was finally pronounced extinct in the wild. Fortunately, ten years earlier, seven individuals had been captured, and had become the founders of a captive breeding programme at the Arid Zone Research Institute in Alice Springs; and that group had thrived. Part of this success is due to the fact that the female can breed when she is just five months old and can produce up to three young a year. Like other kangaroo species, the mother carries her young known as a joey in her pouch for about 15 weeks, and she can have more than one joey at the same time. In the early 1980s, there were enough mala in the captive population to make it feasible to start a reintroduction programme. But first it was necessary to discuss this with the leaders of the Yapa people. Traditionally, the mala had been an important animal in their culture, with strong medicinal powers for old people. It had also been an important food source, and there were concerns that any mala returned to the wild would be killed for the pot. And so, in 1980, a group of key Yapa men was invited to visit the proposed reintroduction area. The skills and knowledge of the Yapa would play a significant and enduring role in this and all other mala projects. With the help of the local Yapa, an electric fence was erected around 250 acres of suitable habitat, about 300 miles northwest of Alice Springs so that the mala could adapt while protected from predators. By 1992, there were about 150 mala in their enclosure, which became known as the Mala Paddock. However, all attempts to reintroduce mala from the paddocks into the unfenced wild were unsuccessful, so in the end the reintroduction programme was abandoned. The team now faced a situation where mala could be bred, but not released into the wild again. Thus, in 1993, a Mala Recovery Team was established to boost mala numbers, and goals for a new programme were set: the team concentrated on finding suitable predator-free or predator-controlled conservation sites within the malas known range. Finally, in March 1999, twelve adult females, eight adult males, and eight joeys were transferred from the Mala Paddock to Dryandra Woodland in Western Australia. Then, a few months later, a second group was transferred to Trimouille, an island off the coast of western Australia. First, it had been necessary to rid the island of rats and cats a task that had taken two years of hard work. Six weeks after their release into this conservation site, a team returned to the island to find out how things were going. Each of the malas had been fitted with a radio collar that transmits for about 14 months, after which it falls off. The team was able to locate 29 out of the 30 transmitters only one came from the collar of a mala that had died of unknown causes. So far the recovery programme had gone even better than expected. Today, there are many signs suggesting that the mala population on the island is continuing to do well. | Scientists were satisfied with the initial results of the recovery programme. | entailment
id_6485 | The Rufous Hare-Wallaby The Rufous Hare-Wallaby is a species of Australian kangaroo, usually known by its Aboriginal name, mala. At one time, there may have been as many as ten million of these little animals across the arid and semi-arid landscape of Australia, but their populations, like those of so many other small endemic species, were devastated when cats and foxes were introduced indeed, during the 1950s it was thought that the mala was extinct. But in 1964, a small colony was found 450 miles northwest of Alice Springs in the Tanami Desert. And 12 years later, a second small colony was found nearby. Very extensive surveys were made throughout historical mala range but no other traces were found. Throughout the 1970s and 1980s, scientists from the Parks and Wildlife Commission of the Northern Territory monitored these two populations. At first it seemed that they were holding their own. Then in late 1987, every one of the individuals of the second and smaller of the wild colonies was killed. From examination of the tracks in the sand, it seemed that just one single fox had been responsible. And then, in October 1991, a wild-fire destroyed the entire area occupied by the remaining colony. Thus the mala was finally pronounced extinct in the wild. Fortunately, ten years earlier, seven individuals had been captured, and had become the founders of a captive breeding programme at the Arid Zone Research Institute in Alice Springs; and that group had thrived. Part of this success is due to the fact that the female can breed when she is just five months old and can produce up to three young a year. Like other kangaroo species, the mother carries her young known as a joey in her pouch for about 15 weeks, and she can have more than one joey at the same time. In the early 1980s, there were enough mala in the captive population to make it feasible to start a reintroduction programme. But first it was necessary to discuss this with the leaders of the Yapa people. Traditionally, the mala had been an important animal in their culture, with strong medicinal powers for old people. It had also been an important food source, and there were concerns that any mala returned to the wild would be killed for the pot. And so, in 1980, a group of key Yapa men was invited to visit the proposed reintroduction area. The skills and knowledge of the Yapa would play a significant and enduring role in this and all other mala projects. With the help of the local Yapa, an electric fence was erected around 250 acres of suitable habitat, about 300 miles northwest of Alice Springs so that the mala could adapt while protected from predators. By 1992, there were about 150 mala in their enclosure, which became known as the Mala Paddock. However, all attempts to reintroduce mala from the paddocks into the unfenced wild were unsuccessful, so in the end the reintroduction programme was abandoned. The team now faced a situation where mala could be bred, but not released into the wild again. Thus, in 1993, a Mala Recovery Team was established to boost mala numbers, and goals for a new programme were set: the team concentrated on finding suitable predator-free or predator-controlled conservation sites within the malas known range. Finally, in March 1999, twelve adult females, eight adult males, and eight joeys were transferred from the Mala Paddock to Dryandra Woodland in Western Australia. Then, a few months later, a second group was transferred to Trimouille, an island off the coast of western Australia. First, it had been necessary to rid the island of rats and cats a task that had taken two years of hard work. Six weeks after their release into this conservation site, a team returned to the island to find out how things were going. Each of the malas had been fitted with a radio collar that transmits for about 14 months, after which it falls off. The team was able to locate 29 out of the 30 transmitters only one came from the collar of a mala that had died of unknown causes. So far the recovery programme had gone even better than expected. Today, there are many signs suggesting that the mala population on the island is continuing to do well. | The mala population which was transferred to Dryandra Woodland quickly increased in size. | neutral
id_6486 | The Rufous Hare-Wallaby The Rufous Hare-Wallaby is a species of Australian kangaroo, usually known by its Aboriginal name, mala. At one time, there may have been as many as ten million of these little animals across the arid and semi-arid landscape of Australia, but their populations, like those of so many other small endemic species, were devastated when cats and foxes were introduced indeed, during the 1950s it was thought that the mala was extinct. But in 1964, a small colony was found 450 miles northwest of Alice Springs in the Tanami Desert. And 12 years later, a second small colony was found nearby. Very extensive surveys were made throughout historical mala range but no other traces were found. Throughout the 1970s and 1980s, scientists from the Parks and Wildlife Commission of the Northern Territory monitored these two populations. At first it seemed that they were holding their own. Then in late 1987, every one of the individuals of the second and smaller of the wild colonies was killed. From examination of the tracks in the sand, it seemed that just one single fox had been responsible. And then, in October 1991, a wild-fire destroyed the entire area occupied by the remaining colony. Thus the mala was finally pronounced extinct in the wild. Fortunately, ten years earlier, seven individuals had been captured, and had become the founders of a captive breeding programme at the Arid Zone Research Institute in Alice Springs; and that group had thrived. Part of this success is due to the fact that the female can breed when she is just five months old and can produce up to three young a year. Like other kangaroo species, the mother carries her young known as a joey in her pouch for about 15 weeks, and she can have more than one joey at the same time. In the early 1980s, there were enough mala in the captive population to make it feasible to start a reintroduction programme. But first it was necessary to discuss this with the leaders of the Yapa people. Traditionally, the mala had been an important animal in their culture, with strong medicinal powers for old people. It had also been an important food source, and there were concerns that any mala returned to the wild would be killed for the pot. And so, in 1980, a group of key Yapa men was invited to visit the proposed reintroduction area. The skills and knowledge of the Yapa would play a significant and enduring role in this and all other mala projects. With the help of the local Yapa, an electric fence was erected around 250 acres of suitable habitat, about 300 miles northwest of Alice Springs so that the mala could adapt while protected from predators. By 1992, there were about 150 mala in their enclosure, which became known as the Mala Paddock. However, all attempts to reintroduce mala from the paddocks into the unfenced wild were unsuccessful, so in the end the reintroduction programme was abandoned. The team now faced a situation where mala could be bred, but not released into the wild again. Thus, in 1993, a Mala Recovery Team was established to boost mala numbers, and goals for a new programme were set: the team concentrated on finding suitable predator-free or predator-controlled conservation sites within the malas known range. Finally, in March 1999, twelve adult females, eight adult males, and eight joeys were transferred from the Mala Paddock to Dryandra Woodland in Western Australia. Then, a few months later, a second group was transferred to Trimouille, an island off the coast of western Australia. First, it had been necessary to rid the island of rats and cats a task that had taken two years of hard work. Six weeks after their release into this conservation site, a team returned to the island to find out how things were going. Each of the malas had been fitted with a radio collar that transmits for about 14 months, after which it falls off. The team was able to locate 29 out of the 30 transmitters only one came from the collar of a mala that had died of unknown causes. So far the recovery programme had gone even better than expected. Today, there are many signs suggesting that the mala population on the island is continuing to do well. | Scientists eventually gave up their efforts to release captive mala into the unprotected wild. | entailment
id_6487 | The Rufous Hare-Wallaby The Rufous Hare-Wallaby is a species of Australian kangaroo, usually known by its Aboriginal name, mala. At one time, there may have been as many as ten million of these little animals across the arid and semi-arid landscape of Australia, but their populations, like those of so many other small endemic species, were devastated when cats and foxes were introduced indeed, during the 1950s it was thought that the mala was extinct. But in 1964, a small colony was found 450 miles northwest of Alice Springs in the Tanami Desert. And 12 years later, a second small colony was found nearby. Very extensive surveys were made throughout historical mala range but no other traces were found. Throughout the 1970s and 1980s, scientists from the Parks and Wildlife Commission of the Northern Territory monitored these two populations. At first it seemed that they were holding their own. Then in late 1987, every one of the individuals of the second and smaller of the wild colonies was killed. From examination of the tracks in the sand, it seemed that just one single fox had been responsible. And then, in October 1991, a wild-fire destroyed the entire area occupied by the remaining colony. Thus the mala was finally pronounced extinct in the wild. Fortunately, ten years earlier, seven individuals had been captured, and had become the founders of a captive breeding programme at the Arid Zone Research Institute in Alice Springs; and that group had thrived. Part of this success is due to the fact that the female can breed when she is just five months old and can produce up to three young a year. Like other kangaroo species, the mother carries her young known as a joey in her pouch for about 15 weeks, and she can have more than one joey at the same time. In the early 1980s, there were enough mala in the captive population to make it feasible to start a reintroduction programme. But first it was necessary to discuss this with the leaders of the Yapa people. Traditionally, the mala had been an important animal in their culture, with strong medicinal powers for old people. It had also been an important food source, and there were concerns that any mala returned to the wild would be killed for the pot. And so, in 1980, a group of key Yapa men was invited to visit the proposed reintroduction area. The skills and knowledge of the Yapa would play a significant and enduring role in this and all other mala projects. With the help of the local Yapa, an electric fence was erected around 250 acres of suitable habitat, about 300 miles northwest of Alice Springs so that the mala could adapt while protected from predators. By 1992, there were about 150 mala in their enclosure, which became known as the Mala Paddock. However, all attempts to reintroduce mala from the paddocks into the unfenced wild were unsuccessful, so in the end the reintroduction programme was abandoned. The team now faced a situation where mala could be bred, but not released into the wild again. Thus, in 1993, a Mala Recovery Team was established to boost mala numbers, and goals for a new programme were set: the team concentrated on finding suitable predator-free or predator-controlled conservation sites within the malas known range. Finally, in March 1999, twelve adult females, eight adult males, and eight joeys were transferred from the Mala Paddock to Dryandra Woodland in Western Australia. Then, a few months later, a second group was transferred to Trimouille, an island off the coast of western Australia. First, it had been necessary to rid the island of rats and cats a task that had taken two years of hard work. Six weeks after their release into this conservation site, a team returned to the island to find out how things were going. Each of the malas had been fitted with a radio collar that transmits for about 14 months, after which it falls off. The team was able to locate 29 out of the 30 transmitters only one came from the collar of a mala that had died of unknown causes. So far the recovery programme had gone even better than expected. Today, there are many signs suggesting that the mala population on the island is continuing to do well. | Natural defences were sufficient to protect the area called Mala Paddock. | contradiction
id_6488 | The Saiga Antelope In 1993 more than a million saiga antelope (Saiga tatarica) crowded the steppes of Central Asia. However, by 2004 just 30,000 remained, many of them female. The species had fallen prey to relentless poaching - with motorbikes and automatic weapons - in the wake of the Soviet Union's collapse. This 97% decline is one of the most dramatic population crashes of a large mammal ever seen. Poachers harvest males for their horns, which are used in fever cures in traditional Chinese medicine. The slaughter is embarrassing for conservationists. In the early 1990s, groups such as WWF actively encouraged the saiga hunt, promoting its horn as an alternative to the horn of the endangered rhino. "The saiga was an important resource, well managed by the Soviet Union, " says John Robinson, at the Wildlife Conservation Society (WCS) in New York City, US. "But with the breakdown of civil society and law and order, that management ceased. " | In the early nineties Central Asias steppes was home to over one million saiga. | entailment |
id_6489 | The Saiga Antelope In 1993 more than a million saiga antelope (Saiga tatarica) crowded the steppes of Central Asia. However, by 2004 just 30,000 remained, many of them female. The species had fallen prey to relentless poaching - with motorbikes and automatic weapons - in the wake of the Soviet Union's collapse. This 97% decline is one of the most dramatic population crashes of a large mammal ever seen. Poachers harvest males for their horns, which are used in fever cures in traditional Chinese medicine. The slaughter is embarrassing for conservationists. In the early 1990s, groups such as WWF actively encouraged the saiga hunt, promoting its horn as an alternative to the horn of the endangered rhino. "The saiga was an important resource, well managed by the Soviet Union, " says John Robinson, at the Wildlife Conservation Society (WCS) in New York City, US. "But with the breakdown of civil society and law and order, that management ceased. " | The WWF managed to save many rhinos because it encouraged the hunting of saiga. | neutral |
id_6490 | The Saiga Antelope In 1993 more than a million saiga antelope (Saiga tatarica) crowded the steppes of Central Asia. However, by 2004 just 30,000 remained, many of them female. The species had fallen prey to relentless poaching - with motorbikes and automatic weapons - in the wake of the Soviet Union's collapse. This 97% decline is one of the most dramatic population crashes of a large mammal ever seen. Poachers harvest males for their horns, which are used in fever cures in traditional Chinese medicine. The slaughter is embarrassing for conservationists. In the early 1990s, groups such as WWF actively encouraged the saiga hunt, promoting its horn as an alternative to the horn of the endangered rhino. "The saiga was an important resource, well managed by the Soviet Union, " says John Robinson, at the Wildlife Conservation Society (WCS) in New York City, US. "But with the breakdown of civil society and law and order, that management ceased. " | Traditional medicine uses the poached horns of male members of the group. | entailment |
id_6491 | The Saiga Antelope In 1993 more than a million saiga antelope (Saiga tatarica) crowded the steppes of Central Asia. However, by 2004 just 30,000 remained, many of them female. The species had fallen prey to relentless poaching - with motorbikes and automatic weapons - in the wake of the Soviet Union's collapse. This 97% decline is one of the most dramatic population crashes of a large mammal ever seen. Poachers harvest males for their horns, which are used in fever cures in traditional Chinese medicine. The slaughter is embarrassing for conservationists. In the early 1990s, groups such as WWF actively encouraged the saiga hunt, promoting its horn as an alternative to the horn of the endangered rhino. "The saiga was an important resource, well managed by the Soviet Union, " says John Robinson, at the Wildlife Conservation Society (WCS) in New York City, US. "But with the breakdown of civil society and law and order, that management ceased. " | This 97% decline is the most dramatic population crash of a large mammal ever seen. | contradiction |
id_6492 | The Santa Monica is a well defined unit of transverse mountain ranges in Southern California, situated at the core of the island. These ranges are perpendicular to the coast of Sierra Nevada and the Pennsylvania ranges. A mile of pink blossom trees brightens the pathway of each peak. Santa Monica, or the high one as it is known, is often hidden away in the heavens. On a winters day, one can see only the barks of blossom trees. Each mountain top has a layer of snow like icing on a cake. The triple peaks glisten like diamonds in a river, beneath the sunshine over the heart of the coast. Opposite Santa Monica lies its twin, Santa Louisa with three snow-layered peaks. However, Santa Louisa still looks like a baby when compared to the high one. | The Santa Monica is at right angles to the coast of Sierra Nevada | entailment |
id_6493 | The Santa Monica is a well defined unit of transverse mountain ranges in Southern California, situated at the core of the island. These ranges are perpendicular to the coast of Sierra Nevada and the Pennsylvania ranges. A mile of pink blossom trees brightens the pathway of each peak. Santa Monica, or the high one as it is known, is often hidden away in the heavens. On a winters day, one can see only the barks of blossom trees. Each mountain top has a layer of snow like icing on a cake. The triple peaks glisten like diamonds in a river, beneath the sunshine over the heart of the coast. Opposite Santa Monica lies its twin, Santa Louisa with three snow-layered peaks. However, Santa Louisa still looks like a baby when compared to the high one. | Santa Monica is high enough to almost reach the sky | entailment |
id_6494 | The Santa Monica is a well defined unit of transverse mountain ranges in Southern California, situated at the core of the island. These ranges are perpendicular to the coast of Sierra Nevada and the Pennsylvania ranges. A mile of pink blossom trees brightens the pathway of each peak. Santa Monica, or the high one as it is known, is often hidden away in the heavens. On a winters day, one can see only the barks of blossom trees. Each mountain top has a layer of snow like icing on a cake. The triple peaks glisten like diamonds in a river, beneath the sunshine over the heart of the coast. Opposite Santa Monica lies its twin, Santa Louisa with three snow-layered peaks. However, Santa Louisa still looks like a baby when compared to the high one. | Mount Santa Monica is to be found at the heart of the island | entailment |
id_6495 | The Santa Monica is a well defined unit of transverse mountain ranges in Southern California, situated at the core of the island. These ranges are perpendicular to the coast of Sierra Nevada and the Pennsylvania ranges. A mile of pink blossom trees brightens the pathway of each peak. Santa Monica, or the high one as it is known, is often hidden away in the heavens. On a winters day, one can see only the barks of blossom trees. Each mountain top has a layer of snow like icing on a cake. The triple peaks glisten like diamonds in a river, beneath the sunshine over the heart of the coast. Opposite Santa Monica lies its twin, Santa Louisa with three snow-layered peaks. However, Santa Louisa still looks like a baby when compared to the high one. | The Mountain has an identical twin across the island | contradiction |
id_6496 | The Scottish and Newcastle brewery have blamed the unpredictable weather for the drop in their profits for the year 2007. Statistics show last years net profits of 76 million had dropped by 9% in the first six months of this year compared to the same period the previous year. The wet weather has been prominent throughout summer and, due to the fact that there is no large sporting event such as the 2006 football World Cup to increase sales, this has led to a further 4.3% drop for the remainder of the year. The chairman of Scottish and Newcastle has claimed that the continuation of this bad weather in the UK and France will make it most challenging to reach this years target. The Fosters beer company introduced a new low calorie line in 2006 with the intention of inflating their profits. This inflated Fosters 2006 profits by 55.5%. This figure is still rising today. While the profits for Fosters low calorie beer continue to rise, the proportion of the original Fosters profits were 50% of Scottish and Newcastles previous net profits. | Fosters profits from 2006 and 2007 are lower than the profits Scottish and Newcastle breweries made in 2006 | neutral |
id_6497 | The Scottish and Newcastle brewery have blamed the unpredictable weather for the drop in their profits for the year 2007. Statistics show last years net profits of 76 million had dropped by 9% in the first six months of this year compared to the same period the previous year. The wet weather has been prominent throughout summer and, due to the fact that there is no large sporting event such as the 2006 football World Cup to increase sales, this has led to a further 4.3% drop for the remainder of the year. The chairman of Scottish and Newcastle has claimed that the continuation of this bad weather in the UK and France will make it most challenging to reach this years target. The Fosters beer company introduced a new low calorie line in 2006 with the intention of inflating their profits. This inflated Fosters 2006 profits by 55.5%. This figure is still rising today. While the profits for Fosters low calorie beer continue to rise, the proportion of the original Fosters profits were 50% of Scottish and Newcastles previous net profits. | The breweries are suffering profit decrease due to long periods of bad weather | neutral |
id_6498 | The Scottish and Newcastle brewery have blamed the unpredictable weather for the drop in their profits for the year 2007. Statistics show last years net profits of 76 million had dropped by 9% in the first six months of this year compared to the same period the previous year. The wet weather has been prominent throughout summer and, due to the fact that there is no large sporting event such as the 2006 football World Cup to increase sales, this has led to a further 4.3% drop for the remainder of the year. The chairman of Scottish and Newcastle has claimed that the continuation of this bad weather in the UK and France will make it most challenging to reach this years target. The Fosters beer company introduced a new low calorie line in 2006 with the intention of inflating their profits. This inflated Fosters 2006 profits by 55.5%. This figure is still rising today. While the profits for Fosters low calorie beer continue to rise, the proportion of the original Fosters profits were 50% of Scottish and Newcastles previous net profits. | The original Fosters line profits for 2007 are 38 million | entailment |
id_6499 | The Scottish and Newcastle brewery have blamed the unpredictable weather for the drop in their profits for the year 2007. Statistics show last years net profits of 76 million had dropped by 9% in the first six months of this year compared to the same period the previous year. The wet weather has been prominent throughout summer and, due to the fact that there is no large sporting event such as the 2006 football World Cup to increase sales, this has led to a further 4.3% drop for the remainder of the year. The chairman of Scottish and Newcastle has claimed that the continuation of this bad weather in the UK and France will make it most challenging to reach this years target. The Fosters beer company introduced a new low calorie line in 2006 with the intention of inflating their profits. This inflated Fosters 2006 profits by 55.5%. This figure is still rising today. While the profits for Fosters low calorie beer continue to rise, the proportion of the original Fosters profits were 50% of Scottish and Newcastles previous net profits. | Scottish and Newcastles profits in 2007 were 65.9 million | entailment |