Dataset columns:
uid: string, 4 to 7 characters
premise: string, 19 to 9.21k characters
hypothesis: string, 13 to 488 characters
label: string, 3 classes (entailment, neutral, contradiction)
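As a rough sketch only, one row of this dataset could be represented in Python as follows. The field names and the three label values come from the columns and records below; the NLIRecord class name and the truncated premise string are illustrative assumptions, not part of the dataset.

```python
from dataclasses import dataclass
from typing import Literal

# The three label classes that appear in the records below.
Label = Literal["entailment", "neutral", "contradiction"]

@dataclass
class NLIRecord:
    uid: str         # e.g. "id_6200"
    premise: str     # reading passage, 19 to roughly 9.21k characters
    hypothesis: str  # short statement judged against the premise
    label: Label

# Example built from the first record below (premise truncated here for brevity).
example = NLIRecord(
    uid="id_6200",
    premise="The Great Australian Fence. A war has been going on for almost a hundred years...",
    hypothesis="The author does not agree with the culling of kangaroos.",
    label="neutral",
)
```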
uid: id_6200
premise: The Great Australian Fence. A war has been going on for almost a hundred years between the sheep farmers of Australia and the dingo, Australia's wild dog. To protect their livelihood, the farmers built a wire fence, 3,307 miles of continuous wire mesh, reaching from the coast of South Australia all the way to the cotton fields of eastern Queensland, just short of the Pacific Ocean. The Fence is Australia's version of the Great Wall of China, but even longer, erected to keep out hostile invaders, in this case hordes of yellow dogs. The empire it preserves is that of the woolgrowers, sovereigns of the world's second largest sheep flock after China's, some 123 million head, and keepers of a wool export business worth four billion dollars. Never mind that more and more people, conservationists, politicians, taxpayers and animal lovers, say that such a barrier would never be allowed today on ecological grounds. With sections of it almost a hundred years old, the dog fence has become, as conservationist Lindsay Fairweather ruefully admits, an icon of Australian frontier ingenuity. To appreciate this unusual outback monument and to meet the people whose livelihoods depend on it, I spent part of an Australian autumn travelling the wire. It's known by different names in different states: the Dog Fence in South Australia, the Border Fence in New South Wales and the Barrier Fence in Queensland. I would call it simply the Fence. For most of its prodigious length, this epic fence winds like a river across a landscape that, unless a big rain has fallen, scarcely has rivers. The eccentric route, prescribed mostly by property lines, provides a sampler of outback topography: the Fence goes over sand dunes, past salt lakes, up and down rock-strewn hills, through dense scrub and across barren plains. The Fence stays away from towns. Where it passes near a town, it has actually become a tourist attraction visited on bus tours. It marks the traditional dividing line between cattle and sheep. Inside, where the dingoes are legally classified as vermin, they are shot, poisoned and trapped. Sheep and dingoes do not mix, and the Fence sends that message mile after mile. What is this creature that by itself threatens an entire industry, inflicting several million dollars of damage a year despite the presence of the world's most obsessive fence? Cousin to the coyote and the jackal, descended from the Asian wolf, Canis lupus dingo is an introduced species of wild dog. Skeletal remains indicate that the dingo was introduced to Australia more than 3,500 years ago, probably with Asian seafarers who landed on the north coast. The adaptable dingo spread rapidly and in a short time became the top predator, killing off all its marsupial competitors. The dingo looks like a small wolf with a long nose, short pointed ears and a bushy tail. Dingoes rarely bark; they yelp and howl. Standing about 22 inches at the shoulder, slightly taller than a coyote, the dingo is Australia's largest land carnivore. The woolgrowers' war against dingoes, which is similar to the sheep ranchers' rage against coyotes in the US, started not long after the first European settlers disembarked in 1788, bringing with them a cargo of sheep. Dingoes officially became outlaws in 1830 when governments placed a bounty on their heads. Today bounties for problem dogs killing sheep inside the Fence can reach $500.
As pioneers penetrated the interior with their flocks of sheep, fences replaced shepherds until, by the end of the 19th century, thousands of miles of barrier fencing crisscrossed the vast grazing lands. "The dingo started out as a quiet observer," writes Roland Breckwoldt in A Very Elegant Animal: The Dingo, "but soon came to represent everything that was dark and dangerous on the continent." It is estimated that since sheep arrived in Australia, dingo numbers have increased a hundredfold. Though dingoes have been eradicated from parts of Australia, an educated guess puts the population at more than a million. Eventually government officials and graziers agreed that one well-maintained fence, placed on the outer rim of sheep country and paid for by taxes levied on woolgrowers, should supplant the maze of private netting. By 1960, three states had joined their barriers to form a single dog fence. The intense private battles between woolgrowers and dingoes have usually served to define the Fence only in economic terms. It marks the difference between profit and loss. Yet the Fence casts a much broader ecological shadow, for it has become a kind of terrestrial dam, deflecting the flow of animals inside and out. The ecological side effects appear most vividly at Sturt National Park. In 1845, explorer Charles Sturt led an expedition through these parts on a futile search for an inland sea. For Sturt and other early explorers, it was a rare event to see a kangaroo. Now they are ubiquitous, for without a native predator the kangaroo population has exploded inside the Fence. Kangaroos are now cursed more than dingoes. They have become the rivals of sheep, competing for water and grass. In response, state governments cull more than three million kangaroos a year to keep Australia's national symbol from overrunning the pastoral lands. Park officials, who recognise that the fence is to blame, respond to the excess of kangaroos by saying, "The fence is there, and we have to live with it."
hypothesis: The author does not agree with the culling of kangaroos.
label: neutral
uid: id_6201
premise: same as id_6200
hypothesis: Dingoes have flourished as a result of the sheep industry.
label: entailment
uid: id_6202
premise: same as id_6200
hypothesis: Dingoes are known to attack humans.
label: neutral
uid: id_6203
premise: same as id_6200
hypothesis: The fence serves a different purpose in each state.
label: contradiction
uid: id_6204
premise: The Great Barrier Reef extends over 2,000 km, and has been built by tiny animals called coral polyps. Some of the Great Barrier Reef's coral skeleton deposits date back over half a million years. The individual coral polyps that comprise the reef grow very slowly, increasing by only 1-3 cm a year. A cultural and ecological icon, the Great Barrier Reef has been visited by Aboriginal Australians for over 40,000 years and today attracts over two million tourists annually. Unfortunately, the fragility of the reef's ecosystem is now threatened by the effects of climate change on the temperature of the water in which it sits: the Coral Sea. Over the last decade, sea pollution caused by farm runoff has caused coral bleaching, thus diminishing the appearance of one of the world's greatest sights. The ecological damage also threatens those endemic creatures that rely upon the Great Barrier Reef for food and/or shelter. Many of these are themselves endangered species. The Great Barrier Reef is in fact a system of over 3,000 reefs and islands. The northern section of the reef contains deltaic and ribbon reefs. The most common occurrences of fringing and lagoonal reefs are in the southern sections of the reef. In the middle section you are most likely to find crescentic reefs, although this type is also found in the northern reef.
hypothesis: The northern section of the Great Barrier Reef only contains three types of reef.
label: neutral
uid: id_6205
premise: same as id_6204
hypothesis: Ocean warming is hazardous to coral systems.
label: neutral
uid: id_6206
premise: same as id_6204
hypothesis: The Great Barrier Reef is in the Coral Sea.
label: entailment
uid: id_6207
premise: same as id_6204
hypothesis: There has been an aesthetic decline in the Great Barrier Reef.
label: entailment
uid: id_6208
premise: same as id_6204
hypothesis: Farm runoff can affect sea water temperature.
label: neutral
uid: id_6209
premise: The History of Salt. Salt is so simple and plentiful that we almost take it for granted. In chemical terms, salt is the combination of a sodium ion with a chloride ion, making it one of the most basic molecules on earth. It is also one of the most plentiful: it has been estimated that salt deposits under the state of Kansas alone could supply the entire world's needs for the next 250,000 years. But salt is also an essential element. Without it, life itself would be impossible, since the human body requires the mineral in order to function properly. The concentration of sodium ions in the blood is directly related to the regulation of safe body fluid levels. And while we are all familiar with its many uses in cooking, we may not be aware that this element is used in some 14,000 commercial applications. From manufacturing pulp and paper to setting dyes in textiles and fabric, from producing soaps and detergents to making our roads safe in winter, salt plays an essential part in our daily lives. Salt has a long and influential role in world history. From the dawn of civilization, it has been a key factor in economic, religious, social and political development. In every corner of the world, it has been the subject of superstition, folklore and warfare, and has even been used as currency. As a precious and portable commodity, salt has long been a cornerstone of economies throughout history. In fact, researcher M. R. Bloch conjectured that civilization began along the edges of the desert because of the natural surface deposits of salt found there. Bloch also believed that the first war, likely fought near the ancient city of Essalt on the Jordan River, could have been fought over the city's precious supplies of the mineral. In 2200 BC, the Chinese emperor Hsia Yu levied one of the first known taxes. He taxed salt. In Tibet, Marco Polo noted that tiny cakes of salt were pressed with images of the Grand Khan to be used as coins, and to this day among the nomads of Ethiopia's Danakil Plains it is still used as money. Greek slave traders often bartered it for slaves, giving rise to the expression that someone was "not worth his salt". Roman legionnaires were paid in salt, a salarium, the Latin origin of the word salary. Merchants in 12th-century Timbuktu, the gateway to the Sahara Desert and the seat of scholars, valued this mineral as highly as books and gold. In France, Charles of Anjou levied the gabelle, a salt tax, in 1259 to finance his conquest of the Kingdom of Naples. Outrage over the gabelle fueled the French Revolution. Though the revolutionaries eliminated the tax shortly after Louis XVI was beheaded, the Republic of France reestablished the gabelle in the early 19th century; only in 1946 was it removed from the books. The Erie Canal, an engineering marvel that connected the Great Lakes to New York's Hudson River in 1825, was called "the ditch that salt built". Salt tax revenues paid for half the cost of construction of the canal. The British monarchy supported itself with high salt taxes, leading to a bustling black market for the white crystal. In 1785, the Earl of Dundonald wrote that every year in England 10,000 people were arrested for salt smuggling. And protesting against British rule in 1930, Mahatma Gandhi led a 200-mile march to the Arabian Sea to collect untaxed salt for India's poor. In religion and culture, salt long held an important place, with Greek worshippers consecrating it in their rituals.
Further, in the Buddhist tradition, salt repels evil spirits, which is why it is customary to throw it over your shoulder before entering your house after a funeral: it scares off any evil spirits that may be clinging to your back. The Shinto religion also uses it to purify an area. Before sumo wrestlers enter the ring for a match, which is, in reality, an elaborate Shinto rite, a handful is thrown into the centre to drive off malevolent spirits. In the Southwest of the United States, the Pueblo worship the Salt Mother. Other native tribes had significant restrictions on who was permitted to eat salt. Hopi legend holds that the angry Warrior Twins punished mankind by placing valuable salt deposits far from civilization, requiring hard work and bravery to harvest the precious mineral. Today, a gift of salt endures in India as a potent symbol of good luck and a reference to Mahatma Gandhi's liberation of India. The effects of salt deficiency are highlighted in times of war, when human bodies and national economies are strained to their limits. Thousands of Napoleon's troops died during the French retreat from Moscow due to inadequate wound healing and lowered resistance to disease, the results of salt deficiency.
hypothesis: It has been suggested that salt was responsible for the first war.
label: entailment
uid: id_6210
premise: same as id_6209
hypothesis: Most of the money for the construction of the Erie Canal came from salt taxes.
label: contradiction
uid: id_6211
premise: same as id_6209
hypothesis: Hopi legend believes that salt deposits were placed far away from civilization to penalize mankind.
label: entailment
uid: id_6212
premise: same as id_6209
hypothesis: The first tax on salt was imposed by a Chinese emperor.
label: neutral
uid: id_6213
premise: same as id_6209
hypothesis: A lack of salt is connected with the deaths of some soldiers.
label: entailment
uid: id_6214
premise: same as id_6209
hypothesis: Salt is no longer used as a form of currency.
label: contradiction
id_6215
The History of Glass. From our earliest origins, man has been making use of glass. Historians have discovered that a type of natural glass - obsidian - formed in places such as the mouth of a volcano as a result of the intense heat of an eruption melting sand - was first used as tips for spears. Archaeologists have even found evidence of man-made glass which dates back to 4000 BC; this took the form of glazes used for coating stone beads. It was not until 1500 BC, however, that the first hollow glass container was made by covering a sand core with a layer of molten glass. Glass blowing became the most common way to make glass containers from the first century BC. The glass made during this time was highly coloured due to the impurities of the raw material. In the first century AD, methods of creating colourless glass were developed, which was then tinted by the addition of colouring materials. The secret of glass making was taken across Europe by the Romans during this century. However, they guarded the skills and technology required to make glass very closely, and it was not until their empire collapsed in 476 AD that glass-making knowledge became widespread throughout Europe and the Middle East. From the 10th century onwards, the Venetians gained a reputation for technical skill and artistic ability in the making of glass bottles, and many of the city's craftsmen left Italy to set up glassworks throughout Europe. A major milestone in the history of glass occurred with the invention of lead crystal glass by the English glass manufacturer George Ravenscroft (1632-1683). He attempted to counter the effect of clouding that sometimes occurred in blown glass by introducing lead to the raw materials used in the process. The new glass he created was softer and easier to decorate, and had a higher refractive index, adding to its brilliance and beauty, and it proved invaluable to the optical industry. It is thanks to Ravenscroft's invention that optical lenses, astronomical telescopes, microscopes and the like became possible. In Britain, the modern glass industry only really started to develop after the repeal of the Excise Act in 1845. Before that time, heavy taxes had been placed on the amount of glass melted in a glasshouse, and were levied continuously from 1745 to 1845. Joseph Paxton's Crystal Palace at London's Great Exhibition of 1851 marked the beginning of glass as a material used in the building industry. This revolutionary new building encouraged the use of glass in public, domestic and horticultural architecture. Glass manufacturing techniques also improved with the advancement of science and the development of better technology. From 1887 onwards, glass making developed from traditional mouth-blowing to a semi-automatic process, after factory- owner HM Ashley introduced a machine capable of producing 200 bottles per hour in Castleford, Yorkshire, England - more than three times quicker than any previous production method. Then in 1907, the first fully automated machine was developed in the USA by Michael Owens - founder of the Owens Bottle Machine Company (later the major manufacturers Owens- Illinois) - and installed in its factory. Owens' invention could produce an impressive 2,500 bottles per hour. Other developments followed rapidly, but it was not until the First World War, when Britain became cut off from essential glass suppliers, that glass became part of the scientific sector. Previous to this, glass had been seen as a craft rather than a precise science. 
Today, glass making is big business. It has become a modern, hi-tech industry operating in a fiercely competitive global market where quality, design and service levels are critical to maintaining market share. Modern glass plants are capable of making millions of glass containers a day in many different colours, with green, brown and clear remaining the most popular. Few of us can imagine modern life without glass. It features in almost every aspect of our lives - in our homes, our cars and whenever we sit down to eat or drink. Glass packaging is used for many products, many beverages are sold in glass, as are numerous foodstuffs, as well as medicines and cosmetics. Glass is an ideal material for recycling, and with growing consumer concern for green issues, glass bottles and jars are becoming ever more popular. Glass recycling is good news for the environment. It saves used glass containers being sent to landfill. As less energy is needed to melt recycled glass than to melt down raw materials, this also saves fuel and production costs. Recycling also reduces the need for raw materials to be quarried, thus saving precious resources.
Concern for the environment is leading to an increased demand for glass containers.
entailment
id_6216
The History of Glass. From our earliest origins, man has been making use of glass. Historians have discovered that a type of natural glass - obsidian - formed in places such as the mouth of a volcano as a result of the intense heat of an eruption melting sand - was first used as tips for spears. Archaeologists have even found evidence of man-made glass which dates back to 4000 BC; this took the form of glazes used for coating stone beads. It was not until 1500 BC, however, that the first hollow glass container was made by covering a sand core with a layer of molten glass. Glass blowing became the most common way to make glass containers from the first century BC. The glass made during this time was highly coloured due to the impurities of the raw material. In the first century AD, methods of creating colourless glass were developed, which was then tinted by the addition of colouring materials. The secret of glass making was taken across Europe by the Romans during this century. However, they guarded the skills and technology required to make glass very closely, and it was not until their empire collapsed in 476 AD that glass-making knowledge became widespread throughout Europe and the Middle East. From the 10th century onwards, the Venetians gained a reputation for technical skill and artistic ability in the making of glass bottles, and many of the city's craftsmen left Italy to set up glassworks throughout Europe. A major milestone in the history of glass occurred with the invention of lead crystal glass by the English glass manufacturer George Ravenscroft (1632-1683). He attempted to counter the effect of clouding that sometimes occurred in blown glass by introducing lead to the raw materials used in the process. The new glass he created was softer and easier to decorate, and had a higher refractive index, adding to its brilliance and beauty, and it proved invaluable to the optical industry. It is thanks to Ravenscroft's invention that optical lenses, astronomical telescopes, microscopes and the like became possible. In Britain, the modern glass industry only really started to develop after the repeal of the Excise Act in 1845. Before that time, heavy taxes had been placed on the amount of glass melted in a glasshouse, and were levied continuously from 1745 to 1845. Joseph Paxton's Crystal Palace at London's Great Exhibition of 1851 marked the beginning of glass as a material used in the building industry. This revolutionary new building encouraged the use of glass in public, domestic and horticultural architecture. Glass manufacturing techniques also improved with the advancement of science and the development of better technology. From 1887 onwards, glass making developed from traditional mouth-blowing to a semi-automatic process, after factory- owner HM Ashley introduced a machine capable of producing 200 bottles per hour in Castleford, Yorkshire, England - more than three times quicker than any previous production method. Then in 1907, the first fully automated machine was developed in the USA by Michael Owens - founder of the Owens Bottle Machine Company (later the major manufacturers Owens- Illinois) - and installed in its factory. Owens' invention could produce an impressive 2,500 bottles per hour. Other developments followed rapidly, but it was not until the First World War, when Britain became cut off from essential glass suppliers, that glass became part of the scientific sector. Previous to this, glass had been seen as a craft rather than a precise science. 
Today, glass making is big business. It has become a modern, hi-tech industry operating in a fiercely competitive global market where quality, design and service levels are critical to maintaining market share. Modern glass plants are capable of making millions of glass containers a day in many different colours, with green, brown and clear remaining the most popular. Few of us can imagine modern life without glass. It features in almost every aspect of our lives - in our homes, our cars and whenever we sit down to eat or drink. Glass packaging is used for many products, many beverages are sold in glass, as are numerous foodstuffs, as well as medicines and cosmetics. Glass is an ideal material for recycling, and with growing consumer concern for green issues, glass bottles and jars are becoming ever more popular. Glass recycling is good news for the environment. It saves used glass containers being sent to landfill. As less energy is needed to melt recycled glass than to melt down raw materials, this also saves fuel and production costs. Recycling also reduces the need for raw materials to be quarried, thus saving precious resources.
In 1887, HM Ashley had the fastest bottle-producing machine that existed at the time.
entailment
id_6217
The History of Glass. From our earliest origins, man has been making use of glass. Historians have discovered that a type of natural glass - obsidian - formed in places such as the mouth of a volcano as a result of the intense heat of an eruption melting sand - was first used as tips for spears. Archaeologists have even found evidence of man-made glass which dates back to 4000 BC; this took the form of glazes used for coating stone beads. It was not until 1500 BC, however, that the first hollow glass container was made by covering a sand core with a layer of molten glass. Glass blowing became the most common way to make glass containers from the first century BC. The glass made during this time was highly coloured due to the impurities of the raw material. In the first century AD, methods of creating colourless glass were developed, which was then tinted by the addition of colouring materials. The secret of glass making was taken across Europe by the Romans during this century. However, they guarded the skills and technology required to make glass very closely, and it was not until their empire collapsed in 476 AD that glass-making knowledge became widespread throughout Europe and the Middle East. From the 10th century onwards, the Venetians gained a reputation for technical skill and artistic ability in the making of glass bottles, and many of the city's craftsmen left Italy to set up glassworks throughout Europe. A major milestone in the history of glass occurred with the invention of lead crystal glass by the English glass manufacturer George Ravenscroft (1632-1683). He attempted to counter the effect of clouding that sometimes occurred in blown glass by introducing lead to the raw materials used in the process. The new glass he created was softer and easier to decorate, and had a higher refractive index, adding to its brilliance and beauty, and it proved invaluable to the optical industry. It is thanks to Ravenscroft's invention that optical lenses, astronomical telescopes, microscopes and the like became possible. In Britain, the modern glass industry only really started to develop after the repeal of the Excise Act in 1845. Before that time, heavy taxes had been placed on the amount of glass melted in a glasshouse, and were levied continuously from 1745 to 1845. Joseph Paxton's Crystal Palace at London's Great Exhibition of 1851 marked the beginning of glass as a material used in the building industry. This revolutionary new building encouraged the use of glass in public, domestic and horticultural architecture. Glass manufacturing techniques also improved with the advancement of science and the development of better technology. From 1887 onwards, glass making developed from traditional mouth-blowing to a semi-automatic process, after factory- owner HM Ashley introduced a machine capable of producing 200 bottles per hour in Castleford, Yorkshire, England - more than three times quicker than any previous production method. Then in 1907, the first fully automated machine was developed in the USA by Michael Owens - founder of the Owens Bottle Machine Company (later the major manufacturers Owens- Illinois) - and installed in its factory. Owens' invention could produce an impressive 2,500 bottles per hour. Other developments followed rapidly, but it was not until the First World War, when Britain became cut off from essential glass suppliers, that glass became part of the scientific sector. Previous to this, glass had been seen as a craft rather than a precise science. 
Today, glass making is big business. It has become a modern, hi-tech industry operating in a fiercely competitive global market where quality, design and service levels are critical to maintaining market share. Modern glass plants are capable of making millions of glass containers a day in many different colours, with green, brown and clear remaining the most popular. Few of us can imagine modern life without glass. It features in almost every aspect of our lives - in our homes, our cars and whenever we sit down to eat or drink. Glass packaging is used for many products, many beverages are sold in glass, as are numerous foodstuffs, as well as medicines and cosmetics. Glass is an ideal material for recycling, and with growing consumer concern for green issues, glass bottles and jars are becoming ever more popular. Glass recycling is good news for the environment. It saves used glass containers being sent to landfill. As less energy is needed to melt recycled glass than to melt down raw materials, this also saves fuel and production costs. Recycling also reduces the need for raw materials to be quarried, thus saving precious resources.
Nowadays, most glass is produced by large international manufacturers.
neutral
id_6218
The History of Glass. From our earliest origins, man has been making use of glass. Historians have discovered that a type of natural glass - obsidian - formed in places such as the mouth of a volcano as a result of the intense heat of an eruption melting sand - was first used as tips for spears. Archaeologists have even found evidence of man-made glass which dates back to 4000 BC; this took the form of glazes used for coating stone beads. It was not until 1500 BC, however, that the first hollow glass container was made by covering a sand core with a layer of molten glass. Glass blowing became the most common way to make glass containers from the first century BC. The glass made during this time was highly coloured due to the impurities of the raw material. In the first century AD, methods of creating colourless glass were developed, which was then tinted by the addition of colouring materials. The secret of glass making was taken across Europe by the Romans during this century. However, they guarded the skills and technology required to make glass very closely, and it was not until their empire collapsed in 476 AD that glass-making knowledge became widespread throughout Europe and the Middle East. From the 10th century onwards, the Venetians gained a reputation for technical skill and artistic ability in the making of glass bottles, and many of the city's craftsmen left Italy to set up glassworks throughout Europe. A major milestone in the history of glass occurred with the invention of lead crystal glass by the English glass manufacturer George Ravenscroft (1632-1683). He attempted to counter the effect of clouding that sometimes occurred in blown glass by introducing lead to the raw materials used in the process. The new glass he created was softer and easier to decorate, and had a higher refractive index, adding to its brilliance and beauty, and it proved invaluable to the optical industry. It is thanks to Ravenscroft's invention that optical lenses, astronomical telescopes, microscopes and the like became possible. In Britain, the modern glass industry only really started to develop after the repeal of the Excise Act in 1845. Before that time, heavy taxes had been placed on the amount of glass melted in a glasshouse, and were levied continuously from 1745 to 1845. Joseph Paxton's Crystal Palace at London's Great Exhibition of 1851 marked the beginning of glass as a material used in the building industry. This revolutionary new building encouraged the use of glass in public, domestic and horticultural architecture. Glass manufacturing techniques also improved with the advancement of science and the development of better technology. From 1887 onwards, glass making developed from traditional mouth-blowing to a semi-automatic process, after factory- owner HM Ashley introduced a machine capable of producing 200 bottles per hour in Castleford, Yorkshire, England - more than three times quicker than any previous production method. Then in 1907, the first fully automated machine was developed in the USA by Michael Owens - founder of the Owens Bottle Machine Company (later the major manufacturers Owens- Illinois) - and installed in its factory. Owens' invention could produce an impressive 2,500 bottles per hour. Other developments followed rapidly, but it was not until the First World War, when Britain became cut off from essential glass suppliers, that glass became part of the scientific sector. Previous to this, glass had been seen as a craft rather than a precise science. 
Today, glass making is big business. It has become a modern, hi-tech industry operating in a fiercely competitive global market where quality, design and service levels are critical to maintaining market share. Modern glass plants are capable of making millions of glass containers a day in many different colours, with green, brown and clear remaining the most popular. Few of us can imagine modern life without glass. It features in almost every aspect of our lives - in our homes, our cars and whenever we sit down to eat or drink. Glass packaging is used for many products, many beverages are sold in glass, as are numerous foodstuffs, as well as medicines and cosmetics. Glass is an ideal material for recycling, and with growing consumer concern for green issues, glass bottles and jars are becoming ever more popular. Glass recycling is good news for the environment. It saves used glass containers being sent to landfill. As less energy is needed to melt recycled glass than to melt down raw materials, this also saves fuel and production costs. Recycling also reduces the need for raw materials to be quarried, thus saving precious resources.
Michael Owens was hired by a large US company to design a fully-automated bottle manufacturing machine for them.
contradiction
id_6219
The History of Glass. From our earliest origins, man has been making use of glass. Historians have discovered that a type of natural glass - obsidian - formed in places such as the mouth of a volcano as a result of the intense heat of an eruption melting sand - was first used as tips for spears. Archaeologists have even found evidence of man-made glass which dates back to 4000 BC; this took the form of glazes used for coating stone beads. It was not until 1500 BC, however, that the first hollow glass container was made by covering a sand core with a layer of molten glass. Glass blowing became the most common way to make glass containers from the first century BC. The glass made during this time was highly coloured due to the impurities of the raw material. In the first century AD, methods of creating colourless glass were developed, which was then tinted by the addition of colouring materials. The secret of glass making was taken across Europe by the Romans during this century. However, they guarded the skills and technology required to make glass very closely, and it was not until their empire collapsed in 476 AD that glass-making knowledge became widespread throughout Europe and the Middle East. From the 10th century onwards, the Venetians gained a reputation for technical skill and artistic ability in the making of glass bottles, and many of the city's craftsmen left Italy to set up glassworks throughout Europe. A major milestone in the history of glass occurred with the invention of lead crystal glass by the English glass manufacturer George Ravenscroft (1632-1683). He attempted to counter the effect of clouding that sometimes occurred in blown glass by introducing lead to the raw materials used in the process. The new glass he created was softer and easier to decorate, and had a higher refractive index, adding to its brilliance and beauty, and it proved invaluable to the optical industry. It is thanks to Ravenscroft's invention that optical lenses, astronomical telescopes, microscopes and the like became possible. In Britain, the modern glass industry only really started to develop after the repeal of the Excise Act in 1845. Before that time, heavy taxes had been placed on the amount of glass melted in a glasshouse, and were levied continuously from 1745 to 1845. Joseph Paxton's Crystal Palace at London's Great Exhibition of 1851 marked the beginning of glass as a material used in the building industry. This revolutionary new building encouraged the use of glass in public, domestic and horticultural architecture. Glass manufacturing techniques also improved with the advancement of science and the development of better technology. From 1887 onwards, glass making developed from traditional mouth-blowing to a semi-automatic process, after factory- owner HM Ashley introduced a machine capable of producing 200 bottles per hour in Castleford, Yorkshire, England - more than three times quicker than any previous production method. Then in 1907, the first fully automated machine was developed in the USA by Michael Owens - founder of the Owens Bottle Machine Company (later the major manufacturers Owens- Illinois) - and installed in its factory. Owens' invention could produce an impressive 2,500 bottles per hour. Other developments followed rapidly, but it was not until the First World War, when Britain became cut off from essential glass suppliers, that glass became part of the scientific sector. Previous to this, glass had been seen as a craft rather than a precise science. 
Today, glass making is big business. It has become a modern, hi-tech industry operating in a fiercely competitive global market where quality, design and service levels are critical to maintaining market share. Modern glass plants are capable of making millions of glass containers a day in many different colours, with green, brown and clear remaining the most popular. Few of us can imagine modern life without glass. It features in almost every aspect of our lives - in our homes, our cars and whenever we sit down to eat or drink. Glass packaging is used for many products, many beverages are sold in glass, as are numerous foodstuffs, as well as medicines and cosmetics. Glass is an ideal material for recycling, and with growing consumer concern for green issues, glass bottles and jars are becoming ever more popular. Glass recycling is good news for the environment. It saves used glass containers being sent to landfill. As less energy is needed to melt recycled glass than to melt down raw materials, this also saves fuel and production costs. Recycling also reduces the need for raw materials to be quarried, thus saving precious resources.
It is more expensive to produce recycled glass than to manufacture new glass.
contradiction
id_6220
The History of Glass. From our earliest origins, man has been making use of glass. Historians have discovered that a type of natural glass - obsidian - formed in places such as the mouth of a volcano as a result of the intense heat of an eruption melting sand - was first used as tips for spears. Archaeologists have even found evidence of man-made glass which dates back to 4000 BC; this took the form of glazes used for coating stone beads. It was not until 1500 BC, however, that the first hollow glass container was made by covering a sand core with a layer of molten glass. Glass blowing became the most common way to make glass containers from the first century BC. The glass made during this time was highly coloured due to the impurities of the raw material. In the first century AD, methods of creating colourless glass were developed, which was then tinted by the addition of colouring materials. The secret of glass making was taken across Europe by the Romans during this century. However, they guarded the skills and technology required to make glass very closely, and it was not until their empire collapsed in 476 AD that glass-making knowledge became widespread throughout Europe and the Middle East. From the 10th century onwards, the Venetians gained a reputation for technical skill and artistic ability in the making of glass bottles, and many of the city's craftsmen left Italy to set up glassworks throughout Europe. A major milestone in the history of glass occurred with the invention of lead crystal glass by the English glass manufacturer George Ravenscroft (1632-1683). He attempted to counter the effect of clouding that sometimes occurred in blown glass by introducing lead to the raw materials used in the process. The new glass he created was softer and easier to decorate, and had a higher refractive index, adding to its brilliance and beauty, and it proved invaluable to the optical industry. It is thanks to Ravenscroft's invention that optical lenses, astronomical telescopes, microscopes and the like became possible. In Britain, the modern glass industry only really started to develop after the repeal of the Excise Act in 1845. Before that time, heavy taxes had been placed on the amount of glass melted in a glasshouse, and were levied continuously from 1745 to 1845. Joseph Paxton's Crystal Palace at London's Great Exhibition of 1851 marked the beginning of glass as a material used in the building industry. This revolutionary new building encouraged the use of glass in public, domestic and horticultural architecture. Glass manufacturing techniques also improved with the advancement of science and the development of better technology. From 1887 onwards, glass making developed from traditional mouth-blowing to a semi-automatic process, after factory-owner HM Ashley introduced a machine capable of producing 200 bottles per hour in Castleford, Yorkshire, England - more than three times quicker than any previous production method. Then in 1907, the first fully automated machine was developed in the USA by Michael Owens - founder of the Owens Bottle Machine Company (later the major manufacturers Owens-Illinois) - and installed in its factory. Owens' invention could produce an impressive 2,500 bottles per hour. Other developments followed rapidly, but it was not until the First World War, when Britain became cut off from essential glass suppliers, that glass became part of the scientific sector. Previous to this, glass had been seen as a craft rather than a precise science. 
Today, glass making is big business. It has become a modern, hi-tech industry operating in a fiercely competitive global market where quality, design and service levels are critical to maintaining market share. Modern glass plants are capable of making millions of glass containers a day in many different colours, with green, brown and clear remaining the most popular. Few of us can imagine modern life without glass. It features in almost every aspect of our lives - in our homes, our cars and whenever we sit down to eat or drink. Glass packaging is used for many products, many beverages are sold in glass, as are numerous foodstuffs, as well as medicines and cosmetics. Glass is an ideal material for recycling, and with growing consumer concern for green issues, glass bottles and jars are becoming ever more popular. Glass recycling is good news for the environment. It saves used glass containers being sent to landfill. As less energy is needed to melt recycled glass than to melt down raw materials, this also saves fuel and production costs. Recycling also reduces the need for raw materials to be quarried, thus saving precious resources.
It is more expensive to produce recycled glass than to manufacture new glass.
contradiction
id_6221
The History of Glass. From our earliest origins, man has been making use of glass. Historians have discovered that a type of natural glass - obsidian - formed in places such as the mouth of a volcano as a result of the intense heat of an eruption melting sand - was first used as tips for spears. Archaeologists have even found evidence of man-made glass which dates back to 4000 BC; this took the form of glazes used for coating stone beads. It was not until 1500 BC, however, that the first hollow glass container was made by covering a sand core with a layer of molten glass. Glass blowing became the most common way to make glass containers from the first century BC. The glass made during this time was highly coloured due to the impurities of the raw material. In the first century AD, methods of creating colourless glass were developed, which was then tinted by the addition of colouring materials. The secret of glass making was taken across Europe by the Romans during this century. However, they guarded the skills and technology required to make glass very closely, and it was not until their empire collapsed in 476 AD that glass-making knowledge became widespread throughout Europe and the Middle East. From the 10th century onwards, the Venetians gained a reputation for technical skill and artistic ability in the making of glass bottles, and many of the city's craftsmen left Italy to set up glassworks throughout Europe. A major milestone in the history of glass occurred with the invention of lead crystal glass by the English glass manufacturer George Ravenscroft (1632-1683). He attempted to counter the effect of clouding that sometimes occurred in blown glass by introducing lead to the raw materials used in the process. The new glass he created was softer and easier to decorate, and had a higher refractive index, adding to its brilliance and beauty, and it proved invaluable to the optical industry. It is thanks to Ravenscroft's invention that optical lenses, astronomical telescopes, microscopes and the like became possible. In Britain, the modern glass industry only really started to develop after the repeal of the Excise Act in 1845. Before that time, heavy taxes had been placed on the amount of glass melted in a glasshouse, and were levied continuously from 1745 to 1845. Joseph Paxton's Crystal Palace at London's Great Exhibition of 1851 marked the beginning of glass as a material used in the building industry. This revolutionary new building encouraged the use of glass in public, domestic and horticultural architecture. Glass manufacturing techniques also improved with the advancement of science and the development of better technology. From 1887 onwards, glass making developed from traditional mouth-blowing to a semi-automatic process, after factory-owner HM Ashley introduced a machine capable of producing 200 bottles per hour in Castleford, Yorkshire, England - more than three times quicker than any previous production method. Then in 1907, the first fully automated machine was developed in the USA by Michael Owens - founder of the Owens Bottle Machine Company (later the major manufacturers Owens-Illinois) - and installed in its factory. Owens' invention could produce an impressive 2,500 bottles per hour. Other developments followed rapidly, but it was not until the First World War, when Britain became cut off from essential glass suppliers, that glass became part of the scientific sector. Previous to this, glass had been seen as a craft rather than a precise science. 
Today, glass making is big business. It has become a modern, hi-tech industry operating in a fiercely competitive global market where quality, design and service levels are critical to maintaining market share. Modern glass plants are capable of making millions of glass containers a day in many different colours, with green, brown and clear remaining the most popular. Few of us can imagine modern life without glass. It features in almost every aspect of our lives - in our homes, our cars and whenever we sit down to eat or drink. Glass packaging is used for many products, many beverages are sold in glass, as are numerous foodstuffs, as well as medicines and cosmetics. Glass is an ideal material for recycling, and with growing consumer concern for green issues, glass bottles and jars are becoming ever more popular. Glass recycling is good news for the environment. It saves used glass containers being sent to landfill. As less energy is needed to melt recycled glass than to melt down raw materials, this also saves fuel and production costs. Recycling also reduces the need for raw materials to be quarried, thus saving precious resources.
Nowadays, most glass is produced by large international manufacturers.
neutral
id_6222
The History of Glass. From our earliest origins, man has been making use of glass. Historians have discovered that a type of natural glass - obsidian - formed in places such as the mouth of a volcano as a result of the intense heat of an eruption melting sand - was first used as tips for spears. Archaeologists have even found evidence of man-made glass which dates back to 4000 BC; this took the form of glazes used for coating stone beads. It was not until 1500 BC, however, that the first hollow glass container was made by covering a sand core with a layer of molten glass. Glass blowing became the most common way to make glass containers from the first century BC. The glass made during this time was highly coloured due to the impurities of the raw material. In the first century AD, methods of creating colourless glass were developed, which was then tinted by the addition of colouring materials. The secret of glass making was taken across Europe by the Romans during this century. However, they guarded the skills and technology required to make glass very closely, and it was not until their empire collapsed in 476 AD that glass-making knowledge became widespread throughout Europe and the Middle East. From the 10th century onwards, the Venetians gained a reputation for technical skill and artistic ability in the making of glass bottles, and many of the city's craftsmen left Italy to set up glassworks throughout Europe. A major milestone in the history of glass occurred with the invention of lead crystal glass by the English glass manufacturer George Ravenscroft (1632-1683). He attempted to counter the effect of clouding that sometimes occurred in blown glass by introducing lead to the raw materials used in the process. The new glass he created was softer and easier to decorate, and had a higher refractive index, adding to its brilliance and beauty, and it proved invaluable to the optical industry. It is thanks to Ravenscroft's invention that optical lenses, astronomical telescopes, microscopes and the like became possible. In Britain, the modern glass industry only really started to develop after the repeal of the Excise Act in 1845. Before that time, heavy taxes had been placed on the amount of glass melted in a glasshouse, and were levied continuously from 1745 to 1845. Joseph Paxton's Crystal Palace at London's Great Exhibition of 1851 marked the beginning of glass as a material used in the building industry. This revolutionary new building encouraged the use of glass in public, domestic and horticultural architecture. Glass manufacturing techniques also improved with the advancement of science and the development of better technology. From 1887 onwards, glass making developed from traditional mouth-blowing to a semi-automatic process, after factory-owner HM Ashley introduced a machine capable of producing 200 bottles per hour in Castleford, Yorkshire, England - more than three times quicker than any previous production method. Then in 1907, the first fully automated machine was developed in the USA by Michael Owens - founder of the Owens Bottle Machine Company (later the major manufacturers Owens-Illinois) - and installed in its factory. Owens' invention could produce an impressive 2,500 bottles per hour. Other developments followed rapidly, but it was not until the First World War, when Britain became cut off from essential glass suppliers, that glass became part of the scientific sector. Previous to this, glass had been seen as a craft rather than a precise science. 
Today, glass making is big business. It has become a modern, hi-tech industry operating in a fiercely competitive global market where quality, design and service levels are critical to maintaining market share. Modern glass plants are capable of making millions of glass containers a day in many different colours, with green, brown and clear remaining the most popular. Few of us can imagine modern life without glass. It features in almost every aspect of our lives - in our homes, our cars and whenever we sit down to eat or drink. Glass packaging is used for many products, many beverages are sold in glass, as are numerous foodstuffs, as well as medicines and cosmetics. Glass is an ideal material for recycling, and with growing consumer concern for green issues, glass bottles and jars are becoming ever more popular. Glass recycling is good news for the environment. It saves used glass containers being sent to landfill. As less energy is needed to melt recycled glass than to melt down raw materials, this also saves fuel and production costs. Recycling also reduces the need for raw materials to be quarried, thus saving precious resources.
Concern for the environment is leading to an increased demand for glass containers.
entailment
id_6223
The History of Papermaking in the United Kingdom The first reference to a paper mill in the United Kingdom was in a book printed by Wynken de Worde in about 1495. This mill belonged to a certain John Tate and was near Hertford. Other early mills included one at Dartford, owned by Sir John Speilman, who was granted special privileges for the collection of rags by Queen Elizabeth and one built in Buckinghamshire before the end of the sixteenth century. During the first half of the seventeenth century, mills were established near Edinburgh, at Cannock Chase in Staffordshire, and several in Buckinghamshire, Oxfordshire and Surrey. The Bank of England has been issuing bank notes since 1694, with simple watermarks in them since at least 1697. Henri de Portal was awarded the contract in December 1724 for producing the Bank of England watermarked bank-note paper at Bere Mill in Hampshire. Portals have retained this contract ever since but production is no longer at Bere Mill. There were two major developments at about the middle of the eighteenth century in the paper industry in the UK. The first was the introduction of the rag engine or hollander, invented in Holland sometime before 1670, which replaced the stamping mills, which had previously been used, for the disintegration of the rags and beating of the pulp. The second was in the design and construction of the mould used for forming the sheet. Early moulds had straight wires sewn down on to the wooden foundation, this produced an irregular surface showing the characteristic laid marks, and, when printed on, the ink did not give clear, sharp lines. Baskerville, a Birmingham printer, wanted a smoother paper. James Whatman the Elder developed a woven wire fabric, thus leading to his production of the first woven paper in 1757. Increasing demands for more paper during the late eighteenth and early nineteenth centuries led to shortages of the rags needed to produce the paper. Part of the problem was that no satisfactory method of bleaching pulp had yet been devised, and so only white rags could be used to produce white paper. Chlorine bleaching was being used by the end of the eighteenth century, but excessive use produced papers that were of poor quality and deteriorated quickly. By 1800 up to 24 million pounds of rags were being used annually, to produce 10,000 tons of paper in England and Wales, and 1000 tons in Scotland, the home market being supplemented by imports, mainly from the continent. Experiments in using other materials, such as sawdust, rye straw, cabbage stumps and spruce wood had been conducted in 1765 by Jacob Christian Schaffer. Similarly, Matthias Koops carried out many experiments on straw and other materials at the Neckinger Mill, Bermondsey around 1800, but it was not until the middle of the nineteenth century that pulp produced using straw or wood was utilised in the production of paper. By 1800 there were 430 (564 in 1821) paper mills in England and Wales (mostly single vat mills), under 50 (74 in 1823) in Scotland and 60 in Ireland, but all the production was by hand and the output was low. The first attempt at a paper machine to mechanise the process was patented in 1799 by Frenchman Nicholas Louis Robert, but it was not a success. However, the drawings were brought to England by John Gamble in 1801 and passed on to the brothers Henry and Sealy Fourdrinier, who financed the engineer Henry Donkin to build the machine. The first successful machine was installed at Frogmore, Hertfordshire, in 1803. 
The paper was pressed onto an endless wire cloth, transferred to a continuous felt blanket and then pressed again. Finally, it was cut off the reel into sheets and loft-dried in the same way as handmade paper. In 1809 John Dickinson patented a machine that used a wire-cloth-covered cylinder revolving in a pulp suspension, the water being removed through the centre of the cylinder and the layer of pulp removed from the surface by a felt-covered roller (later replaced by a continuous felt passing round a roller). This machine was the forerunner of the present-day cylinder mould or vat machine, used mainly for the production of boards. Both these machines produced paper as a wet sheet, which required drying after removal from the machine, but in 1821 T B Crompton patented a method of drying the paper continuously, using a woven fabric to hold the sheet against steam-heated drying cylinders. After it had been pressed, the paper was cut into sheets by a cutter fixed at the end of the last cylinder. By the middle of the nineteenth century the pattern for the mechanised production of paper had been set. Subsequent developments concentrated on increasing the size and production of the machines. Similarly, developments in alternative pulps to rags, mainly wood and esparto grass, enabled production increases. Conversely, despite the increase in paper production, there was a decrease, by 1884, in the number of paper mills in England and Wales to 250 and in Ireland to 14 (Scotland increased to 60), production being concentrated into fewer, larger units. Geographical changes also took place as many of the early mills were small and had been situated in rural areas. The change was to larger mills in, or near, urban areas closer to suppliers of the raw materials (esparto mills were generally situated near a port as the raw material was brought in by ship) and the paper markets.
The development of bigger mills near larger towns was so that mill owners could take advantage of potential larger workforces.
contradiction
id_6224
The History of Papermaking in the United Kingdom The first reference to a paper mill in the United Kingdom was in a book printed by Wynken de Worde in about 1495. This mill belonged to a certain John Tate and was near Hertford. Other early mills included one at Dartford, owned by Sir John Speilman, who was granted special privileges for the collection of rags by Queen Elizabeth and one built in Buckinghamshire before the end of the sixteenth century. During the first half of the seventeenth century, mills were established near Edinburgh, at Cannock Chase in Staffordshire, and several in Buckinghamshire, Oxfordshire and Surrey. The Bank of England has been issuing bank notes since 1694, with simple watermarks in them since at least 1697. Henri de Portal was awarded the contract in December 1724 for producing the Bank of England watermarked bank-note paper at Bere Mill in Hampshire. Portals have retained this contract ever since but production is no longer at Bere Mill. There were two major developments at about the middle of the eighteenth century in the paper industry in the UK. The first was the introduction of the rag engine or hollander, invented in Holland sometime before 1670, which replaced the stamping mills, which had previously been used, for the disintegration of the rags and beating of the pulp. The second was in the design and construction of the mould used for forming the sheet. Early moulds had straight wires sewn down on to the wooden foundation, this produced an irregular surface showing the characteristic laid marks, and, when printed on, the ink did not give clear, sharp lines. Baskerville, a Birmingham printer, wanted a smoother paper. James Whatman the Elder developed a woven wire fabric, thus leading to his production of the first woven paper in 1757. Increasing demands for more paper during the late eighteenth and early nineteenth centuries led to shortages of the rags needed to produce the paper. Part of the problem was that no satisfactory method of bleaching pulp had yet been devised, and so only white rags could be used to produce white paper. Chlorine bleaching was being used by the end of the eighteenth century, but excessive use produced papers that were of poor quality and deteriorated quickly. By 1800 up to 24 million pounds of rags were being used annually, to produce 10,000 tons of paper in England and Wales, and 1000 tons in Scotland, the home market being supplemented by imports, mainly from the continent. Experiments in using other materials, such as sawdust, rye straw, cabbage stumps and spruce wood had been conducted in 1765 by Jacob Christian Schaffer. Similarly, Matthias Koops carried out many experiments on straw and other materials at the Neckinger Mill, Bermondsey around 1800, but it was not until the middle of the nineteenth century that pulp produced using straw or wood was utilised in the production of paper. By 1800 there were 430 (564 in 1821) paper mills in England and Wales (mostly single vat mills), under 50 (74 in 1823) in Scotland and 60 in Ireland, but all the production was by hand and the output was low. The first attempt at a paper machine to mechanise the process was patented in 1799 by Frenchman Nicholas Louis Robert, but it was not a success. However, the drawings were brought to England by John Gamble in 1801 and passed on to the brothers Henry and Sealy Fourdrinier, who financed the engineer Henry Donkin to build the machine. The first successful machine was installed at Frogmore, Hertfordshire, in 1803. 
The paper was pressed onto an endless wire cloth, transferred to a continuous felt blanket and then pressed again. Finally, it was cut off the reel into sheets and loft-dried in the same way as handmade paper. In 1809 John Dickinson patented a machine that used a wire-cloth-covered cylinder revolving in a pulp suspension, the water being removed through the centre of the cylinder and the layer of pulp removed from the surface by a felt-covered roller (later replaced by a continuous felt passing round a roller). This machine was the forerunner of the present-day cylinder mould or vat machine, used mainly for the production of boards. Both these machines produced paper as a wet sheet, which required drying after removal from the machine, but in 1821 T B Crompton patented a method of drying the paper continuously, using a woven fabric to hold the sheet against steam-heated drying cylinders. After it had been pressed, the paper was cut into sheets by a cutter fixed at the end of the last cylinder. By the middle of the nineteenth century the pattern for the mechanised production of paper had been set. Subsequent developments concentrated on increasing the size and production of the machines. Similarly, developments in alternative pulps to rags, mainly wood and esparto grass, enabled production increases. Conversely, despite the increase in paper production, there was a decrease, by 1884, in the number of paper mills in England and Wales to 250 and in Ireland to 14 (Scotland increased to 60), production being concentrated into fewer, larger units. Geographical changes also took place as many of the early mills were small and had been situated in rural areas. The change was to larger mills in, or near, urban areas closer to suppliers of the raw materials (esparto mills were generally situated near a port as the raw material was brought in by ship) and the paper markets.
Modern paper making machines are still based on John Dickinsons 1809 patent.
entailment
id_6225
The History of Papermaking in the United Kingdom The first reference to a paper mill in the United Kingdom was in a book printed by Wynken de Worde in about 1495. This mill belonged to a certain John Tate and was near Hertford. Other early mills included one at Dartford, owned by Sir John Speilman, who was granted special privileges for the collection of rags by Queen Elizabeth and one built in Buckinghamshire before the end of the sixteenth century. During the first half of the seventeenth century, mills were established near Edinburgh, at Cannock Chase in Staffordshire, and several in Buckinghamshire, Oxfordshire and Surrey. The Bank of England has been issuing bank notes since 1694, with simple watermarks in them since at least 1697. Henri de Portal was awarded the contract in December 1724 for producing the Bank of England watermarked bank-note paper at Bere Mill in Hampshire. Portals have retained this contract ever since but production is no longer at Bere Mill. There were two major developments at about the middle of the eighteenth century in the paper industry in the UK. The first was the introduction of the rag engine or hollander, invented in Holland sometime before 1670, which replaced the stamping mills, which had previously been used, for the disintegration of the rags and beating of the pulp. The second was in the design and construction of the mould used for forming the sheet. Early moulds had straight wires sewn down on to the wooden foundation, this produced an irregular surface showing the characteristic laid marks, and, when printed on, the ink did not give clear, sharp lines. Baskerville, a Birmingham printer, wanted a smoother paper. James Whatman the Elder developed a woven wire fabric, thus leading to his production of the first woven paper in 1757. Increasing demands for more paper during the late eighteenth and early nineteenth centuries led to shortages of the rags needed to produce the paper. Part of the problem was that no satisfactory method of bleaching pulp had yet been devised, and so only white rags could be used to produce white paper. Chlorine bleaching was being used by the end of the eighteenth century, but excessive use produced papers that were of poor quality and deteriorated quickly. By 1800 up to 24 million pounds of rags were being used annually, to produce 10,000 tons of paper in England and Wales, and 1000 tons in Scotland, the home market being supplemented by imports, mainly from the continent. Experiments in using other materials, such as sawdust, rye straw, cabbage stumps and spruce wood had been conducted in 1765 by Jacob Christian Schaffer. Similarly, Matthias Koops carried out many experiments on straw and other materials at the Neckinger Mill, Bermondsey around 1800, but it was not until the middle of the nineteenth century that pulp produced using straw or wood was utilised in the production of paper. By 1800 there were 430 (564 in 1821) paper mills in England and Wales (mostly single vat mills), under 50 (74 in 1823) in Scotland and 60 in Ireland, but all the production was by hand and the output was low. The first attempt at a paper machine to mechanise the process was patented in 1799 by Frenchman Nicholas Louis Robert, but it was not a success. However, the drawings were brought to England by John Gamble in 1801 and passed on to the brothers Henry and Sealy Fourdrinier, who financed the engineer Henry Donkin to build the machine. The first successful machine was installed at Frogmore, Hertfordshire, in 1803. 
The paper was pressed onto an endless wire cloth, transferred to a continuous felt blanket and then pressed again. Finally, it was cut off the reel into sheets and loft-dried in the same way as handmade paper. In 1809 John Dickinson patented a machine that used a wire-cloth-covered cylinder revolving in a pulp suspension, the water being removed through the centre of the cylinder and the layer of pulp removed from the surface by a felt-covered roller (later replaced by a continuous felt passing round a roller). This machine was the forerunner of the present-day cylinder mould or vat machine, used mainly for the production of boards. Both these machines produced paper as a wet sheet, which required drying after removal from the machine, but in 1821 T B Crompton patented a method of drying the paper continuously, using a woven fabric to hold the sheet against steam-heated drying cylinders. After it had been pressed, the paper was cut into sheets by a cutter fixed at the end of the last cylinder. By the middle of the nineteenth century the pattern for the mechanised production of paper had been set. Subsequent developments concentrated on increasing the size and production of the machines. Similarly, developments in alternative pulps to rags, mainly wood and esparto grass, enabled production increases. Conversely, despite the increase in paper production, there was a decrease, by 1884, in the number of paper mills in England and Wales to 250 and in Ireland to 14 (Scotland increased to 60), production being concentrated into fewer, larger units. Geographical changes also took place as many of the early mills were small and had been situated in rural areas. The change was to larger mills in, or near, urban areas closer to suppliers of the raw materials (esparto mills were generally situated near a port as the raw material was brought in by ship) and the paper markets.
The first mechanised process that had any success still used elements of the hand made paper-making process.
entailment
id_6226
The History of Papermaking in the United Kingdom The first reference to a paper mill in the United Kingdom was in a book printed by Wynken de Worde in about 1495. This mill belonged to a certain John Tate and was near Hertford. Other early mills included one at Dartford, owned by Sir John Speilman, who was granted special privileges for the collection of rags by Queen Elizabeth and one built in Buckinghamshire before the end of the sixteenth century. During the first half of the seventeenth century, mills were established near Edinburgh, at Cannock Chase in Staffordshire, and several in Buckinghamshire, Oxfordshire and Surrey. The Bank of England has been issuing bank notes since 1694, with simple watermarks in them since at least 1697. Henri de Portal was awarded the contract in December 1724 for producing the Bank of England watermarked bank-note paper at Bere Mill in Hampshire. Portals have retained this contract ever since but production is no longer at Bere Mill. There were two major developments at about the middle of the eighteenth century in the paper industry in the UK. The first was the introduction of the rag engine or hollander, invented in Holland sometime before 1670, which replaced the stamping mills, which had previously been used, for the disintegration of the rags and beating of the pulp. The second was in the design and construction of the mould used for forming the sheet. Early moulds had straight wires sewn down on to the wooden foundation, this produced an irregular surface showing the characteristic laid marks, and, when printed on, the ink did not give clear, sharp lines. Baskerville, a Birmingham printer, wanted a smoother paper. James Whatman the Elder developed a woven wire fabric, thus leading to his production of the first woven paper in 1757. Increasing demands for more paper during the late eighteenth and early nineteenth centuries led to shortages of the rags needed to produce the paper. Part of the problem was that no satisfactory method of bleaching pulp had yet been devised, and so only white rags could be used to produce white paper. Chlorine bleaching was being used by the end of the eighteenth century, but excessive use produced papers that were of poor quality and deteriorated quickly. By 1800 up to 24 million pounds of rags were being used annually, to produce 10,000 tons of paper in England and Wales, and 1000 tons in Scotland, the home market being supplemented by imports, mainly from the continent. Experiments in using other materials, such as sawdust, rye straw, cabbage stumps and spruce wood had been conducted in 1765 by Jacob Christian Schaffer. Similarly, Matthias Koops carried out many experiments on straw and other materials at the Neckinger Mill, Bermondsey around 1800, but it was not until the middle of the nineteenth century that pulp produced using straw or wood was utilised in the production of paper. By 1800 there were 430 (564 in 1821) paper mills in England and Wales (mostly single vat mills), under 50 (74 in 1823) in Scotland and 60 in Ireland, but all the production was by hand and the output was low. The first attempt at a paper machine to mechanise the process was patented in 1799 by Frenchman Nicholas Louis Robert, but it was not a success. However, the drawings were brought to England by John Gamble in 1801 and passed on to the brothers Henry and Sealy Fourdrinier, who financed the engineer Henry Donkin to build the machine. The first successful machine was installed at Frogmore, Hertfordshire, in 1803. 
The paper was pressed onto an endless wire cloth, transferred to a continuous felt blanket and then pressed again. Finally it was cut off the reel into sheets and loft dried in the same way as hand made paper. In 1809 John Dickinson patented a machine that used a wire cloth covered cylinder revolving in a pulp suspension, the water being removed through the centre of the cylinder and the layer of pulp removed from the surface by a felt covered roller (later replaced by a continuous felt passing round a roller). This machine was the forerunner of the present day cylinder mould or vat machine, used mainly for the production of boards. Both these machines produced paper as a wet sheet, which required drying after removal from the machine, but in 1821 T B Crompton patented a method of drying the paper continuously, using a woven fabric to hold the sheet against steam heated drying cylinders. After it had been pressed, the paper was cut into sheets by a cutter fixed at the end of the last cylinder. By the middle of the nineteenth century the pattern for the mechanised production of paper had been set. Subsequent developments concentrated on increasing the size and production of the machines. Similarly, developments in alternative pulps to rags, mainly wood and esparto grass, enabled production increases. Conversely, despite the increase in paper production, there was a decrease, by 1884, in the number of paper mills in England and Wales to 250 and in Ireland to 14 (Scotland increased to 60), production being concentrated into fewer, larger units. Geographical changes also took place as many of the early mills were small and had been situated in rural areas. The change was to larger mills in, or near, urban areas closer to suppliers of the raw materials (esparto mills were generally situated near a port as the raw material was brought in by ship) and the paper markets.
Chlorine bleaching proved the answer to the need for more white paper in the 18th and 19th centuries.
contradiction
id_6227
The History of Papermaking in the United Kingdom The first reference to a paper mill in the United Kingdom was in a book printed by Wynken de Worde in about 1495. This mill belonged to a certain John Tate and was near Hertford. Other early mills included one at Dartford, owned by Sir John Speilman, who was granted special privileges for the collection of rags by Queen Elizabeth and one built in Buckinghamshire before the end of the sixteenth century. During the first half of the seventeenth century, mills were established near Edinburgh, at Cannock Chase in Staffordshire, and several in Buckinghamshire, Oxfordshire and Surrey. The Bank of England has been issuing bank notes since 1694, with simple watermarks in them since at least 1697. Henri de Portal was awarded the contract in December 1724 for producing the Bank of England watermarked bank-note paper at Bere Mill in Hampshire. Portals have retained this contract ever since but production is no longer at Bere Mill. There were two major developments at about the middle of the eighteenth century in the paper industry in the UK. The first was the introduction of the rag engine or hollander, invented in Holland sometime before 1670, which replaced the stamping mills, which had previously been used, for the disintegration of the rags and beating of the pulp. The second was in the design and construction of the mould used for forming the sheet. Early moulds had straight wires sewn down on to the wooden foundation, this produced an irregular surface showing the characteristic laid marks, and, when printed on, the ink did not give clear, sharp lines. Baskerville, a Birmingham printer, wanted a smoother paper. James Whatman the Elder developed a woven wire fabric, thus leading to his production of the first woven paper in 1757. Increasing demands for more paper during the late eighteenth and early nineteenth centuries led to shortages of the rags needed to produce the paper. Part of the problem was that no satisfactory method of bleaching pulp had yet been devised, and so only white rags could be used to produce white paper. Chlorine bleaching was being used by the end of the eighteenth century, but excessive use produced papers that were of poor quality and deteriorated quickly. By 1800 up to 24 million pounds of rags were being used annually, to produce 10,000 tons of paper in England and Wales, and 1000 tons in Scotland, the home market being supplemented by imports, mainly from the continent. Experiments in using other materials, such as sawdust, rye straw, cabbage stumps and spruce wood had been conducted in 1765 by Jacob Christian Schaffer. Similarly, Matthias Koops carried out many experiments on straw and other materials at the Neckinger Mill, Bermondsey around 1800, but it was not until the middle of the nineteenth century that pulp produced using straw or wood was utilised in the production of paper. By 1800 there were 430 (564 in 1821) paper mills in England and Wales (mostly single vat mills), under 50 (74 in 1823) in Scotland and 60 in Ireland, but all the production was by hand and the output was low. The first attempt at a paper machine to mechanise the process was patented in 1799 by Frenchman Nicholas Louis Robert, but it was not a success. However, the drawings were brought to England by John Gamble in 1801 and passed on to the brothers Henry and Sealy Fourdrinier, who financed the engineer Henry Donkin to build the machine. The first successful machine was installed at Frogmore, Hertfordshire, in 1803. 
The paper was pressed onto an endless wire cloth, transferred to a continuous felt blanket and then pressed again. Finally it was cut off the reel into sheets and loft dried in the same way as hand made paper. In 1809 John Dickinson patented a machine that used a wire cloth covered cylinder revolving in a pulp suspension, the water being removed through the centre of the cylinder and the layer of pulp removed from the surface by a felt covered roller (later replaced by a continuous felt passing round a roller). This machine was the forerunner of the present day cylinder mould or vat machine, used mainly for the production of boards. Both these machines produced paper as a wet sheet, which required drying after removal from the machine, but in 1821 T B Crompton patented a method of drying the paper continuously, using a woven fabric to hold the sheet against steam heated drying cylinders. After it had been pressed, the paper was cut into sheets by a cutter fixed at the end of the last cylinder. By the middle of the nineteenth century the pattern for the mechanised production of paper had been set. Subsequent developments concentrated on increasing the size and production of the machines. Similarly, developments in alternative pulps to rags, mainly wood and esparto grass, enabled production increases. Conversely, despite the increase in paper production, there was a decrease, by 1884, in the number of paper mills in England and Wales to 250 and in Ireland to 14 (Scotland increased to 60), production being concentrated into fewer, larger units. Geographical changes also took place as many of the early mills were small and had been situated in rural areas. The change was to larger mills in, or near, urban areas closer to suppliers of the raw materials (esparto mills were generally situated near a port as the raw material was brought in by ship) and the paper markets.
18th-century developments in moulds led to a flatter, more even paper.
entailment
id_6228
The History of Papermaking in the United Kingdom The first reference to a paper mill in the United Kingdom was in a book printed by Wynken de Worde in about 1495. This mill belonged to a certain John Tate and was near Hertford. Other early mills included one at Dartford, owned by Sir John Speilman, who was granted special privileges for the collection of rags by Queen Elizabeth and one built in Buckinghamshire before the end of the sixteenth century. During the first half of the seventeenth century, mills were established near Edinburgh, at Cannock Chase in Staffordshire, and several in Buckinghamshire, Oxfordshire and Surrey. The Bank of England has been issuing bank notes since 1694, with simple watermarks in them since at least 1697. Henri de Portal was awarded the contract in December 1724 for producing the Bank of England watermarked bank-note paper at Bere Mill in Hampshire. Portals have retained this contract ever since but production is no longer at Bere Mill. There were two major developments at about the middle of the eighteenth century in the paper industry in the UK. The first was the introduction of the rag engine or hollander, invented in Holland sometime before 1670, which replaced the stamping mills, which had previously been used, for the disintegration of the rags and beating of the pulp. The second was in the design and construction of the mould used for forming the sheet. Early moulds had straight wires sewn down on to the wooden foundation, this produced an irregular surface showing the characteristic laid marks, and, when printed on, the ink did not give clear, sharp lines. Baskerville, a Birmingham printer, wanted a smoother paper. James Whatman the Elder developed a woven wire fabric, thus leading to his production of the first woven paper in 1757. Increasing demands for more paper during the late eighteenth and early nineteenth centuries led to shortages of the rags needed to produce the paper. Part of the problem was that no satisfactory method of bleaching pulp had yet been devised, and so only white rags could be used to produce white paper. Chlorine bleaching was being used by the end of the eighteenth century, but excessive use produced papers that were of poor quality and deteriorated quickly. By 1800 up to 24 million pounds of rags were being used annually, to produce 10,000 tons of paper in England and Wales, and 1000 tons in Scotland, the home market being supplemented by imports, mainly from the continent. Experiments in using other materials, such as sawdust, rye straw, cabbage stumps and spruce wood had been conducted in 1765 by Jacob Christian Schaffer. Similarly, Matthias Koops carried out many experiments on straw and other materials at the Neckinger Mill, Bermondsey around 1800, but it was not until the middle of the nineteenth century that pulp produced using straw or wood was utilised in the production of paper. By 1800 there were 430 (564 in 1821) paper mills in England and Wales (mostly single vat mills), under 50 (74 in 1823) in Scotland and 60 in Ireland, but all the production was by hand and the output was low. The first attempt at a paper machine to mechanise the process was patented in 1799 by Frenchman Nicholas Louis Robert, but it was not a success. However, the drawings were brought to England by John Gamble in 1801 and passed on to the brothers Henry and Sealy Fourdrinier, who financed the engineer Henry Donkin to build the machine. The first successful machine was installed at Frogmore, Hertfordshire, in 1803. 
The paper was pressed onto an endless wire cloth, transferred to a continuous felt blanket and then pressed again. Finally it was cut off the reel into sheets and loft dried in the same way as hand made paper. In 1809 John Dickinson patented a machine that used a wire cloth covered cylinder revolving in a pulp suspension, the water being removed through the centre of the cylinder and the layer of pulp removed from the surface by a felt covered roller (later replaced by a continuous felt passing round a roller). This machine was the forerunner of the present day cylinder mould or vat machine, used mainly for the production of boards. Both these machines produced paper as a wet sheet, which required drying after removal from the machine, but in 1821 T B Crompton patented a method of drying the paper continuously, using a woven fabric to hold the sheet against steam heated drying cylinders. After it had been pressed, the paper was cut into sheets by a cutter fixed at the end of the last cylinder. By the middle of the nineteenth century the pattern for the mechanised production of paper had been set. Subsequent developments concentrated on increasing the size and production of the machines. Similarly, developments in alternative pulps to rags, mainly wood and esparto grass, enabled production increases. Conversely, despite the increase in paper production, there was a decrease, by 1884, in the number of paper mills in England and Wales to 250 and in Ireland to 14 (Scotland increased to 60), production being concentrated into fewer, larger units. Geographical changes also took place as many of the early mills were small and had been situated in rural areas. The change was to larger mills in, or near, urban areas closer to suppliers of the raw materials (esparto mills were generally situated near a port as the raw material was brought in by ship) and the paper markets.
Early paper making in Europe was at its peak in Holland in the 18th century.
neutral
id_6229
The History of Papermaking in the United Kingdom The first reference to a paper mill in the United Kingdom was in a book printed by Wynken de Worde in about 1495. This mill belonged to a certain John Tate and was near Hertford. Other early mills included one at Dartford, owned by Sir John Speilman, who was granted special privileges for the collection of rags by Queen Elizabeth and one built in Buckinghamshire before the end of the sixteenth century. During the first half of the seventeenth century, mills were established near Edinburgh, at Cannock Chase in Staffordshire, and several in Buckinghamshire, Oxfordshire and Surrey. The Bank of England has been issuing bank notes since 1694, with simple watermarks in them since at least 1697. Henri de Portal was awarded the contract in December 1724 for producing the Bank of England watermarked bank-note paper at Bere Mill in Hampshire. Portals have retained this contract ever since but production is no longer at Bere Mill. There were two major developments at about the middle of the eighteenth century in the paper industry in the UK. The first was the introduction of the rag engine or hollander, invented in Holland sometime before 1670, which replaced the stamping mills, which had previously been used, for the disintegration of the rags and beating of the pulp. The second was in the design and construction of the mould used for forming the sheet. Early moulds had straight wires sewn down on to the wooden foundation, this produced an irregular surface showing the characteristic laid marks, and, when printed on, the ink did not give clear, sharp lines. Baskerville, a Birmingham printer, wanted a smoother paper. James Whatman the Elder developed a woven wire fabric, thus leading to his production of the first woven paper in 1757. Increasing demands for more paper during the late eighteenth and early nineteenth centuries led to shortages of the rags needed to produce the paper. Part of the problem was that no satisfactory method of bleaching pulp had yet been devised, and so only white rags could be used to produce white paper. Chlorine bleaching was being used by the end of the eighteenth century, but excessive use produced papers that were of poor quality and deteriorated quickly. By 1800 up to 24 million pounds of rags were being used annually, to produce 10,000 tons of paper in England and Wales, and 1000 tons in Scotland, the home market being supplemented by imports, mainly from the continent. Experiments in using other materials, such as sawdust, rye straw, cabbage stumps and spruce wood had been conducted in 1765 by Jacob Christian Schaffer. Similarly, Matthias Koops carried out many experiments on straw and other materials at the Neckinger Mill, Bermondsey around 1800, but it was not until the middle of the nineteenth century that pulp produced using straw or wood was utilised in the production of paper. By 1800 there were 430 (564 in 1821) paper mills in England and Wales (mostly single vat mills), under 50 (74 in 1823) in Scotland and 60 in Ireland, but all the production was by hand and the output was low. The first attempt at a paper machine to mechanise the process was patented in 1799 by Frenchman Nicholas Louis Robert, but it was not a success. However, the drawings were brought to England by John Gamble in 1801 and passed on to the brothers Henry and Sealy Fourdrinier, who financed the engineer Henry Donkin to build the machine. The first successful machine was installed at Frogmore, Hertfordshire, in 1803. 
The paper was pressed onto an endless wire cloth, transferred to a continuous felt blanket and then pressed again. Finally it was cut off the reel into sheets and loft dried in the same way as hand made paper. In 1809 John Dickinson patented a machine that used a wire cloth covered cylinder revolving in a pulp suspension, the water being removed through the centre of the cylinder and the layer of pulp removed from the surface by a felt covered roller (later replaced by a continuous felt passing round a roller). This machine was the forerunner of the present day cylinder mould or vat machine, used mainly for the production of boards. Both these machines produced paper as a wet sheet, which required drying after removal from the machine, but in 1821 T B Crompton patented a method of drying the paper continuously, using a woven fabric to hold the sheet against steam heated drying cylinders. After it had been pressed, the paper was cut into sheets by a cutter fixed at the end of the last cylinder. By the middle of the nineteenth century the pattern for the mechanised production of paper had been set. Subsequent developments concentrated on increasing the size and production of the machines. Similarly, developments in alternative pulps to rags, mainly wood and esparto grass, enabled production increases. Conversely, despite the increase in paper production, there was a decrease, by 1884, in the number of paper mills in England and Wales to 250 and in Ireland to 14 (Scotland increased to 60), production being concentrated into fewer, larger units. Geographical changes also took place as many of the early mills were small and had been situated in rural areas. The change was to larger mills in, or near, urban areas closer to suppliers of the raw materials (esparto mills were generally situated near a port as the raw material was brought in by ship) and the paper markets.
The printing of paper money in the UK has always been done by the same company.
entailment
id_6230
The History of Salt Salt is so simple and plentiful that we almost take it for granted. In chemical terms, salt is the combination of a sodium ion with a chloride ion, making it one of the most basic molecules on earth. It is also one of the most plentiful: it has been estimated that salt deposits under the state of Kansas alone could supply the entire worlds needs for the next 250,000 years. But salt is also an essential element. Without it, life itself would be impossible since the human body requires the mineral in order to function properly. The concentration of sodium ions in the blood is directly related to the regulation of safe body fluid levels. And while we are all familiar with its many uses in cooking, we may not be aware that this element is used in some 14,000 commercial applications. From manufacturing pulp and paper to setting dyes in textiles and fabric, from producing soaps and detergents to making our roads safe in winter, salt plays an essential part in our daily lives. Salt has a long and influential role in world history. From the dawn of civilization, it has been a key factor in economic, religious, social and political development. In every corner of the world, it has been the subject of superstition, folklore, and warfare, and has even been used as currency. As a precious and portable commodity, salt has long been a cornerstone of economies throughout history. In fact, researcher M. R. Bloch conjectured that civilization began along the edges of the desert because of the natural surface deposits of salt found there. Bloch also believed that the first war likely fought near the ancient city of Essalt on the Jordan River could have been fought over the citys precious supplies of the mineral. In 2200 BC, the Chinese emperor Hsia Yu levied one of the first known taxes. He taxed salt. In Tibet, Marco Polo noted that tiny cakes of salt were pressed with images of the Grand Khan to be used as coins and to this day among the nomads of Ethiopias Danakil Plains it is still used as money. Greek slave traders often bartered it for slaves, giving rise to the expression that someone was not worth his salt. Roman legionnaires were paid in salt a salarium, the Latin origin of the word salary. Merchants in 12th-century Timbuktu the gateway to the Sahara Desert and the seat of scholars valued this mineral as highly as books and gold. In France, Charles of Anjou levied the gabelle, a salt tax, in 1259 to finance his conquest of the Kingdom of Naples. Outrage over the gabelle fueled the French Revolution. Though the revolutionaries eliminated the tax shortly after Louis XVI, the Republic of France re-established the gabelle in the early 19th Century; only in 1946 was it removed from the books. The Erie Canal, an engineering marvel that connected the Great Lakes to New Yorks Hudson River in 1825, was called the ditch that salt built. Salt tax revenues paid for half the cost of construction of the canal. The British monarchy supported itself with high salt taxes, leading to a bustling black market for the white crystal. In 1785, the earl of Dundonald wrote that every year in England, 10,000 people were arrested for salt smuggling. And protesting against British rule in 1930, Mahatma Gandhi led a 200-mile march to the Arabian Ocean to collect untaxed salt for Indias poor. In religion and culture, salt long held an important place with Greek worshippers consecrating it in their rituals. 
Further, in Buddhist tradition, salt repels evil spirits, which is why it is customary to throw it over your shoulder before entering your house after a funeral: it scares off any evil spirits that may be clinging to your back. Shinto religion also uses it to purify an area. Before sumo wrestlers enter the ring for a match which is in reality an elaborate Shinto rite a handful is thrown into the center to drive off malevolent spirits. In the Southwest of the United States, the Pueblo worship the Salt Mother. Other native tribes had significant restrictions on who was permitted to eat salt. Hopi legend holds that the angry Warrior Twins punished mankind by placing valuable salt deposits far from civilization, requiring hard work and bravery to harvest the precious mineral. In 1933, the Dalai Lama was buried sitting up in a bed of salt. Today, a gift of salt endures in India as a potent symbol of good luck and a reference to Mahatma Gandhis liberation of India. The effects of salt deficiency are highlighted in times of war, when human bodies and national economies are strained to their limits. Thousands of Napoleons troops died during the French retreat from Moscow due to inadequate wound healing and lowered resistance to disease the results of salt deficiency
The first tax on salt was imposed by a Chinese emperor.
neutral
id_6231
The History of Salt Salt is so simple and plentiful that we almost take it for granted. In chemical terms, salt is the combination of a sodium ion with a chloride ion, making it one of the most basic molecules on earth. It is also one of the most plentiful: it has been estimated that salt deposits under the state of Kansas alone could supply the entire worlds needs for the next 250,000 years. But salt is also an essential element. Without it, life itself would be impossible since the human body requires the mineral in order to function properly. The concentration of sodium ions in the blood is directly related to the regulation of safe body fluid levels. And while we are all familiar with its many uses in cooking, we may not be aware that this element is used in some 14,000 commercial applications. From manufacturing pulp and paper to setting dyes in textiles and fabric, from producing soaps and detergents to making our roads safe in winter, salt plays an essential part in our daily lives. Salt has a long and influential role in world history. From the dawn of civilization, it has been a key factor in economic, religious, social and political development. In every corner of the world, it has been the subject of superstition, folklore, and warfare, and has even been used as currency. As a precious and portable commodity, salt has long been a cornerstone of economies throughout history. In fact, researcher M. R. Bloch conjectured that civilization began along the edges of the desert because of the natural surface deposits of salt found there. Bloch also believed that the first war likely fought near the ancient city of Essalt on the Jordan River could have been fought over the citys precious supplies of the mineral. In 2200 BC, the Chinese emperor Hsia Yu levied one of the first known taxes. He taxed salt. In Tibet, Marco Polo noted that tiny cakes of salt were pressed with images of the Grand Khan to be used as coins and to this day among the nomads of Ethiopias Danakil Plains it is still used as money. Greek slave traders often bartered it for slaves, giving rise to the expression that someone was not worth his salt. Roman legionnaires were paid in salt a salarium, the Latin origin of the word salary. Merchants in 12th-century Timbuktu the gateway to the Sahara Desert and the seat of scholars valued this mineral as highly as books and gold. In France, Charles of Anjou levied the gabelle, a salt tax, in 1259 to finance his conquest of the Kingdom of Naples. Outrage over the gabelle fueled the French Revolution. Though the revolutionaries eliminated the tax shortly after Louis XVI, the Republic of France re-established the gabelle in the early 19th Century; only in 1946 was it removed from the books. The Erie Canal, an engineering marvel that connected the Great Lakes to New Yorks Hudson River in 1825, was called the ditch that salt built. Salt tax revenues paid for half the cost of construction of the canal. The British monarchy supported itself with high salt taxes, leading to a bustling black market for the white crystal. In 1785, the earl of Dundonald wrote that every year in England, 10,000 people were arrested for salt smuggling. And protesting against British rule in 1930, Mahatma Gandhi led a 200-mile march to the Arabian Ocean to collect untaxed salt for Indias poor. In religion and culture, salt long held an important place with Greek worshippers consecrating it in their rituals. 
Further, in Buddhist tradition, salt repels evil spirits, which is why it is customary to throw it over your shoulder before entering your house after a funeral: it scares off any evil spirits that may be clinging to your back. Shinto religion also uses it to purify an area. Before sumo wrestlers enter the ring for a match which is in reality an elaborate Shinto rite a handful is thrown into the center to drive off malevolent spirits. In the Southwest of the United States, the Pueblo worship the Salt Mother. Other native tribes had significant restrictions on who was permitted to eat salt. Hopi legend holds that the angry Warrior Twins punished mankind by placing valuable salt deposits far from civilization, requiring hard work and bravery to harvest the precious mineral. In 1933, the Dalai Lama was buried sitting up in a bed of salt. Today, a gift of salt endures in India as a potent symbol of good luck and a reference to Mahatma Gandhis liberation of India. The effects of salt deficiency are highlighted in times of war, when human bodies and national economies are strained to their limits. Thousands of Napoleons troops died during the French retreat from Moscow due to inadequate wound healing and lowered resistance to disease the results of salt deficiency
Hopi legend believes that salt deposits were placed far away from civilization to penalize mankind.
entailment
id_6232
The History of Salt Salt is so simple and plentiful that we almost take it for granted. In chemical terms, salt is the combination of a sodium ion with a chloride ion, making it one of the most basic molecules on earth. It is also one of the most plentiful: it has been estimated that salt deposits under the state of Kansas alone could supply the entire worlds needs for the next 250,000 years. But salt is also an essential element. Without it, life itself would be impossible since the human body requires the mineral in order to function properly. The concentration of sodium ions in the blood is directly related to the regulation of safe body fluid levels. And while we are all familiar with its many uses in cooking, we may not be aware that this element is used in some 14,000 commercial applications. From manufacturing pulp and paper to setting dyes in textiles and fabric, from producing soaps and detergents to making our roads safe in winter, salt plays an essential part in our daily lives. Salt has a long and influential role in world history. From the dawn of civilization, it has been a key factor in economic, religious, social and political development. In every corner of the world, it has been the subject of superstition, folklore, and warfare, and has even been used as currency. As a precious and portable commodity, salt has long been a cornerstone of economies throughout history. In fact, researcher M. R. Bloch conjectured that civilization began along the edges of the desert because of the natural surface deposits of salt found there. Bloch also believed that the first war likely fought near the ancient city of Essalt on the Jordan River could have been fought over the citys precious supplies of the mineral. In 2200 BC, the Chinese emperor Hsia Yu levied one of the first known taxes. He taxed salt. In Tibet, Marco Polo noted that tiny cakes of salt were pressed with images of the Grand Khan to be used as coins and to this day among the nomads of Ethiopias Danakil Plains it is still used as money. Greek slave traders often bartered it for slaves, giving rise to the expression that someone was not worth his salt. Roman legionnaires were paid in salt a salarium, the Latin origin of the word salary. Merchants in 12th-century Timbuktu the gateway to the Sahara Desert and the seat of scholars valued this mineral as highly as books and gold. In France, Charles of Anjou levied the gabelle, a salt tax, in 1259 to finance his conquest of the Kingdom of Naples. Outrage over the gabelle fueled the French Revolution. Though the revolutionaries eliminated the tax shortly after Louis XVI, the Republic of France re-established the gabelle in the early 19th Century; only in 1946 was it removed from the books. The Erie Canal, an engineering marvel that connected the Great Lakes to New Yorks Hudson River in 1825, was called the ditch that salt built. Salt tax revenues paid for half the cost of construction of the canal. The British monarchy supported itself with high salt taxes, leading to a bustling black market for the white crystal. In 1785, the earl of Dundonald wrote that every year in England, 10,000 people were arrested for salt smuggling. And protesting against British rule in 1930, Mahatma Gandhi led a 200-mile march to the Arabian Ocean to collect untaxed salt for Indias poor. In religion and culture, salt long held an important place with Greek worshippers consecrating it in their rituals. 
Further, in Buddhist tradition, salt repels evil spirits, which is why it is customary to throw it over your shoulder before entering your house after a funeral: it scares off any evil spirits that may be clinging to your back. Shinto religion also uses it to purify an area. Before sumo wrestlers enter the ring for a match which is in reality an elaborate Shinto rite a handful is thrown into the center to drive off malevolent spirits. In the Southwest of the United States, the Pueblo worship the Salt Mother. Other native tribes had significant restrictions on who was permitted to eat salt. Hopi legend holds that the angry Warrior Twins punished mankind by placing valuable salt deposits far from civilization, requiring hard work and bravery to harvest the precious mineral. In 1933, the Dalai Lama was buried sitting up in a bed of salt. Today, a gift of salt endures in India as a potent symbol of good luck and a reference to Mahatma Gandhis liberation of India. The effects of salt deficiency are highlighted in times of war, when human bodies and national economies are strained to their limits. Thousands of Napoleons troops died during the French retreat from Moscow due to inadequate wound healing and lowered resistance to disease the results of salt deficiency
A lack of salt is connected with the deaths of many of Napoleons soldiers during the French retreat from Moscow.
entailment
id_6233
The History of Salt Salt is so simple and plentiful that we almost take it for granted. In chemical terms, salt is the combination of a sodium ion with a chloride ion, making it one of the most basic molecules on earth. It is also one of the most plentiful: it has been estimated that salt deposits under the state of Kansas alone could supply the entire worlds needs for the next 250,000 years. But salt is also an essential element. Without it, life itself would be impossible since the human body requires the mineral in order to function properly. The concentration of sodium ions in the blood is directly related to the regulation of safe body fluid levels. And while we are all familiar with its many uses in cooking, we may not be aware that this element is used in some 14,000 commercial applications. From manufacturing pulp and paper to setting dyes in textiles and fabric, from producing soaps and detergents to making our roads safe in winter, salt plays an essential part in our daily lives. Salt has a long and influential role in world history. From the dawn of civilization, it has been a key factor in economic, religious, social and political development. In every corner of the world, it has been the subject of superstition, folklore, and warfare, and has even been used as currency. As a precious and portable commodity, salt has long been a cornerstone of economies throughout history. In fact, researcher M. R. Bloch conjectured that civilization began along the edges of the desert because of the natural surface deposits of salt found there. Bloch also believed that the first war likely fought near the ancient city of Essalt on the Jordan River could have been fought over the citys precious supplies of the mineral. In 2200 BC, the Chinese emperor Hsia Yu levied one of the first known taxes. He taxed salt. In Tibet, Marco Polo noted that tiny cakes of salt were pressed with images of the Grand Khan to be used as coins and to this day among the nomads of Ethiopias Danakil Plains it is still used as money. Greek slave traders often bartered it for slaves, giving rise to the expression that someone was not worth his salt. Roman legionnaires were paid in salt a salarium, the Latin origin of the word salary. Merchants in 12th-century Timbuktu the gateway to the Sahara Desert and the seat of scholars valued this mineral as highly as books and gold. In France, Charles of Anjou levied the gabelle, a salt tax, in 1259 to finance his conquest of the Kingdom of Naples. Outrage over the gabelle fueled the French Revolution. Though the revolutionaries eliminated the tax shortly after Louis XVI, the Republic of France re-established the gabelle in the early 19th Century; only in 1946 was it removed from the books. The Erie Canal, an engineering marvel that connected the Great Lakes to New Yorks Hudson River in 1825, was called the ditch that salt built. Salt tax revenues paid for half the cost of construction of the canal. The British monarchy supported itself with high salt taxes, leading to a bustling black market for the white crystal. In 1785, the earl of Dundonald wrote that every year in England, 10,000 people were arrested for salt smuggling. And protesting against British rule in 1930, Mahatma Gandhi led a 200-mile march to the Arabian Ocean to collect untaxed salt for Indias poor. In religion and culture, salt long held an important place with Greek worshippers consecrating it in their rituals. 
Further, in Buddhist tradition, salt repels evil spirits, which is why it is customary to throw it over your shoulder before entering your house after a funeral: it scares off any evil spirits that may be clinging to your back. Shinto religion also uses it to purify an area. Before sumo wrestlers enter the ring for a match which is in reality an elaborate Shinto rite a handful is thrown into the center to drive off malevolent spirits. In the Southwest of the United States, the Pueblo worship the Salt Mother. Other native tribes had significant restrictions on who was permitted to eat salt. Hopi legend holds that the angry Warrior Twins punished mankind by placing valuable salt deposits far from civilization, requiring hard work and bravery to harvest the precious mineral. In 1933, the Dalai Lama was buried sitting up in a bed of salt. Today, a gift of salt endures in India as a potent symbol of good luck and a reference to Mahatma Gandhis liberation of India. The effects of salt deficiency are highlighted in times of war, when human bodies and national economies are strained to their limits. Thousands of Napoleons troops died during the French retreat from Moscow due to inadequate wound healing and lowered resistance to disease the results of salt deficiency
It has been suggested that salt was responsible for the first war.
entailment
id_6234
The History of Salt Salt is so simple and plentiful that we almost take it for granted. In chemical terms, salt is the combination of a sodium ion with a chloride ion, making it one of the most basic molecules on earth. It is also one of the most plentiful: it has been estimated that salt deposits under the state of Kansas alone could supply the entire worlds needs for the next 250,000 years. But salt is also an essential element. Without it, life itself would be impossible since the human body requires the mineral in order to function properly. The concentration of sodium ions in the blood is directly related to the regulation of safe body fluid levels. And while we are all familiar with its many uses in cooking, we may not be aware that this element is used in some 14,000 commercial applications. From manufacturing pulp and paper to setting dyes in textiles and fabric, from producing soaps and detergents to making our roads safe in winter, salt plays an essential part in our daily lives. Salt has a long and influential role in world history. From the dawn of civilization, it has been a key factor in economic, religious, social and political development. In every corner of the world, it has been the subject of superstition, folklore, and warfare, and has even been used as currency. As a precious and portable commodity, salt has long been a cornerstone of economies throughout history. In fact, researcher M. R. Bloch conjectured that civilization began along the edges of the desert because of the natural surface deposits of salt found there. Bloch also believed that the first war likely fought near the ancient city of Essalt on the Jordan River could have been fought over the citys precious supplies of the mineral. In 2200 BC, the Chinese emperor Hsia Yu levied one of the first known taxes. He taxed salt. In Tibet, Marco Polo noted that tiny cakes of salt were pressed with images of the Grand Khan to be used as coins and to this day among the nomads of Ethiopias Danakil Plains it is still used as money. Greek slave traders often bartered it for slaves, giving rise to the expression that someone was not worth his salt. Roman legionnaires were paid in salt a salarium, the Latin origin of the word salary. Merchants in 12th-century Timbuktu the gateway to the Sahara Desert and the seat of scholars valued this mineral as highly as books and gold. In France, Charles of Anjou levied the gabelle, a salt tax, in 1259 to finance his conquest of the Kingdom of Naples. Outrage over the gabelle fueled the French Revolution. Though the revolutionaries eliminated the tax shortly after Louis XVI, the Republic of France re-established the gabelle in the early 19th Century; only in 1946 was it removed from the books. The Erie Canal, an engineering marvel that connected the Great Lakes to New Yorks Hudson River in 1825, was called the ditch that salt built. Salt tax revenues paid for half the cost of construction of the canal. The British monarchy supported itself with high salt taxes, leading to a bustling black market for the white crystal. In 1785, the earl of Dundonald wrote that every year in England, 10,000 people were arrested for salt smuggling. And protesting against British rule in 1930, Mahatma Gandhi led a 200-mile march to the Arabian Ocean to collect untaxed salt for Indias poor. In religion and culture, salt long held an important place with Greek worshippers consecrating it in their rituals. 
Further, in Buddhist tradition, salt repels evil spirits, which is why it is customary to throw it over your shoulder before entering your house after a funeral: it scares off any evil spirits that may be clinging to your back. Shinto religion also uses it to purify an area. Before sumo wrestlers enter the ring for a match which is in reality an elaborate Shinto rite a handful is thrown into the center to drive off malevolent spirits. In the Southwest of the United States, the Pueblo worship the Salt Mother. Other native tribes had significant restrictions on who was permitted to eat salt. Hopi legend holds that the angry Warrior Twins punished mankind by placing valuable salt deposits far from civilization, requiring hard work and bravery to harvest the precious mineral. In 1933, the Dalai Lama was buried sitting up in a bed of salt. Today, a gift of salt endures in India as a potent symbol of good luck and a reference to Mahatma Gandhis liberation of India. The effects of salt deficiency are highlighted in times of war, when human bodies and national economies are strained to their limits. Thousands of Napoleons troops died during the French retreat from Moscow due to inadequate wound healing and lowered resistance to disease the results of salt deficiency
Most of the money for the construction of the Erie Canal came from salt taxes.
contradiction
id_6235
The History of Salt Salt is so simple and plentiful that we almost take it for granted. In chemical terms, salt is the combination of a sodium ion with a chloride ion, making it one of the most basic molecules on earth. It is also one of the most plentiful: it has been estimated that salt deposits under the state of Kansas alone could supply the entire worlds needs for the next 250,000 years. But salt is also an essential element. Without it, life itself would be impossible since the human body requires the mineral in order to function properly. The concentration of sodium ions in the blood is directly related to the regulation of safe body fluid levels. And while we are all familiar with its many uses in cooking, we may not be aware that this element is used in some 14,000 commercial applications. From manufacturing pulp and paper to setting dyes in textiles and fabric, from producing soaps and detergents to making our roads safe in winter, salt plays an essential part in our daily lives. Salt has a long and influential role in world history. From the dawn of civilization, it has been a key factor in economic, religious, social and political development. In every corner of the world, it has been the subject of superstition, folklore, and warfare, and has even been used as currency. As a precious and portable commodity, salt has long been a cornerstone of economies throughout history. In fact, researcher M. R. Bloch conjectured that civilization began along the edges of the desert because of the natural surface deposits of salt found there. Bloch also believed that the first war likely fought near the ancient city of Essalt on the Jordan River could have been fought over the citys precious supplies of the mineral. In 2200 BC, the Chinese emperor Hsia Yu levied one of the first known taxes. He taxed salt. In Tibet, Marco Polo noted that tiny cakes of salt were pressed with images of the Grand Khan to be used as coins and to this day among the nomads of Ethiopias Danakil Plains it is still used as money. Greek slave traders often bartered it for slaves, giving rise to the expression that someone was not worth his salt. Roman legionnaires were paid in salt a salarium, the Latin origin of the word salary. Merchants in 12th-century Timbuktu the gateway to the Sahara Desert and the seat of scholars valued this mineral as highly as books and gold. In France, Charles of Anjou levied the gabelle, a salt tax, in 1259 to finance his conquest of the Kingdom of Naples. Outrage over the gabelle fueled the French Revolution. Though the revolutionaries eliminated the tax shortly after Louis XVI, the Republic of France re-established the gabelle in the early 19th Century; only in 1946 was it removed from the books. The Erie Canal, an engineering marvel that connected the Great Lakes to New Yorks Hudson River in 1825, was called the ditch that salt built. Salt tax revenues paid for half the cost of construction of the canal. The British monarchy supported itself with high salt taxes, leading to a bustling black market for the white crystal. In 1785, the earl of Dundonald wrote that every year in England, 10,000 people were arrested for salt smuggling. And protesting against British rule in 1930, Mahatma Gandhi led a 200-mile march to the Arabian Ocean to collect untaxed salt for Indias poor. In religion and culture, salt long held an important place with Greek worshippers consecrating it in their rituals. 
Further, in Buddhist tradition, salt repels evil spirits, which is why it is customary to throw it over your shoulder before entering your house after a funeral: it scares off any evil spirits that may be clinging to your back. Shinto religion also uses it to purify an area. Before sumo wrestlers enter the ring for a match which is in reality an elaborate Shinto rite a handful is thrown into the center to drive off malevolent spirits. In the Southwest of the United States, the Pueblo worship the Salt Mother. Other native tribes had significant restrictions on who was permitted to eat salt. Hopi legend holds that the angry Warrior Twins punished mankind by placing valuable salt deposits far from civilization, requiring hard work and bravery to harvest the precious mineral. In 1933, the Dalai Lama was buried sitting up in a bed of salt. Today, a gift of salt endures in India as a potent symbol of good luck and a reference to Mahatma Gandhis liberation of India. The effects of salt deficiency are highlighted in times of war, when human bodies and national economies are strained to their limits. Thousands of Napoleons troops died during the French retreat from Moscow due to inadequate wound healing and lowered resistance to disease the results of salt deficiency
Salt is no longer used as a form of currency.
neutral
id_6236
The History of building Telegraph lines The idea of electrical communication seems to have begun as long ago as 1746, when about 200 monks at a monastery in Paris arranged themselves in a line over a mile long, each holding ends of 25 ft iron wires. The abbot, also a scientist, discharged a primitive electrical battery into the wire, giving all the monks a simultaneous electrical shock. "This all sounds very silly, but is in fact extremely important because, firstly, they all said 'ow' which showed that you were sending a signal right along the line; and, secondly, they all said 'ow' at the same time, and that meant that you were sending the signal very quickly," explains Tom Standage, author of the Victorian Internet and technology editor at the Economist. Given a more humane detection system, this could be a way of signaling over long distances. With wars in Europe and colonies beyond, such a signaling system was urgently needed. All sorts of electrical possibilities were proposed, some of them quite ridiculous. Two Englishmen, William Cooke and Charles Wheatstone, came up with a system in which dials were made to point at different letters, but that involved five wires and would have been expensive to construct. Much simpler was that of an American, Samuel Morse, whose system only required a single wire to send a code of dots and dashes. At first, it was imagined that only a few highly skilled encoders would be able to use it but it soon became clear that many people could become proficient in Morse code. A system of lines strung on telegraph poles began to spread in Europe and America. The next problem was to cross the sea. Britain, as an island with an empire, led the way. Any such cable had to be insulated and the first breakthrough came with the discovery that a rubber-like latex from a tropical tree on the Malay peninsula could do the trick. It was called gutta percha. The first attempt at a cross channel cable came in 1850. With thin wire and thick insulation, it floated and had to be weighed down with lead pipe. It never worked well as the effect of water on its electrical properties was not understood, and it is reputed that a French fisherman hooked out a section and took it home as a strange new form of seaweed. The cable was too big for a single boat so two had to start in the middle of the Atlantic, join their cables and sail in opposite directions. Amazingly, they succeeded in 1858, and this enabled Queen Victoria to send a telegraph message to President Buchanan. However, the 98-word message took more than 19 hours to send and a misguided attempt to increase the speed by increasing the voltage resulted in failure of the line a week later. By 1870, a submarine cable was heading towards Australia. It seemed likely that it would come ashore at the northern port of Darwin from where it might connect around the coast to Queensland and New South Wales. It was an undertaking more ambitious than spanning an ocean. Flocks of sheep had to be driven with the 400 workers to provide food. They needed horses and bullock carts and, for the parched interior, camels. In the north, tropical rains left the teams flooded. In the center, it seemed that they would die of thirst. One critical section in the red heart of Australia involved finding a route through the McDonnell mountain range and then finding water on the other side. The water was not only essential for the construction team.
There had to be telegraph repeater stations every few hundred miles to boost the signal and the staff obviously had to have a supply of water. Just as one mapping team was about to give up and resort to drinking brackish water, some Aboriginals took pity on them. Altogether, 40,000 telegraph poles were used in the Australian overland wire. Some were cut from trees. Where there were no trees, or where termites ate the wood, steel poles were imported. On Thursday, August 22, 1872, the overland line was completed and the first messages could be sent across the continent; and within a few months, Australia was at last in direct contact with England via the submarine cable, too. The line remained in service to bring news of the Japanese attack on Darwin in 1942. It could cost several pounds to send a message and it might take several hours for it to reach its destination on the other side of the globe, but the world would never be the same again. Governments could be in touch with their colonies. Traders could send cargoes based on demand and the latest prices. Newspapers could publish news that had just happened and was not many months old.
Using Morse code to send a message required the message to be simplified first.
contradiction
id_6237
The History of building Telegraph lines The idea of electrical communication seems to have begun as long ago as 1746, when about 200 monks at a monastery in Paris arranged themselves in a line over a mile long, each holding ends of 25 ft iron wires. The abbot, also a scientist, discharged a primitive electrical battery into the wire, giving all the monks a simultaneous electrical shock. "This all sounds very silly, but is in fact extremely important because, firstly, they all said 'ow' which showed that you were sending a signal right along the line; and, secondly, they all said 'ow' at the same time, and that meant that you were sending the signal very quickly," explains Tom Standage, author of the Victorian Internet and technology editor at the Economist. Given a more humane detection system, this could be a way of signaling over long distances. With wars in Europe and colonies beyond, such a signaling system was urgently needed. All sorts of electrical possibilities were proposed, some of them quite ridiculous. Two Englishmen, William Cooke and Charles Wheatstone, came up with a system in which dials were made to point at different letters, but that involved five wires and would have been expensive to construct. Much simpler was that of an American, Samuel Morse, whose system only required a single wire to send a code of dots and dashes. At first, it was imagined that only a few highly skilled encoders would be able to use it but it soon became clear that many people could become proficient in Morse code. A system of lines strung on telegraph poles began to spread in Europe and America. The next problem was to cross the sea. Britain, as an island with an empire, led the way. Any such cable had to be insulated and the first breakthrough came with the discovery that a rubber-like latex from a tropical tree on the Malay peninsula could do the trick. It was called gutta percha. The first attempt at a cross channel cable came in 1850. With thin wire and thick insulation, it floated and had to be weighed down with lead pipe. It never worked well as the effect of water on its electrical properties was not understood, and it is reputed that a French fisherman hooked out a section and took it home as a strange new form of seaweed. The cable was too big for a single boat so two had to start in the middle of the Atlantic, join their cables and sail in opposite directions. Amazingly, they succeeded in 1858, and this enabled Queen Victoria to send a telegraph message to President Buchanan. However, the 98-word message took more than 19 hours to send and a misguided attempt to increase the speed by increasing the voltage resulted in failure of the line a week later. By 1870, a submarine cable was heading towards Australia. It seemed likely that it would come ashore at the northern port of Darwin from where it might connect around the coast to Queensland and New South Wales. It was an undertaking more ambitious than spanning an ocean. Flocks of sheep had to be driven with the 400 workers to provide food. They needed horses and bullock carts and, for the parched interior, camels. In the north, tropical rains left the teams flooded. In the center, it seemed that they would die of thirst. One critical section in the red heart of Australia involved finding a route through the McDonnell mountain range and then finding water on the other side. The water was not only essential for the construction team.
There had to be telegraph repeater stations every few hundred miles to boost the signal and the staff obviously had to have a supply of water. Just as one mapping team was about to give up and resort to drinking brackish water, some Aboriginals took pity on them. Altogether, 40,000 telegraph poles were used in the Australian overland wire. Some were cut from trees. Where there were no trees, or where termites ate the wood, steel poles were imported. On Thursday, August 22, 1872, the overland line was completed and the first messages could be sent across the continent; and within a few months, Australia was at last in direct contact with England via the submarine cable, too. The line remained in service to bring news of the Japanese attack on Darwin in 1942. It could cost several pounds to send a message and it might take several hours for it to reach its destination on the other side of the globe, but the world would never be the same again. Governments could be in touch with their colonies. Traders could send cargoes based on demand and the latest prices. Newspapers could publish news that had just happened and was not many months old.
The abbot gave the monks an electrical shock at the same time, which constituted an exploration of long-distance signaling.
entailment
id_6238
The History of building Telegraph lines The idea of electrical communication seems to have begun as long ago as 1746, when about 200 monks at monastery in Paris arranged themselves in a line over a mile long, each holding ends of 25 ft iron wires. The abbot, also a scientist, discharged a primitive electrical battery into the wire, giving all the monks a simultaneous electrical shock. "This all sounds very silly, but is in fact extremely important because, firstly, they all said 'ow' which showed that you were sending a signal right along the line; and, secondly, they all said 'ow' at the same- time, and that meant that you were sending the signal very quickly, "exp1ains Tom Standage, author of the Victorian Internet and technology editor at the Economist. Given a more humane detection system, this could be a way of signaling over long distances. With wars in Europe and colonies beyond, such a signaling system was urgently needed. All sorts of electrical possibilities were proposed, some of them quite ridiculous. Two Englishmen, William Cooke and Charles Wheatstone came up with a system in which dials were made to point at different letters, but that involved five wires and would have been expensive to construct. Much simpler was that of an American, Samuel Morse, whose system only required a single wire to send a code of dots and dashes. At first, it was imagined that only a few highly skilled encoders would be able to use it but it soon became clear that many people could become proficient in Morse code. A system of lines strung on telegraph poles began to spread in Europe and America. The next problem was to cross the sea. Britain, as an island with an empire, led the way. Any such cable had to be insulated and the first breakthrough came with the discovery that a rubber-like latex from a tropical tree on the Malay peninsula could do the trick It was called gutta percha. The first attempt at a cross channel cable came in 1850. With thin wire and thick installation, it floated and had to be weighed down with lead pipe. It never worked well as the effect of water on its electrical properties was not understood, and it is reputed that a French fishermen hooked out a section and took it home as a strange new form of seaweed The cable was too big for a single boat so two had to start in the middle of the Atlantic, join their cables and sail in opposite directions. Amazingly, they succeeded in 1858, and this enabled Queen Victoria to send a telegraph message to President Buchanan. However, the 98-word message took more than 19 hours to send and misguided attempt to increase the speed by increasing the voltage resulted in failure of the line a week later. By 1870, a submarine cable was heading towards Australia. It seemed likely that it would come ashore at the northern port of Darwin from where it might connect around the coast to Queensland and New South Wales. It was an undertaking more ambitious than spanning an ocean. Flocks of sheep had to be driven with the 400 workers to provide food They needed horses and bullock carts and, for the parched interior, camels. In the north, tropical rains left the teams flooded In the center, it seemed that they would die of thirst. One critical section in the red heart of Australia involved finding a route through the McDonnell mountain range and then finding water on the other side. The water was not only essential for the construction team. 
There had to be telegraph repeater stations every few hundred miles to boost the signal and the staff obviously had to have a supply of water. Just as one mapping team was about to give up and resort to drinking brackish water, some aboriginals took pity on them. Altogether, 40,000 telegraph poles were used in the Australian overland wire. Some were cut from trees. Where there were no trees, or where termites ate the wood, steel poles were imported. On Thursday, August 22, 1872, the overland line was completed and the first messages could be sent across the continent; and within a few months, Australia was at last in direct contact with England via the submarine cable, too. The line remained in service to bring news of the Japanese attack on Darwin in 1942. It could cost several pounds to send a message and it might take several hours for it to reach its destination on the other side of the globe, but the world would never be same again. Governments could be in touch with their colonies. Traders could send cargoes based on demand and the latest prices. Newspapers could publish news that had just happened and was not many months old.
The US Government offered funding for the first overland line across the continent.
neutral
id_6239
The History of building Telegraph lines The idea of electrical communication seems to have begun as long ago as 1746, when about 200 monks at monastery in Paris arranged themselves in a line over a mile long, each holding ends of 25 ft iron wires. The abbot, also a scientist, discharged a primitive electrical battery into the wire, giving all the monks a simultaneous electrical shock. "This all sounds very silly, but is in fact extremely important because, firstly, they all said 'ow' which showed that you were sending a signal right along the line; and, secondly, they all said 'ow' at the same- time, and that meant that you were sending the signal very quickly, "exp1ains Tom Standage, author of the Victorian Internet and technology editor at the Economist. Given a more humane detection system, this could be a way of signaling over long distances. With wars in Europe and colonies beyond, such a signaling system was urgently needed. All sorts of electrical possibilities were proposed, some of them quite ridiculous. Two Englishmen, William Cooke and Charles Wheatstone came up with a system in which dials were made to point at different letters, but that involved five wires and would have been expensive to construct. Much simpler was that of an American, Samuel Morse, whose system only required a single wire to send a code of dots and dashes. At first, it was imagined that only a few highly skilled encoders would be able to use it but it soon became clear that many people could become proficient in Morse code. A system of lines strung on telegraph poles began to spread in Europe and America. The next problem was to cross the sea. Britain, as an island with an empire, led the way. Any such cable had to be insulated and the first breakthrough came with the discovery that a rubber-like latex from a tropical tree on the Malay peninsula could do the trick It was called gutta percha. The first attempt at a cross channel cable came in 1850. With thin wire and thick installation, it floated and had to be weighed down with lead pipe. It never worked well as the effect of water on its electrical properties was not understood, and it is reputed that a French fishermen hooked out a section and took it home as a strange new form of seaweed The cable was too big for a single boat so two had to start in the middle of the Atlantic, join their cables and sail in opposite directions. Amazingly, they succeeded in 1858, and this enabled Queen Victoria to send a telegraph message to President Buchanan. However, the 98-word message took more than 19 hours to send and misguided attempt to increase the speed by increasing the voltage resulted in failure of the line a week later. By 1870, a submarine cable was heading towards Australia. It seemed likely that it would come ashore at the northern port of Darwin from where it might connect around the coast to Queensland and New South Wales. It was an undertaking more ambitious than spanning an ocean. Flocks of sheep had to be driven with the 400 workers to provide food They needed horses and bullock carts and, for the parched interior, camels. In the north, tropical rains left the teams flooded In the center, it seemed that they would die of thirst. One critical section in the red heart of Australia involved finding a route through the McDonnell mountain range and then finding water on the other side. The water was not only essential for the construction team. 
There had to be telegraph repeater stations every few hundred miles to boost the signal and the staff obviously had to have a supply of water. Just as one mapping team was about to give up and resort to drinking brackish water, some aboriginals took pity on them. Altogether, 40,000 telegraph poles were used in the Australian overland wire. Some were cut from trees. Where there were no trees, or where termites ate the wood, steel poles were imported. On Thursday, August 22, 1872, the overland line was completed and the first messages could be sent across the continent; and within a few months, Australia was at last in direct contact with England via the submarine cable, too. The line remained in service to bring news of the Japanese attack on Darwin in 1942. It could cost several pounds to send a message and it might take several hours for it to reach its destination on the other side of the globe, but the world would never be same again. Governments could be in touch with their colonies. Traders could send cargoes based on demand and the latest prices. Newspapers could publish news that had just happened and was not many months old.
In the French scientists' experiment, metal wires were used to send the message.
entailment
id_6240
The History of building Telegraph lines The idea of electrical communication seems to have begun as long ago as 1746, when about 200 monks at monastery in Paris arranged themselves in a line over a mile long, each holding ends of 25 ft iron wires. The abbot, also a scientist, discharged a primitive electrical battery into the wire, giving all the monks a simultaneous electrical shock. "This all sounds very silly, but is in fact extremely important because, firstly, they all said 'ow' which showed that you were sending a signal right along the line; and, secondly, they all said 'ow' at the same- time, and that meant that you were sending the signal very quickly, "exp1ains Tom Standage, author of the Victorian Internet and technology editor at the Economist. Given a more humane detection system, this could be a way of signaling over long distances. With wars in Europe and colonies beyond, such a signaling system was urgently needed. All sorts of electrical possibilities were proposed, some of them quite ridiculous. Two Englishmen, William Cooke and Charles Wheatstone came up with a system in which dials were made to point at different letters, but that involved five wires and would have been expensive to construct. Much simpler was that of an American, Samuel Morse, whose system only required a single wire to send a code of dots and dashes. At first, it was imagined that only a few highly skilled encoders would be able to use it but it soon became clear that many people could become proficient in Morse code. A system of lines strung on telegraph poles began to spread in Europe and America. The next problem was to cross the sea. Britain, as an island with an empire, led the way. Any such cable had to be insulated and the first breakthrough came with the discovery that a rubber-like latex from a tropical tree on the Malay peninsula could do the trick It was called gutta percha. The first attempt at a cross channel cable came in 1850. With thin wire and thick installation, it floated and had to be weighed down with lead pipe. It never worked well as the effect of water on its electrical properties was not understood, and it is reputed that a French fishermen hooked out a section and took it home as a strange new form of seaweed The cable was too big for a single boat so two had to start in the middle of the Atlantic, join their cables and sail in opposite directions. Amazingly, they succeeded in 1858, and this enabled Queen Victoria to send a telegraph message to President Buchanan. However, the 98-word message took more than 19 hours to send and misguided attempt to increase the speed by increasing the voltage resulted in failure of the line a week later. By 1870, a submarine cable was heading towards Australia. It seemed likely that it would come ashore at the northern port of Darwin from where it might connect around the coast to Queensland and New South Wales. It was an undertaking more ambitious than spanning an ocean. Flocks of sheep had to be driven with the 400 workers to provide food They needed horses and bullock carts and, for the parched interior, camels. In the north, tropical rains left the teams flooded In the center, it seemed that they would die of thirst. One critical section in the red heart of Australia involved finding a route through the McDonnell mountain range and then finding water on the other side. The water was not only essential for the construction team. 
There had to be telegraph repeater stations every few hundred miles to boost the signal and the staff obviously had to have a supply of water. Just as one mapping team was about to give up and resort to drinking brackish water, some aboriginals took pity on them. Altogether, 40,000 telegraph poles were used in the Australian overland wire. Some were cut from trees. Where there were no trees, or where termites ate the wood, steel poles were imported. On Thursday, August 22, 1872, the overland line was completed and the first messages could be sent across the continent; and within a few months, Australia was at last in direct contact with England via the submarine cable, too. The line remained in service to bring news of the Japanese attack on Darwin in 1942. It could cost several pounds to send a message and it might take several hours for it to reach its destination on the other side of the globe, but the world would never be same again. Governments could be in touch with their colonies. Traders could send cargoes based on demand and the latest prices. Newspapers could publish news that had just happened and was not many months old.
Morse was a famous inventor before he invented the code.
neutral
id_6241
The History of building Telegraph lines The idea of electrical communication seems to have begun as long ago as 1746, when about 200 monks at monastery in Paris arranged themselves in a line over a mile long, each holding ends of 25 ft iron wires. The abbot, also a scientist, discharged a primitive electrical battery into the wire, giving all the monks a simultaneous electrical shock. "This all sounds very silly, but is in fact extremely important because, firstly, they all said 'ow' which showed that you were sending a signal right along the line; and, secondly, they all said 'ow' at the same- time, and that meant that you were sending the signal very quickly, "exp1ains Tom Standage, author of the Victorian Internet and technology editor at the Economist. Given a more humane detection system, this could be a way of signaling over long distances. With wars in Europe and colonies beyond, such a signaling system was urgently needed. All sorts of electrical possibilities were proposed, some of them quite ridiculous. Two Englishmen, William Cooke and Charles Wheatstone came up with a system in which dials were made to point at different letters, but that involved five wires and would have been expensive to construct. Much simpler was that of an American, Samuel Morse, whose system only required a single wire to send a code of dots and dashes. At first, it was imagined that only a few highly skilled encoders would be able to use it but it soon became clear that many people could become proficient in Morse code. A system of lines strung on telegraph poles began to spread in Europe and America. The next problem was to cross the sea. Britain, as an island with an empire, led the way. Any such cable had to be insulated and the first breakthrough came with the discovery that a rubber-like latex from a tropical tree on the Malay peninsula could do the trick It was called gutta percha. The first attempt at a cross channel cable came in 1850. With thin wire and thick installation, it floated and had to be weighed down with lead pipe. It never worked well as the effect of water on its electrical properties was not understood, and it is reputed that a French fishermen hooked out a section and took it home as a strange new form of seaweed The cable was too big for a single boat so two had to start in the middle of the Atlantic, join their cables and sail in opposite directions. Amazingly, they succeeded in 1858, and this enabled Queen Victoria to send a telegraph message to President Buchanan. However, the 98-word message took more than 19 hours to send and misguided attempt to increase the speed by increasing the voltage resulted in failure of the line a week later. By 1870, a submarine cable was heading towards Australia. It seemed likely that it would come ashore at the northern port of Darwin from where it might connect around the coast to Queensland and New South Wales. It was an undertaking more ambitious than spanning an ocean. Flocks of sheep had to be driven with the 400 workers to provide food They needed horses and bullock carts and, for the parched interior, camels. In the north, tropical rains left the teams flooded In the center, it seemed that they would die of thirst. One critical section in the red heart of Australia involved finding a route through the McDonnell mountain range and then finding water on the other side. The water was not only essential for the construction team. 
There had to be telegraph repeater stations every few hundred miles to boost the signal and the staff obviously had to have a supply of water. Just as one mapping team was about to give up and resort to drinking brackish water, some aboriginals took pity on them. Altogether, 40,000 telegraph poles were used in the Australian overland wire. Some were cut from trees. Where there were no trees, or where termites ate the wood, steel poles were imported. On Thursday, August 22, 1872, the overland line was completed and the first messages could be sent across the continent; and within a few months, Australia was at last in direct contact with England via the submarine cable, too. The line remained in service to bring news of the Japanese attack on Darwin in 1942. It could cost several pounds to send a message and it might take several hours for it to reach its destination on the other side of the globe, but the world would never be same again. Governments could be in touch with their colonies. Traders could send cargoes based on demand and the latest prices. Newspapers could publish news that had just happened and was not many months old.
Water was important for the early telegraph repeater stations on the continent.
entailment
id_6242
The History of pencil The beginning of the story of pencils started with a lightning. Graphite, the main material for producing pencil, was discovered in 1564 in Boirowdale in England when a lightning struck a local tree during a thunder. Local people found out that the black substance spotted at the root of the unlucky tree was different from burning ash of wood. It was soft, thus left marks everywhere. Chemistry was barely out of its infancy at the time, so people mistook it for lead, equally black but much heavier. It was soon put to use by locals in marking their sheep for signs of ownership and calculation. Britain turns out to be the major country where mines of graphite can be detected and developed. Even so, the first pencil was invented elsewhere. As graphite is soft, it requires some form of encasement. In Italy, graphite sticks were initially wrapped in string or sheepskin for stability, becoming perhaps the very first pencil in the world. Then around 1560, an Italian couple made what are likely the first blueprints for the modem, wood-encased carpentry pencil. Their version was a flat, oval, more compact type of pencil. Their concept involved the hollowing out of a stick of juniper wood. Shortly thereafter in 1662, a superior technique was discovered by German people: two wooden halves were carved, a graphite stick inserted, and the halves then glued together essentially the same method in use to this day. The news of usefulness of these early pencils spread far and wide, attracting the attention of artists all over the known world. Although graphite core in pencils is still referred to as lead, modem pencils do not contain lead as the lead of the pencil is actually a mix of finely ground graphite and clay powders. This mixture is important because the amount of clay content added to the graphite depends on intended pencil hardness, and the amount of time spent on grinding the mixture determines the quality of the lead. The more clay you put in, the higher hardness the core has. Many pencils across the world, and almost all in Europe, are graded on the European system. This system of naming used B for black and H for hard; a pencils grade was described by a sequence or successive Hs or Bs such as BB and BBB for successively softer leads, and HH and HHH for successively harder ones. Then the standard writing pencil is graded HB. In England, pencils continued to be made from whole sawn graphite. But with the mass production of pencils, they are getting drastically more popular in many countries with each passing decade. As demands rise, appetite for graphite soars. According to the United States Geological Survey (USGS), world production of natural graphite in 2012 was 1,100,000 tonnes, of which the following major exporters are: China, India, Brazil, North Korea and Canada. When the value of graphite was realised, the mines were taken over by the government and guarded. One of its chief uses during the reign of Elizabeth I in the second half of the 16th century was as moulds for the manufacture of camion balls. Graphite was transported from Keswick to London in armed stagecoaches. In 1751 an Act of Parliament was passed making it an offence to steal or receive wad. This crime was punishable by hard labour or transportation. That the United States did not use pencils in the outer space till they spent $1000 to make a pencil to use in zero gravity conditions is in fact a fiction. It is widely known that astronauts in Russia used grease pencils, which dont have breakage problems. 
But it is also a fact that their counterparts in the United States used pencils in the outer space before real zero gravity pencil was invented . They preferred mechanical pencils, which produced fine lines, much clearer than the smudgy lines left by the grease pencils that Russians favoured. But the lead tips of these mechanical pencils broke often. That bit of graphite floating around the space capsule could get into someones eye, or even find its way into machinery or electronics short or other problems. But despite the fact that the Americans did invent zero gravity pencil later, they stuck to mechanical pencils for many years. Against the backcloth of a digitalized world, the prospect of pencils seems bleak. In reality, it does not. The application of pencils has by now become so widespread that they can be seen everywhere, such as classrooms, meeting rooms and art rooms, etc. A spectrum of users are likely to continue to use it into the future: students to do math works, artists to draw on sketch pads, waiters or waitresses to mark on order boards, make-up professionals to apply to faces, and architects to produce blue prints. The possibilities seem limitless
Pencils are unlikely to be used in the future.
contradiction
id_6243
The History of pencil The beginning of the story of pencils started with a lightning. Graphite, the main material for producing pencil, was discovered in 1564 in Boirowdale in England when a lightning struck a local tree during a thunder. Local people found out that the black substance spotted at the root of the unlucky tree was different from burning ash of wood. It was soft, thus left marks everywhere. Chemistry was barely out of its infancy at the time, so people mistook it for lead, equally black but much heavier. It was soon put to use by locals in marking their sheep for signs of ownership and calculation. Britain turns out to be the major country where mines of graphite can be detected and developed. Even so, the first pencil was invented elsewhere. As graphite is soft, it requires some form of encasement. In Italy, graphite sticks were initially wrapped in string or sheepskin for stability, becoming perhaps the very first pencil in the world. Then around 1560, an Italian couple made what are likely the first blueprints for the modem, wood-encased carpentry pencil. Their version was a flat, oval, more compact type of pencil. Their concept involved the hollowing out of a stick of juniper wood. Shortly thereafter in 1662, a superior technique was discovered by German people: two wooden halves were carved, a graphite stick inserted, and the halves then glued together essentially the same method in use to this day. The news of usefulness of these early pencils spread far and wide, attracting the attention of artists all over the known world. Although graphite core in pencils is still referred to as lead, modem pencils do not contain lead as the lead of the pencil is actually a mix of finely ground graphite and clay powders. This mixture is important because the amount of clay content added to the graphite depends on intended pencil hardness, and the amount of time spent on grinding the mixture determines the quality of the lead. The more clay you put in, the higher hardness the core has. Many pencils across the world, and almost all in Europe, are graded on the European system. This system of naming used B for black and H for hard; a pencils grade was described by a sequence or successive Hs or Bs such as BB and BBB for successively softer leads, and HH and HHH for successively harder ones. Then the standard writing pencil is graded HB. In England, pencils continued to be made from whole sawn graphite. But with the mass production of pencils, they are getting drastically more popular in many countries with each passing decade. As demands rise, appetite for graphite soars. According to the United States Geological Survey (USGS), world production of natural graphite in 2012 was 1,100,000 tonnes, of which the following major exporters are: China, India, Brazil, North Korea and Canada. When the value of graphite was realised, the mines were taken over by the government and guarded. One of its chief uses during the reign of Elizabeth I in the second half of the 16th century was as moulds for the manufacture of camion balls. Graphite was transported from Keswick to London in armed stagecoaches. In 1751 an Act of Parliament was passed making it an offence to steal or receive wad. This crime was punishable by hard labour or transportation. That the United States did not use pencils in the outer space till they spent $1000 to make a pencil to use in zero gravity conditions is in fact a fiction. It is widely known that astronauts in Russia used grease pencils, which dont have breakage problems. 
But it is also a fact that their counterparts in the United States used pencils in the outer space before real zero gravity pencil was invented . They preferred mechanical pencils, which produced fine lines, much clearer than the smudgy lines left by the grease pencils that Russians favoured. But the lead tips of these mechanical pencils broke often. That bit of graphite floating around the space capsule could get into someones eye, or even find its way into machinery or electronics short or other problems. But despite the fact that the Americans did invent zero gravity pencil later, they stuck to mechanical pencils for many years. Against the backcloth of a digitalized world, the prospect of pencils seems bleak. In reality, it does not. The application of pencils has by now become so widespread that they can be seen everywhere, such as classrooms, meeting rooms and art rooms, etc. A spectrum of users are likely to continue to use it into the future: students to do math works, artists to draw on sketch pads, waiters or waitresses to mark on order boards, make-up professionals to apply to faces, and architects to produce blue prints. The possibilities seem limitless
American astronauts did not replace mechanical pencils immediately after the zero gravity pencils were invented.
entailment
id_6244
The History of pencil The beginning of the story of pencils started with a lightning. Graphite, the main material for producing pencil, was discovered in 1564 in Boirowdale in England when a lightning struck a local tree during a thunder. Local people found out that the black substance spotted at the root of the unlucky tree was different from burning ash of wood. It was soft, thus left marks everywhere. Chemistry was barely out of its infancy at the time, so people mistook it for lead, equally black but much heavier. It was soon put to use by locals in marking their sheep for signs of ownership and calculation. Britain turns out to be the major country where mines of graphite can be detected and developed. Even so, the first pencil was invented elsewhere. As graphite is soft, it requires some form of encasement. In Italy, graphite sticks were initially wrapped in string or sheepskin for stability, becoming perhaps the very first pencil in the world. Then around 1560, an Italian couple made what are likely the first blueprints for the modem, wood-encased carpentry pencil. Their version was a flat, oval, more compact type of pencil. Their concept involved the hollowing out of a stick of juniper wood. Shortly thereafter in 1662, a superior technique was discovered by German people: two wooden halves were carved, a graphite stick inserted, and the halves then glued together essentially the same method in use to this day. The news of usefulness of these early pencils spread far and wide, attracting the attention of artists all over the known world. Although graphite core in pencils is still referred to as lead, modem pencils do not contain lead as the lead of the pencil is actually a mix of finely ground graphite and clay powders. This mixture is important because the amount of clay content added to the graphite depends on intended pencil hardness, and the amount of time spent on grinding the mixture determines the quality of the lead. The more clay you put in, the higher hardness the core has. Many pencils across the world, and almost all in Europe, are graded on the European system. This system of naming used B for black and H for hard; a pencils grade was described by a sequence or successive Hs or Bs such as BB and BBB for successively softer leads, and HH and HHH for successively harder ones. Then the standard writing pencil is graded HB. In England, pencils continued to be made from whole sawn graphite. But with the mass production of pencils, they are getting drastically more popular in many countries with each passing decade. As demands rise, appetite for graphite soars. According to the United States Geological Survey (USGS), world production of natural graphite in 2012 was 1,100,000 tonnes, of which the following major exporters are: China, India, Brazil, North Korea and Canada. When the value of graphite was realised, the mines were taken over by the government and guarded. One of its chief uses during the reign of Elizabeth I in the second half of the 16th century was as moulds for the manufacture of camion balls. Graphite was transported from Keswick to London in armed stagecoaches. In 1751 an Act of Parliament was passed making it an offence to steal or receive wad. This crime was punishable by hard labour or transportation. That the United States did not use pencils in the outer space till they spent $1000 to make a pencil to use in zero gravity conditions is in fact a fiction. It is widely known that astronauts in Russia used grease pencils, which dont have breakage problems. 
But it is also a fact that their counterparts in the United States used pencils in the outer space before real zero gravity pencil was invented . They preferred mechanical pencils, which produced fine lines, much clearer than the smudgy lines left by the grease pencils that Russians favoured. But the lead tips of these mechanical pencils broke often. That bit of graphite floating around the space capsule could get into someones eye, or even find its way into machinery or electronics short or other problems. But despite the fact that the Americans did invent zero gravity pencil later, they stuck to mechanical pencils for many years. Against the backcloth of a digitalized world, the prospect of pencils seems bleak. In reality, it does not. The application of pencils has by now become so widespread that they can be seen everywhere, such as classrooms, meeting rooms and art rooms, etc. A spectrum of users are likely to continue to use it into the future: students to do math works, artists to draw on sketch pads, waiters or waitresses to mark on order boards, make-up professionals to apply to faces, and architects to produce blue prints. The possibilities seem limitless
Pencils were used during the first American space expedition.
neutral
id_6245
The History of pencil The beginning of the story of pencils started with a lightning. Graphite, the main material for producing pencil, was discovered in 1564 in Boirowdale in England when a lightning struck a local tree during a thunder. Local people found out that the black substance spotted at the root of the unlucky tree was different from burning ash of wood. It was soft, thus left marks everywhere. Chemistry was barely out of its infancy at the time, so people mistook it for lead, equally black but much heavier. It was soon put to use by locals in marking their sheep for signs of ownership and calculation. Britain turns out to be the major country where mines of graphite can be detected and developed. Even so, the first pencil was invented elsewhere. As graphite is soft, it requires some form of encasement. In Italy, graphite sticks were initially wrapped in string or sheepskin for stability, becoming perhaps the very first pencil in the world. Then around 1560, an Italian couple made what are likely the first blueprints for the modem, wood-encased carpentry pencil. Their version was a flat, oval, more compact type of pencil. Their concept involved the hollowing out of a stick of juniper wood. Shortly thereafter in 1662, a superior technique was discovered by German people: two wooden halves were carved, a graphite stick inserted, and the halves then glued together essentially the same method in use to this day. The news of usefulness of these early pencils spread far and wide, attracting the attention of artists all over the known world. Although graphite core in pencils is still referred to as lead, modem pencils do not contain lead as the lead of the pencil is actually a mix of finely ground graphite and clay powders. This mixture is important because the amount of clay content added to the graphite depends on intended pencil hardness, and the amount of time spent on grinding the mixture determines the quality of the lead. The more clay you put in, the higher hardness the core has. Many pencils across the world, and almost all in Europe, are graded on the European system. This system of naming used B for black and H for hard; a pencils grade was described by a sequence or successive Hs or Bs such as BB and BBB for successively softer leads, and HH and HHH for successively harder ones. Then the standard writing pencil is graded HB. In England, pencils continued to be made from whole sawn graphite. But with the mass production of pencils, they are getting drastically more popular in many countries with each passing decade. As demands rise, appetite for graphite soars. According to the United States Geological Survey (USGS), world production of natural graphite in 2012 was 1,100,000 tonnes, of which the following major exporters are: China, India, Brazil, North Korea and Canada. When the value of graphite was realised, the mines were taken over by the government and guarded. One of its chief uses during the reign of Elizabeth I in the second half of the 16th century was as moulds for the manufacture of camion balls. Graphite was transported from Keswick to London in armed stagecoaches. In 1751 an Act of Parliament was passed making it an offence to steal or receive wad. This crime was punishable by hard labour or transportation. That the United States did not use pencils in the outer space till they spent $1000 to make a pencil to use in zero gravity conditions is in fact a fiction. It is widely known that astronauts in Russia used grease pencils, which dont have breakage problems. 
But it is also a fact that their counterparts in the United States used pencils in the outer space before real zero gravity pencil was invented . They preferred mechanical pencils, which produced fine lines, much clearer than the smudgy lines left by the grease pencils that Russians favoured. But the lead tips of these mechanical pencils broke often. That bit of graphite floating around the space capsule could get into someones eye, or even find its way into machinery or electronics short or other problems. But despite the fact that the Americans did invent zero gravity pencil later, they stuck to mechanical pencils for many years. Against the backcloth of a digitalized world, the prospect of pencils seems bleak. In reality, it does not. The application of pencils has by now become so widespread that they can be seen everywhere, such as classrooms, meeting rooms and art rooms, etc. A spectrum of users are likely to continue to use it into the future: students to do math works, artists to draw on sketch pads, waiters or waitresses to mark on order boards, make-up professionals to apply to faces, and architects to produce blue prints. The possibilities seem limitless
Pencils have not been produced since the reign of Elizabeth I.
entailment
id_6246
The History of pencil The beginning of the story of pencils started with a lightning. Graphite, the main material for producing pencil, was discovered in 1564 in Boirowdale in England when a lightning struck a local tree during a thunder. Local people found out that the black substance spotted at the root of the unlucky tree was different from burning ash of wood. It was soft, thus left marks everywhere. Chemistry was barely out of its infancy at the time, so people mistook it for lead, equally black but much heavier. It was soon put to use by locals in marking their sheep for signs of ownership and calculation. Britain turns out to be the major country where mines of graphite can be detected and developed. Even so, the first pencil was invented elsewhere. As graphite is soft, it requires some form of encasement. In Italy, graphite sticks were initially wrapped in string or sheepskin for stability, becoming perhaps the very first pencil in the world. Then around 1560, an Italian couple made what are likely the first blueprints for the modem, wood-encased carpentry pencil. Their version was a flat, oval, more compact type of pencil. Their concept involved the hollowing out of a stick of juniper wood. Shortly thereafter in 1662, a superior technique was discovered by German people: two wooden halves were carved, a graphite stick inserted, and the halves then glued together essentially the same method in use to this day. The news of usefulness of these early pencils spread far and wide, attracting the attention of artists all over the known world. Although graphite core in pencils is still referred to as lead, modem pencils do not contain lead as the lead of the pencil is actually a mix of finely ground graphite and clay powders. This mixture is important because the amount of clay content added to the graphite depends on intended pencil hardness, and the amount of time spent on grinding the mixture determines the quality of the lead. The more clay you put in, the higher hardness the core has. Many pencils across the world, and almost all in Europe, are graded on the European system. This system of naming used B for black and H for hard; a pencils grade was described by a sequence or successive Hs or Bs such as BB and BBB for successively softer leads, and HH and HHH for successively harder ones. Then the standard writing pencil is graded HB. In England, pencils continued to be made from whole sawn graphite. But with the mass production of pencils, they are getting drastically more popular in many countries with each passing decade. As demands rise, appetite for graphite soars. According to the United States Geological Survey (USGS), world production of natural graphite in 2012 was 1,100,000 tonnes, of which the following major exporters are: China, India, Brazil, North Korea and Canada. When the value of graphite was realised, the mines were taken over by the government and guarded. One of its chief uses during the reign of Elizabeth I in the second half of the 16th century was as moulds for the manufacture of camion balls. Graphite was transported from Keswick to London in armed stagecoaches. In 1751 an Act of Parliament was passed making it an offence to steal or receive wad. This crime was punishable by hard labour or transportation. That the United States did not use pencils in the outer space till they spent $1000 to make a pencil to use in zero gravity conditions is in fact a fiction. It is widely known that astronauts in Russia used grease pencils, which dont have breakage problems. 
But it is also a fact that their counterparts in the United States used pencils in the outer space before real zero gravity pencil was invented . They preferred mechanical pencils, which produced fine lines, much clearer than the smudgy lines left by the grease pencils that Russians favoured. But the lead tips of these mechanical pencils broke often. That bit of graphite floating around the space capsule could get into someones eye, or even find its way into machinery or electronics short or other problems. But despite the fact that the Americans did invent zero gravity pencil later, they stuck to mechanical pencils for many years. Against the backcloth of a digitalized world, the prospect of pencils seems bleak. In reality, it does not. The application of pencils has by now become so widespread that they can be seen everywhere, such as classrooms, meeting rooms and art rooms, etc. A spectrum of users are likely to continue to use it into the future: students to do math works, artists to draw on sketch pads, waiters or waitresses to mark on order boards, make-up professionals to apply to faces, and architects to produce blue prints. The possibilities seem limitless
Graphite makes a pencil harder and sharper.
contradiction
id_6247
The History of pencil The beginning of the story of pencils started with a lightning. Graphite, the main material for producing pencil, was discovered in 1564 in Boirowdale in England when a lightning struck a local tree during a thunder. Local people found out that the black substance spotted at the root of the unlucky tree was different from burning ash of wood. It was soft, thus left marks everywhere. Chemistry was barely out of its infancy at the time, so people mistook it for lead, equally black but much heavier. It was soon put to use by locals in marking their sheep for signs of ownership and calculation. Britain turns out to be the major country where mines of graphite can be detected and developed. Even so, the first pencil was invented elsewhere. As graphite is soft, it requires some form of encasement. In Italy, graphite sticks were initially wrapped in string or sheepskin for stability, becoming perhaps the very first pencil in the world. Then around 1560, an Italian couple made what are likely the first blueprints for the modem, wood-encased carpentry pencil. Their version was a flat, oval, more compact type of pencil. Their concept involved the hollowing out of a stick of juniper wood. Shortly thereafter in 1662, a superior technique was discovered by German people: two wooden halves were carved, a graphite stick inserted, and the halves then glued together essentially the same method in use to this day. The news of usefulness of these early pencils spread far and wide, attracting the attention of artists all over the known world. Although graphite core in pencils is still referred to as lead, modem pencils do not contain lead as the lead of the pencil is actually a mix of finely ground graphite and clay powders. This mixture is important because the amount of clay content added to the graphite depends on intended pencil hardness, and the amount of time spent on grinding the mixture determines the quality of the lead. The more clay you put in, the higher hardness the core has. Many pencils across the world, and almost all in Europe, are graded on the European system. This system of naming used B for black and H for hard; a pencils grade was described by a sequence or successive Hs or Bs such as BB and BBB for successively softer leads, and HH and HHH for successively harder ones. Then the standard writing pencil is graded HB. In England, pencils continued to be made from whole sawn graphite. But with the mass production of pencils, they are getting drastically more popular in many countries with each passing decade. As demands rise, appetite for graphite soars. According to the United States Geological Survey (USGS), world production of natural graphite in 2012 was 1,100,000 tonnes, of which the following major exporters are: China, India, Brazil, North Korea and Canada. When the value of graphite was realised, the mines were taken over by the government and guarded. One of its chief uses during the reign of Elizabeth I in the second half of the 16th century was as moulds for the manufacture of camion balls. Graphite was transported from Keswick to London in armed stagecoaches. In 1751 an Act of Parliament was passed making it an offence to steal or receive wad. This crime was punishable by hard labour or transportation. That the United States did not use pencils in the outer space till they spent $1000 to make a pencil to use in zero gravity conditions is in fact a fiction. It is widely known that astronauts in Russia used grease pencils, which dont have breakage problems. 
But it is also a fact that their counterparts in the United States used pencils in the outer space before real zero gravity pencil was invented . They preferred mechanical pencils, which produced fine lines, much clearer than the smudgy lines left by the grease pencils that Russians favoured. But the lead tips of these mechanical pencils broke often. That bit of graphite floating around the space capsule could get into someones eye, or even find its way into machinery or electronics short or other problems. But despite the fact that the Americans did invent zero gravity pencil later, they stuck to mechanical pencils for many years. Against the backcloth of a digitalized world, the prospect of pencils seems bleak. In reality, it does not. The application of pencils has by now become so widespread that they can be seen everywhere, such as classrooms, meeting rooms and art rooms, etc. A spectrum of users are likely to continue to use it into the future: students to do math works, artists to draw on sketch pads, waiters or waitresses to mark on order boards, make-up professionals to apply to faces, and architects to produce blue prints. The possibilities seem limitless
Germany used various kinds of wood to make pencils.
neutral
id_6248
The History of pencil The beginning of the story of pencils started with a lightning. Graphite, the main material for producing pencil, was discovered in 1564 in Boirowdale in England when a lightning struck a local tree during a thunder. Local people found out that the black substance spotted at the root of the unlucky tree was different from burning ash of wood. It was soft, thus left marks everywhere. Chemistry was barely out of its infancy at the time, so people mistook it for lead, equally black but much heavier. It was soon put to use by locals in marking their sheep for signs of ownership and calculation. Britain turns out to be the major country where mines of graphite can be detected and developed. Even so, the first pencil was invented elsewhere. As graphite is soft, it requires some form of encasement. In Italy, graphite sticks were initially wrapped in string or sheepskin for stability, becoming perhaps the very first pencil in the world. Then around 1560, an Italian couple made what are likely the first blueprints for the modem, wood-encased carpentry pencil. Their version was a flat, oval, more compact type of pencil. Their concept involved the hollowing out of a stick of juniper wood. Shortly thereafter in 1662, a superior technique was discovered by German people: two wooden halves were carved, a graphite stick inserted, and the halves then glued together essentially the same method in use to this day. The news of usefulness of these early pencils spread far and wide, attracting the attention of artists all over the known world. Although graphite core in pencils is still referred to as lead, modem pencils do not contain lead as the lead of the pencil is actually a mix of finely ground graphite and clay powders. This mixture is important because the amount of clay content added to the graphite depends on intended pencil hardness, and the amount of time spent on grinding the mixture determines the quality of the lead. The more clay you put in, the higher hardness the core has. Many pencils across the world, and almost all in Europe, are graded on the European system. This system of naming used B for black and H for hard; a pencils grade was described by a sequence or successive Hs or Bs such as BB and BBB for successively softer leads, and HH and HHH for successively harder ones. Then the standard writing pencil is graded HB. In England, pencils continued to be made from whole sawn graphite. But with the mass production of pencils, they are getting drastically more popular in many countries with each passing decade. As demands rise, appetite for graphite soars. According to the United States Geological Survey (USGS), world production of natural graphite in 2012 was 1,100,000 tonnes, of which the following major exporters are: China, India, Brazil, North Korea and Canada. When the value of graphite was realised, the mines were taken over by the government and guarded. One of its chief uses during the reign of Elizabeth I in the second half of the 16th century was as moulds for the manufacture of camion balls. Graphite was transported from Keswick to London in armed stagecoaches. In 1751 an Act of Parliament was passed making it an offence to steal or receive wad. This crime was punishable by hard labour or transportation. That the United States did not use pencils in the outer space till they spent $1000 to make a pencil to use in zero gravity conditions is in fact a fiction. It is widely known that astronauts in Russia used grease pencils, which dont have breakage problems. 
But it is also a fact that their counterparts in the United States used pencils in the outer space before real zero gravity pencil was invented . They preferred mechanical pencils, which produced fine lines, much clearer than the smudgy lines left by the grease pencils that Russians favoured. But the lead tips of these mechanical pencils broke often. That bit of graphite floating around the space capsule could get into someones eye, or even find its way into machinery or electronics short or other problems. But despite the fact that the Americans did invent zero gravity pencil later, they stuck to mechanical pencils for many years. Against the backcloth of a digitalized world, the prospect of pencils seems bleak. In reality, it does not. The application of pencils has by now become so widespread that they can be seen everywhere, such as classrooms, meeting rooms and art rooms, etc. A spectrum of users are likely to continue to use it into the future: students to do math works, artists to draw on sketch pads, waiters or waitresses to mark on order boards, make-up professionals to apply to faces, and architects to produce blue prints. The possibilities seem limitless
Italy was probably the first country in the world to make pencils.
entailment
id_6249
The Hitler o* Bicycle The bicycle was not invented by one individual or in one country. It took nearly 100 years and many individuals for the modem bicycle to be born. By the end of those 100 years, bicycles had revolutionized the way people travel from place to place. Bicycles first appeared in Scotland in the early 1800s, and were called velocipedes. These early bicycles had two wheels, but they had no pedals. The rider sat on a pillow and walked his feet along the ground to move his velocipede forward. Soon a French inventor added pedals to the front wheel. Instead of walking their vehicles, riders used their feet to run the pedals. However, pedaling was hard because velocipedes were very heavy. The framework was made of solid steel tubes and the wooden wheels were covered with steel. Even so, velocipedes were popular among rich young men, who raced them in Paris parks. Because of the velocipedes were so hard to ride, no one thought about using them for transportation. People didn't ride velocipedes to the market or to their jobs. Instead, people thought velocipedes were just toys. Around 1870, American manufacturers saw that velocipedes were very popular overseas. They began building velocipedes, too, but with one difference. They made the frameworks from hollow steel tubes. This alteration made velocipedes much lighter, but riders still had to work hard to pedal just a short distance. In addition, roads were bumpy so steering was difficult. In fact, most riders preferred indoor tracks where they could rent a velocipede for a small fee and take riding lessons. Subsequent changes by British engineers altered the wheels to make pedaling more efficient. They saw that when a rider turned the pedals once, the front wheel turned once. If the front wheel was small, the bicycle traveled just a small distance with each turn. They reasoned that if the front wheel were larger, the bicycle would travel a greater distance. So they designed a bicycle with a giant front wheel. They made the rear wheel small. Its primary purpose was to help the rider balance. Balancing was hard because the rider had to sit high above the giant front wheel in order to reach the pedals. This meant he was in danger of falling off the bicycle and injuring himself if he lost his balance. Despite this inherent danger, "high wheelers" became very popular in England. American manufacturers once again tried to design a better bicycle. Their goal was to make a safer bicycle. They substituted a small wheel for the giant front wheel and put the driving mechanism in a larger rear to wheel. It would be impossible for a rider to pedal the rear wheel, so engineers designed a system of foot levers. By pressing first the right one and then the left, the rider moved a long metal bar up and down. This bar turned the rear axle1. This axle turned the rear wheel and the bicycle minimized the dangers inherent in bicycle riding, more and more people began using bicycles in their daily activities. The British altered the design one last time. They made the two wheels equal in size and created a mechanism that uses a chain to turn the rear wheel. With this final change, the modern bicycle was born. Subsequent improvements, such as brakes, rubber tires, and lights were added to make bicycles more comfortable to ride. By 1900, bicycle riding had become very popular with men and women of all ages. Bicycles revolutionized the way people worldwide ride bicycles for transportation, enjoyment, sport, and exercise.
The changes by British inventors altered the wheels to make pedaling more efficient
entailment
id_6250
The Hitler o* Bicycle The bicycle was not invented by one individual or in one country. It took nearly 100 years and many individuals for the modem bicycle to be born. By the end of those 100 years, bicycles had revolutionized the way people travel from place to place. Bicycles first appeared in Scotland in the early 1800s, and were called velocipedes. These early bicycles had two wheels, but they had no pedals. The rider sat on a pillow and walked his feet along the ground to move his velocipede forward. Soon a French inventor added pedals to the front wheel. Instead of walking their vehicles, riders used their feet to run the pedals. However, pedaling was hard because velocipedes were very heavy. The framework was made of solid steel tubes and the wooden wheels were covered with steel. Even so, velocipedes were popular among rich young men, who raced them in Paris parks. Because of the velocipedes were so hard to ride, no one thought about using them for transportation. People didn't ride velocipedes to the market or to their jobs. Instead, people thought velocipedes were just toys. Around 1870, American manufacturers saw that velocipedes were very popular overseas. They began building velocipedes, too, but with one difference. They made the frameworks from hollow steel tubes. This alteration made velocipedes much lighter, but riders still had to work hard to pedal just a short distance. In addition, roads were bumpy so steering was difficult. In fact, most riders preferred indoor tracks where they could rent a velocipede for a small fee and take riding lessons. Subsequent changes by British engineers altered the wheels to make pedaling more efficient. They saw that when a rider turned the pedals once, the front wheel turned once. If the front wheel was small, the bicycle traveled just a small distance with each turn. They reasoned that if the front wheel were larger, the bicycle would travel a greater distance. So they designed a bicycle with a giant front wheel. They made the rear wheel small. Its primary purpose was to help the rider balance. Balancing was hard because the rider had to sit high above the giant front wheel in order to reach the pedals. This meant he was in danger of falling off the bicycle and injuring himself if he lost his balance. Despite this inherent danger, "high wheelers" became very popular in England. American manufacturers once again tried to design a better bicycle. Their goal was to make a safer bicycle. They substituted a small wheel for the giant front wheel and put the driving mechanism in a larger rear to wheel. It would be impossible for a rider to pedal the rear wheel, so engineers designed a system of foot levers. By pressing first the right one and then the left, the rider moved a long metal bar up and down. This bar turned the rear axle1. This axle turned the rear wheel and the bicycle minimized the dangers inherent in bicycle riding, more and more people began using bicycles in their daily activities. The British altered the design one last time. They made the two wheels equal in size and created a mechanism that uses a chain to turn the rear wheel. With this final change, the modern bicycle was born. Subsequent improvements, such as brakes, rubber tires, and lights were added to make bicycles more comfortable to ride. By 1900, bicycle riding had become very popular with men and women of all ages. Bicycles revolutionized the way people worldwide ride bicycles for transportation, enjoyment, sport, and exercise.
The bicycle was invented by Americans only
contradiction
id_6251
The History of the Bicycle The bicycle was not invented by one individual or in one country. It took nearly 100 years and many individuals for the modern bicycle to be born. By the end of those 100 years, bicycles had revolutionized the way people travel from place to place. Bicycles first appeared in Scotland in the early 1800s, and were called velocipedes. These early bicycles had two wheels, but they had no pedals. The rider sat on a pillow and walked his feet along the ground to move his velocipede forward. Soon a French inventor added pedals to the front wheel. Instead of walking their vehicles, riders used their feet to run the pedals. However, pedaling was hard because velocipedes were very heavy. The framework was made of solid steel tubes and the wooden wheels were covered with steel. Even so, velocipedes were popular among rich young men, who raced them in Paris parks. Because velocipedes were so hard to ride, no one thought about using them for transportation. People didn't ride velocipedes to the market or to their jobs. Instead, people thought velocipedes were just toys. Around 1870, American manufacturers saw that velocipedes were very popular overseas. They began building velocipedes, too, but with one difference. They made the frameworks from hollow steel tubes. This alteration made velocipedes much lighter, but riders still had to work hard to pedal just a short distance. In addition, roads were bumpy so steering was difficult. In fact, most riders preferred indoor tracks where they could rent a velocipede for a small fee and take riding lessons. Subsequent changes by British engineers altered the wheels to make pedaling more efficient. They saw that when a rider turned the pedals once, the front wheel turned once. If the front wheel was small, the bicycle traveled just a small distance with each turn. They reasoned that if the front wheel were larger, the bicycle would travel a greater distance. So they designed a bicycle with a giant front wheel. They made the rear wheel small. Its primary purpose was to help the rider balance. Balancing was hard because the rider had to sit high above the giant front wheel in order to reach the pedals. This meant he was in danger of falling off the bicycle and injuring himself if he lost his balance. Despite this inherent danger, "high wheelers" became very popular in England. American manufacturers once again tried to design a better bicycle. Their goal was to make a safer bicycle. They substituted a small wheel for the giant front wheel and put the driving mechanism in a larger rear wheel. It would be impossible for a rider to pedal the rear wheel, so engineers designed a system of foot levers. By pressing first the right one and then the left, the rider moved a long metal bar up and down. This bar turned the rear axle. This axle turned the rear wheel and the bicycle moved forward. Because this design minimized the dangers inherent in bicycle riding, more and more people began using bicycles in their daily activities. The British altered the design one last time. They made the two wheels equal in size and created a mechanism that uses a chain to turn the rear wheel. With this final change, the modern bicycle was born. Subsequent improvements, such as brakes, rubber tires, and lights, were added to make bicycles more comfortable to ride. By 1900, bicycle riding had become very popular with men and women of all ages. Bicycles revolutionized the way people travel; today, people worldwide ride bicycles for transportation, enjoyment, sport, and exercise.
It was too hard to ride the velocipedes due to their heaviness
entailment
id_6252
The History of the Bicycle The bicycle was not invented by one individual or in one country. It took nearly 100 years and many individuals for the modern bicycle to be born. By the end of those 100 years, bicycles had revolutionized the way people travel from place to place. Bicycles first appeared in Scotland in the early 1800s, and were called velocipedes. These early bicycles had two wheels, but they had no pedals. The rider sat on a pillow and walked his feet along the ground to move his velocipede forward. Soon a French inventor added pedals to the front wheel. Instead of walking their vehicles, riders used their feet to run the pedals. However, pedaling was hard because velocipedes were very heavy. The framework was made of solid steel tubes and the wooden wheels were covered with steel. Even so, velocipedes were popular among rich young men, who raced them in Paris parks. Because velocipedes were so hard to ride, no one thought about using them for transportation. People didn't ride velocipedes to the market or to their jobs. Instead, people thought velocipedes were just toys. Around 1870, American manufacturers saw that velocipedes were very popular overseas. They began building velocipedes, too, but with one difference. They made the frameworks from hollow steel tubes. This alteration made velocipedes much lighter, but riders still had to work hard to pedal just a short distance. In addition, roads were bumpy so steering was difficult. In fact, most riders preferred indoor tracks where they could rent a velocipede for a small fee and take riding lessons. Subsequent changes by British engineers altered the wheels to make pedaling more efficient. They saw that when a rider turned the pedals once, the front wheel turned once. If the front wheel was small, the bicycle traveled just a small distance with each turn. They reasoned that if the front wheel were larger, the bicycle would travel a greater distance. So they designed a bicycle with a giant front wheel. They made the rear wheel small. Its primary purpose was to help the rider balance. Balancing was hard because the rider had to sit high above the giant front wheel in order to reach the pedals. This meant he was in danger of falling off the bicycle and injuring himself if he lost his balance. Despite this inherent danger, "high wheelers" became very popular in England. American manufacturers once again tried to design a better bicycle. Their goal was to make a safer bicycle. They substituted a small wheel for the giant front wheel and put the driving mechanism in a larger rear wheel. It would be impossible for a rider to pedal the rear wheel, so engineers designed a system of foot levers. By pressing first the right one and then the left, the rider moved a long metal bar up and down. This bar turned the rear axle. This axle turned the rear wheel and the bicycle moved forward. Because this design minimized the dangers inherent in bicycle riding, more and more people began using bicycles in their daily activities. The British altered the design one last time. They made the two wheels equal in size and created a mechanism that uses a chain to turn the rear wheel. With this final change, the modern bicycle was born. Subsequent improvements, such as brakes, rubber tires, and lights, were added to make bicycles more comfortable to ride. By 1900, bicycle riding had become very popular with men and women of all ages. Bicycles revolutionized the way people travel; today, people worldwide ride bicycles for transportation, enjoyment, sport, and exercise.
The alteration of velocipedes made life much easier for people
neutral
id_6253
The Hollywood Film Industry This chapter examines the Golden Age of the Hollywood film studio system and explores how a particular kind of filmmaking developed during this period in US film history. It also focuses on the two key elements which influenced the emergence of the classic Hollywood studio system: the advent of sound and the business ideal of vertical integration. In addition to its historical interest, inspecting the growth of the studio system may offer clues regarding the kinds of struggles that accompany the growth of any new medium. It might, in fact, be intriguing to examine which changes occurred during the growth of the Hollywood studio, and compare those changes to contemporary struggles in which production companies are trying to define and control emerging industries, such as online film and interactive television. The shift of the industry away from silent films began during the late 1920s. Warner Bros. 1927 film The Jazz Singer was the first to feature synchronized speech, and with it came a period of turmoil for the industry. Studios now had proof that talkie films would make them money, but the financial investment this kind of filmmaking would require, from new camera equipment to new projection facilities, made the studios hesitant to invest at first. In the end, the power of cinematic sound to both move audiences and enhance the story persuaded studios that talkies were worth investing in. Overall, the use of sound in film was well-received by audiences, but there were still many technical factors to consider. Although full integration of sound into movies was complete by 1930, it would take somewhat longer for them to regain their stylistic elegance and dexterity. The camera now had to be encased in a big, clumsy, unmoveable soundproof box. In addition, actors struggled, having to direct their speech to awkwardly-hidden microphones in huge plants, telephones or even costumes. Vertical integration is the other key component in the rise of the Hollywood studio system. The major studios realized they could increase their profits by handling each stage of a films life: production (making the film), distribution (getting the film out to people) and exhibition (owning the theaters in major cities where films were shown first). Five studios, The Big Five, worked to achieve vertical integration through the late 1940s, owning vast real estate on which to construct elaborate sets. In addition, these studios set the exact terms of films release dates and patterns. Warner Bros. , Paramount, 20th Century Fox, MGM and RKO formed this exclusive club. The Little Three studios Universal, Columbia and United Artists also made pictures, but each lacked one of the crucial elements of vertical integration. Together these eight companies operated as a mature oligopoly, essentially running the entire market. During the Golden Age, the studios were remarkably consistent and stable enterprises, due in large part to long-term management heads the infamous movie moguls who ruled their kingdoms with iron fists. At MGM, Warner Bros, and Columbia, the same men ran their studios for decades. The rise of the studio system also hinges on the treatment of stars, who were constructed and exploited to suit a studios image and schedule. Actors were bound up in seven-year contracts to a single studio, and the studio boss generally held all the options. Stars could be loaned out to other production companies at any time. 
Studio bosses could also force bad roles on actors, and manipulate every single detail of stars images with their mammoth in-house publicity departments. Some have compared the Hollywood studio system to a factory, and it is useful to remember that studios were out to make money first and art second. On the other hand, studios also had to cultivate flexibility, in addition to consistent factory output. Studio heads realized that they couldnt make virtually the same film over and over again with the same cast of stars and still expect to keep turning a profit. They also had to create product differentiation. Examining how each production company tried to differentiate itself has led to loose characterizations of individual studios styles. MGM tended to put out a lot of all-star productions while Paramount excelled in comedy and Warner Bros, developed a reputation for gritty social realism. 20th Century Fox forged the musical and a great deal of prestige biographies, while Universal specialized in classic horror movies. In 1948, struggling independent movie producers and exhibitors finally triumphed in their battle against the big studios monopolistic behavior. In the United States versus Paramount federal decree of that year, the studios were ordered to give up their theaters in what is commonly referred to as divestiture opening the market to smaller producers. This, coupled with the advent of television in the 1950s, seriously compromised the studio systems influence and profits. Hence, 1930 and 1948 are generally considered bookends to Hollywoods Golden Age.
Studios had total control over how their actors were perceived by the public.
entailment
id_6254
The Hollywood Film Industry This chapter examines the Golden Age of the Hollywood film studio system and explores how a particular kind of filmmaking developed during this period in US film history. It also focuses on the two key elements which influenced the emergence of the classic Hollywood studio system: the advent of sound and the business ideal of vertical integration. In addition to its historical interest, inspecting the growth of the studio system may offer clues regarding the kinds of struggles that accompany the growth of any new medium. It might, in fact, be intriguing to examine which changes occurred during the growth of the Hollywood studio, and compare those changes to contemporary struggles in which production companies are trying to define and control emerging industries, such as online film and interactive television. The shift of the industry away from silent films began during the late 1920s. Warner Bros. 1927 film The Jazz Singer was the first to feature synchronized speech, and with it came a period of turmoil for the industry. Studios now had proof that talkie films would make them money, but the financial investment this kind of filmmaking would require, from new camera equipment to new projection facilities, made the studios hesitant to invest at first. In the end, the power of cinematic sound to both move audiences and enhance the story persuaded studios that talkies were worth investing in. Overall, the use of sound in film was well-received by audiences, but there were still many technical factors to consider. Although full integration of sound into movies was complete by 1930, it would take somewhat longer for them to regain their stylistic elegance and dexterity. The camera now had to be encased in a big, clumsy, unmoveable soundproof box. In addition, actors struggled, having to direct their speech to awkwardly-hidden microphones in huge plants, telephones or even costumes. Vertical integration is the other key component in the rise of the Hollywood studio system. The major studios realized they could increase their profits by handling each stage of a films life: production (making the film), distribution (getting the film out to people) and exhibition (owning the theaters in major cities where films were shown first). Five studios, The Big Five, worked to achieve vertical integration through the late 1940s, owning vast real estate on which to construct elaborate sets. In addition, these studios set the exact terms of films release dates and patterns. Warner Bros. , Paramount, 20th Century Fox, MGM and RKO formed this exclusive club. The Little Three studios Universal, Columbia and United Artists also made pictures, but each lacked one of the crucial elements of vertical integration. Together these eight companies operated as a mature oligopoly, essentially running the entire market. During the Golden Age, the studios were remarkably consistent and stable enterprises, due in large part to long-term management heads the infamous movie moguls who ruled their kingdoms with iron fists. At MGM, Warner Bros, and Columbia, the same men ran their studios for decades. The rise of the studio system also hinges on the treatment of stars, who were constructed and exploited to suit a studios image and schedule. Actors were bound up in seven-year contracts to a single studio, and the studio boss generally held all the options. Stars could be loaned out to other production companies at any time. 
Studio bosses could also force bad roles on actors, and manipulate every single detail of stars images with their mammoth in-house publicity departments. Some have compared the Hollywood studio system to a factory, and it is useful to remember that studios were out to make money first and art second. On the other hand, studios also had to cultivate flexibility, in addition to consistent factory output. Studio heads realized that they couldnt make virtually the same film over and over again with the same cast of stars and still expect to keep turning a profit. They also had to create product differentiation. Examining how each production company tried to differentiate itself has led to loose characterizations of individual studios styles. MGM tended to put out a lot of all-star productions while Paramount excelled in comedy and Warner Bros, developed a reputation for gritty social realism. 20th Century Fox forged the musical and a great deal of prestige biographies, while Universal specialized in classic horror movies. In 1948, struggling independent movie producers and exhibitors finally triumphed in their battle against the big studios monopolistic behavior. In the United States versus Paramount federal decree of that year, the studios were ordered to give up their theaters in what is commonly referred to as divestiture opening the market to smaller producers. This, coupled with the advent of television in the 1950s, seriously compromised the studio systems influence and profits. Hence, 1930 and 1948 are generally considered bookends to Hollywoods Golden Age.
There was intense competition between actors for contracts with the leading studios.
neutral
id_6255
The Hollywood Film Industry This chapter examines the Golden Age of the Hollywood film studio system and explores how a particular kind of filmmaking developed during this period in US film history. It also focuses on the two key elements which influenced the emergence of the classic Hollywood studio system: the advent of sound and the business ideal of vertical integration. In addition to its historical interest, inspecting the growth of the studio system may offer clues regarding the kinds of struggles that accompany the growth of any new medium. It might, in fact, be intriguing to examine which changes occurred during the growth of the Hollywood studio, and compare those changes to contemporary struggles in which production companies are trying to define and control emerging industries, such as online film and interactive television. The shift of the industry away from silent films began during the late 1920s. Warner Bros. 1927 film The Jazz Singer was the first to feature synchronized speech, and with it came a period of turmoil for the industry. Studios now had proof that talkie films would make them money, but the financial investment this kind of filmmaking would require, from new camera equipment to new projection facilities, made the studios hesitant to invest at first. In the end, the power of cinematic sound to both move audiences and enhance the story persuaded studios that talkies were worth investing in. Overall, the use of sound in film was well-received by audiences, but there were still many technical factors to consider. Although full integration of sound into movies was complete by 1930, it would take somewhat longer for them to regain their stylistic elegance and dexterity. The camera now had to be encased in a big, clumsy, unmoveable soundproof box. In addition, actors struggled, having to direct their speech to awkwardly-hidden microphones in huge plants, telephones or even costumes. Vertical integration is the other key component in the rise of the Hollywood studio system. The major studios realized they could increase their profits by handling each stage of a films life: production (making the film), distribution (getting the film out to people) and exhibition (owning the theaters in major cities where films were shown first). Five studios, The Big Five, worked to achieve vertical integration through the late 1940s, owning vast real estate on which to construct elaborate sets. In addition, these studios set the exact terms of films release dates and patterns. Warner Bros. , Paramount, 20th Century Fox, MGM and RKO formed this exclusive club. The Little Three studios Universal, Columbia and United Artists also made pictures, but each lacked one of the crucial elements of vertical integration. Together these eight companies operated as a mature oligopoly, essentially running the entire market. During the Golden Age, the studios were remarkably consistent and stable enterprises, due in large part to long-term management heads the infamous movie moguls who ruled their kingdoms with iron fists. At MGM, Warner Bros, and Columbia, the same men ran their studios for decades. The rise of the studio system also hinges on the treatment of stars, who were constructed and exploited to suit a studios image and schedule. Actors were bound up in seven-year contracts to a single studio, and the studio boss generally held all the options. Stars could be loaned out to other production companies at any time. 
Studio bosses could also force bad roles on actors, and manipulate every single detail of stars images with their mammoth in-house publicity departments. Some have compared the Hollywood studio system to a factory, and it is useful to remember that studios were out to make money first and art second. On the other hand, studios also had to cultivate flexibility, in addition to consistent factory output. Studio heads realized that they couldnt make virtually the same film over and over again with the same cast of stars and still expect to keep turning a profit. They also had to create product differentiation. Examining how each production company tried to differentiate itself has led to loose characterizations of individual studios styles. MGM tended to put out a lot of all-star productions while Paramount excelled in comedy and Warner Bros, developed a reputation for gritty social realism. 20th Century Fox forged the musical and a great deal of prestige biographies, while Universal specialized in classic horror movies. In 1948, struggling independent movie producers and exhibitors finally triumphed in their battle against the big studios monopolistic behavior. In the United States versus Paramount federal decree of that year, the studios were ordered to give up their theaters in what is commonly referred to as divestiture opening the market to smaller producers. This, coupled with the advent of television in the 1950s, seriously compromised the studio systems influence and profits. Hence, 1930 and 1948 are generally considered bookends to Hollywoods Golden Age.
There were some drawbacks to recording movie actors voices in the early 1930s.
entailment
id_6256
The Hollywood Film Industry This chapter examines the Golden Age of the Hollywood film studio system and explores how a particular kind of filmmaking developed during this period in US film history. It also focuses on the two key elements which influenced the emergence of the classic Hollywood studio system: the advent of sound and the business ideal of vertical integration. In addition to its historical interest, inspecting the growth of the studio system may offer clues regarding the kinds of struggles that accompany the growth of any new medium. It might, in fact, be intriguing to examine which changes occurred during the growth of the Hollywood studio, and compare those changes to contemporary struggles in which production companies are trying to define and control emerging industries, such as online film and interactive television. The shift of the industry away from silent films began during the late 1920s. Warner Bros. 1927 film The Jazz Singer was the first to feature synchronized speech, and with it came a period of turmoil for the industry. Studios now had proof that talkie films would make them money, but the financial investment this kind of filmmaking would require, from new camera equipment to new projection facilities, made the studios hesitant to invest at first. In the end, the power of cinematic sound to both move audiences and enhance the story persuaded studios that talkies were worth investing in. Overall, the use of sound in film was well-received by audiences, but there were still many technical factors to consider. Although full integration of sound into movies was complete by 1930, it would take somewhat longer for them to regain their stylistic elegance and dexterity. The camera now had to be encased in a big, clumsy, unmoveable soundproof box. In addition, actors struggled, having to direct their speech to awkwardly-hidden microphones in huge plants, telephones or even costumes. Vertical integration is the other key component in the rise of the Hollywood studio system. The major studios realized they could increase their profits by handling each stage of a films life: production (making the film), distribution (getting the film out to people) and exhibition (owning the theaters in major cities where films were shown first). Five studios, The Big Five, worked to achieve vertical integration through the late 1940s, owning vast real estate on which to construct elaborate sets. In addition, these studios set the exact terms of films release dates and patterns. Warner Bros. , Paramount, 20th Century Fox, MGM and RKO formed this exclusive club. The Little Three studios Universal, Columbia and United Artists also made pictures, but each lacked one of the crucial elements of vertical integration. Together these eight companies operated as a mature oligopoly, essentially running the entire market. During the Golden Age, the studios were remarkably consistent and stable enterprises, due in large part to long-term management heads the infamous movie moguls who ruled their kingdoms with iron fists. At MGM, Warner Bros, and Columbia, the same men ran their studios for decades. The rise of the studio system also hinges on the treatment of stars, who were constructed and exploited to suit a studios image and schedule. Actors were bound up in seven-year contracts to a single studio, and the studio boss generally held all the options. Stars could be loaned out to other production companies at any time. 
Studio bosses could also force bad roles on actors, and manipulate every single detail of stars images with their mammoth in-house publicity departments. Some have compared the Hollywood studio system to a factory, and it is useful to remember that studios were out to make money first and art second. On the other hand, studios also had to cultivate flexibility, in addition to consistent factory output. Studio heads realized that they couldnt make virtually the same film over and over again with the same cast of stars and still expect to keep turning a profit. They also had to create product differentiation. Examining how each production company tried to differentiate itself has led to loose characterizations of individual studios styles. MGM tended to put out a lot of all-star productions while Paramount excelled in comedy and Warner Bros, developed a reputation for gritty social realism. 20th Century Fox forged the musical and a great deal of prestige biographies, while Universal specialized in classic horror movies. In 1948, struggling independent movie producers and exhibitors finally triumphed in their battle against the big studios monopolistic behavior. In the United States versus Paramount federal decree of that year, the studios were ordered to give up their theaters in what is commonly referred to as divestiture opening the market to smaller producers. This, coupled with the advent of television in the 1950s, seriously compromised the studio systems influence and profits. Hence, 1930 and 1948 are generally considered bookends to Hollywoods Golden Age.
After The Jazz Singer came out, other studios immediately began making movies with synchronized sound.
contradiction
id_6257
The Honourable Society of the Middle Temple was established in the 14th century. It is situated on London Embankment, on the original site of the Knights Templar. The Society itself has a rich history, containing members such as Sir Walter Raleigh. The building also boasts historical visitors such as Queen Elizabeth 1st and Edward VII. More recently, the Middle Temple building has also been home to some famous guests, with scenes from the Harry Potter films taking place there. The society itself is now a legal education establishment, playing a key role in the education of those wishing to become barristers in England and Wales.
The Honourable Society of the Middle Temple has been visited by Queen Elizabeth 1st and Edward VII.
entailment
id_6258
The Honourable Society of the Middle Temple was established in the 14th century. It is situated on London Embankment, on the original site of the Knights Templar. The Society itself has a rich history, containing members such as Sir Walter Raleigh. The building also boasts historical visitors such as Queen Elizabeth 1st and Edward VII. More recently, the Middle Temple building has also been home to some famous guests, with scenes from the Harry Potter films taking place there. The society itself is now a legal education establishment, playing a key role in the education of those wishing to become barristers in England and Wales.
The Honourable Society of the Middle Temple is a legal education establishment.
entailment
id_6259
The Honourable Society of the Middle Temple was established in the 14th century. It is situated on London Embankment, on the original site of the Knights Templar. The Society itself has a rich history, containing members such as Sir Walter Raleigh. The building also boasts historical visitors such as Queen Elizabeth 1st and Edward VII. More recently, the Middle Temple building has also been home to some famous guests, with scenes from the Harry Potter films taking place there. The society itself is now a legal education establishment, playing a key role in the education of those wishing to become barristers in England and Wales.
The Honourable Society of the Middle Temple was purposely built near the London Embankment Underground station.
contradiction
id_6260
The Honourable Society of the Middle Temple was established in the 14th century. It is situated on London Embankment, on the original site of the Knights Templar. The Society itself has a rich history, containing members such as Sir Walter Raleigh. The building also boasts historical visitors such as Queen Elizabeth 1st and Edward VII. More recently, the Middle Temple building has also been home to some famous guests, with scenes from the Harry Potter films taking place there. The society itself is now a legal education establishment, playing a key role in the education of those wishing to become barristers in England and Wales.
The Honourable Society of the Middle Temple has had well known members such as Sir Walter Raleigh.
entailment
id_6261
The Impact of Wilderness Tourism. The market for tourism in remote areas is booming as never before. Countries all across the world are actively promoting their wilderness regions such as mountains, Arctic lands, deserts, small islands and wetland to high-spending tourists. The attraction of these areas is obvious: by definition, wilderness tourism requires little or no initial investment. But that does not mean that there is no cost. As the 1992 United Nations Conference on Environment and Development recognized, these regions are fragile (i. e. highly vulnerable to abnormal pressures) not just in terms of their ecology, but also in terms of the culture of their inhabitants. The three most significant types of fragile environment in these respects, and also in terms of the proportion of the Earths surface they cover, are deserts, mountains and Arctic areas. An important characteristic is their marked seasonality, with harsh conditions prevailing for many months each year. Consequently, most human activities, including tourism, are limited to quite clearly defined parts of the year. Tourists are drawn to these regions by their natural landscape beauty and the unique cultures of their indigenous people. And poor governments in these isolated areas have welcomed the new breed of adventure tourist, grateful for the hard currency they bring. For several years now, tourism has been the prime source of foreign exchange in Nepal and Bhutan. Tourism is also a key element in the economies of Arctic zones such as Lapland and Alaska and in desert areas such as Ayers Rock in Australia and Arizonas Monument Valley. Once a location is established as a main tourist destination, the effects on the local community are profound. When hill-farmers, for example, can make more money in a few weeks working as porters for foreign trekkers than they can in a year working in their fields, it is not surprising that many of them give up their farm-work, which is thus left to other members of the family. In some hill-regions, this has led to a serious decline in farm output and a change in the local diet, because there is insufficient labour to maintain terraces and irrigation systems and tend to crops. The result has been that many people in these regions have turned to outside supplies of rice and other foods. In Arctic and desert societies, year-round survival has traditionally depended on hunting animals and fish and collecting fruit over a relatively short season. However, as some inhabitants become involved in tourism, they no longer have time to collect wild food; this has led to increasing dependence on bought food and stores. Tourism is not always the culprit behind such changes. All kinds of wage labour, or government handouts, tend to undermine traditional survival systems. Whatever the cause, the dilemma is always the same: what happens if these new, external sources of income dry up? The physical impact of visitors is another serious problem associated with the growth in adventure tourism. Much attention has focused on erosion along major trails, but perhaps more important are the deforestation and impacts on water supplies arising from the need to provide tourists with cooked food and hot showers. In both mountains and deserts, slow-growing trees are often the main sources of fuel and water supplies may be limited or vulnerable to degradation through heavy use. Stories about the problems of tourism have become legion in the last few years. Yet it does not have to be a problem. 
Although tourism inevitably affects the region in which it takes place, the costs to these fragile environments and their local cultures can be minimized. Indeed, it can even be a vehicle for reinvigorating local cultures, as has happened with the Sherpas of Nepals Khumbu Valley and in some Alpine villages. And a growing number of adventure tourism operators are trying to ensure that their activities benefit the local population and environment over the long term. In the Swiss Alps, communities have decided that their future depends on integrating tourism more effectively with the local economy. Local concern about the rising number of second home developments in the Swiss Pays Enhaut resulted in limits being imposed on their growth. There has also been a renaissance in communal cheese production in the area, providing the locals with a reliable source of income that does not depend on outside visitors. Many of the Arctic tourist destinations have been exploited by outside companies, who employ transient workers and repatriate most of the profits to their home base. But some Arctic communities are now operating tour businesses themselves, thereby ensuring that the benefits accrue locally. For instance, a native corporation in Alaska, employing local people, is running an air tour from Anchorage to Kotzebue, where tourists eat Arctic food, walk on the tundra and watch local musicians and dancers. Native people in the desert regions of the American Southwest have followed similar strategies, encouraging tourists to visit their pueblos and reservations to purchase high-quality handicrafts and artwork. The Acoma and San lldefonso pueblos have established highly profitable pottery businesses, while the Navajo and Hopi groups have been similarly successful with jewellery. Too many people living in fragile environments have lost control over their economies, their culture and their environment when tourism has penetrated their homelands. Merely restricting tourism cannot be the solution to the imbalance, because peoples desire to see new places will not just disappear. Instead, communities in fragile environments must achieve greater control over tourism ventures in their regions, in order to balance their needs and aspirations with the demands of tourism. A growing number of communities are demonstrating that, with firm communal decision-making, this is possible. The critical question now is whether this can become the norm, rather than the exception.
Traditional food-gathering in desert societies was distributed evenly over the year.
contradiction
id_6262
The Impact of Wilderness Tourism. The market for tourism in remote areas is booming as never before. Countries all across the world are actively promoting their wilderness regions such as mountains, Arctic lands, deserts, small islands and wetland to high-spending tourists. The attraction of these areas is obvious: by definition, wilderness tourism requires little or no initial investment. But that does not mean that there is no cost. As the 1992 United Nations Conference on Environment and Development recognized, these regions are fragile (i. e. highly vulnerable to abnormal pressures) not just in terms of their ecology, but also in terms of the culture of their inhabitants. The three most significant types of fragile environment in these respects, and also in terms of the proportion of the Earths surface they cover, are deserts, mountains and Arctic areas. An important characteristic is their marked seasonality, with harsh conditions prevailing for many months each year. Consequently, most human activities, including tourism, are limited to quite clearly defined parts of the year. Tourists are drawn to these regions by their natural landscape beauty and the unique cultures of their indigenous people. And poor governments in these isolated areas have welcomed the new breed of adventure tourist, grateful for the hard currency they bring. For several years now, tourism has been the prime source of foreign exchange in Nepal and Bhutan. Tourism is also a key element in the economies of Arctic zones such as Lapland and Alaska and in desert areas such as Ayers Rock in Australia and Arizonas Monument Valley. Once a location is established as a main tourist destination, the effects on the local community are profound. When hill-farmers, for example, can make more money in a few weeks working as porters for foreign trekkers than they can in a year working in their fields, it is not surprising that many of them give up their farm-work, which is thus left to other members of the family. In some hill-regions, this has led to a serious decline in farm output and a change in the local diet, because there is insufficient labour to maintain terraces and irrigation systems and tend to crops. The result has been that many people in these regions have turned to outside supplies of rice and other foods. In Arctic and desert societies, year-round survival has traditionally depended on hunting animals and fish and collecting fruit over a relatively short season. However, as some inhabitants become involved in tourism, they no longer have time to collect wild food; this has led to increasing dependence on bought food and stores. Tourism is not always the culprit behind such changes. All kinds of wage labour, or government handouts, tend to undermine traditional survival systems. Whatever the cause, the dilemma is always the same: what happens if these new, external sources of income dry up? The physical impact of visitors is another serious problem associated with the growth in adventure tourism. Much attention has focused on erosion along major trails, but perhaps more important are the deforestation and impacts on water supplies arising from the need to provide tourists with cooked food and hot showers. In both mountains and deserts, slow-growing trees are often the main sources of fuel and water supplies may be limited or vulnerable to degradation through heavy use. Stories about the problems of tourism have become legion in the last few years. Yet it does not have to be a problem. 
Although tourism inevitably affects the region in which it takes place, the costs to these fragile environments and their local cultures can be minimized. Indeed, it can even be a vehicle for reinvigorating local cultures, as has happened with the Sherpas of Nepals Khumbu Valley and in some Alpine villages. And a growing number of adventure tourism operators are trying to ensure that their activities benefit the local population and environment over the long term. In the Swiss Alps, communities have decided that their future depends on integrating tourism more effectively with the local economy. Local concern about the rising number of second home developments in the Swiss Pays Enhaut resulted in limits being imposed on their growth. There has also been a renaissance in communal cheese production in the area, providing the locals with a reliable source of income that does not depend on outside visitors. Many of the Arctic tourist destinations have been exploited by outside companies, who employ transient workers and repatriate most of the profits to their home base. But some Arctic communities are now operating tour businesses themselves, thereby ensuring that the benefits accrue locally. For instance, a native corporation in Alaska, employing local people, is running an air tour from Anchorage to Kotzebue, where tourists eat Arctic food, walk on the tundra and watch local musicians and dancers. Native people in the desert regions of the American Southwest have followed similar strategies, encouraging tourists to visit their pueblos and reservations to purchase high-quality handicrafts and artwork. The Acoma and San lldefonso pueblos have established highly profitable pottery businesses, while the Navajo and Hopi groups have been similarly successful with jewellery. Too many people living in fragile environments have lost control over their economies, their culture and their environment when tourism has penetrated their homelands. Merely restricting tourism cannot be the solution to the imbalance, because peoples desire to see new places will not just disappear. Instead, communities in fragile environments must achieve greater control over tourism ventures in their regions, in order to balance their needs and aspirations with the demands of tourism. A growing number of communities are demonstrating that, with firm communal decision-making, this is possible. The critical question now is whether this can become the norm, rather than the exception.
The spread of tourism in certain hill-regions has resulted in a fall in the amount of food produced locally.
entailment
id_6263
The Impact of Wilderness Tourism. The market for tourism in remote areas is booming as never before. Countries all across the world are actively promoting their wilderness regions such as mountains, Arctic lands, deserts, small islands and wetland to high-spending tourists. The attraction of these areas is obvious: by definition, wilderness tourism requires little or no initial investment. But that does not mean that there is no cost. As the 1992 United Nations Conference on Environment and Development recognized, these regions are fragile (i. e. highly vulnerable to abnormal pressures) not just in terms of their ecology, but also in terms of the culture of their inhabitants. The three most significant types of fragile environment in these respects, and also in terms of the proportion of the Earths surface they cover, are deserts, mountains and Arctic areas. An important characteristic is their marked seasonality, with harsh conditions prevailing for many months each year. Consequently, most human activities, including tourism, are limited to quite clearly defined parts of the year. Tourists are drawn to these regions by their natural landscape beauty and the unique cultures of their indigenous people. And poor governments in these isolated areas have welcomed the new breed of adventure tourist, grateful for the hard currency they bring. For several years now, tourism has been the prime source of foreign exchange in Nepal and Bhutan. Tourism is also a key element in the economies of Arctic zones such as Lapland and Alaska and in desert areas such as Ayers Rock in Australia and Arizonas Monument Valley. Once a location is established as a main tourist destination, the effects on the local community are profound. When hill-farmers, for example, can make more money in a few weeks working as porters for foreign trekkers than they can in a year working in their fields, it is not surprising that many of them give up their farm-work, which is thus left to other members of the family. In some hill-regions, this has led to a serious decline in farm output and a change in the local diet, because there is insufficient labour to maintain terraces and irrigation systems and tend to crops. The result has been that many people in these regions have turned to outside supplies of rice and other foods. In Arctic and desert societies, year-round survival has traditionally depended on hunting animals and fish and collecting fruit over a relatively short season. However, as some inhabitants become involved in tourism, they no longer have time to collect wild food; this has led to increasing dependence on bought food and stores. Tourism is not always the culprit behind such changes. All kinds of wage labour, or government handouts, tend to undermine traditional survival systems. Whatever the cause, the dilemma is always the same: what happens if these new, external sources of income dry up? The physical impact of visitors is another serious problem associated with the growth in adventure tourism. Much attention has focused on erosion along major trails, but perhaps more important are the deforestation and impacts on water supplies arising from the need to provide tourists with cooked food and hot showers. In both mountains and deserts, slow-growing trees are often the main sources of fuel and water supplies may be limited or vulnerable to degradation through heavy use. Stories about the problems of tourism have become legion in the last few years. Yet it does not have to be a problem. 
Although tourism inevitably affects the region in which it takes place, the costs to these fragile environments and their local cultures can be minimized. Indeed, it can even be a vehicle for reinvigorating local cultures, as has happened with the Sherpas of Nepals Khumbu Valley and in some Alpine villages. And a growing number of adventure tourism operators are trying to ensure that their activities benefit the local population and environment over the long term. In the Swiss Alps, communities have decided that their future depends on integrating tourism more effectively with the local economy. Local concern about the rising number of second home developments in the Swiss Pays Enhaut resulted in limits being imposed on their growth. There has also been a renaissance in communal cheese production in the area, providing the locals with a reliable source of income that does not depend on outside visitors. Many of the Arctic tourist destinations have been exploited by outside companies, who employ transient workers and repatriate most of the profits to their home base. But some Arctic communities are now operating tour businesses themselves, thereby ensuring that the benefits accrue locally. For instance, a native corporation in Alaska, employing local people, is running an air tour from Anchorage to Kotzebue, where tourists eat Arctic food, walk on the tundra and watch local musicians and dancers. Native people in the desert regions of the American Southwest have followed similar strategies, encouraging tourists to visit their pueblos and reservations to purchase high-quality handicrafts and artwork. The Acoma and San lldefonso pueblos have established highly profitable pottery businesses, while the Navajo and Hopi groups have been similarly successful with jewellery. Too many people living in fragile environments have lost control over their economies, their culture and their environment when tourism has penetrated their homelands. Merely restricting tourism cannot be the solution to the imbalance, because peoples desire to see new places will not just disappear. Instead, communities in fragile environments must achieve greater control over tourism ventures in their regions, in order to balance their needs and aspirations with the demands of tourism. A growing number of communities are demonstrating that, with firm communal decision-making, this is possible. The critical question now is whether this can become the norm, rather than the exception.
Government handouts do more damage than tourism does to traditional patterns of food-gathering.
neutral
id_6264
The Impact of Wilderness Tourism. The market for tourism in remote areas is booming as never before. Countries all across the world are actively promoting their wilderness regions such as mountains, Arctic lands, deserts, small islands and wetland to high-spending tourists. The attraction of these areas is obvious: by definition, wilderness tourism requires little or no initial investment. But that does not mean that there is no cost. As the 1992 United Nations Conference on Environment and Development recognized, these regions are fragile (i. e. highly vulnerable to abnormal pressures) not just in terms of their ecology, but also in terms of the culture of their inhabitants. The three most significant types of fragile environment in these respects, and also in terms of the proportion of the Earths surface they cover, are deserts, mountains and Arctic areas. An important characteristic is their marked seasonality, with harsh conditions prevailing for many months each year. Consequently, most human activities, including tourism, are limited to quite clearly defined parts of the year. Tourists are drawn to these regions by their natural landscape beauty and the unique cultures of their indigenous people. And poor governments in these isolated areas have welcomed the new breed of adventure tourist, grateful for the hard currency they bring. For several years now, tourism has been the prime source of foreign exchange in Nepal and Bhutan. Tourism is also a key element in the economies of Arctic zones such as Lapland and Alaska and in desert areas such as Ayers Rock in Australia and Arizonas Monument Valley. Once a location is established as a main tourist destination, the effects on the local community are profound. When hill-farmers, for example, can make more money in a few weeks working as porters for foreign trekkers than they can in a year working in their fields, it is not surprising that many of them give up their farm-work, which is thus left to other members of the family. In some hill-regions, this has led to a serious decline in farm output and a change in the local diet, because there is insufficient labour to maintain terraces and irrigation systems and tend to crops. The result has been that many people in these regions have turned to outside supplies of rice and other foods. In Arctic and desert societies, year-round survival has traditionally depended on hunting animals and fish and collecting fruit over a relatively short season. However, as some inhabitants become involved in tourism, they no longer have time to collect wild food; this has led to increasing dependence on bought food and stores. Tourism is not always the culprit behind such changes. All kinds of wage labour, or government handouts, tend to undermine traditional survival systems. Whatever the cause, the dilemma is always the same: what happens if these new, external sources of income dry up? The physical impact of visitors is another serious problem associated with the growth in adventure tourism. Much attention has focused on erosion along major trails, but perhaps more important are the deforestation and impacts on water supplies arising from the need to provide tourists with cooked food and hot showers. In both mountains and deserts, slow-growing trees are often the main sources of fuel and water supplies may be limited or vulnerable to degradation through heavy use. Stories about the problems of tourism have become legion in the last few years. Yet it does not have to be a problem. 
Although tourism inevitably affects the region in which it takes place, the costs to these fragile environments and their local cultures can be minimized. Indeed, it can even be a vehicle for reinvigorating local cultures, as has happened with the Sherpas of Nepals Khumbu Valley and in some Alpine villages. And a growing number of adventure tourism operators are trying to ensure that their activities benefit the local population and environment over the long term. In the Swiss Alps, communities have decided that their future depends on integrating tourism more effectively with the local economy. Local concern about the rising number of second home developments in the Swiss Pays Enhaut resulted in limits being imposed on their growth. There has also been a renaissance in communal cheese production in the area, providing the locals with a reliable source of income that does not depend on outside visitors. Many of the Arctic tourist destinations have been exploited by outside companies, who employ transient workers and repatriate most of the profits to their home base. But some Arctic communities are now operating tour businesses themselves, thereby ensuring that the benefits accrue locally. For instance, a native corporation in Alaska, employing local people, is running an air tour from Anchorage to Kotzebue, where tourists eat Arctic food, walk on the tundra and watch local musicians and dancers. Native people in the desert regions of the American Southwest have followed similar strategies, encouraging tourists to visit their pueblos and reservations to purchase high-quality handicrafts and artwork. The Acoma and San lldefonso pueblos have established highly profitable pottery businesses, while the Navajo and Hopi groups have been similarly successful with jewellery. Too many people living in fragile environments have lost control over their economies, their culture and their environment when tourism has penetrated their homelands. Merely restricting tourism cannot be the solution to the imbalance, because peoples desire to see new places will not just disappear. Instead, communities in fragile environments must achieve greater control over tourism ventures in their regions, in order to balance their needs and aspirations with the demands of tourism. A growing number of communities are demonstrating that, with firm communal decision-making, this is possible. The critical question now is whether this can become the norm, rather than the exception.
Wilderness tourism operates throughout the year in fragile areas.
contradiction
id_6265
The Impact of Wilderness Tourism. The market for tourism in remote areas is booming as never before. Countries all across the world are actively promoting their wilderness regions such as mountains, Arctic lands, deserts, small islands and wetland to high-spending tourists. The attraction of these areas is obvious: by definition, wilderness tourism requires little or no initial investment. But that does not mean that there is no cost. As the 1992 United Nations Conference on Environment and Development recognized, these regions are fragile (i. e. highly vulnerable to abnormal pressures) not just in terms of their ecology, but also in terms of the culture of their inhabitants. The three most significant types of fragile environment in these respects, and also in terms of the proportion of the Earths surface they cover, are deserts, mountains and Arctic areas. An important characteristic is their marked seasonality, with harsh conditions prevailing for many months each year. Consequently, most human activities, including tourism, are limited to quite clearly defined parts of the year. Tourists are drawn to these regions by their natural landscape beauty and the unique cultures of their indigenous people. And poor governments in these isolated areas have welcomed the new breed of adventure tourist, grateful for the hard currency they bring. For several years now, tourism has been the prime source of foreign exchange in Nepal and Bhutan. Tourism is also a key element in the economies of Arctic zones such as Lapland and Alaska and in desert areas such as Ayers Rock in Australia and Arizonas Monument Valley. Once a location is established as a main tourist destination, the effects on the local community are profound. When hill-farmers, for example, can make more money in a few weeks working as porters for foreign trekkers than they can in a year working in their fields, it is not surprising that many of them give up their farm-work, which is thus left to other members of the family. In some hill-regions, this has led to a serious decline in farm output and a change in the local diet, because there is insufficient labour to maintain terraces and irrigation systems and tend to crops. The result has been that many people in these regions have turned to outside supplies of rice and other foods. In Arctic and desert societies, year-round survival has traditionally depended on hunting animals and fish and collecting fruit over a relatively short season. However, as some inhabitants become involved in tourism, they no longer have time to collect wild food; this has led to increasing dependence on bought food and stores. Tourism is not always the culprit behind such changes. All kinds of wage labour, or government handouts, tend to undermine traditional survival systems. Whatever the cause, the dilemma is always the same: what happens if these new, external sources of income dry up? The physical impact of visitors is another serious problem associated with the growth in adventure tourism. Much attention has focused on erosion along major trails, but perhaps more important are the deforestation and impacts on water supplies arising from the need to provide tourists with cooked food and hot showers. In both mountains and deserts, slow-growing trees are often the main sources of fuel and water supplies may be limited or vulnerable to degradation through heavy use. Stories about the problems of tourism have become legion in the last few years. Yet it does not have to be a problem. 
Although tourism inevitably affects the region in which it takes place, the costs to these fragile environments and their local cultures can be minimized. Indeed, it can even be a vehicle for reinvigorating local cultures, as has happened with the Sherpas of Nepals Khumbu Valley and in some Alpine villages. And a growing number of adventure tourism operators are trying to ensure that their activities benefit the local population and environment over the long term. In the Swiss Alps, communities have decided that their future depends on integrating tourism more effectively with the local economy. Local concern about the rising number of second home developments in the Swiss Pays Enhaut resulted in limits being imposed on their growth. There has also been a renaissance in communal cheese production in the area, providing the locals with a reliable source of income that does not depend on outside visitors. Many of the Arctic tourist destinations have been exploited by outside companies, who employ transient workers and repatriate most of the profits to their home base. But some Arctic communities are now operating tour businesses themselves, thereby ensuring that the benefits accrue locally. For instance, a native corporation in Alaska, employing local people, is running an air tour from Anchorage to Kotzebue, where tourists eat Arctic food, walk on the tundra and watch local musicians and dancers. Native people in the desert regions of the American Southwest have followed similar strategies, encouraging tourists to visit their pueblos and reservations to purchase high-quality handicrafts and artwork. The Acoma and San lldefonso pueblos have established highly profitable pottery businesses, while the Navajo and Hopi groups have been similarly successful with jewellery. Too many people living in fragile environments have lost control over their economies, their culture and their environment when tourism has penetrated their homelands. Merely restricting tourism cannot be the solution to the imbalance, because peoples desire to see new places will not just disappear. Instead, communities in fragile environments must achieve greater control over tourism ventures in their regions, in order to balance their needs and aspirations with the demands of tourism. A growing number of communities are demonstrating that, with firm communal decision-making, this is possible. The critical question now is whether this can become the norm, rather than the exception.
Deserts, mountains and Arctic regions are examples of environments that are both ecologically and culturally fragile.
entailment
id_6266
The Impact of Wilderness Tourism. The market for tourism in remote areas is booming as never before. Countries all across the world are actively promoting their wilderness regions such as mountains, Arctic lands, deserts, small islands and wetland to high-spending tourists. The attraction of these areas is obvious: by definition, wilderness tourism requires little or no initial investment. But that does not mean that there is no cost. As the 1992 United Nations Conference on Environment and Development recognized, these regions are fragile (i. e. highly vulnerable to abnormal pressures) not just in terms of their ecology, but also in terms of the culture of their inhabitants. The three most significant types of fragile environment in these respects, and also in terms of the proportion of the Earths surface they cover, are deserts, mountains and Arctic areas. An important characteristic is their marked seasonality, with harsh conditions prevailing for many months each year. Consequently, most human activities, including tourism, are limited to quite clearly defined parts of the year. Tourists are drawn to these regions by their natural landscape beauty and the unique cultures of their indigenous people. And poor governments in these isolated areas have welcomed the new breed of adventure tourist, grateful for the hard currency they bring. For several years now, tourism has been the prime source of foreign exchange in Nepal and Bhutan. Tourism is also a key element in the economies of Arctic zones such as Lapland and Alaska and in desert areas such as Ayers Rock in Australia and Arizonas Monument Valley. Once a location is established as a main tourist destination, the effects on the local community are profound. When hill-farmers, for example, can make more money in a few weeks working as porters for foreign trekkers than they can in a year working in their fields, it is not surprising that many of them give up their farm-work, which is thus left to other members of the family. In some hill-regions, this has led to a serious decline in farm output and a change in the local diet, because there is insufficient labour to maintain terraces and irrigation systems and tend to crops. The result has been that many people in these regions have turned to outside supplies of rice and other foods. In Arctic and desert societies, year-round survival has traditionally depended on hunting animals and fish and collecting fruit over a relatively short season. However, as some inhabitants become involved in tourism, they no longer have time to collect wild food; this has led to increasing dependence on bought food and stores. Tourism is not always the culprit behind such changes. All kinds of wage labour, or government handouts, tend to undermine traditional survival systems. Whatever the cause, the dilemma is always the same: what happens if these new, external sources of income dry up? The physical impact of visitors is another serious problem associated with the growth in adventure tourism. Much attention has focused on erosion along major trails, but perhaps more important are the deforestation and impacts on water supplies arising from the need to provide tourists with cooked food and hot showers. In both mountains and deserts, slow-growing trees are often the main sources of fuel and water supplies may be limited or vulnerable to degradation through heavy use. Stories about the problems of tourism have become legion in the last few years. Yet it does not have to be a problem. 
Although tourism inevitably affects the region in which it takes place, the costs to these fragile environments and their local cultures can be minimized. Indeed, it can even be a vehicle for reinvigorating local cultures, as has happened with the Sherpas of Nepals Khumbu Valley and in some Alpine villages. And a growing number of adventure tourism operators are trying to ensure that their activities benefit the local population and environment over the long term. In the Swiss Alps, communities have decided that their future depends on integrating tourism more effectively with the local economy. Local concern about the rising number of second home developments in the Swiss Pays Enhaut resulted in limits being imposed on their growth. There has also been a renaissance in communal cheese production in the area, providing the locals with a reliable source of income that does not depend on outside visitors. Many of the Arctic tourist destinations have been exploited by outside companies, who employ transient workers and repatriate most of the profits to their home base. But some Arctic communities are now operating tour businesses themselves, thereby ensuring that the benefits accrue locally. For instance, a native corporation in Alaska, employing local people, is running an air tour from Anchorage to Kotzebue, where tourists eat Arctic food, walk on the tundra and watch local musicians and dancers. Native people in the desert regions of the American Southwest have followed similar strategies, encouraging tourists to visit their pueblos and reservations to purchase high-quality handicrafts and artwork. The Acoma and San lldefonso pueblos have established highly profitable pottery businesses, while the Navajo and Hopi groups have been similarly successful with jewellery. Too many people living in fragile environments have lost control over their economies, their culture and their environment when tourism has penetrated their homelands. Merely restricting tourism cannot be the solution to the imbalance, because peoples desire to see new places will not just disappear. Instead, communities in fragile environments must achieve greater control over tourism ventures in their regions, in order to balance their needs and aspirations with the demands of tourism. A growing number of communities are demonstrating that, with firm communal decision-making, this is possible. The critical question now is whether this can become the norm, rather than the exception.
The low financial cost of setting up wilderness tourism makes it attractive to many countries.
entailment
id_6267
The Impact of the Potato. Jeff Chapman relates the story of history the most important vegetable. The potato was first cultivated in South America between three and seven thousand years ago, though scientists believe they may have grown wild in the region as long as 13,000 years ago. The genetic patterns of potato distribution indicate that the potato probably originated in the mountainous west-central region of the continent. Early Spanish chroniclers who misused the Indian word batata (sweet potato) as the name for the potato noted the importance of the tuber to the Incan Empire. The Incas had learned to preserve the potato for storage by dehydrating and mashing potatoes into a substance called Chuchu could be stored in a room for up to 10 years, providing excellent insurance against possible crop failures. As well as using the food as a staple crop, the Incas thought potatoes made childbirth easier and used it to treat injuries. The Spanish conquistadors first encountered the potato when they arrived in Peru in 1532 in search of gold, and noted Inca miners eating chuchu. At the time the Spaniards failed to realize that the potato represented a far more important treasure than either silver or gold, but they did gradually begin to use potatoes as basic rations aboard their ships. After the arrival of the potato in Spain in 1570, a few Spanish farmers began to cultivate them on a small scale, mostly as food for livestock. Throughout Europe, potatoes were regarded with suspicion, distaste and fear. Generally considered to be unfit for human consumption, they were used only as animal fodder and sustenance for the starving. In northern Europe, potatoes were primarily grown in botanical gardens as an exotic novelty. Even peasants refused to eat from a plant that produced ugly, misshapen tubers and that had come from a heathen civilization. Some felt that the potato plants resemblance to plants in the nightshade family hinted that it was the creation of witches or devils. In meat-loving England, farmers and urban workers regarded potatoes with extreme distaste. In 1662, the Royal Society recommended the cultivation of the tuber to the English government and the nation, but this recommendation had little impact. Potatoes did not become a staple until, during the food shortages associated with the Revolutionary Wars, the English government began to officially encourage potato cultivation. In 1795, the Board of Agriculture issued a pamphlet entitled Hints Respecting the Culture and Use of Potatoes; this was followed shortly by pro-potato editorials and potato recipes in The Times. Gradually, the lower classes began to follow the lead of the upper classes. A similar pattern emerged across the English Channel in the Netherlands, Belgium and France. While the potato slowly gained ground in eastern France (where it was often the only crop remaining after marauding soldiers plundered wheat fields and vineyards), it did not achieve widespread acceptance until the late 1700s. The peasants remained suspicious, in spite of a 1771 paper from the Facult de Paris testifying that the potato was not harmful but beneficial. The people began to overcome their distaste when the plant received the royal seal of approval: Louis XVI began to sport a potato flower in his buttonhole, and Marie-Antoinette wore the purple potato blossom in her hair. 
Frederick the Great of Prussia saw the potatos potential to help feed his nation and lower the price of bread, but faced the challenge of overcoming the peoples prejudice against the plant. When he issued a 1774 order for his subjects to grow potatoes as protection against famine, the town of Kolberg replied: The things have neither smell nor taste, not even the dogs will eat them, so what use are they to us? Trying a less direct approach to encourage his subjects to begin planting potatoes, Frederick used a bit of reverse psychology: he planted a royal field of potato plants and stationed a heavy guard to protect this field from thieves. Nearby peasants naturally assumed that anything worth guarding was worth stealing, and so snuck into the field and snatched the plants for their home gardens. Of course, this was entirely in line with Fredericks wishes. Historians debate whether the potato was primarily a cause or an effect of the huge population boom in industrial-era England and Wales. Prior to 1800, the English diet had consisted primarily of meat, supplemented by bread, butter and cheese. Few vegetables were consumed, most vegetables being regarded as nutritionally worthless and potentially harmful. This view began to change gradually in the late 1700s. The Industrial Revolution was drawing an ever increasing percentage of the populace into crowded cities, where only the richest could afford homes with ovens or coal storage rooms, and people were working 12-16 hour days which left them with little time or energy to prepare food. High yielding, easily prepared potato crops were the obvious solution to Englands food problems. Whereas most of their neighbors regarded the potato with suspicion and had to be persuaded to use it by the upper classes, the Irish peasantry embraced the tuber more passionately than anyone since the Incas. The potato was well suited to the Irish the soil and climate, and its high yield suited the most important concern of most Irish farmers: to feed their families. The most dramatic example of the potatos potential to alter population patterns occurred in Ireland, where the potato had become a staple by 1800. The Irish population doubled to eight million between 1780 and 1841, this without any significant expansion of industry or reform of agricultural techniques beyond the widespread cultivation of the potato. Though Irish landholding practices were primitive in comparison with those of England, the potatos high yields allowed even the poorest farmers to produce more healthy food than they needed with scarcely any investment or hard labor. Even children could easily plant, harvest and cook potatoes, which of course required no threshing, curing or grinding. The abundance provided by potatoes greatly decreased infant mortality and encouraged early marriage
The Spanish believed that the potato had the same nutrients as other vegetables.
neutral
id_6268
The Impact of the Potato. Jeff Chapman relates the story of history the most important vegetable. The potato was first cultivated in South America between three and seven thousand years ago, though scientists believe they may have grown wild in the region as long as 13,000 years ago. The genetic patterns of potato distribution indicate that the potato probably originated in the mountainous west-central region of the continent. Early Spanish chroniclers who misused the Indian word batata (sweet potato) as the name for the potato noted the importance of the tuber to the Incan Empire. The Incas had learned to preserve the potato for storage by dehydrating and mashing potatoes into a substance called Chuchu could be stored in a room for up to 10 years, providing excellent insurance against possible crop failures. As well as using the food as a staple crop, the Incas thought potatoes made childbirth easier and used it to treat injuries. The Spanish conquistadors first encountered the potato when they arrived in Peru in 1532 in search of gold, and noted Inca miners eating chuchu. At the time the Spaniards failed to realize that the potato represented a far more important treasure than either silver or gold, but they did gradually begin to use potatoes as basic rations aboard their ships. After the arrival of the potato in Spain in 1570, a few Spanish farmers began to cultivate them on a small scale, mostly as food for livestock. Throughout Europe, potatoes were regarded with suspicion, distaste and fear. Generally considered to be unfit for human consumption, they were used only as animal fodder and sustenance for the starving. In northern Europe, potatoes were primarily grown in botanical gardens as an exotic novelty. Even peasants refused to eat from a plant that produced ugly, misshapen tubers and that had come from a heathen civilization. Some felt that the potato plants resemblance to plants in the nightshade family hinted that it was the creation of witches or devils. In meat-loving England, farmers and urban workers regarded potatoes with extreme distaste. In 1662, the Royal Society recommended the cultivation of the tuber to the English government and the nation, but this recommendation had little impact. Potatoes did not become a staple until, during the food shortages associated with the Revolutionary Wars, the English government began to officially encourage potato cultivation. In 1795, the Board of Agriculture issued a pamphlet entitled Hints Respecting the Culture and Use of Potatoes; this was followed shortly by pro-potato editorials and potato recipes in The Times. Gradually, the lower classes began to follow the lead of the upper classes. A similar pattern emerged across the English Channel in the Netherlands, Belgium and France. While the potato slowly gained ground in eastern France (where it was often the only crop remaining after marauding soldiers plundered wheat fields and vineyards), it did not achieve widespread acceptance until the late 1700s. The peasants remained suspicious, in spite of a 1771 paper from the Facult de Paris testifying that the potato was not harmful but beneficial. The people began to overcome their distaste when the plant received the royal seal of approval: Louis XVI began to sport a potato flower in his buttonhole, and Marie-Antoinette wore the purple potato blossom in her hair. 
Frederick the Great of Prussia saw the potatos potential to help feed his nation and lower the price of bread, but faced the challenge of overcoming the peoples prejudice against the plant. When he issued a 1774 order for his subjects to grow potatoes as protection against famine, the town of Kolberg replied: The things have neither smell nor taste, not even the dogs will eat them, so what use are they to us? Trying a less direct approach to encourage his subjects to begin planting potatoes, Frederick used a bit of reverse psychology: he planted a royal field of potato plants and stationed a heavy guard to protect this field from thieves. Nearby peasants naturally assumed that anything worth guarding was worth stealing, and so snuck into the field and snatched the plants for their home gardens. Of course, this was entirely in line with Fredericks wishes. Historians debate whether the potato was primarily a cause or an effect of the huge population boom in industrial-era England and Wales. Prior to 1800, the English diet had consisted primarily of meat, supplemented by bread, butter and cheese. Few vegetables were consumed, most vegetables being regarded as nutritionally worthless and potentially harmful. This view began to change gradually in the late 1700s. The Industrial Revolution was drawing an ever increasing percentage of the populace into crowded cities, where only the richest could afford homes with ovens or coal storage rooms, and people were working 12-16 hour days which left them with little time or energy to prepare food. High yielding, easily prepared potato crops were the obvious solution to Englands food problems. Whereas most of their neighbors regarded the potato with suspicion and had to be persuaded to use it by the upper classes, the Irish peasantry embraced the tuber more passionately than anyone since the Incas. The potato was well suited to the Irish the soil and climate, and its high yield suited the most important concern of most Irish farmers: to feed their families. The most dramatic example of the potatos potential to alter population patterns occurred in Ireland, where the potato had become a staple by 1800. The Irish population doubled to eight million between 1780 and 1841, this without any significant expansion of industry or reform of agricultural techniques beyond the widespread cultivation of the potato. Though Irish landholding practices were primitive in comparison with those of England, the potatos high yields allowed even the poorest farmers to produce more healthy food than they needed with scarcely any investment or hard labor. Even children could easily plant, harvest and cook potatoes, which of course required no threshing, curing or grinding. The abundance provided by potatoes greatly decreased infant mortality and encouraged early marriage
The purpose of the Spanish coming to Peru was to find potatoes.
contradiction
id_6269
The Impact of the Potato. Jeff Chapman relates the story of history the most important vegetable. The potato was first cultivated in South America between three and seven thousand years ago, though scientists believe they may have grown wild in the region as long as 13,000 years ago. The genetic patterns of potato distribution indicate that the potato probably originated in the mountainous west-central region of the continent. Early Spanish chroniclers who misused the Indian word batata (sweet potato) as the name for the potato noted the importance of the tuber to the Incan Empire. The Incas had learned to preserve the potato for storage by dehydrating and mashing potatoes into a substance called Chuchu could be stored in a room for up to 10 years, providing excellent insurance against possible crop failures. As well as using the food as a staple crop, the Incas thought potatoes made childbirth easier and used it to treat injuries. The Spanish conquistadors first encountered the potato when they arrived in Peru in 1532 in search of gold, and noted Inca miners eating chuchu. At the time the Spaniards failed to realize that the potato represented a far more important treasure than either silver or gold, but they did gradually begin to use potatoes as basic rations aboard their ships. After the arrival of the potato in Spain in 1570, a few Spanish farmers began to cultivate them on a small scale, mostly as food for livestock. Throughout Europe, potatoes were regarded with suspicion, distaste and fear. Generally considered to be unfit for human consumption, they were used only as animal fodder and sustenance for the starving. In northern Europe, potatoes were primarily grown in botanical gardens as an exotic novelty. Even peasants refused to eat from a plant that produced ugly, misshapen tubers and that had come from a heathen civilization. Some felt that the potato plants resemblance to plants in the nightshade family hinted that it was the creation of witches or devils. In meat-loving England, farmers and urban workers regarded potatoes with extreme distaste. In 1662, the Royal Society recommended the cultivation of the tuber to the English government and the nation, but this recommendation had little impact. Potatoes did not become a staple until, during the food shortages associated with the Revolutionary Wars, the English government began to officially encourage potato cultivation. In 1795, the Board of Agriculture issued a pamphlet entitled Hints Respecting the Culture and Use of Potatoes; this was followed shortly by pro-potato editorials and potato recipes in The Times. Gradually, the lower classes began to follow the lead of the upper classes. A similar pattern emerged across the English Channel in the Netherlands, Belgium and France. While the potato slowly gained ground in eastern France (where it was often the only crop remaining after marauding soldiers plundered wheat fields and vineyards), it did not achieve widespread acceptance until the late 1700s. The peasants remained suspicious, in spite of a 1771 paper from the Facult de Paris testifying that the potato was not harmful but beneficial. The people began to overcome their distaste when the plant received the royal seal of approval: Louis XVI began to sport a potato flower in his buttonhole, and Marie-Antoinette wore the purple potato blossom in her hair. 
Frederick the Great of Prussia saw the potatos potential to help feed his nation and lower the price of bread, but faced the challenge of overcoming the peoples prejudice against the plant. When he issued a 1774 order for his subjects to grow potatoes as protection against famine, the town of Kolberg replied: The things have neither smell nor taste, not even the dogs will eat them, so what use are they to us? Trying a less direct approach to encourage his subjects to begin planting potatoes, Frederick used a bit of reverse psychology: he planted a royal field of potato plants and stationed a heavy guard to protect this field from thieves. Nearby peasants naturally assumed that anything worth guarding was worth stealing, and so snuck into the field and snatched the plants for their home gardens. Of course, this was entirely in line with Fredericks wishes. Historians debate whether the potato was primarily a cause or an effect of the huge population boom in industrial-era England and Wales. Prior to 1800, the English diet had consisted primarily of meat, supplemented by bread, butter and cheese. Few vegetables were consumed, most vegetables being regarded as nutritionally worthless and potentially harmful. This view began to change gradually in the late 1700s. The Industrial Revolution was drawing an ever increasing percentage of the populace into crowded cities, where only the richest could afford homes with ovens or coal storage rooms, and people were working 12-16 hour days which left them with little time or energy to prepare food. High yielding, easily prepared potato crops were the obvious solution to Englands food problems. Whereas most of their neighbors regarded the potato with suspicion and had to be persuaded to use it by the upper classes, the Irish peasantry embraced the tuber more passionately than anyone since the Incas. The potato was well suited to the Irish the soil and climate, and its high yield suited the most important concern of most Irish farmers: to feed their families. The most dramatic example of the potatos potential to alter population patterns occurred in Ireland, where the potato had become a staple by 1800. The Irish population doubled to eight million between 1780 and 1841, this without any significant expansion of industry or reform of agricultural techniques beyond the widespread cultivation of the potato. Though Irish landholding practices were primitive in comparison with those of England, the potatos high yields allowed even the poorest farmers to produce more healthy food than they needed with scarcely any investment or hard labor. Even children could easily plant, harvest and cook potatoes, which of course required no threshing, curing or grinding. The abundance provided by potatoes greatly decreased infant mortality and encouraged early marriage
The early Spanish called the potato by its Incan name, chuchu.
contradiction
id_6270
The Impact of the Potato. Jeff Chapman relates the story of history the most important vegetable. The potato was first cultivated in South America between three and seven thousand years ago, though scientists believe they may have grown wild in the region as long as 13,000 years ago. The genetic patterns of potato distribution indicate that the potato probably originated in the mountainous west-central region of the continent. Early Spanish chroniclers who misused the Indian word batata (sweet potato) as the name for the potato noted the importance of the tuber to the Incan Empire. The Incas had learned to preserve the potato for storage by dehydrating and mashing potatoes into a substance called Chuchu could be stored in a room for up to 10 years, providing excellent insurance against possible crop failures. As well as using the food as a staple crop, the Incas thought potatoes made childbirth easier and used it to treat injuries. The Spanish conquistadors first encountered the potato when they arrived in Peru in 1532 in search of gold, and noted Inca miners eating chuchu. At the time the Spaniards failed to realize that the potato represented a far more important treasure than either silver or gold, but they did gradually begin to use potatoes as basic rations aboard their ships. After the arrival of the potato in Spain in 1570, a few Spanish farmers began to cultivate them on a small scale, mostly as food for livestock. Throughout Europe, potatoes were regarded with suspicion, distaste and fear. Generally considered to be unfit for human consumption, they were used only as animal fodder and sustenance for the starving. In northern Europe, potatoes were primarily grown in botanical gardens as an exotic novelty. Even peasants refused to eat from a plant that produced ugly, misshapen tubers and that had come from a heathen civilization. Some felt that the potato plants resemblance to plants in the nightshade family hinted that it was the creation of witches or devils. In meat-loving England, farmers and urban workers regarded potatoes with extreme distaste. In 1662, the Royal Society recommended the cultivation of the tuber to the English government and the nation, but this recommendation had little impact. Potatoes did not become a staple until, during the food shortages associated with the Revolutionary Wars, the English government began to officially encourage potato cultivation. In 1795, the Board of Agriculture issued a pamphlet entitled Hints Respecting the Culture and Use of Potatoes; this was followed shortly by pro-potato editorials and potato recipes in The Times. Gradually, the lower classes began to follow the lead of the upper classes. A similar pattern emerged across the English Channel in the Netherlands, Belgium and France. While the potato slowly gained ground in eastern France (where it was often the only crop remaining after marauding soldiers plundered wheat fields and vineyards), it did not achieve widespread acceptance until the late 1700s. The peasants remained suspicious, in spite of a 1771 paper from the Facult de Paris testifying that the potato was not harmful but beneficial. The people began to overcome their distaste when the plant received the royal seal of approval: Louis XVI began to sport a potato flower in his buttonhole, and Marie-Antoinette wore the purple potato blossom in her hair. 
Frederick the Great of Prussia saw the potatos potential to help feed his nation and lower the price of bread, but faced the challenge of overcoming the peoples prejudice against the plant. When he issued a 1774 order for his subjects to grow potatoes as protection against famine, the town of Kolberg replied: The things have neither smell nor taste, not even the dogs will eat them, so what use are they to us? Trying a less direct approach to encourage his subjects to begin planting potatoes, Frederick used a bit of reverse psychology: he planted a royal field of potato plants and stationed a heavy guard to protect this field from thieves. Nearby peasants naturally assumed that anything worth guarding was worth stealing, and so snuck into the field and snatched the plants for their home gardens. Of course, this was entirely in line with Fredericks wishes. Historians debate whether the potato was primarily a cause or an effect of the huge population boom in industrial-era England and Wales. Prior to 1800, the English diet had consisted primarily of meat, supplemented by bread, butter and cheese. Few vegetables were consumed, most vegetables being regarded as nutritionally worthless and potentially harmful. This view began to change gradually in the late 1700s. The Industrial Revolution was drawing an ever increasing percentage of the populace into crowded cities, where only the richest could afford homes with ovens or coal storage rooms, and people were working 12-16 hour days which left them with little time or energy to prepare food. High yielding, easily prepared potato crops were the obvious solution to Englands food problems. Whereas most of their neighbors regarded the potato with suspicion and had to be persuaded to use it by the upper classes, the Irish peasantry embraced the tuber more passionately than anyone since the Incas. The potato was well suited to the Irish the soil and climate, and its high yield suited the most important concern of most Irish farmers: to feed their families. The most dramatic example of the potatos potential to alter population patterns occurred in Ireland, where the potato had become a staple by 1800. The Irish population doubled to eight million between 1780 and 1841, this without any significant expansion of industry or reform of agricultural techniques beyond the widespread cultivation of the potato. Though Irish landholding practices were primitive in comparison with those of England, the potatos high yields allowed even the poorest farmers to produce more healthy food than they needed with scarcely any investment or hard labor. Even children could easily plant, harvest and cook potatoes, which of course required no threshing, curing or grinding. The abundance provided by potatoes greatly decreased infant mortality and encouraged early marriage
Peasants at that time did not like to eat potatoes because they were ugly.
entailment
id_6271
The Impact of the Potato. Jeff Chapman relates the story of history the most important vegetable. The potato was first cultivated in South America between three and seven thousand years ago, though scientists believe they may have grown wild in the region as long as 13,000 years ago. The genetic patterns of potato distribution indicate that the potato probably originated in the mountainous west-central region of the continent. Early Spanish chroniclers who misused the Indian word batata (sweet potato) as the name for the potato noted the importance of the tuber to the Incan Empire. The Incas had learned to preserve the potato for storage by dehydrating and mashing potatoes into a substance called Chuchu could be stored in a room for up to 10 years, providing excellent insurance against possible crop failures. As well as using the food as a staple crop, the Incas thought potatoes made childbirth easier and used it to treat injuries. The Spanish conquistadors first encountered the potato when they arrived in Peru in 1532 in search of gold, and noted Inca miners eating chuchu. At the time the Spaniards failed to realize that the potato represented a far more important treasure than either silver or gold, but they did gradually begin to use potatoes as basic rations aboard their ships. After the arrival of the potato in Spain in 1570, a few Spanish farmers began to cultivate them on a small scale, mostly as food for livestock. Throughout Europe, potatoes were regarded with suspicion, distaste and fear. Generally considered to be unfit for human consumption, they were used only as animal fodder and sustenance for the starving. In northern Europe, potatoes were primarily grown in botanical gardens as an exotic novelty. Even peasants refused to eat from a plant that produced ugly, misshapen tubers and that had come from a heathen civilization. Some felt that the potato plants resemblance to plants in the nightshade family hinted that it was the creation of witches or devils. In meat-loving England, farmers and urban workers regarded potatoes with extreme distaste. In 1662, the Royal Society recommended the cultivation of the tuber to the English government and the nation, but this recommendation had little impact. Potatoes did not become a staple until, during the food shortages associated with the Revolutionary Wars, the English government began to officially encourage potato cultivation. In 1795, the Board of Agriculture issued a pamphlet entitled Hints Respecting the Culture and Use of Potatoes; this was followed shortly by pro-potato editorials and potato recipes in The Times. Gradually, the lower classes began to follow the lead of the upper classes. A similar pattern emerged across the English Channel in the Netherlands, Belgium and France. While the potato slowly gained ground in eastern France (where it was often the only crop remaining after marauding soldiers plundered wheat fields and vineyards), it did not achieve widespread acceptance until the late 1700s. The peasants remained suspicious, in spite of a 1771 paper from the Facult de Paris testifying that the potato was not harmful but beneficial. The people began to overcome their distaste when the plant received the royal seal of approval: Louis XVI began to sport a potato flower in his buttonhole, and Marie-Antoinette wore the purple potato blossom in her hair. 
Frederick the Great of Prussia saw the potatos potential to help feed his nation and lower the price of bread, but faced the challenge of overcoming the peoples prejudice against the plant. When he issued a 1774 order for his subjects to grow potatoes as protection against famine, the town of Kolberg replied: The things have neither smell nor taste, not even the dogs will eat them, so what use are they to us? Trying a less direct approach to encourage his subjects to begin planting potatoes, Frederick used a bit of reverse psychology: he planted a royal field of potato plants and stationed a heavy guard to protect this field from thieves. Nearby peasants naturally assumed that anything worth guarding was worth stealing, and so snuck into the field and snatched the plants for their home gardens. Of course, this was entirely in line with Fredericks wishes. Historians debate whether the potato was primarily a cause or an effect of the huge population boom in industrial-era England and Wales. Prior to 1800, the English diet had consisted primarily of meat, supplemented by bread, butter and cheese. Few vegetables were consumed, most vegetables being regarded as nutritionally worthless and potentially harmful. This view began to change gradually in the late 1700s. The Industrial Revolution was drawing an ever increasing percentage of the populace into crowded cities, where only the richest could afford homes with ovens or coal storage rooms, and people were working 12-16 hour days which left them with little time or energy to prepare food. High yielding, easily prepared potato crops were the obvious solution to Englands food problems. Whereas most of their neighbors regarded the potato with suspicion and had to be persuaded to use it by the upper classes, the Irish peasantry embraced the tuber more passionately than anyone since the Incas. The potato was well suited to the Irish the soil and climate, and its high yield suited the most important concern of most Irish farmers: to feed their families. The most dramatic example of the potatos potential to alter population patterns occurred in Ireland, where the potato had become a staple by 1800. The Irish population doubled to eight million between 1780 and 1841, this without any significant expansion of industry or reform of agricultural techniques beyond the widespread cultivation of the potato. Though Irish landholding practices were primitive in comparison with those of England, the potatos high yields allowed even the poorest farmers to produce more healthy food than they needed with scarcely any investment or hard labor. Even children could easily plant, harvest and cook potatoes, which of course required no threshing, curing or grinding. The abundance provided by potatoes greatly decreased infant mortality and encouraged early marriage
The popularity of potatoes in the UK was due to food shortages during the war.
entailment
id_6272
The Interlibrary Loan Service allows you to find books and journals that the library may not have at present, but other libraries do have. The library can borrow books or journals from other libraries on your behalf. We strive to make your requests successful, so to help us to do so, please pay attention to the following directions. Please make sure the following procedures are followed. Clearly write the name of the book or journal, date and/or volume, and author on the pink sheet of paper titled Interlibrary Request Form. Do not use any quotes or abbreviations for repeated information. Please write each request on a separate pink sheet. Make sure you include your full name, student number, and telephone number on each of the slips. Allow for at least 10 working days for the material to come. The library will hold located resources for up to one week. There are no repeat requests if you come for your material more than one week after it has arrived. It is your responsibility to check whether the materials have come in. While many items may be listed that go back many years, the library can only track items that are no more than 10 years old. Also, please remember that fines for overdue requested material are the same as for any material borrowed from the library. If you have any questions, please do not hesitate to ask Ms Friedman or Betty Shipley at the information desk.
The library will allow more than one request at a time.
entailment
id_6273
The Interlibrary Loan Service allows you to find books and journals that the library may not have at present, but other libraries do have. The library can borrow books or journals from other libraries on your behalf. We strive to make your requests successful, so to help us to do so, please pay attention to the following directions. Please make sure the following procedures are followed. Clearly write the name of the book or journal, date and/or volume, and author on the pink sheet of paper titled Interlibrary Request Form. Do not use any quotes or abbreviations for repeated information. Please write each request on a separate pink sheet. Make sure you include your full name, student number, and telephone number on each of the slips. Allow for at least 10 working days for the material to come. The library will hold located resources for up to one week. There are no repeat requests if you come for your material more than one week after it has arrived. It is your responsibility to check whether the materials have come in. While many items may be listed that go back many years, the library can only track items that are no more than 10 years old. Also, please remember that fines for overdue requested material are the same as for any material borrowed from the library. If you have any questions, please do not hesitate to ask Ms Friedman or Betty Shipley at the information desk.
The time that it takes does not include holidays.
entailment
id_6274
The Interlibrary Loan Service allows you to find books and journals that the library may not have at present, but other libraries do have. The library can borrow books or journals from other libraries on your behalf. We strive to make your requests successful, so to help us to do so, please pay attention to the following directions. Please make sure the following procedures are followed. Clearly write the name of the book or journal, date and/or volume, and author on the pink sheet of paper titled Interlibrary Request Form. Do not use any quotes or abbreviations for repeated information. Please write each request on a separate pink sheet. Make sure you include your full name, student number, and telephone number on each of the slips. Allow for at least 10 working days for the material to come. The library will hold located resources for up to one week. There are no repeat requests if you come for your material more than one week after it has arrived. It is your responsibility to check whether the materials have come in. While many items may be listed that go back many years, the library can only track items that are no more than 10 years old. Also, please remember that fines for overdue requested material are the same as for any material borrowed from the library. If you have any questions, please do not hesitate to ask Ms Friedman or Betty Shipley at the information desk.
You must write all the requests down clearly on a single request form.
contradiction
id_6275
The Interlibrary Loan Service allows you to find books and journals that the library may not have at present, but other libraries do have. The library can borrow books or journals from other libraries on your behalf. We strive to make your requests successful, so to help us to do so, please pay attention to the following directions. Please make sure the following procedures are followed. Clearly write the name of the book or journal, date and/or volume, and author on the pink sheet of paper titled Interlibrary Request Form. Do not use any quotes or abbreviations for repeated information. Please write each request on a separate pink sheet. Make sure you include your full name, student number, and telephone number on each of the slips. Allow for at least 10 working days for the material to come. The library will hold located resources for up to one week. There are no repeat requests if you come for your material more than one week after it has arrived. It is your responsibility to check whether the materials have come in. While many items may be listed that go back many years, the library can only track items that are no more than 10 years old. Also, please remember that fines for overdue requested material are the same as for any material borrowed from the library. If you have any questions, please do not hesitate to ask Ms Friedman or Betty Shipley at the information desk.
Books or journals will come in within 10 days.
contradiction
id_6276
The Interlibrary Loan Service allows you to find books and journals that the library may not have at present, but other libraries do have. The library can borrow books or journals from other libraries on your behalf. We strive to make your requests successful, so to help us to do so, please pay attention to the following directions. Please make sure the following procedures are followed. Clearly write the name of the book or journal, date and/or volume, and author on the pink sheet of paper titled Interlibrary Request Form. Do not use any quotes or abbreviations for repeated information. Please write each request on a separate pink sheet. Make sure you include your full name, student number, and telephone number on each of the slips. Allow for at least 10 working days for the material to come. The library will hold located resources for up to one week. There are no repeat requests if you come for your material more than one week after it has arrived. It is your responsibility to check whether the materials have come in. While many items may be listed that go back many years, the library can only track items that are no more than 10 years old. Also, please remember that fines for overdue requested material are the same as for any material borrowed from the library. If you have any questions, please do not hesitate to ask Ms Friedman or Betty Shipley at the information desk.
The library will inform you once the book comes in.
contradiction
id_6277
The International Monetary Fund (IMF) has announced plans to give the Republic of Ireland a further loan of four billion pounds over the next three years. This announcement comes as the Irish economy shows signs of stabilizing after new spending cuts were recently implemented. In addition to spending cuts, a rise in tax has also been announced. This would take the level of tax in the Republic of Ireland to twenty-nine percent, forcing some members of the Dáil (Irish Parliament) to voice concerns that shoppers will go to the North instead. A cut in the number of public service workers is also expected.
Public sector jobs made up the majority of the cuts
neutral
id_6278
The International Monetary Fund (IMF) has announced plans to give the Republic of Ireland a further loan of four billion pounds over the next three years. This announcement comes as the Irish economy shows signs of stabilizing after new spending cuts were recently implemented. In addition to spending cuts, a rise in tax has also been announced. This would take the level of tax in the Republic of Ireland to twenty-nine percent, forcing some members of the Dáil (Irish Parliament) to voice concerns that shoppers will go to the North instead. A cut in the number of public service workers is also expected.
More Irish shoppers are going to the North instead.
neutral
id_6279
The International Monetary Fund (IMF) has announced plans to give the Republic of Ireland a further loan of four billion pounds over the next three years. This announcement comes as the Irish economy shows signs of stabilizing after new spending cuts were recently implemented. In addition to spending cuts, a rise in tax has also been announced. This would take the level of tax in the Republic of Ireland to twenty-nine percent, forcing some members of the Dáil (Irish Parliament) to voice concerns that shoppers will go to the North instead. A cut in the number of public service workers is also expected.
The Irish economy shows signs of stabilizing
entailment
id_6280
The International Monetary Fund (IMF) has announced plans to give the Republic of Ireland a further loan of four billion pounds over the next three years. This announcement comes as the Irish economy shows signs of stabilizing after new spending cuts were recently implemented. In addition to spending cuts, a rise in tax has also been announced. This would take the level of tax in the Republic of Ireland to twenty-nine percent, forcing some members of the Dáil (Irish Parliament) to voice concerns that shoppers will go to the North instead. A cut in the number of public service workers is also expected.
The Republic of Ireland does not expect the loan back.
contradiction
id_6281
The International Monetary Fund (IMF) was created under the 1944 Bretton Woods agreement, a plan to promote open markets through exchange rates tied to the U. S. dollar. If a country couldn't cover its trade deficits, the IMF was to step in and lend it the needed dollars on certain conditions. When the fixed-rate regime of Bretton Woods ended in 1971, economists imagined that a new era of freely floating exchange rates would keep imports and exports roughly in balance, thus eliminating large trade deficits and the need to borrow abroad to cover them. But many governments were loath to let exchange rates float freely. To hold down prices for imported food and energy, they kept their currencies at overvalued levels. They borrowed abroad for other reasons too: for grandiose public-works projects; to keep state-owned industries afloat; and because it suited sticky-fingered ruling families.
The policies of some countries did not hold true to the original goal of Bretton Woods.
entailment
id_6282
The International Monetary Fund (IMF) was created under the 1944 Bretton Woods agreement, a plan to promote open markets through exchange rates tied to the U. S. dollar. If a country couldn't cover its trade deficits, the IMF was to step in and lend it the needed dollars on certain conditions. When the fixed-rate regime of Bretton Woods ended in 1971, economists imagined that a new era of freely floating exchange rates would keep imports and exports roughly in balance, thus eliminating large trade deficits and the need to borrow abroad to cover them. But many governments were loath to let exchange rates float freely. To hold down prices for imported food and energy, they kept their currencies at overvalued levels. They borrowed abroad for other reasons too: for grandiose public-works projects; to keep state-owned industries afloat; and because it suited sticky-fingered ruling families.
Keeping a currency at an overvalued level could act to keep imported goods at low prices.
entailment
id_6283
The International Monetary Fund (IMF) was created under the 1944 Bretton Woods agreement, a plan to promote open markets through exchange rates tied to the U. S. dollar. If a country couldn't cover its trade deficits, the IMF was to step in and lend it the needed dollars on certain conditions. When the fixed-rate regime of Bretton Woods ended in 1971, economists imagined that a new era of freely floating exchange rates would keep imports and exports roughly in balance, thus eliminating large trade deficits and the need to borrow abroad to cover them. But many governments were loath to let exchange rates float freely. To hold down prices for imported food and energy, they kept their currencies at overvalued levels. They borrowed abroad for other reasons too: for grandiose public-works projects; to keep state-owned industries afloat; and because it suited sticky-fingered ruling families.
Countries that have large trade deficits need to borrow abroad to cover them.
neutral
id_6284
The Johari window, a technique created by Joseph Luft and Harrington Ingham, is a tool used to promote a better understanding of the self and of other people. When using the technique, participants are given a list of 56 personality descriptors, and then asked to select the aspects which describe their own personality and map them on a grid. The top left box of the grid is for personal aspects known to you, and known to others. The bottom left box is for aspects which are known to you, but not to others. The top right box is for aspects not known to the self, but to others (and selected by others). The bottom right box is for aspects not known to the self, or to others (the personal aspects left unselected).
The term Johari is a combination of Joseph and Harrington.
neutral
id_6285
The Johari window, a technique created by Joseph Luft and Harrington Ingham, is a tool used to promote a better understanding of the self and of other people. When using the technique, participants are given a list of 56 personality descriptors, and then asked to select the aspects which describe their own personality and map them on a grid. The top left box of the grid is for personal aspects known to you, and known to others. The bottom left box is for aspects which are known to you, but not to others. The top right box is for aspects not known to the self, but to others (and selected by others). The bottom right box is for aspects not known to the self, or to others (the personal aspects left unselected).
The Johari Window technique must be done in a group.
entailment
id_6286
The Johari window, a technique created by Joseph Luft and Harrington Ingham, is a tool used to promote a better understanding of the self and of other people. When using the technique, participants are given a list of 56 personality descriptors, and then asked to select the aspects which describe their own personality and map them on a grid. The top left box of the grid is for personal aspects known to you, and known to others. The bottom left box is for aspects which are known to you, but not to others. The top right box is for aspects not known to the self, but to others (and selected by others). The bottom right box is for aspects not known to the self, or to others (the personal aspects left unselected).
The bottom left box is for personality aspects hidden from other people.
entailment
id_6287
The Johari window, a technique created by Joseph Luft and Harrington Ingham, is a tool used to promote a better understanding of the self and of other people. When using the technique, participants are given a list of 56 personality descriptors, and then asked to select the aspects which describe their own personality and map them on a grid. The top left box of the grid is for personal aspects known to you, and known to others. The bottom left box is for aspects which are known to you, but not to others. The top right box is for aspects not known to the self, but to others (and selected by others). The bottom right box is for aspects not known to the self, or to others (the personal aspects left unselected).
The Johari window technique is used in personal and team development.
neutral
id_6288
The King of Fruits One fact is certain: youll smell it before you see it. The scent (or should that be odour? ) is overpowering (or should that be nauseating? ). One inhales it with delight, or shrinks back in disgust. Is it sweet almonds with vanilla custard and a splash of whiskey? Or old socks garnished with rotten onion and a sprinkling of turpentine? Whatever the description, it wafts from what must be considered the most singular fruit on the planetthe durian, a Southeast Asian favourite, commonly called the king of fruits. Its title is, in many ways, deserved. As fruits go, it is huge and imposing. As big as a basketball, up to three kilograms heavy, and most noticeably, covered with a thick and tough thorn-covered husk, it demands a royal respect. The thorns are so sharp that even holding the massive object is difficult. In supermarkets, they are usually put into mesh bags to ease handling, while extracting the flesh itself requires the wearing of thick protective gloves, a delicate and dextrous use of a large knife, and visible effort. One can see why it is increasingly popular, in western markets, to have that flesh removed, wrapped up, and purchased directly. This leads one to wonder why nature designed such a smelly fruit in such an inconvenient package. Nature is, however, cleverer than one might think. For a start, that pungent odour allows easier detection by animals in the thick tropical forests of Brunei, Indonesia, and Malaysia, where the wild durian originates. When the pod falls, and the husk begins to crack open, wild deer, pigs, orangutans, and elephants, are easily drawn forth, navigating from hundreds of meters away directly to the tree. The second clever fact is that, since the inner seeds are rather large, the durian tree needs correspondingly larger animals to eat, ingest, and transport these seeds away, hence the use of that tough spiny cover. Only the largest and strongest animals can get past that. And what are they seeking? Upon prising open the large pod, one is presented with white fibrous pith in which are nestled pockets of soft yellowish flesh, divided into lobes. Each lobe holds a large brown seed within. Although these seeds themselves can be cooked and eaten, it is the surrounding flesh over which all the fuss is made. One of the best descriptions comes from the British naturalist, Alfred Wallace. Writen in 1856, his experience is typical of many, and certainly of mine. At first, he struggled hard to overcome the disagreeable aroma, but upon eating it out of doors found the flesh to have a rich glutinous smoothness, neither acid nor sweet nor juicy; yet it wants neither of these qualities, for it is in itself perfect. He at once became a confirmed durian eater. Exactly! In actual fact, the flavour can vary considerably depending on the stage of ripeness and methods of storage. In Southern Thailand, the people prefer younger durian, with firmer texture and milder flavour, whereas in Malaysia, the preference is to allow the durian to fall naturally from the tree, then further ripen during transport. This results in a buttery texture and highly individual aroma, often slightly fermented. Whatever the case, it is this soft creamy consistency which easily allows durian to blend with other Southeast Asian delicacies, from candy and cakes, to modern milkshakes and ice cream. 
It can also appear in meals, mixed with vegetables or chili, and lower-grade durian (otherwise unfit for human consumption) is fermented into paste, used in a variety of local rice dishes. Such popularity has seen the widespread cultivation of durian, although the tree will only respond to tropical climates - for example, only in the very northern parts of Australia, where it was introduced in the early 1960s. Since that time, modern breeding and cultivating techniques have resulted in the introduction of hundreds of cultivars (subspecies bred, and maintained by propagation, for desirable characteristics). They produce different degrees of odour, seed size, colour, and texture of flesh. The tree itself is always very large, up to 50 metres, and given that the heavy thorny pods can hang from even the highest branches, and will drop when ripened, one does not walk within a durian plantation without a hardhat - or at least, not without risking serious injury. Thailand, where durian remains very popular, now exports most of this fruit, with five cultivars in large-scale commercial production. The market is principally other Asian nations, although interest is growing in the West as Asian immigrants take their tastes and eating preferences with them - for example, in Canada and Australia. The fruit is seasonal, and local sale of durian pods is usually done by weight. These can fetch high prices, particularly in the more affluent Asian countries, and especially when one considers that less than one third of that heavy pod contains the edible pulp. In the true spirit of Alfred Wallace, there are certainly a large and growing number of confirmed durian eaters out there.
Thailand consumes the most durians.
neutral
id_6289
The King of Fruits One fact is certain: you'll smell it before you see it. The scent (or should that be odour?) is overpowering (or should that be nauseating?). One inhales it with delight, or shrinks back in disgust. Is it sweet almonds with vanilla custard and a splash of whiskey? Or old socks garnished with rotten onion and a sprinkling of turpentine? Whatever the description, it wafts from what must be considered the most singular fruit on the planet - the durian, a Southeast Asian favourite, commonly called the king of fruits. Its title is, in many ways, deserved. As fruits go, it is huge and imposing. As big as a basketball, weighing up to three kilograms, and, most noticeably, covered with a thick and tough thorn-covered husk, it demands a royal respect. The thorns are so sharp that even holding the massive object is difficult. In supermarkets, they are usually put into mesh bags to ease handling, while extracting the flesh itself requires the wearing of thick protective gloves, a delicate and dextrous use of a large knife, and visible effort. One can see why it is increasingly popular, in western markets, to have that flesh removed, wrapped up, and purchased directly. This leads one to wonder why nature designed such a smelly fruit in such an inconvenient package. Nature is, however, cleverer than one might think. For a start, that pungent odour allows easier detection by animals in the thick tropical forests of Brunei, Indonesia, and Malaysia, where the wild durian originates. When the pod falls, and the husk begins to crack open, wild deer, pigs, orangutans, and elephants are easily drawn forth, navigating from hundreds of metres away directly to the tree. The second clever fact is that, since the inner seeds are rather large, the durian tree needs correspondingly larger animals to eat, ingest, and transport these seeds away, hence the use of that tough spiny cover. Only the largest and strongest animals can get past that. And what are they seeking? Upon prising open the large pod, one is presented with white fibrous pith in which are nestled pockets of soft yellowish flesh, divided into lobes. Each lobe holds a large brown seed within. Although these seeds themselves can be cooked and eaten, it is the surrounding flesh over which all the fuss is made. One of the best descriptions comes from the British naturalist, Alfred Wallace. Written in 1856, his account is typical of many, and certainly of mine. At first, he struggled hard to overcome the disagreeable aroma, but upon eating it out of doors found the flesh to have a rich glutinous smoothness, neither acid nor sweet nor juicy; yet it wants neither of these qualities, for it is in itself perfect. He at once became a confirmed durian eater. Exactly! In actual fact, the flavour can vary considerably depending on the stage of ripeness and methods of storage. In Southern Thailand, the people prefer younger durian, with firmer texture and milder flavour, whereas in Malaysia, the preference is to allow the durian to fall naturally from the tree, then further ripen during transport. This results in a buttery texture and highly individual aroma, often slightly fermented. Whatever the case, it is this soft creamy consistency which easily allows durian to blend with other Southeast Asian delicacies, from candy and cakes to modern milkshakes and ice cream.
It can also appear in meals, mixed with vegetables or chili, and lower-grade durian (otherwise unfit for human consumption) is fermented into paste, used in a variety of local rice dishes. Such popularity has seen the widespread cultivation of durian, although the tree will only respond to tropical climates - for example, only in the very northern parts of Australia, where it was introduced in the early 1960s. Since that time, modern breeding and cultivating techniques have resulted in the introduction of hundreds of cultivars (subspecies bred, and maintained by propagation, for desirable characteristics). They produce different degrees of odour, seed size, colour, and texture of flesh. The tree itself is always very large, up to 50 metres, and given that the heavy thorny pods can hang from even the highest branches, and will drop when ripened, one does not walk within a durian plantation without a hardhat - or at least, not without risking serious injury. Thailand, where durian remains very popular, now exports most of this fruit, with five cultivars in large-scale commercial production. The market is principally other Asian nations, although interest is growing in the West as Asian immigrants take their tastes and eating preferences with them - for example, in Canada and Australia. The fruit is seasonal, and local sale of durian pods is usually done by weight. These can fetch high prices, particularly in the more affluent Asian countries, and especially when one considers that less than one third of that heavy pod contains the edible pulp. In the true spirit of Alfred Wallace, there are certainly a large and growing number of confirmed durian eaters out there.
The seeds can be eaten.
entailment
id_6290
The King of Fruits One fact is certain: you'll smell it before you see it. The scent (or should that be odour?) is overpowering (or should that be nauseating?). One inhales it with delight, or shrinks back in disgust. Is it sweet almonds with vanilla custard and a splash of whiskey? Or old socks garnished with rotten onion and a sprinkling of turpentine? Whatever the description, it wafts from what must be considered the most singular fruit on the planet - the durian, a Southeast Asian favourite, commonly called the king of fruits. Its title is, in many ways, deserved. As fruits go, it is huge and imposing. As big as a basketball, weighing up to three kilograms, and, most noticeably, covered with a thick and tough thorn-covered husk, it demands a royal respect. The thorns are so sharp that even holding the massive object is difficult. In supermarkets, they are usually put into mesh bags to ease handling, while extracting the flesh itself requires the wearing of thick protective gloves, a delicate and dextrous use of a large knife, and visible effort. One can see why it is increasingly popular, in western markets, to have that flesh removed, wrapped up, and purchased directly. This leads one to wonder why nature designed such a smelly fruit in such an inconvenient package. Nature is, however, cleverer than one might think. For a start, that pungent odour allows easier detection by animals in the thick tropical forests of Brunei, Indonesia, and Malaysia, where the wild durian originates. When the pod falls, and the husk begins to crack open, wild deer, pigs, orangutans, and elephants are easily drawn forth, navigating from hundreds of metres away directly to the tree. The second clever fact is that, since the inner seeds are rather large, the durian tree needs correspondingly larger animals to eat, ingest, and transport these seeds away, hence the use of that tough spiny cover. Only the largest and strongest animals can get past that. And what are they seeking? Upon prising open the large pod, one is presented with white fibrous pith in which are nestled pockets of soft yellowish flesh, divided into lobes. Each lobe holds a large brown seed within. Although these seeds themselves can be cooked and eaten, it is the surrounding flesh over which all the fuss is made. One of the best descriptions comes from the British naturalist, Alfred Wallace. Written in 1856, his account is typical of many, and certainly of mine. At first, he struggled hard to overcome the disagreeable aroma, but upon eating it out of doors found the flesh to have a rich glutinous smoothness, neither acid nor sweet nor juicy; yet it wants neither of these qualities, for it is in itself perfect. He at once became a confirmed durian eater. Exactly! In actual fact, the flavour can vary considerably depending on the stage of ripeness and methods of storage. In Southern Thailand, the people prefer younger durian, with firmer texture and milder flavour, whereas in Malaysia, the preference is to allow the durian to fall naturally from the tree, then further ripen during transport. This results in a buttery texture and highly individual aroma, often slightly fermented. Whatever the case, it is this soft creamy consistency which easily allows durian to blend with other Southeast Asian delicacies, from candy and cakes to modern milkshakes and ice cream.
It can also appear in meals, mixed with vegetables or chili, and lower-grade durian (otherwise unfit for human consumption) is fermented into paste, used in a variety of local rice dishes. Such popularity has seen the widespread cultivation of durian, although the tree will only respond to tropical climates - for example, only in the very northern parts of Australia, where it was introduced in the early 1960s. Since that time, modern breeding and cultivating techniques have resulted in the introduction of hundreds of cultivars (subspecies bred, and maintained by propagation, for desirable characteristics). They produce different degrees of odour, seed size, colour, and texture of flesh. The tree itself is always very large, up to 50 metres, and given that the heavy thorny pods can hang from even the highest branches, and will drop when ripened, one does not walk within a durian plantation without a hardhat - or at least, not without risking serious injury. Thailand, where durian remains very popular, now exports most of this fruit, with five cultivars in large-scale commercial production. The market is principally other Asian nations, although interest is growing in the West as Asian immigrants take their tastes and eating preferences with them - for example, in Canada and Australia. The fruit is seasonal, and local sale of durian pods is usually done by weight. These can fetch high prices, particularly in the more affluent Asian countries, and especially when one considers that less than one third of that heavy pod contains the edible pulp. In the true spirit of Alfred Wallace, there are certainly a large and growing number of confirmed durian eaters out there.
Durian trees are grown in many parts of Australia.
contradiction
id_6291
The Kyoto Protocol is an international agreement written by the United Nations in order to reduce the effects of climate change. This agreement sets targets for countries in order for them to reduce their greenhouse gas emissions. These gases are believed to be responsible for causing global warming as a result of recent industrialisation. The Protocol was written in 1997 and each country that signed the protocol agreed to reduce its emissions to its own specific target. This agreement could only become legally binding when two conditions had been fulfilled: when 55 countries agreed to be legally bound by the agreement and when 55% of emissions from industrialised countries had been accounted for. The first condition was met in 2002; however, countries such as Australia and the United States refused to be bound by the agreement, so the minimum of 55% of emissions from industrialised countries was not met. It was only after Russia joined in 2004 that the protocol was able to come into force in 2005. Some climate scientists have argued that the target combined reduction of 5.2% in emissions from industrialised nations would not be enough to avoid the worst consequences of global warming. In order to have a significant impact, we would need to aim at reducing emissions by 60% and to get larger countries such as the US to support the agreement.
The greenhouse gas emissions from Australia and the United States represent 45% of emissions from industrialised countries.
contradiction
id_6292
The Kyoto Protocol is an international agreement written by the United Nations in order to reduce the effects of climate change. This agreement sets targets for countries in order for them to reduce their greenhouse gas emissions. These gases are believed to be responsible for causing global warming as a result of recent industrialisation. The Protocol was written in 1997 and each country that signed the protocol agreed to reduce its emissions to its own specific target. This agreement could only become legally binding when two conditions had been fulfilled: when 55 countries agreed to be legally bound by the agreement and when 55% of emissions from industrialised countries had been accounted for. The first condition was met in 2002; however, countries such as Australia and the United States refused to be bound by the agreement, so the minimum of 55% of emissions from industrialised countries was not met. It was only after Russia joined in 2004 that the protocol was able to come into force in 2005. Some climate scientists have argued that the target combined reduction of 5.2% in emissions from industrialised nations would not be enough to avoid the worst consequences of global warming. In order to have a significant impact, we would need to aim at reducing emissions by 60% and to get larger countries such as the US to support the agreement.
The Kyoto Protocol is legally binding in all industrialised countries.
contradiction
id_6293
The Kyoto Protocol is an international agreement written by the United Nations in order to reduce the effects of climate change. This agreement sets targets for countries in order for them to reduce their greenhouse gas emissions. These gases are believed to be responsible for causing global warming as a result of recent industrialisation. The Protocol was written in 1997 and each country that signed the protocol agreed to reduce its emissions to its own specific target. This agreement could only become legally binding when two conditions had been fulfilled: when 55 countries agreed to be legally bound by the agreement and when 55% of emissions from industrialised countries had been accounted for. The first condition was met in 2002; however, countries such as Australia and the United States refused to be bound by the agreement, so the minimum of 55% of emissions from industrialised countries was not met. It was only after Russia joined in 2004 that the protocol was able to come into force in 2005. Some climate scientists have argued that the target combined reduction of 5.2% in emissions from industrialised nations would not be enough to avoid the worst consequences of global warming. In order to have a significant impact, we would need to aim at reducing emissions by 60% and to get larger countries such as the US to support the agreement.
The global emission of greenhouse gases has reduced since 2005.
neutral
id_6294
The Kyoto Protocol is an international agreement written by the United Nations in order to reduce the effects of climate change. This agreement sets targets for countries in order for them to reduce their greenhouse gas emissions. These gases are believed to be responsible for causing global warming as a result of recent industrialisation. The Protocol was written in 1997 and each country that signed the protocol agreed to reduce its emissions to its own specific target. This agreement could only become legally binding when two conditions had been fulfilled: when 55 countries agreed to be legally bound by the agreement and when 55% of emissions from industrialised countries had been accounted for. The first condition was met in 2002; however, countries such as Australia and the United States refused to be bound by the agreement, so the minimum of 55% of emissions from industrialised countries was not met. It was only after Russia joined in 2004 that the protocol was able to come into force in 2005. Some climate scientists have argued that the target combined reduction of 5.2% in emissions from industrialised nations would not be enough to avoid the worst consequences of global warming. In order to have a significant impact, we would need to aim at reducing emissions by 60% and to get larger countries such as the US to support the agreement.
The harmful effects of climate change would be avoided if all countries reduced their emissions by 60%.
neutral
id_6295
The Kyoto Protocol is an international agreement written by the United Nations in order to reduce the effects of climate change. This agreement sets targets for countries in order for them to reduce their greenhouse gas emissions. These gases are believed to be responsible for causing global warming as a result of recent industrialisation. The Protocol was written in 1997 and each country that signed the protocol agreed to reduce its emissions to its own specific target. This agreement could only become legally binding when two conditions had been fulfilled: when 55 countries agreed to be legally bound by the agreement and when 55% of emissions from industrialised countries had been accounted for. The first condition was met in 2002; however, countries such as Australia and the United States refused to be bound by the agreement, so the minimum of 55% of emissions from industrialised countries was not met. It was only after Russia joined in 2004 that the protocol was able to come into force in 2005. Some climate scientists have argued that the target combined reduction of 5.2% in emissions from industrialised nations would not be enough to avoid the worst consequences of global warming. In order to have a significant impact, we would need to aim at reducing emissions by 60% and to get larger countries such as the US to support the agreement.
Each country chose the amount by which they would reduce their own emissions.
entailment
id_6296
The Labour Party has decided to recover its public profile in the coming election in certain areas where it was outperformed by the Liberals. Its campaign will include television advertising as well as a press campaign. Four stimulating TV advertisements have been produced. These will be screened alternately in their designated locations on a daily basis for an initial period of eight weeks, which might be extended if deemed necessary. This rule would apply to the press campaign as well. The image of the party as reflected in the television campaign is to be consistent with the one reflected in the press campaign. Both campaigns will aim to communicate the party's trustworthy spirit. A return on investment (ROI) study will examine the effectiveness of this campaign and will hopefully provide direction for future campaigns.
The spirit of the press campaign should be similar to the spirit of the TV campaign.
neutral
id_6297
The Labour Party has decided to recover its public profile in the coming election in certain areas where it was outperformed by the Liberals. Its campaign will include television advertising as well as a press campaign. Four stimulating TV advertisements have been produced. These will be screened alternately in their designated locations on a daily basis for an initial period of eight weeks, which might be extended if deemed necessary. This rule would apply to the press campaign as well. The image of the party as reflected in the television campaign is to be consistent with the one reflected in the press campaign. Both campaigns will aim to communicate the party's trustworthy spirit. A return on investment (ROI) study will examine the effectiveness of this campaign and will hopefully provide direction for future campaigns.
The duration of the press campaign is limited to a period of eight weeks.
contradiction
id_6298
The Labour Party has decided to recover its public profile in the coming election in certain areas where it was outperformed by the Liberals. Its campaign will include television advertising as well as a press campaign. Four stimulating TV advertisements have been produced. These will be screened alternately in their designated locations on a daily basis for an initial period of eight weeks, which might be extended if deemed necessary. This rule would apply to the press campaign as well. The image of the party as reflected in the television campaign is to be consistent with the one reflected in the press campaign. Both campaigns will aim to communicate the party's trustworthy spirit. A return on investment (ROI) study will examine the effectiveness of this campaign and will hopefully provide direction for future campaigns.
The influence of the television campaign will be observed.
entailment
id_6299
The Lack Of Sleep It is estimated that the average man or woman needs between seven-and-a-half and eight hours sleep a night. Some can manage on a lot less. Baroness Thatcher, for example, was reported to be able to get by on four hours sleep a night when she was Prime Minister of Britain. Dr Jill Wilkinson, senior lecturer in psychology at Surrey University and co-author of Psychology in Counselling and Therapeutic Practice, states that healthy individuals sleeping less than five hours or even as little as two hours in every 24 hours are rare, but represent a sizeable minority. The latest beliefs are that the main purposes of sleep are to enable the body to rest and replenish, allowing time for repairs to take place and for tissue to be regenerated. One supporting piece of evidence for this rest-and-repair theory is that production of the growth hormone somatotropin, which helps tissue to regenerate, peaks while we are asleep. Lack of sleep, however, can compromise the immune system, muddle thinking, cause depression, promote anxiety and encourage irritability. Researchers in San Diego deprived a group of men of sleep between Sam and lam on just one night, and found that levels of their bodies natural defences against viral infections had fallen significantly when measured the following morning. Sleep is essential for our physical and emotional well-being and there are few aspects of daily living that are not disrupted by the lack of it, says Professor William Regelson of Virginia University, a specialist in insomnia. Because it can seriously undermine the functioning of the immune system, sufferers are vulnerable to infection. For many people, lack of sleep is rarely a matter of choice. Some have problems getting to sleep, others with staying asleep until the morning. Despite popular belief that sleep is one long event, research shows that, in an average night, there are five stages of sleep and four cycles, during which the sequence of stages is repeated. In the first light phase, the heart rate and blood pressure go down and the muscles relax. In the next two stages, sleep gets progressively deeper. In stage four, usually reached after an hour, the slumber is so deep that, if awoken, the sleeper would be confused and disorientated. It is in this phase that sleep-walking can occur, with an average episode lasting no more than 15 minutes. In the fifth stage, the rapid eye movement (REM) stage, the heartbeat quickly gets back to normal levels, brain activity accelerates to daytime heights and above and the eyes move constantly beneath closed lids as if the sleeper is looking at something. During this stage, the body is almost paralysed. This REM phase is also the time when we dream. Sleeping patterns change with age, which is why many people over 60 develop insomnia. In America, that age group consumes almost half the sleep medication on the market. One theory for the age-related change is that it is due to hormonal changes. The temperature rise occurs at daybreak in the young, but at three or four in the morning in the elderly. Age aside, it is estimated that roughly one in three people suffer some kind of sleep disturbance. Causes can be anything from pregnancy and stress to alcohol and heart disease. Smoking is a known handicap to sleep, with one survey showing that ex-smokers got to sleep in 18 minutes rather than their earlier average of 52 minutes.
Apart from self-help therapy such as regular exercise, there are psychological treatments, including relaxation training and therapy aimed at getting rid of pre-sleep worries and anxieties. There is also sleep reduction therapy, where the aim is to improve sleep quality by strictly regulating the time people go to bed and when they get up. Medication is regarded by many as a last resort and often takes the form of sleeping pills, normally benzodiazepines, which are minor tranquillizers. Professor Regelson advocates the use of melatonin for treating sleep disorders. Melatonin is a naturally secreted hormone, located in the pineal gland deep inside the brain. The main function of the hormone is to control the bodys biological clock, so we know when to sleep and when to wake. The gland detects light reaching it through the eye; when there is no light, it secretes the melatonin into the bloodstream, lowering the body temperature and helping to induce sleep. Melatonin pills contain a synthetic version of the hormone and are commonly used for jet lag as well as for sleep disturbance. John Nicholls, sales manager of one of Americas largest health food shops, claims that sales of the pill have increased dramatically. He explains that it is sold in capsules, tablets, lozenges and mixed with herbs. It is not effective for all insomniacs, but many users have weaned themselves off sleeping tablets as a result of its application.
Dreaming and sleep-walking occur at similar stages of sleep.
contradiction