Columns:
uid: string (length 4 to 7)
premise: string (length 19 to 9.21k)
hypothesis: string (length 13 to 488)
label: string class (3 values)
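Each row below pairs a uid with a premise passage, a hypothesis sentence, and one of three labels (entailment, contradiction, neutral). As a minimal sketch of that record structure (the class name and the validation check are illustrative only, not part of the dataset), a row could be modelled in Python as:

from dataclasses import dataclass

# Labels observed in the rows of this preview.
LABELS = {"entailment", "contradiction", "neutral"}

@dataclass
class NLIExample:
    uid: str         # e.g. "id_6000"
    premise: str     # reading passage, 19 to ~9.21k characters
    hypothesis: str  # short statement judged against the premise, 13 to 488 characters
    label: str       # one of LABELS

    def __post_init__(self) -> None:
        # Reject labels outside the three classes listed in the schema.
        if self.label not in LABELS:
            raise ValueError(f"unexpected label: {self.label!r}")

For instance, the first row below would be NLIExample("id_6000", "The City of Manchester in England ...", "The subject of the passage is the architecture of the city of Manchester.", "contradiction").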
id_6000
The City of Manchester in England was at the forefront of the 19th-century industrial revolution and a global centre for the manufacture of cotton cloth. The city's industry is no longer centred on manufacturing but on service-based commerce, in particular finance and insurance. Manchester's architecture reflects this change and is a mix of buildings that date back to the times of the cotton trade and more contemporary constructions, including the Beetham Tower, the tallest building outside of London, and The Green Building, a pioneering eco-friendly housing project. Most of the many ex-cotton mills still exist but have been converted into luxury apartments, hotels and office space. It is estimated that 35 per cent of Manchester's population has Irish ancestry, and the Manchester Irish Festival and St Patrick's Day Parade are among the most popular of the many events that take place in the city.
The subject of the passage is the architecture of the city of Manchester.
contradiction
id_6001
Premise: same Manchester passage as id_6000.
The tone of the passage is buoyant.
entailment
id_6002
The Coastal Archaeology of Britain. The recognition of the wealth and diversity of England's coastal archaeology has been one of the most important developments of recent years. Some elements of this enormous resource have long been known. The so-called 'submerged forests' off the coasts of England, sometimes with clear evidence of human activity, had attracted the interest of antiquarians since at least the eighteenth century, but serious and systematic attention has been given to the archaeological potential of the coast only since the early 1980s. It is possible to trace a variety of causes for this concentration of effort and interest. In the 1980s and 1990s scientific research into climate change and its environmental impact spilled over into a much broader public debate as awareness of these issues grew; the prospect of rising sea levels over the next century, and their impact on current coastal environments, has been a particular focus for concern. At the same time archaeologists were beginning to recognize that the destruction caused by natural processes of coastal erosion and by human activity was having an increasing impact on the archaeological resource of the coast. The dominant process affecting the physical form of England in the post-glacial period has been the rise in the altitude of sea level relative to the land, as the glaciers melted and the landmass readjusted. The encroachment of the sea, the loss of huge areas of land now under the North Sea and the English Channel, and especially the loss of the land bridge between England and France, which finally made Britain an island, must have been immensely significant factors in the lives of our prehistoric ancestors. Yet the way in which prehistoric communities adjusted to these environmental changes has seldom been a major theme in discussions of the period. One factor contributing to this has been that, although the rise in relative sea level is comparatively well documented, we know little about the constant reconfiguration of the coastline. This was affected by many processes, which have not yet been adequately researched. The detailed reconstruction of coastline histories and the changing environments available for human use will be an important theme for future research. So great has been the rise in sea level and the consequent regression of the coast that much of the archaeological evidence now exposed in the coastal zone, whether being eroded or exposed as a buried land surface, is derived from what was originally terrestrial occupation. Its current location in the coastal zone is the product of later unrelated processes, and it can tell us little about past adaptations to the sea. Estimates of its significance will need to be made in the context of other related evidence from dry land sites. Nevertheless, its physical environment means that preservation is often excellent, for example in the case of the Neolithic structure excavated at the Stumble in Essex. In some cases these buried land surfaces do contain evidence for human exploitation of what was a coastal environment, and elsewhere along the modern coast there is similar evidence. Where the evidence does relate to past human exploitation of the resources and the opportunities offered by the sea and the coast, it is both diverse and as yet little understood.
We are not yet in a position to make even preliminary estimates of answers to such fundamental questions as the extent to which the sea and the coast affected human life in the past, what percentage of the population at any time lived within reach of the sea, or whether human settlements in coastal environments showed a distinct character from those inland. The most striking evidence for use of the sea is in the form of boats, yet we still have much to learn about their production and use. Most of the known wrecks around our coast are, not unexpectedly, of post-medieval date, and offer an unparalleled opportunity for research which has as yet been little used. The prehistoric sewn-plank boats such as those from the Humber estuary and Dover all seem to belong to the second millennium BC; after this there is a gap in the record of a millennium, which cannot yet be explained, before boats reappear, but built using a very different technology. Boatbuilding must have been an extremely important activity around much of our coast, yet we know almost nothing about it. Boats were some of the most complex artefacts produced by pre-modern societies, and further research on their production and use would make an important contribution to our understanding of past attitudes to technology and technological change. Boats needed landing places, yet here again our knowledge is very patchy. In many cases the natural shores and beaches would have sufficed, leaving little or no archaeological trace, but especially in later periods, many ports and harbours, as well as smaller facilities such as quays, wharves, and jetties, were built. Despite a growth of interest in the waterfront archaeology of some of our more important Roman and medieval towns, very little attention has been paid to the multitude of smaller landing places. Redevelopment of harbour sites and other development and natural pressures along the coast are subjecting these important locations to unprecedented threats, yet few surveys of such sites have been undertaken. One of the most important revelations of recent research has been the extent of industrial activity along the coast. Fishing and salt production are among the better documented activities, but even here our knowledge is patchy. Many forms of fishing will leave little archaeological trace, and one of the surprises of recent survey has been the extent of past investment in facilities for procuring fish and shellfish. Elaborate wooden fish weirs, often of considerable extent and responsive to aerial photography in shallow water, have been identified in areas such as Essex and the Severn estuary. The production of salt, especially in the late Iron Age and early Roman periods, has been recognized for some time, especially in the Thames estuary and around the Solent and Poole Harbour, but the reasons for the decline of that industry and the nature of later coastal salt working are much less well understood. Other industries were also located along the coast, either because the raw materials outcropped there or for ease of working and transport; mineral resources such as sand, gravel, stone, coal, ironstone, and alum were all exploited. These industries are poorly documented, but their remains are sometimes extensive and striking. Some appreciation of the variety and importance of the archaeological remains preserved in the coastal zone, albeit only in preliminary form, can thus be gained from recent work, but the complexity of the problem of managing that resource is also being realised.
The problem arises not only from the scale and variety of the archaeological remains, but also from two other sources: the very varied natural and human threats to the resource, and the complex web of organisations with authority over, or interests in, the coastal zone. Human threats include the redevelopment of historic towns and old dockland areas, and the increased importance of the coast for the leisure and tourism industries, resulting in pressure for the increased provision of facilities such as marinas. The larger size of ferries has also caused an increase in the damage caused by their wash to fragile deposits in the intertidal zone. The most significant natural threat is the predicted rise in sea level over the next century especially in the south and east of England. Its impact on archaeology is not easy to predict, and though it is likely to be highly localized, it will be at a scale much larger than that of most archaeological sites. Thus protecting one site may simply result in transposing the threat to a point further along the coast. The management of the archaeological remains will have to be considered in a much longer time scale and a much wider geographical scale than is common in the case of dry land sites, and this will pose a serious challenge for archaeologists.
England lost much of its land after the Ice Age due to the rising sea level.
entailment
id_6003
Premise: same 'Coastal Archaeology of Britain' passage as id_6002.
Coastal archaeological evidence may be well-protected by sea water.
entailment
id_6004
Premise: same 'Coastal Archaeology of Britain' passage as id_6002.
The design of boats used by pre-modern people was very simple.
contradiction
id_6005
Premise: same 'Coastal Archaeology of Britain' passage as id_6002.
Similar boats were also discovered in many other European countries.
neutral
id_6006
Premise: same 'Coastal Archaeology of Britain' passage as id_6002.
There are few documents relating to mineral exploitation.
entailment
id_6007
Premise: same 'Coastal Archaeology of Britain' passage as id_6002.
Large passenger boats are causing increasing damage to the seashore.
entailment
id_6008
Premise: same 'Coastal Archaeology of Britain' passage as id_6002.
The coastline of England has changed periodically.
contradiction
id_6009
The Company expects the new factory, its first in Asia, to begin production early next year, and aims to build 18,000 tractors during its first year of operation. Full capacity will be achieved about five years later, by which time annual output will be about 40,000 tractors, making it the company's largest producer worldwide. The move to open such a large production site stemmed from the availability of labour within the region, low production costs, positive inducements by the government to encourage foreign investment, good communication and transport links, and increasing demand for tractors locally.
The number of tractors required locally is on the decrease.
contradiction
id_6010
The Company expects the new factory, its first in Asia, to begin production early next year, and aims to build 18,000 tractors during its first year of operation. Full capacity will be achieved about five years later, by which time annual output will be about 40,000 tractors, making it the company's largest producer worldwide. The move to open such a large production site stemmed from the availability of labour within the region, low production costs, positive inducements by the government to encourage foreign investment, good communication and transport links, and increasing demand for tractors locally.
The company will sell most of the tractors it makes in its new plant locally.
neutral
id_6011
The Company expects the new factory, its first in Asia, to begin production early next year, and aims to build 18,000 tractors during its first year of operation. Full capacity will be achieved about five years later, by which time annual output will be about 40,000 tractors, making it the company's largest producer worldwide. The move to open such a large production site stemmed from the availability of labour within the region, low production costs, positive inducements by the government to encourage foreign investment, good communication and transport links, and increasing demand for tractors locally.
Communication links were not an important factor in deciding where to locate the new factory.
contradiction
id_6012
The Company expects the new factory, its first in Asia, to begin production early next year, and aims to build 18,000 tractors during its first year of operation. Full capacity will be achieved about five years later, by which time annual output will be about 40,000 tractors, making it the company's largest producer worldwide. The move to open such a large production site stemmed from the availability of labour within the region, low production costs, positive inducements by the government to encourage foreign investment, good communication and transport links, and increasing demand for tractors locally.
Full production capacity will not be achieved until after the first year of operation.
entailment
id_6013
The Company expects the new factory, its first in Asia, to begin production early next year, and aims to build 18,000 tractors during its first year of operation. Full capacity will be achieved about five years later, by which time annual output will be about 40,000 tractors, making it the company's largest producer worldwide. The move to open such a large production site stemmed from the availability of labour within the region, low production costs, positive inducements by the government to encourage foreign investment, good communication and transport links, and increasing demand for tractors locally.
Currently the company has no production facilities within Asia.
entailment
id_6014
The Compton School is an above-average-size secondary school for pupils aged 11 to 16, situated on the outskirts of Brampton, a town in the West Midlands. Unemployment in the area has risen following a decline in motor manufacturing. In a letter to the local newspaper, an angry parent was critical of the quality of education being provided by the school. The parent alleged his son was unable to learn because lessons were disrupted by poor behaviour, and argued that this was the reason why the school's test and examination results are below national averages and well below those of most other schools in the area. It is also known that: Over the last six years the school has had three different headteachers. Two years ago a large part of the school was burnt down, and many lessons take place in temporary classrooms while rebuilding work is going on. There are 1,200 pupils in the school, of whom 58 per cent are boys and 42 per cent are girls. The staff are coping well with the disruption being caused by the rebuilding work. The school was last inspected four years ago and judged to be providing a satisfactory quality of education at that time.
Employment in the area is rising.
contradiction
id_6015
The Compton School is an above-average-size secondary school for pupils aged 11 to 16, situated on the outskirts of Brampton, a town in the West Midlands. Unemployment in the area has risen following a decline in motor manufacturing. In a letter to the local newspaper, an angry parent was critical of the quality of education being provided by the school. The parent alleged his son was unable to learn because lessons were disrupted by poor behaviour, and argued that this was the reason why the school's test and examination results are below national averages and well below those of most other schools in the area. It is also known that: Over the last six years the school has had three different headteachers. Two years ago a large part of the school was burnt down, and many lessons take place in temporary classrooms while rebuilding work is going on. There are 1,200 pupils in the school, of whom 58 per cent are boys and 42 per cent are girls. The staff are coping well with the disruption being caused by the rebuilding work. The school was last inspected four years ago and judged to be providing a satisfactory quality of education at that time.
Lessons are disrupted by poor behaviour.
neutral
id_6016
The Compton School is an above-average-size secondary school for pupils aged 11 to 16, situated on the outskirts of Brampton, a town in the West Midlands. Unemployment in the area has risen following a decline in motor manufacturing. In a letter to the local newspaper, an angry parent was critical of the quality of education being provided by the school. The parent alleged his son was unable to learn because lessons were disrupted by poor behaviour, and argued that this was the reason why the school's test and examination results are below national averages and well below those of most other schools in the area. It is also known that: Over the last six years the school has had three different headteachers. Two years ago a large part of the school was burnt down, and many lessons take place in temporary classrooms while rebuilding work is going on. There are 1,200 pupils in the school, of whom 58 per cent are boys and 42 per cent are girls. The staff are coping well with the disruption being caused by the rebuilding work. The school was last inspected four years ago and judged to be providing a satisfactory quality of education at that time.
The school is about the same size as most other schools.
contradiction
id_6017
The Compton School is an above-average-size secondary school for pupils aged 11 to 16, situated on the outskirts of Brampton, a town in the West Midlands. Unemployment in the area has risen following a decline in motor manufacturing. In a letter to the local newspaper, an angry parent was critical of the quality of education being provided by the school. The parent alleged his son was unable to learn because lessons were disrupted by poor behaviour, and argued that this was the reason why the school's test and examination results are below national averages and well below those of most other schools in the area. It is also known that: Over the last six years the school has had three different headteachers. Two years ago a large part of the school was burnt down, and many lessons take place in temporary classrooms while rebuilding work is going on. There are 1,200 pupils in the school, of whom 58 per cent are boys and 42 per cent are girls. The staff are coping well with the disruption being caused by the rebuilding work. The school was last inspected four years ago and judged to be providing a satisfactory quality of education at that time.
The present headteacher was the headteacher at the time when the school was last inspected.
neutral
id_6018
The Compton School is an above-average-size secondary school for pupils aged 11 to 16, situated on the outskirts of Brampton, a town in the West Midlands. Unemployment in the area has risen following a decline in motor manufacturing. In a letter to the local newspaper, an angry parent was critical of the quality of education being provided by the school. The parent alleged his son was unable to learn because lessons were disrupted by poor behaviour, and argued that this was the reason why the school's test and examination results are below national averages and well below those of most other schools in the area. It is also known that: Over the last six years the school has had three different headteachers. Two years ago a large part of the school was burnt down, and many lessons take place in temporary classrooms while rebuilding work is going on. There are 1,200 pupils in the school, of whom 58 per cent are boys and 42 per cent are girls. The staff are coping well with the disruption being caused by the rebuilding work. The school was last inspected four years ago and judged to be providing a satisfactory quality of education at that time.
Test and examination results are below average because of the disruption caused by the rebuilding work.
neutral
id_6019
The Concept of Childhood in Western Countries The history of childhood has been a heated topic in social history since the highly influential book Centuries of Childhood, written by French historian Philippe Aries, emerged in 1960. He claimed that childhood is a concept created by modern society. Whether childhood is itself a recent invention has been one of the most intensely debated issues in the history of childhood. Historian Philippe Aries asserted that children were regarded as miniature adults, with all the intellect and personality that this implies, in Western Europe during the Middle Ages (up to about the end of the 15th century). After scrutinising medieval pictures and diaries, he concluded that there was no distinction between children and adults for they shared similar leisure activities and work; However, this does not mean children were neglected, forsaken or despised, he argued. The idea of childhood corresponds to awareness about the peculiar nature of childhood, which distinguishes the child from adult, even the young adult. Therefore, the concept of childhood is not to be confused with affection for children. Traditionally, children played a functional role in contributing to the family income in the history. Under this circumstance, children were considered to be useful. Back in the Middle Ages, children of 5 or 6 years old did necessary chores for their parents. During the 16th century, children of 9 or 10 years old were often encouraged or even forced to leave their family to work as servants for wealthier families or apprentices for a trade. In the 18th and 19th centuries, industrialisation created a new demand for child labour; thus many children were forced to work for a long time in mines, workshops and factories. The issue of whether long hours of labouring would interfere with childrens growing bodies began to perplex social reformers. Some of them started to realise the potential of systematic studies to monitor how far these early deprivations might be influencing childrens development. The concerns of reformers gradually had some impact upon the working condition of children. For example, in Britain, the Factory Act of 1833 signified the emergence of legal protection of children from exploitation and was also associated with the rise of schools for factory children. Due partly to factory reform, the worst forms of child exploitation were eliminated gradually. The influence of trade unions and economic changes also contributed to the evolution by leaving some forms of child labour redundant during the 19th century. Initiating children into work as useful children was no longer a priority, and childhood was deemed to be a time for play and education for all children instead of a privileged minority. Childhood was increasingly understood as a more extended phase of dependency, development and learning with the delay of the age for starting full-time work- Even so, work continued to play a significant, if less essential, role in childrens lives in the later 19th and 20th centuries. Finally, the useful child has become a controversial concept during the first decade of the 21st century, especially in the context of global concern about large numbers of children engaged in child labour. The half-time schools established upon the Factory Act of 1833 allowed children to work and attend school. However, a significant proportion of children never attended school in the 1840s, and even if they did, they dropped out by the age of 10 or 11. 
By the end of the 19th century in Britain, the situation changed dramatically, and schools became the core to the concept of a normal childhood. It is no longer a privilege for children to attend school and all children are expected to spend a significant part of their day in a classroom. Once in school, childrens lives could be separated from domestic life and the adult world of work. In this way, school turns into an institution dedicated to shaping the minds, behaviour and morals of the young. Besides, education dominated the management of childrens waking hours through the hours spent in the classroom, homework (the growth of after school activities), and the importance attached to parental involvement. Industrialisation, urbanisation and mass schooling pose new challenges for those who are responsible for protecting childrens welfare, as well as promoting their learning. An increasing number of children are being treated as a group with unique needs, and are organised into groups in the light of their age. For instance, teachers need to know some information about what to expect of children in their classrooms, what kinds of instruction are appropriate for different age groups, and what is the best way to assess childrens progress. Also, they want tools enabling them to sort and select children according to their abilities and potential.
In the 20th century, almost all children needed to attend school full-time.
neutral
id_6020
The Concept of Childhood in Western Countries The history of childhood has been a heated topic in social history since the highly influential book Centuries of Childhood, written by French historian Philippe Aries, emerged in 1960. He claimed that childhood is a concept created by modern society. Whether childhood is itself a recent invention has been one of the most intensely debated issues in the history of childhood. Historian Philippe Aries asserted that children were regarded as miniature adults, with all the intellect and personality that this implies, in Western Europe during the Middle Ages (up to about the end of the 15th century). After scrutinising medieval pictures and diaries, he concluded that there was no distinction between children and adults for they shared similar leisure activities and work; However, this does not mean children were neglected, forsaken or despised, he argued. The idea of childhood corresponds to awareness about the peculiar nature of childhood, which distinguishes the child from adult, even the young adult. Therefore, the concept of childhood is not to be confused with affection for children. Traditionally, children played a functional role in contributing to the family income in the history. Under this circumstance, children were considered to be useful. Back in the Middle Ages, children of 5 or 6 years old did necessary chores for their parents. During the 16th century, children of 9 or 10 years old were often encouraged or even forced to leave their family to work as servants for wealthier families or apprentices for a trade. In the 18th and 19th centuries, industrialisation created a new demand for child labour; thus many children were forced to work for a long time in mines, workshops and factories. The issue of whether long hours of labouring would interfere with childrens growing bodies began to perplex social reformers. Some of them started to realise the potential of systematic studies to monitor how far these early deprivations might be influencing childrens development. The concerns of reformers gradually had some impact upon the working condition of children. For example, in Britain, the Factory Act of 1833 signified the emergence of legal protection of children from exploitation and was also associated with the rise of schools for factory children. Due partly to factory reform, the worst forms of child exploitation were eliminated gradually. The influence of trade unions and economic changes also contributed to the evolution by leaving some forms of child labour redundant during the 19th century. Initiating children into work as useful children was no longer a priority, and childhood was deemed to be a time for play and education for all children instead of a privileged minority. Childhood was increasingly understood as a more extended phase of dependency, development and learning with the delay of the age for starting full-time work- Even so, work continued to play a significant, if less essential, role in childrens lives in the later 19th and 20th centuries. Finally, the useful child has become a controversial concept during the first decade of the 21st century, especially in the context of global concern about large numbers of children engaged in child labour. The half-time schools established upon the Factory Act of 1833 allowed children to work and attend school. However, a significant proportion of children never attended school in the 1840s, and even if they did, they dropped out by the age of 10 or 11. 
By the end of the 19th century in Britain, the situation changed dramatically, and schools became the core to the concept of a normal childhood. It is no longer a privilege for children to attend school and all children are expected to spend a significant part of their day in a classroom. Once in school, childrens lives could be separated from domestic life and the adult world of work. In this way, school turns into an institution dedicated to shaping the minds, behaviour and morals of the young. Besides, education dominated the management of childrens waking hours through the hours spent in the classroom, homework (the growth of after school activities), and the importance attached to parental involvement. Industrialisation, urbanisation and mass schooling pose new challenges for those who are responsible for protecting childrens welfare, as well as promoting their learning. An increasing number of children are being treated as a group with unique needs, and are organised into groups in the light of their age. For instance, teachers need to know some information about what to expect of children in their classrooms, what kinds of instruction are appropriate for different age groups, and what is the best way to assess childrens progress. Also, they want tools enabling them to sort and select children according to their abilities and potential.
Working children during the Middle Ages were generally unloved.
contradiction
id_6021
The Concept of Childhood in Western Countries The history of childhood has been a heated topic in social history since the highly influential book Centuries of Childhood, written by French historian Philippe Aries, emerged in 1960. He claimed that childhood is a concept created by modern society. Whether childhood is itself a recent invention has been one of the most intensely debated issues in the history of childhood. Historian Philippe Aries asserted that children were regarded as miniature adults, with all the intellect and personality that this implies, in Western Europe during the Middle Ages (up to about the end of the 15th century). After scrutinising medieval pictures and diaries, he concluded that there was no distinction between children and adults for they shared similar leisure activities and work; However, this does not mean children were neglected, forsaken or despised, he argued. The idea of childhood corresponds to awareness about the peculiar nature of childhood, which distinguishes the child from adult, even the young adult. Therefore, the concept of childhood is not to be confused with affection for children. Traditionally, children played a functional role in contributing to the family income in the history. Under this circumstance, children were considered to be useful. Back in the Middle Ages, children of 5 or 6 years old did necessary chores for their parents. During the 16th century, children of 9 or 10 years old were often encouraged or even forced to leave their family to work as servants for wealthier families or apprentices for a trade. In the 18th and 19th centuries, industrialisation created a new demand for child labour; thus many children were forced to work for a long time in mines, workshops and factories. The issue of whether long hours of labouring would interfere with childrens growing bodies began to perplex social reformers. Some of them started to realise the potential of systematic studies to monitor how far these early deprivations might be influencing childrens development. The concerns of reformers gradually had some impact upon the working condition of children. For example, in Britain, the Factory Act of 1833 signified the emergence of legal protection of children from exploitation and was also associated with the rise of schools for factory children. Due partly to factory reform, the worst forms of child exploitation were eliminated gradually. The influence of trade unions and economic changes also contributed to the evolution by leaving some forms of child labour redundant during the 19th century. Initiating children into work as useful children was no longer a priority, and childhood was deemed to be a time for play and education for all children instead of a privileged minority. Childhood was increasingly understood as a more extended phase of dependency, development and learning with the delay of the age for starting full-time work- Even so, work continued to play a significant, if less essential, role in childrens lives in the later 19th and 20th centuries. Finally, the useful child has become a controversial concept during the first decade of the 21st century, especially in the context of global concern about large numbers of children engaged in child labour. The half-time schools established upon the Factory Act of 1833 allowed children to work and attend school. However, a significant proportion of children never attended school in the 1840s, and even if they did, they dropped out by the age of 10 or 11. 
By the end of the 19th century in Britain, the situation changed dramatically, and schools became the core to the concept of a normal childhood. It is no longer a privilege for children to attend school and all children are expected to spend a significant part of their day in a classroom. Once in school, childrens lives could be separated from domestic life and the adult world of work. In this way, school turns into an institution dedicated to shaping the minds, behaviour and morals of the young. Besides, education dominated the management of childrens waking hours through the hours spent in the classroom, homework (the growth of after school activities), and the importance attached to parental involvement. Industrialisation, urbanisation and mass schooling pose new challenges for those who are responsible for protecting childrens welfare, as well as promoting their learning. An increasing number of children are being treated as a group with unique needs, and are organised into groups in the light of their age. For instance, teachers need to know some information about what to expect of children in their classrooms, what kinds of instruction are appropriate for different age groups, and what is the best way to assess childrens progress. Also, they want tools enabling them to sort and select children according to their abilities and potential.
The rise of trade unions contributed significantly to the protection of children from exploitation in the 19th century.
neutral
id_6022
The Concept of Childhood in Western Countries The history of childhood has been a heated topic in social history since the highly influential book Centuries of Childhood, written by French historian Philippe Aries, emerged in 1960. He claimed that childhood is a concept created by modern society. Whether childhood is itself a recent invention has been one of the most intensely debated issues in the history of childhood. Historian Philippe Aries asserted that children were regarded as miniature adults, with all the intellect and personality that this implies, in Western Europe during the Middle Ages (up to about the end of the 15th century). After scrutinising medieval pictures and diaries, he concluded that there was no distinction between children and adults for they shared similar leisure activities and work; However, this does not mean children were neglected, forsaken or despised, he argued. The idea of childhood corresponds to awareness about the peculiar nature of childhood, which distinguishes the child from adult, even the young adult. Therefore, the concept of childhood is not to be confused with affection for children. Traditionally, children played a functional role in contributing to the family income in the history. Under this circumstance, children were considered to be useful. Back in the Middle Ages, children of 5 or 6 years old did necessary chores for their parents. During the 16th century, children of 9 or 10 years old were often encouraged or even forced to leave their family to work as servants for wealthier families or apprentices for a trade. In the 18th and 19th centuries, industrialisation created a new demand for child labour; thus many children were forced to work for a long time in mines, workshops and factories. The issue of whether long hours of labouring would interfere with childrens growing bodies began to perplex social reformers. Some of them started to realise the potential of systematic studies to monitor how far these early deprivations might be influencing childrens development. The concerns of reformers gradually had some impact upon the working condition of children. For example, in Britain, the Factory Act of 1833 signified the emergence of legal protection of children from exploitation and was also associated with the rise of schools for factory children. Due partly to factory reform, the worst forms of child exploitation were eliminated gradually. The influence of trade unions and economic changes also contributed to the evolution by leaving some forms of child labour redundant during the 19th century. Initiating children into work as useful children was no longer a priority, and childhood was deemed to be a time for play and education for all children instead of a privileged minority. Childhood was increasingly understood as a more extended phase of dependency, development and learning with the delay of the age for starting full-time work- Even so, work continued to play a significant, if less essential, role in childrens lives in the later 19th and 20th centuries. Finally, the useful child has become a controversial concept during the first decade of the 21st century, especially in the context of global concern about large numbers of children engaged in child labour. The half-time schools established upon the Factory Act of 1833 allowed children to work and attend school. However, a significant proportion of children never attended school in the 1840s, and even if they did, they dropped out by the age of 10 or 11. 
By the end of the 19th century in Britain, the situation changed dramatically, and schools became the core to the concept of a normal childhood. It is no longer a privilege for children to attend school and all children are expected to spend a significant part of their day in a classroom. Once in school, childrens lives could be separated from domestic life and the adult world of work. In this way, school turns into an institution dedicated to shaping the minds, behaviour and morals of the young. Besides, education dominated the management of childrens waking hours through the hours spent in the classroom, homework (the growth of after school activities), and the importance attached to parental involvement. Industrialisation, urbanisation and mass schooling pose new challenges for those who are responsible for protecting childrens welfare, as well as promoting their learning. An increasing number of children are being treated as a group with unique needs, and are organised into groups in the light of their age. For instance, teachers need to know some information about what to expect of children in their classrooms, what kinds of instruction are appropriate for different age groups, and what is the best way to assess childrens progress. Also, they want tools enabling them to sort and select children according to their abilities and potential.
Through the aid of half-time schools, most children went to school in the mid-19th century.
contradiction
id_6023
The Concept of Childhood in Western Countries The history of childhood has been a heated topic in social history since the highly influential book Centuries of Childhood, written by French historian Philippe Aries, emerged in 1960. He claimed that childhood is a concept created by modern society. Whether childhood is itself a recent invention has been one of the most intensely debated issues in the history of childhood. Historian Philippe Aries asserted that children were regarded as miniature adults, with all the intellect and personality that this implies, in Western Europe during the Middle Ages (up to about the end of the 15th century). After scrutinising medieval pictures and diaries, he concluded that there was no distinction between children and adults for they shared similar leisure activities and work; However, this does not mean children were neglected, forsaken or despised, he argued. The idea of childhood corresponds to awareness about the peculiar nature of childhood, which distinguishes the child from adult, even the young adult. Therefore, the concept of childhood is not to be confused with affection for children. Traditionally, children played a functional role in contributing to the family income in the history. Under this circumstance, children were considered to be useful. Back in the Middle Ages, children of 5 or 6 years old did necessary chores for their parents. During the 16th century, children of 9 or 10 years old were often encouraged or even forced to leave their family to work as servants for wealthier families or apprentices for a trade. In the 18th and 19th centuries, industrialisation created a new demand for child labour; thus many children were forced to work for a long time in mines, workshops and factories. The issue of whether long hours of labouring would interfere with childrens growing bodies began to perplex social reformers. Some of them started to realise the potential of systematic studies to monitor how far these early deprivations might be influencing childrens development. The concerns of reformers gradually had some impact upon the working condition of children. For example, in Britain, the Factory Act of 1833 signified the emergence of legal protection of children from exploitation and was also associated with the rise of schools for factory children. Due partly to factory reform, the worst forms of child exploitation were eliminated gradually. The influence of trade unions and economic changes also contributed to the evolution by leaving some forms of child labour redundant during the 19th century. Initiating children into work as useful children was no longer a priority, and childhood was deemed to be a time for play and education for all children instead of a privileged minority. Childhood was increasingly understood as a more extended phase of dependency, development and learning with the delay of the age for starting full-time work- Even so, work continued to play a significant, if less essential, role in childrens lives in the later 19th and 20th centuries. Finally, the useful child has become a controversial concept during the first decade of the 21st century, especially in the context of global concern about large numbers of children engaged in child labour. The half-time schools established upon the Factory Act of 1833 allowed children to work and attend school. However, a significant proportion of children never attended school in the 1840s, and even if they did, they dropped out by the age of 10 or 11. 
By the end of the 19th century in Britain, the situation changed dramatically, and schools became the core to the concept of a normal childhood. It is no longer a privilege for children to attend school and all children are expected to spend a significant part of their day in a classroom. Once in school, childrens lives could be separated from domestic life and the adult world of work. In this way, school turns into an institution dedicated to shaping the minds, behaviour and morals of the young. Besides, education dominated the management of childrens waking hours through the hours spent in the classroom, homework (the growth of after school activities), and the importance attached to parental involvement. Industrialisation, urbanisation and mass schooling pose new challenges for those who are responsible for protecting childrens welfare, as well as promoting their learning. An increasing number of children are being treated as a group with unique needs, and are organised into groups in the light of their age. For instance, teachers need to know some information about what to expect of children in their classrooms, what kinds of instruction are appropriate for different age groups, and what is the best way to assess childrens progress. Also, they want tools enabling them to sort and select children according to their abilities and potential.
Aries pointed out that children did different types of work to adults during the Middle Ages.
contradiction
id_6024
The Concept of Childhood in Western Countries The history of childhood has been a heated topic in social history since the highly influential book Centuries of Childhood, written by French historian Philippe Aries, emerged in 1960. He claimed that childhood is a concept created by modern society. Whether childhood is itself a recent invention has been one of the most intensely debated issues in the history of childhood. Historian Philippe Aries asserted that children were regarded as miniature adults, with all the intellect and personality that this implies, in Western Europe during the Middle Ages (up to about the end of the 15th century). After scrutinising medieval pictures and diaries, he concluded that there was no distinction between children and adults for they shared similar leisure activities and work; However, this does not mean children were neglected, forsaken or despised, he argued. The idea of childhood corresponds to awareness about the peculiar nature of childhood, which distinguishes the child from adult, even the young adult. Therefore, the concept of childhood is not to be confused with affection for children. Traditionally, children played a functional role in contributing to the family income in the history. Under this circumstance, children were considered to be useful. Back in the Middle Ages, children of 5 or 6 years old did necessary chores for their parents. During the 16th century, children of 9 or 10 years old were often encouraged or even forced to leave their family to work as servants for wealthier families or apprentices for a trade. In the 18th and 19th centuries, industrialisation created a new demand for child labour; thus many children were forced to work for a long time in mines, workshops and factories. The issue of whether long hours of labouring would interfere with childrens growing bodies began to perplex social reformers. Some of them started to realise the potential of systematic studies to monitor how far these early deprivations might be influencing childrens development. The concerns of reformers gradually had some impact upon the working condition of children. For example, in Britain, the Factory Act of 1833 signified the emergence of legal protection of children from exploitation and was also associated with the rise of schools for factory children. Due partly to factory reform, the worst forms of child exploitation were eliminated gradually. The influence of trade unions and economic changes also contributed to the evolution by leaving some forms of child labour redundant during the 19th century. Initiating children into work as useful children was no longer a priority, and childhood was deemed to be a time for play and education for all children instead of a privileged minority. Childhood was increasingly understood as a more extended phase of dependency, development and learning with the delay of the age for starting full-time work- Even so, work continued to play a significant, if less essential, role in childrens lives in the later 19th and 20th centuries. Finally, the useful child has become a controversial concept during the first decade of the 21st century, especially in the context of global concern about large numbers of children engaged in child labour. The half-time schools established upon the Factory Act of 1833 allowed children to work and attend school. However, a significant proportion of children never attended school in the 1840s, and even if they did, they dropped out by the age of 10 or 11. 
By the end of the 19th century in Britain, the situation changed dramatically, and schools became the core to the concept of a normal childhood. It is no longer a privilege for children to attend school and all children are expected to spend a significant part of their day in a classroom. Once in school, childrens lives could be separated from domestic life and the adult world of work. In this way, school turns into an institution dedicated to shaping the minds, behaviour and morals of the young. Besides, education dominated the management of childrens waking hours through the hours spent in the classroom, homework (the growth of after school activities), and the importance attached to parental involvement. Industrialisation, urbanisation and mass schooling pose new challenges for those who are responsible for protecting childrens welfare, as well as promoting their learning. An increasing number of children are being treated as a group with unique needs, and are organised into groups in the light of their age. For instance, teachers need to know some information about what to expect of children in their classrooms, what kinds of instruction are appropriate for different age groups, and what is the best way to assess childrens progress. Also, they want tools enabling them to sort and select children according to their abilities and potential.
Nowadays, children's needs are increasingly differentiated and categorised based on how old they are.
entailment
id_6025
The Concept of Childhood in Western Countries The history of childhood has been a heated topic in social history since the highly influential book Centuries of Childhood, written by French historian Philippe Aries, emerged in 1960. He claimed that childhood is a concept created by modern society. Whether childhood is itself a recent invention has been one of the most intensely debated issues in the history of childhood. Historian Philippe Aries asserted that children were regarded as miniature adults, with all the intellect and personality that this implies, in Western Europe during the Middle Ages (up to about the end of the 15th century). After scrutinising medieval pictures and diaries, he concluded that there was no distinction between children and adults for they shared similar leisure activities and work; However, this does not mean children were neglected, forsaken or despised, he argued. The idea of childhood corresponds to awareness about the peculiar nature of childhood, which distinguishes the child from adult, even the young adult. Therefore, the concept of childhood is not to be confused with affection for children. Traditionally, children played a functional role in contributing to the family income in the history. Under this circumstance, children were considered to be useful. Back in the Middle Ages, children of 5 or 6 years old did necessary chores for their parents. During the 16th century, children of 9 or 10 years old were often encouraged or even forced to leave their family to work as servants for wealthier families or apprentices for a trade. In the 18th and 19th centuries, industrialisation created a new demand for child labour; thus many children were forced to work for a long time in mines, workshops and factories. The issue of whether long hours of labouring would interfere with childrens growing bodies began to perplex social reformers. Some of them started to realise the potential of systematic studies to monitor how far these early deprivations might be influencing childrens development. The concerns of reformers gradually had some impact upon the working condition of children. For example, in Britain, the Factory Act of 1833 signified the emergence of legal protection of children from exploitation and was also associated with the rise of schools for factory children. Due partly to factory reform, the worst forms of child exploitation were eliminated gradually. The influence of trade unions and economic changes also contributed to the evolution by leaving some forms of child labour redundant during the 19th century. Initiating children into work as useful children was no longer a priority, and childhood was deemed to be a time for play and education for all children instead of a privileged minority. Childhood was increasingly understood as a more extended phase of dependency, development and learning with the delay of the age for starting full-time work- Even so, work continued to play a significant, if less essential, role in childrens lives in the later 19th and 20th centuries. Finally, the useful child has become a controversial concept during the first decade of the 21st century, especially in the context of global concern about large numbers of children engaged in child labour. The half-time schools established upon the Factory Act of 1833 allowed children to work and attend school. However, a significant proportion of children never attended school in the 1840s, and even if they did, they dropped out by the age of 10 or 11. 
By the end of the 19th century in Britain, the situation changed dramatically, and schools became the core to the concept of a normal childhood. It is no longer a privilege for children to attend school and all children are expected to spend a significant part of their day in a classroom. Once in school, childrens lives could be separated from domestic life and the adult world of work. In this way, school turns into an institution dedicated to shaping the minds, behaviour and morals of the young. Besides, education dominated the management of childrens waking hours through the hours spent in the classroom, homework (the growth of after school activities), and the importance attached to parental involvement. Industrialisation, urbanisation and mass schooling pose new challenges for those who are responsible for protecting childrens welfare, as well as promoting their learning. An increasing number of children are being treated as a group with unique needs, and are organised into groups in the light of their age. For instance, teachers need to know some information about what to expect of children in their classrooms, what kinds of instruction are appropriate for different age groups, and what is the best way to assess childrens progress. Also, they want tools enabling them to sort and select children according to their abilities and potential.
Some scientists thought that overwork might damage the health of young children.
entailment
id_6026
The Conquest of Malaria in Italy, 1900-1962 Mal-aria. Bad air. Even the word is Italian, and this horrible disease marked the life of those in the peninsula for thousands of years. Giuseppe Garibaldi's wife died of the disease, as did the country's first prime minister, Cavour, in 1861. Yet by 1962, Italy was officially declared malaria-free, and it has remained so ever since. Frank Snowden's study of this success story is a remarkable piece of historical work. Original, crystal-clear, analytical and passionate, Snowden (who has previously written about cholera) takes us to areas historians have rarely visited before. Everybody now knows that malaria is carried by mosquitoes. Malaria has always been the subject of research for medical practitioners from time immemorial. However, many ancient texts, especially medical literature, mention various aspects of malaria and even its possible link with mosquitoes and insects. Early man, confronting the manifestations of malaria, attributed the fevers to supernatural influences: evil spirits, angered deities, or the black magic of sorcerers. But in the 19th century, most experts believed that the disease was produced by unclean air (miasma or poisoning of the air). Two Americans, Josiah Clark Nott and Lewis Daniel Beauperthy, echoed Crawford's ideas. Nott, in his essay Yellow Fever Contrasted with Bilious Fever, published in 1850, dismissed the miasma theory as worthless, arguing that microscopic insects somehow transmitted by mosquitoes caused both malaria and yellow fever. Others made a link between swamps, water and malaria, but did not make the future leap towards insects. The consequences of these theories were that little was done to combat the disease before the end of the century. Things became so bad that 11m Italians (from a total population of 25m) were permanently at risk. In malarial zones the life expectancy of land workers was a terrifying 22.5 years. Those who escaped death were weakened or suffered from splenomegaly (a painful enlargement of the spleen) and a lifeless stare. The economic impact of the disease was immense. Epidemics were blamed on southern Italians, given the widespread belief that malaria was hereditary. In the 1880s, such theories began to collapse as the dreaded mosquito was identified as the real culprit. Italian scientists, drawing on the pioneering work of French doctor Alphonse Laveran, were able to predict the cycles of fever, but it was in Rome that further key discoveries were made. Giovanni Battista Grassi, a naturalist, found that a particular type of mosquito was the carrier of malaria. By experimenting on healthy volunteers (mosquitoes were released into rooms where they drank the blood of the human guinea pigs), Grassi was able to make the direct link between the insects (all females of a certain kind) and the disease. Soon, doctors and scientists made another startling discovery: the mosquitoes themselves were also infected and not mere carriers. Every year, during the mosquito season, malarial blood was moved around the population by the insects. Definitive proof of these new theories was obtained after an extraordinary series of experiments in Italy, where healthy people were introduced into malarial zones but kept free of mosquito bites and remained well. The new Italian state had the necessary information to tackle the disease.
A complicated approach was adopted, which made use of quinine, a drug obtained from tree bark which had long been used to combat fever but was now seen as a crucial part of the war on malaria. Italy introduced a quinine law and a quinine tax in 1904, and the drug was administered to large numbers of rural workers. Despite its often terrible side-effects (the headaches produced were known as the quinine-buzz), the drug was successful in limiting the spread of the disease, and in breaking cycles of infection. In addition, Italy set up rural health centers and invested heavily in education programmes. Malaria, as Snowden shows, was not just a medical problem, but a social and regional issue, and could only be defeated through multilayered strategies. Politics was itself transformed by the anti-malarial campaigns. It was originally decided to give quinine to all those in certain regions, even healthy people; peasants were often suspicious of medicine being forced upon them. Doctors were sometimes met with hostility and refusal, and many were dubbed poisoners. Despite these problems, the strategy was hugely successful. Deaths from malaria fell by some 80% in the first decade of the 20th century and some areas escaped altogether from the scourge of the disease. Shamefully, the Italian malaria expert Alberto Missiroli had a role to play in the disaster: he did not distribute quinine, despite being well aware of the epidemic to come. Snowden claims that Missiroli was already preparing a new strategy with the support of the US Rockefeller Foundation, using a new pesticide, DDT. Missiroli allowed the epidemic to spread, in order to create the ideal conditions for a massive, and lucrative, human experiment. Fifty-five thousand cases of malaria were recorded in the province of Littoria alone in 1944. It is estimated that more than a third of those in the affected area contracted the disease. Thousands, nobody knows how many, died. With the war over, the US government and the Rockefeller Foundation were free to experiment. DDT was sprayed from the air and 3m Italians had their bodies covered with the chemical. The effects were dramatic, and nobody really cared about the toxic effects of the chemical. By 1962, malaria was more or less gone from the whole peninsula. The last cases were noted in a poor region of Sicily. One of the final victims to die of the disease in Italy was the popular cyclist, Fausto Coppi. He had contracted malaria in Africa in 1960, and the failure of doctors in the north of Italy to spot the disease was a sign of the times. A few decades earlier, they would have immediately noticed the tell-tale signs; it was later claimed that a small dose of quinine would have saved his life. As there are still more than 1m deaths every year from malaria worldwide, Snowden's book also has contemporary relevance. As Snowden writes: "In Italy malaria undermined agricultural productivity, decimated the army, destroyed communities and left families impoverished." The economic miracle of the 50s and 60s which made Italy into a modern industrial nation would not have been possible without the eradication of malaria. Moreover, this book convincingly argues that the disease was an integral part of the big picture of modern Italian history. This magnificent study, beautifully written and impeccably documented, deserves an audience beyond specialists in history, or in Italy. It also provides us with a message of hope for a world struggling with the great present-day medical emergency.
Quinine is an effective drug which has long been used to combat malaria.
contradiction
id_6027
The Conquest of Malaria in Italy, 1900-1962 Mal-aria. Bad air. Even the word is Italian, and this horrible disease marked the life of those in the peninsula for thousands of years. Giuseppe Garibaldi's wife died of the disease, as did the country's first prime minister, Cavour, in 1861. Yet by 1962, Italy was officially declared malaria-free, and it has remained so ever since. Frank Snowden's study of this success story is a remarkable piece of historical work. Original, crystal-clear, analytical and passionate, Snowden (who has previously written about cholera) takes us to areas historians have rarely visited before. Everybody now knows that malaria is carried by mosquitoes. Malaria has always been the subject of research for medical practitioners from time immemorial. However, many ancient texts, especially medical literature, mention various aspects of malaria and even its possible link with mosquitoes and insects. Early man, confronting the manifestations of malaria, attributed the fevers to supernatural influences: evil spirits, angered deities, or the black magic of sorcerers. But in the 19th century, most experts believed that the disease was produced by unclean air (miasma or poisoning of the air). Two Americans, Josiah Clark Nott and Lewis Daniel Beauperthy, echoed Crawford's ideas. Nott, in his essay Yellow Fever Contrasted with Bilious Fever, published in 1850, dismissed the miasma theory as worthless, arguing that microscopic insects somehow transmitted by mosquitoes caused both malaria and yellow fever. Others made a link between swamps, water and malaria, but did not make the future leap towards insects. The consequences of these theories were that little was done to combat the disease before the end of the century. Things became so bad that 11m Italians (from a total population of 25m) were permanently at risk. In malarial zones the life expectancy of land workers was a terrifying 22.5 years. Those who escaped death were weakened or suffered from splenomegaly (a painful enlargement of the spleen) and a lifeless stare. The economic impact of the disease was immense. Epidemics were blamed on southern Italians, given the widespread belief that malaria was hereditary. In the 1880s, such theories began to collapse as the dreaded mosquito was identified as the real culprit. Italian scientists, drawing on the pioneering work of French doctor Alphonse Laveran, were able to predict the cycles of fever, but it was in Rome that further key discoveries were made. Giovanni Battista Grassi, a naturalist, found that a particular type of mosquito was the carrier of malaria. By experimenting on healthy volunteers (mosquitoes were released into rooms where they drank the blood of the human guinea pigs), Grassi was able to make the direct link between the insects (all females of a certain kind) and the disease. Soon, doctors and scientists made another startling discovery: the mosquitoes themselves were also infected and not mere carriers. Every year, during the mosquito season, malarial blood was moved around the population by the insects. Definitive proof of these new theories was obtained after an extraordinary series of experiments in Italy, where healthy people were introduced into malarial zones but kept free of mosquito bites and remained well. The new Italian state had the necessary information to tackle the disease.
A complicated approach was adopted, which made use of quinine, a drug obtained from tree bark which had long been used to combat fever but was now seen as a crucial part of the war on malaria. Italy introduced a quinine law and a quinine tax in 1904, and the drug was administered to large numbers of rural workers. Despite its often terrible side-effects (the headaches produced were known as the quinine-buzz), the drug was successful in limiting the spread of the disease, and in breaking cycles of infection. In addition, Italy set up rural health centers and invested heavily in education programmes. Malaria, as Snowden shows, was not just a medical problem, but a social and regional issue, and could only be defeated through multilayered strategies. Politics was itself transformed by the anti-malarial campaigns. It was originally decided to give quinine to all those in certain regions, even healthy people; peasants were often suspicious of medicine being forced upon them. Doctors were sometimes met with hostility and refusal, and many were dubbed poisoners. Despite these problems, the strategy was hugely successful. Deaths from malaria fell by some 80% in the first decade of the 20th century and some areas escaped altogether from the scourge of the disease. Shamefully, the Italian malaria expert Alberto Missiroli had a role to play in the disaster: he did not distribute quinine, despite being well aware of the epidemic to come. Snowden claims that Missiroli was already preparing a new strategy with the support of the US Rockefeller Foundation, using a new pesticide, DDT. Missiroli allowed the epidemic to spread, in order to create the ideal conditions for a massive, and lucrative, human experiment. Fifty-five thousand cases of malaria were recorded in the province of Littoria alone in 1944. It is estimated that more than a third of those in the affected area contracted the disease. Thousands, nobody knows how many, died. With the war over, the US government and the Rockefeller Foundation were free to experiment. DDT was sprayed from the air and 3m Italians had their bodies covered with the chemical. The effects were dramatic, and nobody really cared about the toxic effects of the chemical. By 1962, malaria was more or less gone from the whole peninsula. The last cases were noted in a poor region of Sicily. One of the final victims to die of the disease in Italy was the popular cyclist, Fausto Coppi. He had contracted malaria in Africa in 1960, and the failure of doctors in the north of Italy to spot the disease was a sign of the times. A few decades earlier, they would have immediately noticed the tell-tale signs; it was later claimed that a small dose of quinine would have saved his life. As there are still more than 1m deaths every year from malaria worldwide, Snowden's book also has contemporary relevance. As Snowden writes: "In Italy malaria undermined agricultural productivity, decimated the army, destroyed communities and left families impoverished." The economic miracle of the 50s and 60s which made Italy into a modern industrial nation would not have been possible without the eradication of malaria. Moreover, this book convincingly argues that the disease was an integral part of the big picture of modern Italian history. This magnificent study, beautifully written and impeccably documented, deserves an audience beyond specialists in history, or in Italy. It also provides us with a message of hope for a world struggling with the great present-day medical emergency.
Healthy people could remain safe in the malaria-infected zone if they did not have mosquito bites.
entailment
id_6028
The Conquest of Malaria in Italy, 1900-1962. Mal-aria. Bad air. Even the word is Italian, and this horrible disease marked the lives of those on the peninsula for thousands of years. Giuseppe Garibaldis wife died of the disease, as did the countrys first prime minister, Cavour, in 1861. Yet by 1962, Italy was officially declared malaria-free, and it has remained so ever since. Frank Snowdens study of this success story is a remarkable piece of historical work. Original, crystal-clear, analytical and passionate, Snowden (who has previously written about cholera) takes us to areas historians have rarely visited before. Everybody now knows that malaria is carried by mosquitoes. Malaria has been the subject of research for medical practitioners from time immemorial. Indeed, many ancient texts, especially medical literature, mention various aspects of malaria and even its possible link with mosquitoes and insects. Early man, confronting the manifestations of malaria, attributed the fevers to supernatural influences: evil spirits, angered deities, or the black magic of sorcerers. But in the 19th century, most experts believed that the disease was produced by unclean air (miasma, or poisoning of the air). Two Americans, Josiah Clark Nott and Lewis Daniel Beauperthy, echoed Crawfords ideas. Nott, in his essay Yellow Fever Contrasted with Bilious Fever, published in 1850, dismissed the miasma theory as worthless, arguing that microscopic insects somehow transmitted by mosquitoes caused both malaria and yellow fever. Others made a link between swamps, water and malaria, but did not make the further leap towards insects. The consequence of these theories was that little was done to combat the disease before the end of the century. Things became so bad that 11m Italians (from a total population of 25m) were permanently at risk. In malarial zones the life expectancy of land workers was a terrifying 22.5 years. Those who escaped death were weakened or suffered from splenomegaly (a painful enlargement of the spleen) and a lifeless stare. The economic impact of the disease was immense. Epidemics were blamed on southern Italians, given the widespread belief that malaria was hereditary. In the 1880s, such theories began to collapse as the dreaded mosquito was identified as the real culprit. Italian scientists, drawing on the pioneering work of the French doctor Alphonse Laveran, were able to predict the cycles of fever, but it was in Rome that further key discoveries were made. Giovanni Battista Grassi, a naturalist, found that a particular type of mosquito was the carrier of malaria. By experimenting on healthy volunteers (mosquitoes were released into rooms where they drank the blood of the human guinea pigs), Grassi was able to make the direct link between the insects (all females of a certain kind) and the disease. Soon, doctors and scientists made another startling discovery: the mosquitoes themselves were also infected and not mere carriers. Every year, during the mosquito season, malarial blood was moved around the population by the insects. Definitive proof of these new theories was obtained after an extraordinary series of experiments in Italy, where healthy people were introduced into malarial zones but kept free of mosquito bites and remained well. The new Italian state had the necessary information to tackle the disease.
A complicated approach was adopted, which made use of quinine, a drug obtained from tree bark which had long been used to combat fever but was now seen as a crucial part of the war on malaria. Italy introduced a quinine law and a quinine tax in 1904, and the drug was administered to large numbers of rural workers. Despite its often terrible side-effects (the headaches produced were known as the quinine-buzz), the drug was successful in limiting the spread of the disease, and in breaking cycles of infection. In addition, Italy set up rural health centres and invested heavily in education programmes. Malaria, as Snowden shows, was not just a medical problem, but a social and regional issue, and could only be defeated through multilayered strategies. Politics was itself transformed by the anti-malarial campaigns. It was originally decided to give quinine to all those in certain regions, even healthy people; peasants were often suspicious of medicine being forced upon them. Doctors were sometimes met with hostility and refusal, and many were dubbed poisoners. Despite these problems, the strategy was hugely successful. Deaths from malaria fell by some 80% in the first decade of the 20th century, and some areas escaped altogether from the scourge of the disease. Shamefully, the Italian malaria expert Alberto Missiroli had a role to play in the disaster: he did not distribute quinine, despite being well aware of the epidemic to come. Snowden claims that Missiroli was already preparing a new strategy with the support of the US Rockefeller Foundation, using a new pesticide, DDT. Missiroli allowed the epidemic to spread, in order to create the ideal conditions for a massive, and lucrative, human experiment. Fifty-five thousand cases of malaria were recorded in the province of Littoria alone in 1944. It is estimated that more than a third of those in the affected area contracted the disease. Thousands, nobody knows how many, died. With the war over, the US government and the Rockefeller Foundation were free to experiment. DDT was sprayed from the air and 3m Italians had their bodies covered with the chemical. The effects were dramatic, and nobody really cared about its toxic effects. By 1962, malaria was more or less gone from the whole peninsula. The last cases were noted in a poor region of Sicily. One of the final victims to die of the disease in Italy was the popular cyclist, Fausto Coppi. He had contracted malaria in Africa in 1960, and the failure of doctors in the north of Italy to spot the disease was a sign of the times. A few decades earlier, they would have immediately noticed the tell-tale signs; it was later claimed that a small dose of quinine would have saved his life. As there are still more than 1m deaths every year from malaria worldwide, Snowdens book also has contemporary relevance. As Snowden writes: In Italy malaria undermined agricultural productivity, decimated the army, destroyed communities and left families impoverished. The economic miracle of the 50s and 60s, which made Italy into a modern industrial nation, would not have been possible without the eradication of malaria. Moreover, this book convincingly argues that the disease was an integral part of the big picture of modern Italian history. This magnificent study, beautifully written and impeccably documented, deserves an audience beyond specialists in history, or in Italy. It also provides us with a message of hope for a world struggling with the great present-day medical emergency.
The volunteers in Grassis experiments were from all parts of Italy.
neutral
id_6029
The Conquest of Malaria in Italy, 1900-1962. Mal-aria. Bad air. Even the word is Italian, and this horrible disease marked the lives of those on the peninsula for thousands of years. Giuseppe Garibaldis wife died of the disease, as did the countrys first prime minister, Cavour, in 1861. Yet by 1962, Italy was officially declared malaria-free, and it has remained so ever since. Frank Snowdens study of this success story is a remarkable piece of historical work. Original, crystal-clear, analytical and passionate, Snowden (who has previously written about cholera) takes us to areas historians have rarely visited before. Everybody now knows that malaria is carried by mosquitoes. Malaria has been the subject of research for medical practitioners from time immemorial. Indeed, many ancient texts, especially medical literature, mention various aspects of malaria and even its possible link with mosquitoes and insects. Early man, confronting the manifestations of malaria, attributed the fevers to supernatural influences: evil spirits, angered deities, or the black magic of sorcerers. But in the 19th century, most experts believed that the disease was produced by unclean air (miasma, or poisoning of the air). Two Americans, Josiah Clark Nott and Lewis Daniel Beauperthy, echoed Crawfords ideas. Nott, in his essay Yellow Fever Contrasted with Bilious Fever, published in 1850, dismissed the miasma theory as worthless, arguing that microscopic insects somehow transmitted by mosquitoes caused both malaria and yellow fever. Others made a link between swamps, water and malaria, but did not make the further leap towards insects. The consequence of these theories was that little was done to combat the disease before the end of the century. Things became so bad that 11m Italians (from a total population of 25m) were permanently at risk. In malarial zones the life expectancy of land workers was a terrifying 22.5 years. Those who escaped death were weakened or suffered from splenomegaly (a painful enlargement of the spleen) and a lifeless stare. The economic impact of the disease was immense. Epidemics were blamed on southern Italians, given the widespread belief that malaria was hereditary. In the 1880s, such theories began to collapse as the dreaded mosquito was identified as the real culprit. Italian scientists, drawing on the pioneering work of the French doctor Alphonse Laveran, were able to predict the cycles of fever, but it was in Rome that further key discoveries were made. Giovanni Battista Grassi, a naturalist, found that a particular type of mosquito was the carrier of malaria. By experimenting on healthy volunteers (mosquitoes were released into rooms where they drank the blood of the human guinea pigs), Grassi was able to make the direct link between the insects (all females of a certain kind) and the disease. Soon, doctors and scientists made another startling discovery: the mosquitoes themselves were also infected and not mere carriers. Every year, during the mosquito season, malarial blood was moved around the population by the insects. Definitive proof of these new theories was obtained after an extraordinary series of experiments in Italy, where healthy people were introduced into malarial zones but kept free of mosquito bites and remained well. The new Italian state had the necessary information to tackle the disease.
A complicated approach was adopted, which made use of quinine, a drug obtained from tree bark which had long been used to combat fever but was now seen as a crucial part of the war on malaria. Italy introduced a quinine law and a quinine tax in 1904, and the drug was administered to large numbers of rural workers. Despite its often terrible side-effects (the headaches produced were known as the quinine-buzz), the drug was successful in limiting the spread of the disease, and in breaking cycles of infection. In addition, Italy set up rural health centres and invested heavily in education programmes. Malaria, as Snowden shows, was not just a medical problem, but a social and regional issue, and could only be defeated through multilayered strategies. Politics was itself transformed by the anti-malarial campaigns. It was originally decided to give quinine to all those in certain regions, even healthy people; peasants were often suspicious of medicine being forced upon them. Doctors were sometimes met with hostility and refusal, and many were dubbed poisoners. Despite these problems, the strategy was hugely successful. Deaths from malaria fell by some 80% in the first decade of the 20th century, and some areas escaped altogether from the scourge of the disease. Shamefully, the Italian malaria expert Alberto Missiroli had a role to play in the disaster: he did not distribute quinine, despite being well aware of the epidemic to come. Snowden claims that Missiroli was already preparing a new strategy with the support of the US Rockefeller Foundation, using a new pesticide, DDT. Missiroli allowed the epidemic to spread, in order to create the ideal conditions for a massive, and lucrative, human experiment. Fifty-five thousand cases of malaria were recorded in the province of Littoria alone in 1944. It is estimated that more than a third of those in the affected area contracted the disease. Thousands, nobody knows how many, died. With the war over, the US government and the Rockefeller Foundation were free to experiment. DDT was sprayed from the air and 3m Italians had their bodies covered with the chemical. The effects were dramatic, and nobody really cared about its toxic effects. By 1962, malaria was more or less gone from the whole peninsula. The last cases were noted in a poor region of Sicily. One of the final victims to die of the disease in Italy was the popular cyclist, Fausto Coppi. He had contracted malaria in Africa in 1960, and the failure of doctors in the north of Italy to spot the disease was a sign of the times. A few decades earlier, they would have immediately noticed the tell-tale signs; it was later claimed that a small dose of quinine would have saved his life. As there are still more than 1m deaths every year from malaria worldwide, Snowdens book also has contemporary relevance. As Snowden writes: In Italy malaria undermined agricultural productivity, decimated the army, destroyed communities and left families impoverished. The economic miracle of the 50s and 60s, which made Italy into a modern industrial nation, would not have been possible without the eradication of malaria. Moreover, this book convincingly argues that the disease was an integral part of the big picture of modern Italian history. This magnificent study, beautifully written and impeccably documented, deserves an audience beyond specialists in history, or in Italy. It also provides us with a message of hope for a world struggling with the great present-day medical emergency.
Eradicating malaria was a goal that combined both medical and political significance.
entailment
id_6030
The Context, Meaning and Scope of Tourism. Travel has existed since the beginning of time, when primitive man set out, often traversing great distances in search of game, which provided the food and clothing necessary for his survival. Throughout the course of history, people have travelled for purposes of trade, religious conviction, economic gain, war, migration and other equally compelling motivations. In the Roman era, wealthy aristocrats and high government officials also travelled for pleasure. Seaside resorts located at Pompeii and Herculaneum afforded citizens the opportunity to escape to their vacation villas in order to avoid the summer heat of Rome. Travel, except during the Dark Ages, has continued to grow and, throughout recorded history, has played a vital role in the development of civilisations and their economies. Tourism in the mass form as we know it today is a distinctly twentieth-century phenomenon. Historians suggest that the advent of mass tourism began in England during the industrial revolution with the rise of the middle class and the availability of relatively inexpensive transportation. The creation of the commercial airline industry following the Second World War and the subsequent development of the jet aircraft in the 1950s signalled the rapid growth and expansion of international travel. This growth led to the development of a major new industry: tourism. In turn, international tourism became the concern of a number of world governments since it not only provided new employment opportunities but also produced a means of earning foreign exchange. Tourism today has grown significantly in both economic and social importance. In most industrialised countries over the past few years the fastest growth has been seen in the area of services. One of the largest segments of the service industry, although largely unrecognised as an entity in some of these countries, is travel and tourism. According to the World Travel and Tourism Council (1992), Travel and tourism is the largest industry in the world on virtually any economic measure including value-added capital investment, employment and tax contributions. In 1992, the industrys gross output was estimated to be $3.5 trillion, over 12 per cent of all consumer spending. The travel and tourism industry is the worlds largest employer with almost 130 million jobs, or almost 7 per cent of all employees. This industry is the worlds leading industrial contributor, producing over 6 per cent of the worlds national product and accounting for capital investment in excess of $422 billion in direct, indirect and personal taxes each year. Thus, tourism has a profound impact both on the world economy and, because of the educative effect of travel and the effects on employment, on society itself. However, the major problems of the travel and tourism industry that have hidden, or obscured, its economic impact are the diversity and fragmentation of the industry itself. The travel industry includes: hotels, motels and other types of accommodation; restaurants and other food services; transportation services and facilities; amusements, attractions and other leisure facilities; gift shops and a large number of other enterprises. Since many of these businesses also serve local residents, the impact of spending by visitors can easily be overlooked or underestimated. In addition, Meis (1992) points out that the tourism industry involves concepts that have remained amorphous to both analysts and decision makers.
Moreover, in all nations this problem has made it difficult for the industry to develop any type of reliable or credible tourism information base in order to estimate the contribution it makes to regional, national and global economies. However, the nature of this very diversity makes travel and tourism ideal vehicles for economic development in a wide variety of countries, regions or communities. Once the exclusive province of the wealthy, travel and tourism have become an institutionalised way of life for most of the population. In fact, McIntosh and Goeldner (1990) suggest that tourism has become the largest commodity in international trade for many nations and, for a significant number of other countries, it ranks second or third. For example, tourism is the major source of income in Bermuda, Greece, Italy, Spain, Switzerland and most Caribbean countries. In addition, Hawkins and Ritchie, quoting from data published by the American Express Company, suggest that the travel and tourism industry is the number one ranked employer in the Bahamas, Brazil, Canada, France, (the former) West Germany, Hong Kong, Italy, Jamaica, Japan, Singapore, the United Kingdom and the United States. However, because of problems of definition, which directly affect statistical measurement, it is not possible with any degree of certainty to provide precise, valid or reliable data about the extent of world-wide tourism participation or its economic impact. In many cases, similar difficulties arise when attempts are made to measure domestic tourism.
Tourism contributes over six per cent of the Australian gross national product.
neutral
id_6031
The Context, Meaning and Scope of Tourism. Travel has existed since the beginning of time, when primitive man set out, often traversing great distances in search of game, which provided the food and clothing necessary for his survival. Throughout the course of history, people have travelled for purposes of trade, religious conviction, economic gain, war, migration and other equally compelling motivations. In the Roman era, wealthy aristocrats and high government officials also travelled for pleasure. Seaside resorts located at Pompeii and Herculaneum afforded citizens the opportunity to escape to their vacation villas in order to avoid the summer heat of Rome. Travel, except during the Dark Ages, has continued to grow and, throughout recorded history, has played a vital role in the development of civilisations and their economies. Tourism in the mass form as we know it today is a distinctly twentieth-century phenomenon. Historians suggest that the advent of mass tourism began in England during the industrial revolution with the rise of the middle class and the availability of relatively inexpensive transportation. The creation of the commercial airline industry following the Second World War and the subsequent development of the jet aircraft in the 1950s signalled the rapid growth and expansion of international travel. This growth led to the development of a major new industry: tourism. In turn, international tourism became the concern of a number of world governments since it not only provided new employment opportunities but also produced a means of earning foreign exchange. Tourism today has grown significantly in both economic and social importance. In most industrialised countries over the past few years the fastest growth has been seen in the area of services. One of the largest segments of the service industry, although largely unrecognised as an entity in some of these countries, is travel and tourism. According to the World Travel and Tourism Council (1992), Travel and tourism is the largest industry in the world on virtually any economic measure including value-added capital investment, employment and tax contributions. In 1992, the industrys gross output was estimated to be $3.5 trillion, over 12 per cent of all consumer spending. The travel and tourism industry is the worlds largest employer with almost 130 million jobs, or almost 7 per cent of all employees. This industry is the worlds leading industrial contributor, producing over 6 per cent of the worlds national product and accounting for capital investment in excess of $422 billion in direct, indirect and personal taxes each year. Thus, tourism has a profound impact both on the world economy and, because of the educative effect of travel and the effects on employment, on society itself. However, the major problems of the travel and tourism industry that have hidden, or obscured, its economic impact are the diversity and fragmentation of the industry itself. The travel industry includes: hotels, motels and other types of accommodation; restaurants and other food services; transportation services and facilities; amusements, attractions and other leisure facilities; gift shops and a large number of other enterprises. Since many of these businesses also serve local residents, the impact of spending by visitors can easily be overlooked or underestimated. In addition, Meis (1992) points out that the tourism industry involves concepts that have remained amorphous to both analysts and decision makers.
Moreover, in all nations this problem has made it difficult for the industry to develop any type of reliable or credible tourism information base in order to estimate the contribution it makes to regional, national and global economies. However, the nature of this very diversity makes travel and tourism ideal vehicles for economic development in a wide variety of countries, regions or communities. Once the exclusive province of the wealthy, travel and tourism have become an institutionalised way of life for most of the population. In fact, McIntosh and Goeldner (1990) suggest that tourism has become the largest commodity in international trade for many nations and, for a significant number of other countries, it ranks second or third. For example, tourism is the major source of income in Bermuda, Greece, Italy, Spain, Switzerland and most Caribbean countries. In addition, Hawkins and Ritchie, quoting from data published by the American Express Company, suggest that the travel and tourism industry is the number one ranked employer in the Bahamas, Brazil, Canada, France, (the former) West Germany, Hong Kong, Italy, Jamaica, Japan, Singapore, the United Kingdom and the United States. However, because of problems of definition, which directly affect statistical measurement, it is not possible with any degree of certainty to provide precise, valid or reliable data about the extent of world-wide tourism participation or its economic impact. In many cases, similar difficulties arise when attempts are made to measure domestic tourism.
Tourism has a social impact because it promotes recreation.
neutral
id_6032
The Context, Meaning and Scope of Tourism. Travel has existed since the beginning of time, when primitive man set out, often traversing great distances in search of game, which provided the food and clothing necessary for his survival. Throughout the course of history, people have travelled for purposes of trade, religious conviction, economic gain, war, migration and other equally compelling motivations. In the Roman era, wealthy aristocrats and high government officials also travelled for pleasure. Seaside resorts located at Pompeii and Herculaneum afforded citizens the opportunity to escape to their vacation villas in order to avoid the summer heat of Rome. Travel, except during the Dark Ages, has continued to grow and, throughout recorded history, has played a vital role in the development of civilisations and their economies. Tourism in the mass form as we know it today is a distinctly twentieth-century phenomenon. Historians suggest that the advent of mass tourism began in England during the industrial revolution with the rise of the middle class and the availability of relatively inexpensive transportation. The creation of the commercial airline industry following the Second World War and the subsequent development of the jet aircraft in the 1950s signalled the rapid growth and expansion of international travel. This growth led to the development of a major new industry: tourism. In turn, international tourism became the concern of a number of world governments since it not only provided new employment opportunities but also produced a means of earning foreign exchange. Tourism today has grown significantly in both economic and social importance. In most industrialised countries over the past few years the fastest growth has been seen in the area of services. One of the largest segments of the service industry, although largely unrecognised as an entity in some of these countries, is travel and tourism. According to the World Travel and Tourism Council (1992), Travel and tourism is the largest industry in the world on virtually any economic measure including value-added capital investment, employment and tax contributions. In 1992, the industrys gross output was estimated to be $3.5 trillion, over 12 per cent of all consumer spending. The travel and tourism industry is the worlds largest employer with almost 130 million jobs, or almost 7 per cent of all employees. This industry is the worlds leading industrial contributor, producing over 6 per cent of the worlds national product and accounting for capital investment in excess of $422 billion in direct, indirect and personal taxes each year. Thus, tourism has a profound impact both on the world economy and, because of the educative effect of travel and the effects on employment, on society itself. However, the major problems of the travel and tourism industry that have hidden, or obscured, its economic impact are the diversity and fragmentation of the industry itself. The travel industry includes: hotels, motels and other types of accommodation; restaurants and other food services; transportation services and facilities; amusements, attractions and other leisure facilities; gift shops and a large number of other enterprises. Since many of these businesses also serve local residents, the impact of spending by visitors can easily be overlooked or underestimated. In addition, Meis (1992) points out that the tourism industry involves concepts that have remained amorphous to both analysts and decision makers.
Moreover, in all nations this problem has made it difficult for the industry to develop any type of reliable or credible tourism information base in order to estimate the contribution it makes to regional, national and global economies. However, the nature of this very diversity makes travel and tourism ideal vehicles for economic development in a wide variety of countries, regions or communities. Once the exclusive province of the wealthy, travel and tourism have become an institutionalised way of life for most of the population. In fact, McIntosh and Goeldner (1990) suggest that tourism has become the largest commodity in international trade for many nations and, for a significant number of other countries, it ranks second or third. For example, tourism is the major source of income in Bermuda, Greece, Italy, Spain, Switzerland and most Caribbean countries. In addition, Hawkins and Ritchie, quoting from data published by the American Express Company, suggest that the travel and tourism industry is the number one ranked employer in the Bahamas, Brazil, Canada, France, (the former) West Germany, Hong Kong, Italy, Jamaica, Japan, Singapore, the United Kingdom and the United States. However, because of problems of definition, which directly affect statistical measurement, it is not possible with any degree of certainty to provide precise, valid or reliable data about the extent of world-wide tourism participation or its economic impact. In many cases, similar difficulties arise when attempts are made to measure domestic tourism.
It is easy to show statistically how tourism affects individual economies.
neutral
id_6033
The Context, Meaning and Scope of Tourism. Travel has existed since the beginning of time, when primitive man set out, often traversing great distances in search of game, which provided the food and clothing necessary for his survival. Throughout the course of history, people have travelled for purposes of trade, religious conviction, economic gain, war, migration and other equally compelling motivations. In the Roman era, wealthy aristocrats and high government officials also travelled for pleasure. Seaside resorts located at Pompeii and Herculaneum afforded citizens the opportunity to escape to their vacation villas in order to avoid the summer heat of Rome. Travel, except during the Dark Ages, has continued to grow and, throughout recorded history, has played a vital role in the development of civilisations and their economies. Tourism in the mass form as we know it today is a distinctly twentieth-century phenomenon. Historians suggest that the advent of mass tourism began in England during the industrial revolution with the rise of the middle class and the availability of relatively inexpensive transportation. The creation of the commercial airline industry following the Second World War and the subsequent development of the jet aircraft in the 1950s signalled the rapid growth and expansion of international travel. This growth led to the development of a major new industry: tourism. In turn, international tourism became the concern of a number of world governments since it not only provided new employment opportunities but also produced a means of earning foreign exchange. Tourism today has grown significantly in both economic and social importance. In most industrialised countries over the past few years the fastest growth has been seen in the area of services. One of the largest segments of the service industry, although largely unrecognised as an entity in some of these countries, is travel and tourism. According to the World Travel and Tourism Council (1992), Travel and tourism is the largest industry in the world on virtually any economic measure including value-added capital investment, employment and tax contributions. In 1992, the industrys gross output was estimated to be $3.5 trillion, over 12 per cent of all consumer spending. The travel and tourism industry is the worlds largest employer with almost 130 million jobs, or almost 7 per cent of all employees. This industry is the worlds leading industrial contributor, producing over 6 per cent of the worlds national product and accounting for capital investment in excess of $422 billion in direct, indirect and personal taxes each year. Thus, tourism has a profound impact both on the world economy and, because of the educative effect of travel and the effects on employment, on society itself. However, the major problems of the travel and tourism industry that have hidden, or obscured, its economic impact are the diversity and fragmentation of the industry itself. The travel industry includes: hotels, motels and other types of accommodation; restaurants and other food services; transportation services and facilities; amusements, attractions and other leisure facilities; gift shops and a large number of other enterprises. Since many of these businesses also serve local residents, the impact of spending by visitors can easily be overlooked or underestimated. In addition, Meis (1992) points out that the tourism industry involves concepts that have remained amorphous to both analysts and decision makers.
Moreover, in all nations this problem has made it difficult for the industry to develop any type of reliable or credible tourism information base in order to estimate the contribution it makes to regional, national and global economies. However, the nature of this very diversity makes travel and tourism ideal vehicles for economic development in a wide variety of countries, regions or communities. Once the exclusive province of the wealthy, travel and tourism have become an institutionalised way of life for most of the population. In fact, McIntosh and Goeldner (1990) suggest that tourism has become the largest commodity in international trade for many nations and, for a significant number of other countries, it ranks second or third. For example, tourism is the major source of income in Bermuda, Greece, Italy, Spain, Switzerland and most Caribbean countries. In addition, Hawkins and Ritchie, quoting from data published by the American Express Company, suggest that the travel and tourism industry is the number one ranked employer in the Bahamas, Brazil, Canada, France, (the former) West Germany, Hong Kong, Italy, Jamaica, Japan, Singapore, the United Kingdom and the United States. However, because of problems of definition, which directly affect statistical measurement, it is not possible with any degree of certainty to provide precise, valid or reliable data about the extent of world-wide tourism participation or its economic impact. In many cases, similar difficulties arise when attempts are made to measure domestic tourism.
Two main features of the travel and tourism industry make its economic significance difficult to ascertain.
entailment
id_6034
The Context, Meaning and Scope of Tourism. Travel has existed since the beginning of time, when primitive man set out, often traversing great distances in search of game, which provided the food and clothing necessary for his survival. Throughout the course of history, people have travelled for purposes of trade, religious conviction, economic gain, war, migration and other equally compelling motivations. In the Roman era, wealthy aristocrats and high government officials also travelled for pleasure. Seaside resorts located at Pompeii and Herculaneum afforded citizens the opportunity to escape to their vacation villas in order to avoid the summer heat of Rome. Travel, except during the Dark Ages, has continued to grow and, throughout recorded history, has played a vital role in the development of civilisations and their economies. Tourism in the mass form as we know it today is a distinctly twentieth-century phenomenon. Historians suggest that the advent of mass tourism began in England during the industrial revolution with the rise of the middle class and the availability of relatively inexpensive transportation. The creation of the commercial airline industry following the Second World War and the subsequent development of the jet aircraft in the 1950s signalled the rapid growth and expansion of international travel. This growth led to the development of a major new industry: tourism. In turn, international tourism became the concern of a number of world governments since it not only provided new employment opportunities but also produced a means of earning foreign exchange. Tourism today has grown significantly in both economic and social importance. In most industrialised countries over the past few years the fastest growth has been seen in the area of services. One of the largest segments of the service industry, although largely unrecognised as an entity in some of these countries, is travel and tourism. According to the World Travel and Tourism Council (1992), Travel and tourism is the largest industry in the world on virtually any economic measure including value-added capital investment, employment and tax contributions. In 1992, the industrys gross output was estimated to be $3.5 trillion, over 12 per cent of all consumer spending. The travel and tourism industry is the worlds largest employer with almost 130 million jobs, or almost 7 per cent of all employees. This industry is the worlds leading industrial contributor, producing over 6 per cent of the worlds national product and accounting for capital investment in excess of $422 billion in direct, indirect and personal taxes each year. Thus, tourism has a profound impact both on the world economy and, because of the educative effect of travel and the effects on employment, on society itself. However, the major problems of the travel and tourism industry that have hidden, or obscured, its economic impact are the diversity and fragmentation of the industry itself. The travel industry includes: hotels, motels and other types of accommodation; restaurants and other food services; transportation services and facilities; amusements, attractions and other leisure facilities; gift shops and a large number of other enterprises. Since many of these businesses also serve local residents, the impact of spending by visitors can easily be overlooked or underestimated. In addition, Meis (1992) points out that the tourism industry involves concepts that have remained amorphous to both analysts and decision makers.
Moreover, in all nations this problem has made it difficult for the industry to develop any type of reliable or credible tourism information base in order to estimate the contribution it makes to regional, national and global economies. However, the nature of this very diversity makes travel and tourism ideal vehicles for economic development in a wide variety of countries, regions or communities. Once the exclusive province of the wealthy, travel and tourism have become an institutionalised way of life for most of the population. In fact, McIntosh and Goeldner (1990) suggest that tourism has become the largest commodity in international trade for many nations and, for a significant number of other countries, it ranks second or third. For example, tourism is the major source of income in Bermuda, Greece, Italy, Spain, Switzerland and most Caribbean countries. In addition, Hawkins and Ritchie, quoting from data published by the American Express Company, suggest that the travel and tourism industry is the number one ranked employer in the Bahamas, Brazil, Canada, France, (the former) West Germany, Hong Kong, Italy, Jamaica, Japan, Singapore, the United Kingdom and the United States. However, because of problems of definition, which directly affect statistical measurement, it is not possible with any degree of certainty to provide precise, valid or reliable data about the extent of world-wide tourism participation or its economic impact. In many cases, similar difficulties arise when attempts are made to measure domestic tourism.
Visitor spending is always greater than the spending of residents in tourist areas.
contradiction
id_6035
The Context, Meaning and Scope of Tourism. Travel has existed since the beginning of time, when primitive man set out, often traversing great distances in search of game, which provided the food and clothing necessary for his survival. Throughout the course of history, people have travelled for purposes of trade, religious conviction, economic gain, war, migration and other equally compelling motivations. In the Roman era, wealthy aristocrats and high government officials also travelled for pleasure. Seaside resorts located at Pompeii and Herculaneum afforded citizens the opportunity to escape to their vacation villas in order to avoid the summer heat of Rome. Travel, except during the Dark Ages, has continued to grow and, throughout recorded history, has played a vital role in the development of civilisations and their economies. Tourism in the mass form as we know it today is a distinctly twentieth-century phenomenon. Historians suggest that the advent of mass tourism began in England during the industrial revolution with the rise of the middle class and the availability of relatively inexpensive transportation. The creation of the commercial airline industry following the Second World War and the subsequent development of the jet aircraft in the 1950s signalled the rapid growth and expansion of international travel. This growth led to the development of a major new industry: tourism. In turn, international tourism became the concern of a number of world governments since it not only provided new employment opportunities but also produced a means of earning foreign exchange. Tourism today has grown significantly in both economic and social importance. In most industrialised countries over the past few years the fastest growth has been seen in the area of services. One of the largest segments of the service industry, although largely unrecognised as an entity in some of these countries, is travel and tourism. According to the World Travel and Tourism Council (1992), Travel and tourism is the largest industry in the world on virtually any economic measure including value-added capital investment, employment and tax contributions. In 1992, the industrys gross output was estimated to be $3.5 trillion, over 12 per cent of all consumer spending. The travel and tourism industry is the worlds largest employer with almost 130 million jobs, or almost 7 per cent of all employees. This industry is the worlds leading industrial contributor, producing over 6 per cent of the worlds national product and accounting for capital investment in excess of $422 billion in direct, indirect and personal taxes each year. Thus, tourism has a profound impact both on the world economy and, because of the educative effect of travel and the effects on employment, on society itself. However, the major problems of the travel and tourism industry that have hidden, or obscured, its economic impact are the diversity and fragmentation of the industry itself. The travel industry includes: hotels, motels and other types of accommodation; restaurants and other food services; transportation services and facilities; amusements, attractions and other leisure facilities; gift shops and a large number of other enterprises. Since many of these businesses also serve local residents, the impact of spending by visitors can easily be overlooked or underestimated. In addition, Meis (1992) points out that the tourism industry involves concepts that have remained amorphous to both analysts and decision makers.
Moreover, in all nations this problem has made it difficult for the industry to develop any type of reliable or credible tourism information base in order to estimate the contribution it makes to regional, national and global economies. However, the nature of this very diversity makes travel and tourism ideal vehicles for economic development in a wide variety of countries, regions or communities. Once the exclusive province of the wealthy, travel and tourism have become an institutionalised way of life for most of the population. In fact, McIntosh and Goeldner (1990) suggest that tourism has become the largest commodity in international trade for many nations and, for a significant number of other countries, it ranks second or third. For example, tourism is the major source of income in Bermuda, Greece, Italy, Spain, Switzerland and most Caribbean countries. In addition, Hawkins and Ritchie, quoting from data published by the American Express Company, suggest that the travel and tourism industry is the number one ranked employer in the Bahamas, Brazil, Canada, France, (the former) West Germany, Hong Kong, Italy, Jamaica, Japan, Singapore, the United Kingdom and the United States. However, because of problems of definition, which directly affect statistical measurement, it is not possible with any degree of certainty to provide precise, valid or reliable data about the extent of world-wide tourism participation or its economic impact. In many cases, similar difficulties arise when attempts are made to measure domestic tourism.
The largest employment figures in the world are found in the travel and tourism industry.
entailment
id_6036
The Context, Meaning and Scope of Tourism. Travel has existed since the beginning of time, when primitive man set out, often traversing great distances in search of game, which provided the food and clothing necessary for his survival. Throughout the course of history, people have travelled for purposes of trade, religious conviction, economic gain, war, migration and other equally compelling motivations. In the Roman era, wealthy aristocrats and high government officials also travelled for pleasure. Seaside resorts located at Pompeii and Herculaneum afforded citizens the opportunity to escape to their vacation villas in order to avoid the summer heat of Rome. Travel, except during the Dark Ages, has continued to grow and, throughout recorded history, has played a vital role in the development of civilisations and their economies. Tourism in the mass form as we know it today is a distinctly twentieth-century phenomenon. Historians suggest that the advent of mass tourism began in England during the industrial revolution with the rise of the middle class and the availability of relatively inexpensive transportation. The creation of the commercial airline industry following the Second World War and the subsequent development of the jet aircraft in the 1950s signalled the rapid growth and expansion of international travel. This growth led to the development of a major new industry: tourism. In turn, international tourism became the concern of a number of world governments since it not only provided new employment opportunities but also produced a means of earning foreign exchange. Tourism today has grown significantly in both economic and social importance. In most industrialised countries over the past few years the fastest growth has been seen in the area of services. One of the largest segments of the service industry, although largely unrecognised as an entity in some of these countries, is travel and tourism. According to the World Travel and Tourism Council (1992), Travel and tourism is the largest industry in the world on virtually any economic measure including value-added capital investment, employment and tax contributions. In 1992, the industrys gross output was estimated to be $3.5 trillion, over 12 per cent of all consumer spending. The travel and tourism industry is the worlds largest employer with almost 130 million jobs, or almost 7 per cent of all employees. This industry is the worlds leading industrial contributor, producing over 6 per cent of the worlds national product and accounting for capital investment in excess of $422 billion in direct, indirect and personal taxes each year. Thus, tourism has a profound impact both on the world economy and, because of the educative effect of travel and the effects on employment, on society itself. However, the major problems of the travel and tourism industry that have hidden, or obscured, its economic impact are the diversity and fragmentation of the industry itself. The travel industry includes: hotels, motels and other types of accommodation; restaurants and other food services; transportation services and facilities; amusements, attractions and other leisure facilities; gift shops and a large number of other enterprises. Since many of these businesses also serve local residents, the impact of spending by visitors can easily be overlooked or underestimated. In addition, Meis (1992) points out that the tourism industry involves concepts that have remained amorphous to both analysts and decision makers. 
Moreover, in all nations this problem has made it difficult for the industry to develop any type of reliable or credible tourism information base in order to estimate the contribution it makes to regional, national and global economies. However, the nature of this very diversity makes travel and tourism ideal vehicles for economic development in a wide variety of countries, regions or communities. Once the exclusive province of the wealthy, travel and tourism have become an institutionalised way of life for most of the population. In fact, McIntosh and Goeldner (1990) suggest that tourism has become the largest commodity in international trade for many nations and, for a significant number of other countries, it ranks second or third. For example, tourism is the major source of income in Bermuda, Greece, Italy, Spain, Switzerland and most Caribbean countries. In addition, Hawkins and Ritchie, quoting from data published by the American Express Company, suggest that the travel and tourism industry is the number one ranked employer in the Bahamas, Brazil, Canada, France, (the former) West Germany, Hong Kong, Italy, Jamaica, Japan, Singapore, the United Kingdom and the United States. However, because of problems of definition, which directly affect statistical measurement, it is not possible with any degree of certainty to provide precise, valid or reliable data about the extent of world-wide tourism participation or its economic impact. In many cases, similar difficulties arise when attempts are made to measure domestic tourism.
Two main features of the travel and tourism industry make its economic significance difficult to ascertain.
entailment
id_6037
The Context, Meaning and Scope of Tourism. Travel has existed since the beginning of time, when primitive man set out, often traversing great distances in search of game, which provided the food and clothing necessary for his survival. Throughout the course of history, people have travelled for purposes of trade, religious conviction, economic gain, war, migration and other equally compelling motivations. In the Roman era, wealthy aristocrats and high government officials also travelled for pleasure. Seaside resorts located at Pompeii and Herculaneum afforded citizens the opportunity to escape to their vacation villas in order to avoid the summer heat of Rome. Travel, except during the Dark Ages, has continued to grow and, throughout recorded history, has played a vital role in the development of civilisations and their economies. Tourism in the mass form as we know it today is a distinctly twentieth-century phenomenon. Historians suggest that the advent of mass tourism began in England during the industrial revolution with the rise of the middle class and the availability of relatively inexpensive transportation. The creation of the commercial airline industry following the Second World War and the subsequent development of the jet aircraft in the 1950s signalled the rapid growth and expansion of international travel. This growth led to the development of a major new industry: tourism. In turn, international tourism became the concern of a number of world governments since it not only provided new employment opportunities but also produced a means of earning foreign exchange. Tourism today has grown significantly in both economic and social importance. In most industrialised countries over the past few years the fastest growth has been seen in the area of services. One of the largest segments of the service industry, although largely unrecognised as an entity in some of these countries, is travel and tourism. According to the World Travel and Tourism Council (1992), Travel and tourism is the largest industry in the world on virtually any economic measure including value-added capital investment, employment and tax contributions. In 1992, the industrys gross output was estimated to be $3.5 trillion, over 12 per cent of all consumer spending. The travel and tourism industry is the worlds largest employer with almost 130 million jobs, or almost 7 per cent of all employees. This industry is the worlds leading industrial contributor, producing over 6 per cent of the worlds national product and accounting for capital investment in excess of $422 billion in direct, indirect and personal taxes each year. Thus, tourism has a profound impact both on the world economy and, because of the educative effect of travel and the effects on employment, on society itself. However, the major problems of the travel and tourism industry that have hidden, or obscured, its economic impact are the diversity and fragmentation of the industry itself. The travel industry includes: hotels, motels and other types of accommodation; restaurants and other food services; transportation services and facilities; amusements, attractions and other leisure facilities; gift shops and a large number of other enterprises. Since many of these businesses also serve local residents, the impact of spending by visitors can easily be overlooked or underestimated. In addition, Meis (1992) points out that the tourism industry involves concepts that have remained amorphous to both analysts and decision makers. 
Moreover, in all nations this problem has made it difficult for the industry to develop any type of reliable or credible tourism information base in order to estimate the contribution it makes to regional, national and global economies. However, the nature of this very diversity makes travel and tourism ideal vehicles for economic development in a wide variety of countries, regions or communities. Once the exclusive province of the wealthy, travel and tourism have become an institutionalised way of life for most of the population. In fact, McIntosh and Goeldner (1990) suggest that tourism has become the largest commodity in international trade for many nations and, for a significant number of other countries, it ranks second or third. For example, tourism is the major source of income in Bermuda, Greece, Italy, Spain, Switzerland and most Caribbean countries. In addition, Hawkins and Ritchie, quoting from data published by the American Express Company, suggest that the travel and tourism industry is the number one ranked employer in the Bahamas, Brazil, Canada, France, (the former) West Germany, Hong Kong, Italy, Jamaica, Japan, Singapore, the United Kingdom and the United States. However, because of problems of definition, which directly affect statistical measurement, it is not possible with any degree of certainty to provide precise, valid or reliable data about the extent of world-wide tourism participation or its economic impact. In many cases, similar difficulties arise when attempts are made to measure domestic tourism.
The largest employment figures in the world are found in the travel and tourism industry.
entailment
id_6038
The Context, Meaning and Scope of Tourism. Travel has existed since the beginning of time, when primitive man set out, often traversing great distances in search of game, which provided the food and clothing necessary for his survival. Throughout the course of history, people have travelled for purposes of trade, religious conviction, economic gain, war, migration and other equally compelling motivations. In the Roman era, wealthy aristocrats and high government officials also travelled for pleasure. Seaside resorts located at Pompeii and Herculaneum afforded citizens the opportunity to escape to their vacation villas in order to avoid the summer heat of Rome. Travel, except during the Dark Ages, has continued to grow and, throughout recorded history, has played a vital role in the development of civilisations and their economies. Tourism in the mass form as we know it today is a distinctly twentieth-century phenomenon. Historians suggest that the advent of mass tourism began in England during the industrial revolution with the rise of the middle class and the availability of relatively inexpensive transportation. The creation of the commercial airline industry following the Second World War and the subsequent development of the jet aircraft in the 1950s signalled the rapid growth and expansion of international travel. This growth led to the development of a major new industry: tourism. In turn, international tourism became the concern of a number of world governments since it not only provided new employment opportunities but also produced a means of earning foreign exchange. Tourism today has grown significantly in both economic and social importance. In most industrialised countries over the past few years the fastest growth has been seen in the area of services. One of the largest segments of the service industry, although largely unrecognised as an entity in some of these countries, is travel and tourism. According to the World Travel and Tourism Council (1992), Travel and tourism is the largest industry in the world on virtually any economic measure including value-added capital investment, employment and tax contributions. In 1992, the industrys gross output was estimated to be $3.5 trillion, over 12 per cent of all consumer spending. The travel and tourism industry is the worlds largest employer with almost 130 million jobs, or almost 7 per cent of all employees. This industry is the worlds leading industrial contributor, producing over 6 per cent of the worlds national product and accounting for capital investment in excess of $422 billion in direct, indirect and personal taxes each year. Thus, tourism has a profound impact both on the world economy and, because of the educative effect of travel and the effects on employment, on society itself. However, the major problems of the travel and tourism industry that have hidden, or obscured, its economic impact are the diversity and fragmentation of the industry itself. The travel industry includes: hotels, motels and other types of accommodation; restaurants and other food services; transportation services and facilities; amusements, attractions and other leisure facilities; gift shops and a large number of other enterprises. Since many of these businesses also serve local residents, the impact of spending by visitors can easily be overlooked or underestimated. In addition, Meis (1992) points out that the tourism industry involves concepts that have remained amorphous to both analysts and decision makers. 
Moreover, in all nations this problem has made it difficult for the industry to develop any type of reliable or credible tourism information base in order to estimate the contribution it makes to regional, national and global economies. However, the nature of this very diversity makes travel and tourism ideal vehicles for economic development in a wide variety of countries, regions or communities. Once the exclusive province of the wealthy, travel and tourism have become an institutionalised way of life for most of the population. In fact, McIntosh and Goeldner (1990) suggest that tourism has become the largest commodity in international trade for many nations and, for a significant number of other countries, it ranks second or third. For example, tourism is the major source of income in Bermuda, Greece, Italy, Spain, Switzerland and most Caribbean countries. In addition, Hawkins and Ritchie, quoting from data published by the American Express Company, suggest that the travel and tourism industry is the number one ranked employer in the Bahamas, Brazil, Canada, France, (the former) West Germany, Hong Kong, Italy, Jamaica, Japan, Singapore, the United Kingdom and the United States. However, because of problems of definition, which directly affect statistical measurement, it is not possible with any degree of certainty to provide precise, valid or reliable data about the extent of world-wide tourism participation or its economic impact. In many cases, similar difficulties arise when attempts are made to measure domestic tourism.
Visitor spending is always greater than the spending of residents in tourist areas.
neutral
id_6039
The Context, Meaning and Scope of Tourism. Travel has existed since the beginning of time, when primitive man set out, often traversing great distances in search of game, which provided the food and clothing necessary for his survival. Throughout the course of history, people have travelled for purposes of trade, religious conviction, economic gain, war, migration and other equally compelling motivations. In the Roman era, wealthy aristocrats and high government officials also travelled for pleasure. Seaside resorts located at Pompeii and Herculaneum afforded citizens the opportunity to escape to their vacation villas in order to avoid the summer heat of Rome. Travel, except during the Dark Ages, has continued to grow and, throughout recorded history, has played a vital role in the development of civilisations and their economies. Tourism in the mass form as we know it today is a distinctly twentieth-century phenomenon. Historians suggest that the advent of mass tourism began in England during the industrial revolution with the rise of the middle class and the availability of relatively inexpensive transportation. The creation of the commercial airline industry following the Second World War and the subsequent development of the jet aircraft in the 1950s signalled the rapid growth and expansion of international travel. This growth led to the development of a major new industry: tourism. In turn, international tourism became the concern of a number of world governments since it not only provided new employment opportunities but also produced a means of earning foreign exchange. Tourism today has grown significantly in both economic and social importance. In most industrialised countries over the past few years the fastest growth has been seen in the area of services. One of the largest segments of the service industry, although largely unrecognised as an entity in some of these countries, is travel and tourism. According to the World Travel and Tourism Council (1992), Travel and tourism is the largest industry in the world on virtually any economic measure including value-added capital investment, employment and tax contributions. In 1992, the industrys gross output was estimated to be $3.5 trillion, over 12 per cent of all consumer spending. The travel and tourism industry is the worlds largest employer with almost 130 million jobs, or almost 7 per cent of all employees. This industry is the worlds leading industrial contributor, producing over 6 per cent of the worlds national product and accounting for capital investment in excess of $422 billion in direct, indirect and personal taxes each year. Thus, tourism has a profound impact both on the world economy and, because of the educative effect of travel and the effects on employment, on society itself. However, the major problems of the travel and tourism industry that have hidden, or obscured, its economic impact are the diversity and fragmentation of the industry itself. The travel industry includes: hotels, motels and other types of accommodation; restaurants and other food services; transportation services and facilities; amusements, attractions and other leisure facilities; gift shops and a large number of other enterprises. Since many of these businesses also serve local residents, the impact of spending by visitors can easily be overlooked or underestimated. In addition, Meis (1992) points out that the tourism industry involves concepts that have remained amorphous to both analysts and decision makers. 
Moreover, in all nations this problem has made it difficult for the industry to develop any type of reliable or credible tourism information base in order to estimate the contribution it makes to regional, national and global economies. However, the nature of this very diversity makes travel and tourism ideal vehicles for economic development in a wide variety of countries, regions or communities. Once the exclusive province of the wealthy, travel and tourism have become an institutionalised way of life for most of the population. In fact, McIntosh and Goeldner (1990) suggest that tourism has become the largest commodity in international trade for many nations and, for a significant number of other countries, it ranks second or third. For example, tourism is the major source of income in Bermuda, Greece, Italy, Spain, Switzerland and most Caribbean countries. In addition, Hawkins and Ritchie, quoting from data published by the American Express Company, suggest that the travel and tourism industry is the number one ranked employer in the Bahamas, Brazil, Canada, France, (the former) West Germany, Hong Kong, Italy, Jamaica, Japan, Singapore, the United Kingdom and the United States. However, because of problems of definition, which directly affect statistical measurement, it is not possible with any degree of certainty to provide precise, valid or reliable data about the extent of world-wide tourism participation or its economic impact. In many cases, similar difficulties arise when attempts are made to measure domestic tourism.
Tourism has a social impact because it promotes recreation.
neutral
id_6040
The Context, Meaning and Scope of Tourism. Travel has existed since the beginning of time, when primitive man set out, often traversing great distances in search of game, which provided the food and clothing necessary for his survival. Throughout the course of history, people have travelled for purposes of trade, religious conviction, economic gain, war, migration and other equally compelling motivations. In the Roman era, wealthy aristocrats and high government officials also travelled for pleasure. Seaside resorts located at Pompeii and Herculaneum afforded citizens the opportunity to escape to their vacation villas in order to avoid the summer heat of Rome. Travel, except during the Dark Ages, has continued to grow and, throughout recorded history, has played a vital role in the development of civilisations and their economies. Tourism in the mass form as we know it today is a distinctly twentieth-century phenomenon. Historians suggest that the advent of mass tourism began in England during the industrial revolution with the rise of the middle class and the availability of relatively inexpensive transportation. The creation of the commercial airline industry following the Second World War and the subsequent development of the jet aircraft in the 1950s signalled the rapid growth and expansion of international travel. This growth led to the development of a major new industry: tourism. In turn, international tourism became the concern of a number of world governments since it not only provided new employment opportunities but also produced a means of earning foreign exchange. Tourism today has grown significantly in both economic and social importance. In most industrialised countries over the past few years the fastest growth has been seen in the area of services. One of the largest segments of the service industry, although largely unrecognised as an entity in some of these countries, is travel and tourism. According to the World Travel and Tourism Council (1992), Travel and tourism is the largest industry in the world on virtually any economic measure including value-added capital investment, employment and tax contributions. In 1992, the industrys gross output was estimated to be $3.5 trillion, over 12 per cent of all consumer spending. The travel and tourism industry is the worlds largest employer with almost 130 million jobs, or almost 7 per cent of all employees. This industry is the worlds leading industrial contributor, producing over 6 per cent of the worlds national product and accounting for capital investment in excess of $422 billion in direct, indirect and personal taxes each year. Thus, tourism has a profound impact both on the world economy and, because of the educative effect of travel and the effects on employment, on society itself. However, the major problems of the travel and tourism industry that have hidden, or obscured, its economic impact are the diversity and fragmentation of the industry itself. The travel industry includes: hotels, motels and other types of accommodation; restaurants and other food services; transportation services and facilities; amusements, attractions and other leisure facilities; gift shops and a large number of other enterprises. Since many of these businesses also serve local residents, the impact of spending by visitors can easily be overlooked or underestimated. In addition, Meis (1992) points out that the tourism industry involves concepts that have remained amorphous to both analysts and decision makers. 
Moreover, in all nations this problem has made it difficult for the industry to develop any type of reliable or credible tourism information base in order to estimate the contribution it makes to regional, national and global economies. However, the nature of this very diversity makes travel and tourism ideal vehicles for economic development in a wide variety of countries, regions or communities. Once the exclusive province of the wealthy, travel and tourism have become an institutionalised way of life for most of the population. In fact, McIntosh and Goeldner (1990) suggest that tourism has become the largest commodity in international trade for many nations and, for a significant number of other countries, it ranks second or third. For example, tourism is the major source of income in Bermuda, Greece, Italy, Spain, Switzerland and most Caribbean countries. In addition, Hawkins and Ritchie, quoting from data published by the American Express Company, suggest that the travel and tourism industry is the number one ranked employer in the Bahamas, Brazil, Canada, France, (the former) West Germany, Hong Kong, Italy, Jamaica, Japan, Singapore, the United Kingdom and the United States. However, because of problems of definition, which directly affect statistical measurement, it is not possible with any degree of certainty to provide precise, valid or reliable data about the extent of world-wide tourism participation or its economic impact. In many cases, similar difficulties arise when attempts are made to measure domestic tourism.
Tourism contributes over six per cent of the Australian gross national product.
neutral
id_6041
The Context, Meaning and Scope of Tourism. Travel has existed since the beginning of time, when primitive man set out, often traversing great distances in search of game, which provided the food and clothing necessary for his survival. Throughout the course of history, people have travelled for purposes of trade, religious conviction, economic gain, war, migration and other equally compelling motivations. In the Roman era, wealthy aristocrats and high government officials also travelled for pleasure. Seaside resorts located at Pompeii and Herculaneum afforded citizens the opportunity to escape to their vacation villas in order to avoid the summer heat of Rome. Travel, except during the Dark Ages, has continued to grow and, throughout recorded history, has played a vital role in the development of civilisations and their economies. Tourism in the mass form as we know it today is a distinctly twentieth-century phenomenon. Historians suggest that the advent of mass tourism began in England during the industrial revolution with the rise of the middle class and the availability of relatively inexpensive transportation. The creation of the commercial airline industry following the Second World War and the subsequent development of the jet aircraft in the 1950s signalled the rapid growth and expansion of international travel. This growth led to the development of a major new industry: tourism. In turn, international tourism became the concern of a number of world governments since it not only provided new employment opportunities but also produced a means of earning foreign exchange. Tourism today has grown significantly in both economic and social importance. In most industrialised countries over the past few years the fastest growth has been seen in the area of services. One of the largest segments of the service industry, although largely unrecognised as an entity in some of these countries, is travel and tourism. According to the World Travel and Tourism Council (1992), Travel and tourism is the largest industry in the world on virtually any economic measure including value-added capital investment, employment and tax contributions. In 1992, the industrys gross output was estimated to be $3.5 trillion, over 12 per cent of all consumer spending. The travel and tourism industry is the worlds largest employer with almost 130 million jobs, or almost 7 per cent of all employees. This industry is the worlds leading industrial contributor, producing over 6 per cent of the worlds national product and accounting for capital investment in excess of $422 billion in direct, indirect and personal taxes each year. Thus, tourism has a profound impact both on the world economy and, because of the educative effect of travel and the effects on employment, on society itself. However, the major problems of the travel and tourism industry that have hidden, or obscured, its economic impact are the diversity and fragmentation of the industry itself. The travel industry includes: hotels, motels and other types of accommodation; restaurants and other food services; transportation services and facilities; amusements, attractions and other leisure facilities; gift shops and a large number of other enterprises. Since many of these businesses also serve local residents, the impact of spending by visitors can easily be overlooked or underestimated. In addition, Meis (1992) points out that the tourism industry involves concepts that have remained amorphous to both analysts and decision makers. 
Moreover, in all nations this problem has made it difficult for the industry to develop any type of reliable or credible tourism information base in order to estimate the contribution it makes to regional, national and global economies. However, the nature of this very diversity makes travel and tourism ideal vehicles for economic development in a wide variety of countries, regions or communities. Once the exclusive province of the wealthy, travel and tourism have become an institutionalised way of life for most of the population. In fact, McIntosh and Goeldner (1990) suggest that tourism has become the largest commodity in international trade for many nations and, for a significant number of other countries, it ranks second or third. For example, tourism is the major source of income in Bermuda, Greece, Italy, Spain, Switzerland and most Caribbean countries. In addition, Hawkins and Ritchie, quoting from data published by the American Express Company, suggest that the travel and tourism industry is the number one ranked employer in the Bahamas, Brazil, Canada, France, (the former) West Germany, Hong Kong, Italy, Jamaica, Japan, Singapore, the United Kingdom and the United States. However, because of problems of definition, which directly affect statistical measurement, it is not possible with any degree of certainty to provide precise, valid or reliable data about the extent of world-wide tourism participation or its economic impact. In many cases, similar difficulties arise when attempts are made to measure domestic tourism.
It is easy to show statistically how tourism affects individual economies.
contradiction
id_6042
The Copernican model of the solar system, with the Earth and associated planets revolving around the Sun, was formulated in the middle of the sixteenth century. Nicolaus Copernicus's theory was the first heliocentric model of planetary motion, placing the Sun at the centre. That said, his solar model retained the erroneous premise, as per the Ptolemaic System, that planets move in perfect circles. The sixteenth century's move away from the geocentric view of the universe as being centred on Earth led to a complete change in people's concept of the universe. The Copernican model of the solar system laid the groundwork for Newton's laws of gravity and Kepler's laws of planetary motion. The former describes how planets are held in their individual orbits.
Nicolaus Copernicus's work in the 1600s had a profound effect on how the universe was understood.
contradiction
id_6043
The Copernican model of the solar system, with the Earth and associated planets revolving around the Sun, was formulated in the middle of the sixteenth century. Nicolaus Copernicus's theory was the first heliocentric model of planetary motion, placing the Sun at the centre. That said, his solar model retained the erroneous premise, as per the Ptolemaic System, that planets move in perfect circles. The sixteenth century's move away from the geocentric view of the universe as being centred on Earth led to a complete change in people's concept of the universe. The Copernican model of the solar system laid the groundwork for Newton's laws of gravity and Kepler's laws of planetary motion. The former describes how planets are held in their individual orbits.
The Ptolemaic system preceded the Copernican model.
entailment
id_6044
The Copernican model of the solar system, with the Earth and associated planets revolving around the Sun, was formulated in the middle of the sixteenth century. Nicolaus Copernicus's theory was the first heliocentric model of planetary motion, placing the Sun at the centre. That said, his solar model retained the erroneous premise, as per the Ptolemaic System, that planets move in perfect circles. The sixteenth century's move away from the geocentric view of the universe as being centred on Earth led to a complete change in people's concept of the universe. The Copernican model of the solar system laid the groundwork for Newton's laws of gravity and Kepler's laws of planetary motion. The former describes how planets are held in their individual orbits.
The passage suggests that the Copernican system was flawless.
contradiction
id_6045
The Copernican model of the solar system, with the Earth and associated planets revolving around the Sun, was formulated in the middle of the sixteenth century. Nicolaus Copernicus's theory was the first heliocentric model of planetary motion, placing the Sun at the centre. That said, his solar model retained the erroneous premise, as per the Ptolemaic System, that planets move in perfect circles. The sixteenth century's move away from the geocentric view of the universe as being centred on Earth led to a complete change in people's concept of the universe. The Copernican model of the solar system laid the groundwork for Newton's laws of gravity and Kepler's laws of planetary motion. The former describes how planets are held in their individual orbits.
Copernicus paved the way for Newton's work on the orbits of the planets.
entailment
id_6046
The Copernican model of the solar system, with the Earth and associated planets revolving around the Sun, was formulated in the middle of the sixteenth century. Nicolaus Copernicus's theory was the first heliocentric model of planetary motion, placing the Sun at the centre. That said, his solar model retained the erroneous premise, as per the Ptolemaic System, that planets move in perfect circles. The sixteenth century's move away from the geocentric view of the universe as being centred on Earth led to a complete change in people's concept of the universe. The Copernican model of the solar system laid the groundwork for Newton's laws of gravity and Kepler's laws of planetary motion. The former describes how planets are held in their individual orbits.
Copernicus developed the first geocentric model of the Earth's solar system.
contradiction
id_6047
The Creativity Myth It is a myth that creative people are born with their talents: gifts from God or nature. Creative genius is, in fact, latent within many of us, without our realising. But how far do we need to travel to find the path to creativity? For many people, a long way. In our everyday lives, we have to perform many acts out of habit to survive, like opening the door, shaving, getting dressed, walking to work, and so on. If this were not the case, we would, in all probability, become mentally unhinged. So strongly ingrained are our habits, though this varies from person to person, that sometimes when a conscious effort is made to be creative, automatic response takes over. We may try, for example, to walk to work following a different route, but end up on our usual path. By then it is too late to go back and change our minds. Another day, perhaps. The same applies to all other areas of our lives. When we are solving problems, for example, we may seek different answers, but, often as not, find ourselves walking along the same well-trodden paths. So, for many people, their actions and behaviour are set in immovable blocks, their minds clogged with the cholesterol of habitual actions, preventing them from operating freely, and thereby stifling creation. Unfortunately, mankinds very struggle for survival has become a tyranny the obsessive desire to give order to the world is a case in point. Witness peoples attitude to time, social customs and the panoply of rules and regulations by which the human mind is now circumscribed. The groundwork for keeping creative ability in check begins at school. School, later university and then work, teach us to regulate our lives, imposing a continuous process of restrictions which is increasing exponentially with the advancement of technology. Is it surprising then that creative ability appears to be so rare? It is trapped in the prison that we have erected. Yet, even here in this hostile environment, the foundations for creativity are being laid; because setting off on the creative path is also partly about using rules and regulations. Such limitations are needed so that once they are learnt, they can be broken. The truly creative mind is often seen as totally free and unfettered. But a better image is of a mind, which can be free when it wants, and one that recognises that rules and regulations are parameters, or barriers, to be raised and dropped again at will. An example of how the human mind can be trained to be creative might help here. Peoples minds are just like tense muscles that need to be freed up and the potential unlocked. One strategy is to erect artificial barriers or hurdles in solving a problem. As a form of stimulation, the participants in the task can be forbidden to use particular solutions or to follow certain lines of thought to solve a problem. In this way, they are obliged to explore unfamiliar territory, which may lead to some startling discoveries. Unfortunately, the difficulty in this exercise, and with creation itself, is convincing people that creation is possible, shrouded as it is in so much myth and legend. There is also an element of fear involved, however subliminal, as deviating from the safety of ones thought patterns is very much akin to madness. But, open Pandoras box and a whole new world unfold before your very eyes. Lifting barriers into place also plays a major part in helping the mind to control ideas rather than letting them collide at random. 
Parameters act as containers for ideas and thus help the mind to fix on them. When the mind is thinking laterally and two ideas from different areas of the brain come or are brought together, they form a new idea, just like atoms floating around and then forming a molecule. Once the idea has been formed, it needs to be contained or it will fly away, so fleeting is its passage. The mind needs to hold it in place for a time so that it can recognise it or call on it again. And then the parameters can act as channels along which the ideas can flow, developing and expanding. When the mind has brought the idea to fruition by thinking it through to its conclusion, the parameters can be brought down and the idea allowed to float off and come in contact with other ideas.
The act of creation is linked to madness.
entailment
id_6048
The Creativity Myth It is a myth that creative people are born with their talents: gifts from God or nature. Creative genius is, in fact, latent within many of us, without our realising. But how far do we need to travel to find the path to creativity? For many people, a long way. In our everyday lives, we have to perform many acts out of habit to survive, like opening the door, shaving, getting dressed, walking to work, and so on. If this were not the case, we would, in all probability, become mentally unhinged. So strongly ingrained are our habits, though this varies from person to person, that sometimes when a conscious effort is made to be creative, automatic response takes over. We may try, for example, to walk to work following a different route, but end up on our usual path. By then it is too late to go back and change our minds. Another day, perhaps. The same applies to all other areas of our lives. When we are solving problems, for example, we may seek different answers, but, often as not, find ourselves walking along the same well-trodden paths. So, for many people, their actions and behaviour are set in immovable blocks, their minds clogged with the cholesterol of habitual actions, preventing them from operating freely, and thereby stifling creation. Unfortunately, mankinds very struggle for survival has become a tyranny the obsessive desire to give order to the world is a case in point. Witness peoples attitude to time, social customs and the panoply of rules and regulations by which the human mind is now circumscribed. The groundwork for keeping creative ability in check begins at school. School, later university and then work, teach us to regulate our lives, imposing a continuous process of restrictions which is increasing exponentially with the advancement of technology. Is it surprising then that creative ability appears to be so rare? It is trapped in the prison that we have erected. Yet, even here in this hostile environment, the foundations for creativity are being laid; because setting off on the creative path is also partly about using rules and regulations. Such limitations are needed so that once they are learnt, they can be broken. The truly creative mind is often seen as totally free and unfettered. But a better image is of a mind, which can be free when it wants, and one that recognises that rules and regulations are parameters, or barriers, to be raised and dropped again at will. An example of how the human mind can be trained to be creative might help here. Peoples minds are just like tense muscles that need to be freed up and the potential unlocked. One strategy is to erect artificial barriers or hurdles in solving a problem. As a form of stimulation, the participants in the task can be forbidden to use particular solutions or to follow certain lines of thought to solve a problem. In this way, they are obliged to explore unfamiliar territory, which may lead to some startling discoveries. Unfortunately, the difficulty in this exercise, and with creation itself, is convincing people that creation is possible, shrouded as it is in so much myth and legend. There is also an element of fear involved, however subliminal, as deviating from the safety of ones thought patterns is very much akin to madness. But, open Pandoras box and a whole new world unfold before your very eyes. Lifting barriers into place also plays a major part in helping the mind to control ideas rather than letting them collide at random. 
Parameters act as containers for ideas and thus help the mind to fix on them. When the mind is thinking laterally and two ideas from different areas of the brain come or are brought together, they form a new idea, just like atoms floating around and then forming a molecule. Once the idea has been formed, it needs to be contained or it will fly away, so fleeting is its passage. The mind needs to hold it in place for a time so that it can recognise it or call on it again. And then the parameters can act as channels along which the ideas can flow, developing and expanding. When the mind has brought the idea to fruition by thinking it through to its conclusion, the parameters can be brought down and the idea allowed to float off and come in contact with other ideas.
One problem with creativity is that people think it is impossible.
entailment
id_6049
The Creativity Myth It is a myth that creative people are born with their talents: gifts from God or nature. Creative genius is, in fact, latent within many of us, without our realising. But how far do we need to travel to find the path to creativity? For many people, a long way. In our everyday lives, we have to perform many acts out of habit to survive, like opening the door, shaving, getting dressed, walking to work, and so on. If this were not the case, we would, in all probability, become mentally unhinged. So strongly ingrained are our habits, though this varies from person to person, that sometimes when a conscious effort is made to be creative, automatic response takes over. We may try, for example, to walk to work following a different route, but end up on our usual path. By then it is too late to go back and change our minds. Another day, perhaps. The same applies to all other areas of our lives. When we are solving problems, for example, we may seek different answers, but, often as not, find ourselves walking along the same well-trodden paths. So, for many people, their actions and behaviour are set in immovable blocks, their minds clogged with the cholesterol of habitual actions, preventing them from operating freely, and thereby stifling creation. Unfortunately, mankinds very struggle for survival has become a tyranny the obsessive desire to give order to the world is a case in point. Witness peoples attitude to time, social customs and the panoply of rules and regulations by which the human mind is now circumscribed. The groundwork for keeping creative ability in check begins at school. School, later university and then work, teach us to regulate our lives, imposing a continuous process of restrictions which is increasing exponentially with the advancement of technology. Is it surprising then that creative ability appears to be so rare? It is trapped in the prison that we have erected. Yet, even here in this hostile environment, the foundations for creativity are being laid; because setting off on the creative path is also partly about using rules and regulations. Such limitations are needed so that once they are learnt, they can be broken. The truly creative mind is often seen as totally free and unfettered. But a better image is of a mind, which can be free when it wants, and one that recognises that rules and regulations are parameters, or barriers, to be raised and dropped again at will. An example of how the human mind can be trained to be creative might help here. Peoples minds are just like tense muscles that need to be freed up and the potential unlocked. One strategy is to erect artificial barriers or hurdles in solving a problem. As a form of stimulation, the participants in the task can be forbidden to use particular solutions or to follow certain lines of thought to solve a problem. In this way, they are obliged to explore unfamiliar territory, which may lead to some startling discoveries. Unfortunately, the difficulty in this exercise, and with creation itself, is convincing people that creation is possible, shrouded as it is in so much myth and legend. There is also an element of fear involved, however subliminal, as deviating from the safety of ones thought patterns is very much akin to madness. But, open Pandoras box and a whole new world unfold before your very eyes. Lifting barriers into place also plays a major part in helping the mind to control ideas rather than letting them collide at random. 
Parameters act as containers for ideas and thus help the mind to fix on them. When the mind is thinking laterally and two ideas from different areas of the brain come or are brought together, they form a new idea, just like atoms floating around and then forming a molecule. Once the idea has been formed, it needs to be contained or it will fly away, so fleeting is its passage. The mind needs to hold it in place for a time so that it can recognise it or call on it again. And then the parameters can act as channels along which the ideas can flow, developing and expanding. When the mind has brought the idea to fruition by thinking it through to its conclusion, the parameters can be brought down and the idea allowed to float off and come in contact with other ideas.
The truly creative mind is associated with the need for free speech and a free society.
neutral
id_6050
The Creativity Myth It is a myth that creative people are born with their talents: gifts from God or nature. Creative genius is, in fact, latent within many of us, without our realising. But how far do we need to travel to find the path to creativity? For many people, a long way. In our everyday lives, we have to perform many acts out of habit to survive, like opening the door, shaving, getting dressed, walking to work, and so on. If this were not the case, we would, in all probability, become mentally unhinged. So strongly ingrained are our habits, though this varies from person to person, that sometimes when a conscious effort is made to be creative, automatic response takes over. We may try, for example, to walk to work following a different route, but end up on our usual path. By then it is too late to go back and change our minds. Another day, perhaps. The same applies to all other areas of our lives. When we are solving problems, for example, we may seek different answers, but, often as not, find ourselves walking along the same well-trodden paths. So, for many people, their actions and behaviour are set in immovable blocks, their minds clogged with the cholesterol of habitual actions, preventing them from operating freely, and thereby stifling creation. Unfortunately, mankinds very struggle for survival has become a tyranny the obsessive desire to give order to the world is a case in point. Witness peoples attitude to time, social customs and the panoply of rules and regulations by which the human mind is now circumscribed. The groundwork for keeping creative ability in check begins at school. School, later university and then work, teach us to regulate our lives, imposing a continuous process of restrictions which is increasing exponentially with the advancement of technology. Is it surprising then that creative ability appears to be so rare? It is trapped in the prison that we have erected. Yet, even here in this hostile environment, the foundations for creativity are being laid; because setting off on the creative path is also partly about using rules and regulations. Such limitations are needed so that once they are learnt, they can be broken. The truly creative mind is often seen as totally free and unfettered. But a better image is of a mind, which can be free when it wants, and one that recognises that rules and regulations are parameters, or barriers, to be raised and dropped again at will. An example of how the human mind can be trained to be creative might help here. Peoples minds are just like tense muscles that need to be freed up and the potential unlocked. One strategy is to erect artificial barriers or hurdles in solving a problem. As a form of stimulation, the participants in the task can be forbidden to use particular solutions or to follow certain lines of thought to solve a problem. In this way, they are obliged to explore unfamiliar territory, which may lead to some startling discoveries. Unfortunately, the difficulty in this exercise, and with creation itself, is convincing people that creation is possible, shrouded as it is in so much myth and legend. There is also an element of fear involved, however subliminal, as deviating from the safety of ones thought patterns is very much akin to madness. But, open Pandoras box and a whole new world unfold before your very eyes. Lifting barriers into place also plays a major part in helping the mind to control ideas rather than letting them collide at random. 
Parameters act as containers for ideas and thus help the mind to fix on them. When the mind is thinking laterally and two ideas from different areas of the brain come or are brought together, they form a new idea, just like atoms floating around and then forming a molecule. Once the idea has been formed, it needs to be contained or it will fly away, so fleeting is its passage. The mind needs to hold it in place for a time so that it can recognise it or call on it again. And then the parameters can act as channels along which the ideas can flow, developing and expanding. When the mind has brought the idea to fruition by thinking it through to its conclusion, the parameters can be brought down and the idea allowed to float off and come in contact with other ideas.
Rules and regulations are examples of parameters.
entailment
id_6051
The Dead Sea Scrolls are probably the most significant archaeological discovery of the twentieth century. More than 800 ancient documents, written on papyrus and parchment, were found in 1947 in desert caves at Qumran, near the Dead Sea. The texts mainly date from between the last century BCE and the first century CE and are comprised of three types of document: copies of books from the Hebrew Bible; apocryphal manuscripts; and documents pertaining to the beliefs and practices of a sectarian community. The former category is arguably of the greatest academic significance, as documents such as a complete copy of the Book of Isaiah enabled historians to analyse the accuracy of Bible translations. However, the secrecy of the scholars appointed by the Israeli Antiquities Authority, and their slow rate of publication, were the subject of international controversy. In 1991, the Huntington Library made photographic images of the full set of scrolls finally available to all researchers. While the scrolls importance is indisputable, there is no consensus over the texts origins. The traditional view is that the scrolls belonged to an ascetic Jewish sect, widely believed to be the Essenes. The Essenes rules and doctrines are even seen by some scholars as a precursor to Christianity. A competing theory holds that the documents are sacred texts belonging to various Jewish communities, hidden in the caves for safekeeping around 68CE, during the unsuccessful Jewish Revolt against the Romans in Jerusalem.
Some scholars believe the Essenes inhabited the desert caves at Qumran, near the Dead Sea.
neutral
id_6052
The Dead Sea Scrolls are probably the most significant archaeological discovery of the twentieth century. More than 800 ancient documents, written on papyrus and parchment, were found in 1947 in desert caves at Qumran, near the Dead Sea. The texts mainly date from between the last century BCE and the first century CE and are comprised of three types of document: copies of books from the Hebrew Bible; apocryphal manuscripts; and documents pertaining to the beliefs and practices of a sectarian community. The former category is arguably of the greatest academic significance, as documents such as a complete copy of the Book of Isaiah enabled historians to analyse the accuracy of Bible translations. However, the secrecy of the scholars appointed by the Israeli Antiquities Authority, and their slow rate of publication, were the subject of international controversy. In 1991, the Huntington Library made photographic images of the full set of scrolls finally available to all researchers. While the scrolls importance is indisputable, there is no consensus over the texts origins. The traditional view is that the scrolls belonged to an ascetic Jewish sect, widely believed to be the Essenes. The Essenes rules and doctrines are even seen by some scholars as a precursor to Christianity. A competing theory holds that the documents are sacred texts belonging to various Jewish communities, hidden in the caves for safekeeping around 68CE, during the unsuccessful Jewish Revolt against the Romans in Jerusalem.
Not only the origins of the Dead Sea Scrolls, but also the process of their interpretation, have been disputed.
entailment
id_6053
The Dead Sea Scrolls are probably the most significant archaeological discovery of the twentieth century. More than 800 ancient documents, written on papyrus and parchment, were found in 1947 in desert caves at Qumran, near the Dead Sea. The texts mainly date from between the last century BCE and the first century CE and are comprised of three types of document: copies of books from the Hebrew Bible; apocryphal manuscripts; and documents pertaining to the beliefs and practices of a sectarian community. The former category is arguably of the greatest academic significance, as documents such as a complete copy of the Book of Isaiah enabled historians to analyse the accuracy of Bible translations. However, the secrecy of the scholars appointed by the Israeli Antiquities Authority, and their slow rate of publication, were the subject of international controversy. In 1991, the Huntington Library made photographic images of the full set of scrolls finally available to all researchers. While the scrolls importance is indisputable, there is no consensus over the texts origins. The traditional view is that the scrolls belonged to an ascetic Jewish sect, widely believed to be the Essenes. The Essenes rules and doctrines are even seen by some scholars as a precursor to Christianity. A competing theory holds that the documents are sacred texts belonging to various Jewish communities, hidden in the caves for safekeeping around 68CE, during the unsuccessful Jewish Revolt against the Romans in Jerusalem.
Academics debate whether the scrolls are the detailed accounts of one particular sect, or provide historical information about the wider Jewish people.
entailment
id_6054
The Dead Sea Scrolls are probably the most significant archaeological discovery of the twentieth century. More than 800 ancient documents, written on papyrus and parchment, were found in 1947 in desert caves at Qumran, near the Dead Sea. The texts mainly date from between the last century BCE and the first century CE and are comprised of three types of document: copies of books from the Hebrew Bible; apocryphal manuscripts; and documents pertaining to the beliefs and practices of a sectarian community. The former category is arguably of the greatest academic significance, as documents such as a complete copy of the Book of Isaiah enabled historians to analyse the accuracy of Bible translations. However, the secrecy of the scholars appointed by the Israeli Antiquities Authority, and their slow rate of publication, were the subject of international controversy. In 1991, the Huntington Library made photographic images of the full set of scrolls finally available to all researchers. While the scrolls importance is indisputable, there is no consensus over the texts origins. The traditional view is that the scrolls belonged to an ascetic Jewish sect, widely believed to be the Essenes. The Essenes rules and doctrines are even seen by some scholars as a precursor to Christianity. A competing theory holds that the documents are sacred texts belonging to various Jewish communities, hidden in the caves for safekeeping around 68CE, during the unsuccessful Jewish Revolt against the Romans in Jerusalem.
The traditional interpretation of the Dead Sea Scrolls is that they belonged to an early Christian sect called the Essenes.
contradiction
id_6055
The Dead Sea Scrolls are probably the most significant archaeological discovery of the twentieth century. More than 800 ancient documents, written on papyrus and parchment, were found in 1947 in desert caves at Qumran, near the Dead Sea. The texts mainly date from between the last century BCE and the first century CE and are comprised of three types of document: copies of books from the Hebrew Bible; apocryphal manuscripts; and documents pertaining to the beliefs and practices of a sectarian community. The former category is arguably of the greatest academic significance, as documents such as a complete copy of the Book of Isaiah enabled historians to analyse the accuracy of Bible translations. However, the secrecy of the scholars appointed by the Israeli Antiquities Authority, and their slow rate of publication, were the subject of international controversy. In 1991, the Huntington Library made photographic images of the full set of scrolls finally available to all researchers. While the scrolls importance is indisputable, there is no consensus over the texts origins. The traditional view is that the scrolls belonged to an ascetic Jewish sect, widely believed to be the Essenes. The Essenes rules and doctrines are even seen by some scholars as a precursor to Christianity. A competing theory holds that the documents are sacred texts belonging to various Jewish communities, hidden in the caves for safekeeping around 68CE, during the unsuccessful Jewish Revolt against the Romans in Jerusalem.
The Dead Sea Scrolls include the oldest known copy of the Book of Isaiah.
neutral
id_6056
The Delhi Government has decided to deploy marshals on a hundred Delhi Transport Corporation (DTC) buses and at dark spots to make public transport and public places safer for the women of the Capital.
Women will feel safer in public transport and public places.
entailment
id_6057
The Delhi Government has decided to deploy marshals on a hundred Delhi Transport Corporation (DTC) buses and at dark spots to make public transport and public places safer for the women of the Capital.
Crime against women will decrease.
entailment
id_6058
The Delhi Government has decided to deploy marshals on a hundred Delhi Transport Corporation (DTC) buses and at dark spots to make public transport and public places safer for the women of the Capital.
Delhi will be declared the safest capital for women.
entailment
id_6059
The Delhi Government has decided to deploy marshals on a hundred Delhi Transport Corporation (DTC) buses and at dark spots to make public transport and public places safer for the women of the Capital.
Women will use public transport instead of their own vehicles.
entailment
id_6060
The Delhi government has proposed an odd/even formula for cars. Cars with odd number plates and even number plates will ply on the roads on alternate days. The move is planned to be rolled out from January 1, 2016.
People of Delhi will buy two cars.
neutral
id_6061
The Delhi government has proposed an odd/even formula for cars. Cars with odd number plates and even number plates will ply on the roads on alternate days. The move is planned to be rolled out from January 1, 2016.
Sales of cars will decline in Delhi.
neutral
id_6062
The Delhi government has proposed an odd/even formula for cars. Cars with odd number plates and even number plates will ply on the roads on alternate days. The move is planned to be rolled out from January 1, 2016.
Pollution in the city will decline.
entailment
id_6063
The Delhi government has proposed an odd/even formula for cars. Cars with odd number plates and even number plates will ply on the roads on alternate days. The move is planned to be rolled out from January 1, 2016.
There will be no traffic jams on Delhi roads.
contradiction
id_6064
The Development Of Plastics When rubber was first commercially produced in Europe during the nineteenth century, it rapidly became a very important commodity, particularly in the fields of transportation and electricity. However, during the twentieth century a number of new synthetic materials, called plastics, superseded natural rubber in all but a few applications. Rubber is a polymer a compound containing large molecules that are formed by the bonding of many smaller, simpler units, repeated over and over again. The same bonding principle polymerisationunderlies the creation of a huge range of plastics by the chemical industry. The first plastic was developed as a result of a competition in the USA. In the 1860s, $10,000 was offered to anybody who could replace ivory supplies of which were declining with something equally good as a material for making billiard balls. The prize was won by John Wesley Hyatt with a material called celluloid. Celluloid was made by dissolving cellulose, a carbohydrate derived from plants, in a solution of camphor dissolved in ethanol. This new material rapidly found uses in the manufacture of products such as knife handles, detachable collars and cuffs, spectacle frames and photographic film. Without celluloid, the film industry could never have got off the ground at the end of the 19th century. Celluloid can be repeatedly softened and reshaped by heat and is known as a thermoplastic. In 1907 Leo Baekeland, a Belgian chemist working in the USA, invented a different kind of plastic by causing phenol and formaldehyde to react together. Baekeland called the material Bakelite, and it was the first of the thermosets plastics that can be cast and moulded while hot but cannot be softened by heat and reshaped once they have set. Bakelite was a good insulator and was resistant to water, acids and moderate heat. With these properties, it was soon being used in the manufacture of switches, household items, such as knife handles, and electrical components for cars. Soon chemists began looking for other small molecules that could be strung together to make polymers. In the 1930s, British chemists discovered that the gas ethylene would polymerize under heat and pressure to form a thermoplastic they called polythene. Polypropylene followed in the 1950s. Both were used to make bottles, pipes and plastic bags. A small change in the starting material replacing a hydrogen atom in ethylene with a chlorine atom produced PVC (polyvinyl chloride) , a hard, fireproof plastic suitable for drains and gutters. And by adding certain chemicals, a soft form of PVC could be produced, suitable as a substitute for rubber in items such as waterproof clothing. A closely related plastic was Teflon, or PTFE (polytetrafluoroethylene). This had a very low coefficient of friction, making it ideal for bearings, rollers, and non-stick frying pans. Polystyrene, developed during the 1930s in Germany, was a clear, glass-like material, used in food containers, domestic appliances, and toys. Expanded polystyrene a white, rigid foam was widely used in packaging and insulation. Polyurethanes, also developed in Germany, found uses as adhesives, coatings, and in the form of rigid foams as insulation materials. They are all produced from chemicals derived from crude oil, which contains exactly the same elements carbon and hydrogen as many plastics. The first of the man-made fibers, nylon, was also created in the 1930s. 
Its inventor was a chemist called Wallace Carothers, who worked for the Du Pont Company in the USA. He found that under the right conditions, two chemicals hexamethylenediamine and adipic acid would form a polymer that could be pumped out through holes and then stretched to form long glossy threads that could be woven like silk. Its first use was to make parachutes for the US armed forces in World War II. In the post-war years, nylon completely replaced silk in the manufacture of stockings. Subsequently, many other synthetic fibres joined nylon, including Orion, Acrilan, and Terylene. Today most garments are made of a blend of natural fibres, such as cotton and wool, and man-made fibres that make fabrics easier to look after. The great strength of the plastic is its indestructibility. However, this quality is also something of a drawback: beaches all over the world, even on the remotest islands, are littered with plastic bottles that nothing can destroy. Nor is it very easy to recycle plastics, as different types of plastic are often used in the same items and call for different treatments. Plastics can be made biodegradable by incorporating into their structure a material such as starch, which is attacked by bacteria and causes the plastic to fall apart. Other materials can be incorporated that gradually decay in sunlight although bottles made of such materials have to be stored in the dark, to ensure that they do not disintegrate before they have been used.
The chemical structure of plastic is very different from that of rubber.
contradiction
id_6065
The Development Of Plastics When rubber was first commercially produced in Europe during the nineteenth century, it rapidly became a very important commodity, particularly in the fields of transportation and electricity. However, during the twentieth century a number of new synthetic materials, called plastics, superseded natural rubber in all but a few applications. Rubber is a polymer a compound containing large molecules that are formed by the bonding of many smaller, simpler units, repeated over and over again. The same bonding principle polymerisationunderlies the creation of a huge range of plastics by the chemical industry. The first plastic was developed as a result of a competition in the USA. In the 1860s, $10,000 was offered to anybody who could replace ivory supplies of which were declining with something equally good as a material for making billiard balls. The prize was won by John Wesley Hyatt with a material called celluloid. Celluloid was made by dissolving cellulose, a carbohydrate derived from plants, in a solution of camphor dissolved in ethanol. This new material rapidly found uses in the manufacture of products such as knife handles, detachable collars and cuffs, spectacle frames and photographic film. Without celluloid, the film industry could never have got off the ground at the end of the 19th century. Celluloid can be repeatedly softened and reshaped by heat and is known as a thermoplastic. In 1907 Leo Baekeland, a Belgian chemist working in the USA, invented a different kind of plastic by causing phenol and formaldehyde to react together. Baekeland called the material Bakelite, and it was the first of the thermosets plastics that can be cast and moulded while hot but cannot be softened by heat and reshaped once they have set. Bakelite was a good insulator and was resistant to water, acids and moderate heat. With these properties, it was soon being used in the manufacture of switches, household items, such as knife handles, and electrical components for cars. Soon chemists began looking for other small molecules that could be strung together to make polymers. In the 1930s, British chemists discovered that the gas ethylene would polymerize under heat and pressure to form a thermoplastic they called polythene. Polypropylene followed in the 1950s. Both were used to make bottles, pipes and plastic bags. A small change in the starting material replacing a hydrogen atom in ethylene with a chlorine atom produced PVC (polyvinyl chloride) , a hard, fireproof plastic suitable for drains and gutters. And by adding certain chemicals, a soft form of PVC could be produced, suitable as a substitute for rubber in items such as waterproof clothing. A closely related plastic was Teflon, or PTFE (polytetrafluoroethylene). This had a very low coefficient of friction, making it ideal for bearings, rollers, and non-stick frying pans. Polystyrene, developed during the 1930s in Germany, was a clear, glass-like material, used in food containers, domestic appliances, and toys. Expanded polystyrene a white, rigid foam was widely used in packaging and insulation. Polyurethanes, also developed in Germany, found uses as adhesives, coatings, and in the form of rigid foams as insulation materials. They are all produced from chemicals derived from crude oil, which contains exactly the same elements carbon and hydrogen as many plastics. The first of the man-made fibers, nylon, was also created in the 1930s. 
Its inventor was a chemist called Wallace Carothers, who worked for the Du Pont Company in the USA. He found that under the right conditions, two chemicals, hexamethylenediamine and adipic acid, would form a polymer that could be pumped out through holes and then stretched to form long glossy threads that could be woven like silk. Its first use was to make parachutes for the US armed forces in World War II. In the post-war years, nylon completely replaced silk in the manufacture of stockings. Subsequently, many other synthetic fibres joined nylon, including Orion, Acrilan, and Terylene. Today most garments are made of a blend of natural fibres, such as cotton and wool, and man-made fibres that make fabrics easier to look after. The great strength of plastic is its indestructibility. However, this quality is also something of a drawback: beaches all over the world, even on the remotest islands, are littered with plastic bottles that nothing can destroy. Nor is it very easy to recycle plastics, as different types of plastic are often used in the same items and call for different treatments. Plastics can be made biodegradable by incorporating into their structure a material such as starch, which is attacked by bacteria and causes the plastic to fall apart. Other materials can be incorporated that gradually decay in sunlight, although bottles made of such materials have to be stored in the dark, to ensure that they do not disintegrate before they have been used.
John Wesley was a famous chemist.
neutral
id_6066
The Development Of Plastics When rubber was first commercially produced in Europe during the nineteenth century, it rapidly became a very important commodity, particularly in the fields of transportation and electricity. However, during the twentieth century a number of new synthetic materials, called plastics, superseded natural rubber in all but a few applications. Rubber is a polymer, a compound containing large molecules that are formed by the bonding of many smaller, simpler units, repeated over and over again. The same bonding principle, polymerisation, underlies the creation of a huge range of plastics by the chemical industry. The first plastic was developed as a result of a competition in the USA. In the 1860s, $10,000 was offered to anybody who could replace ivory, supplies of which were declining, with something equally good as a material for making billiard balls. The prize was won by John Wesley Hyatt with a material called celluloid. Celluloid was made by dissolving cellulose, a carbohydrate derived from plants, in a solution of camphor dissolved in ethanol. This new material rapidly found uses in the manufacture of products such as knife handles, detachable collars and cuffs, spectacle frames and photographic film. Without celluloid, the film industry could never have got off the ground at the end of the 19th century. Celluloid can be repeatedly softened and reshaped by heat and is known as a thermoplastic. In 1907 Leo Baekeland, a Belgian chemist working in the USA, invented a different kind of plastic by causing phenol and formaldehyde to react together. Baekeland called the material Bakelite, and it was the first of the thermosets: plastics that can be cast and moulded while hot but cannot be softened by heat and reshaped once they have set. Bakelite was a good insulator and was resistant to water, acids and moderate heat. With these properties, it was soon being used in the manufacture of switches, household items, such as knife handles, and electrical components for cars. Soon chemists began looking for other small molecules that could be strung together to make polymers. In the 1930s, British chemists discovered that the gas ethylene would polymerize under heat and pressure to form a thermoplastic they called polythene. Polypropylene followed in the 1950s. Both were used to make bottles, pipes and plastic bags. A small change in the starting material, replacing a hydrogen atom in ethylene with a chlorine atom, produced PVC (polyvinyl chloride), a hard, fireproof plastic suitable for drains and gutters. And by adding certain chemicals, a soft form of PVC could be produced, suitable as a substitute for rubber in items such as waterproof clothing. A closely related plastic was Teflon, or PTFE (polytetrafluoroethylene). This had a very low coefficient of friction, making it ideal for bearings, rollers, and non-stick frying pans. Polystyrene, developed during the 1930s in Germany, was a clear, glass-like material, used in food containers, domestic appliances, and toys. Expanded polystyrene, a white, rigid foam, was widely used in packaging and insulation. Polyurethanes, also developed in Germany, found uses as adhesives, coatings, and in the form of rigid foams as insulation materials. They are all produced from chemicals derived from crude oil, which contains exactly the same elements, carbon and hydrogen, as many plastics. The first of the man-made fibres, nylon, was also created in the 1930s.
Its inventor was a chemist called Wallace Carothers, who worked for the Du Pont Company in the USA. He found that under the right conditions, two chemicals, hexamethylenediamine and adipic acid, would form a polymer that could be pumped out through holes and then stretched to form long glossy threads that could be woven like silk. Its first use was to make parachutes for the US armed forces in World War II. In the post-war years, nylon completely replaced silk in the manufacture of stockings. Subsequently, many other synthetic fibres joined nylon, including Orion, Acrilan, and Terylene. Today most garments are made of a blend of natural fibres, such as cotton and wool, and man-made fibres that make fabrics easier to look after. The great strength of plastic is its indestructibility. However, this quality is also something of a drawback: beaches all over the world, even on the remotest islands, are littered with plastic bottles that nothing can destroy. Nor is it very easy to recycle plastics, as different types of plastic are often used in the same items and call for different treatments. Plastics can be made biodegradable by incorporating into their structure a material such as starch, which is attacked by bacteria and causes the plastic to fall apart. Other materials can be incorporated that gradually decay in sunlight, although bottles made of such materials have to be stored in the dark, to ensure that they do not disintegrate before they have been used.
Celluloid and Bakelite react to heat in the same way.
contradiction
id_6067
The Development Of Plastics When rubber was first commercially produced in Europe during the nineteenth century, it rapidly became a very important commodity, particularly in the fields of transportation and electricity. However, during the twentieth century a number of new synthetic materials, called plastics, superseded natural rubber in all but a few applications. Rubber is a polymer, a compound containing large molecules that are formed by the bonding of many smaller, simpler units, repeated over and over again. The same bonding principle, polymerisation, underlies the creation of a huge range of plastics by the chemical industry. The first plastic was developed as a result of a competition in the USA. In the 1860s, $10,000 was offered to anybody who could replace ivory, supplies of which were declining, with something equally good as a material for making billiard balls. The prize was won by John Wesley Hyatt with a material called celluloid. Celluloid was made by dissolving cellulose, a carbohydrate derived from plants, in a solution of camphor dissolved in ethanol. This new material rapidly found uses in the manufacture of products such as knife handles, detachable collars and cuffs, spectacle frames and photographic film. Without celluloid, the film industry could never have got off the ground at the end of the 19th century. Celluloid can be repeatedly softened and reshaped by heat and is known as a thermoplastic. In 1907 Leo Baekeland, a Belgian chemist working in the USA, invented a different kind of plastic by causing phenol and formaldehyde to react together. Baekeland called the material Bakelite, and it was the first of the thermosets: plastics that can be cast and moulded while hot but cannot be softened by heat and reshaped once they have set. Bakelite was a good insulator and was resistant to water, acids and moderate heat. With these properties, it was soon being used in the manufacture of switches, household items, such as knife handles, and electrical components for cars. Soon chemists began looking for other small molecules that could be strung together to make polymers. In the 1930s, British chemists discovered that the gas ethylene would polymerize under heat and pressure to form a thermoplastic they called polythene. Polypropylene followed in the 1950s. Both were used to make bottles, pipes and plastic bags. A small change in the starting material, replacing a hydrogen atom in ethylene with a chlorine atom, produced PVC (polyvinyl chloride), a hard, fireproof plastic suitable for drains and gutters. And by adding certain chemicals, a soft form of PVC could be produced, suitable as a substitute for rubber in items such as waterproof clothing. A closely related plastic was Teflon, or PTFE (polytetrafluoroethylene). This had a very low coefficient of friction, making it ideal for bearings, rollers, and non-stick frying pans. Polystyrene, developed during the 1930s in Germany, was a clear, glass-like material, used in food containers, domestic appliances, and toys. Expanded polystyrene, a white, rigid foam, was widely used in packaging and insulation. Polyurethanes, also developed in Germany, found uses as adhesives, coatings, and in the form of rigid foams as insulation materials. They are all produced from chemicals derived from crude oil, which contains exactly the same elements, carbon and hydrogen, as many plastics. The first of the man-made fibres, nylon, was also created in the 1930s.
Its inventor was a chemist called Wallace Carothers, who worked for the Du Pont Company in the USA. He found that under the right conditions, two chemicals, hexamethylenediamine and adipic acid, would form a polymer that could be pumped out through holes and then stretched to form long glossy threads that could be woven like silk. Its first use was to make parachutes for the US armed forces in World War II. In the post-war years, nylon completely replaced silk in the manufacture of stockings. Subsequently, many other synthetic fibres joined nylon, including Orion, Acrilan, and Terylene. Today most garments are made of a blend of natural fibres, such as cotton and wool, and man-made fibres that make fabrics easier to look after. The great strength of plastic is its indestructibility. However, this quality is also something of a drawback: beaches all over the world, even on the remotest islands, are littered with plastic bottles that nothing can destroy. Nor is it very easy to recycle plastics, as different types of plastic are often used in the same items and call for different treatments. Plastics can be made biodegradable by incorporating into their structure a material such as starch, which is attacked by bacteria and causes the plastic to fall apart. Other materials can be incorporated that gradually decay in sunlight, although bottles made of such materials have to be stored in the dark, to ensure that they do not disintegrate before they have been used.
The mix of different varieties of plastic can make them less recyclable.
entailment
id_6068
The Development Of Plastics When rubber was first commercially produced in Europe during the nineteenth century, it rapidly became a very important commodity, particularly in the fields of transportation and electricity. However, during the twentieth century a number of new synthetic materials, called plastics, superseded natural rubber in all but a few applications. Rubber is a polymer, a compound containing large molecules that are formed by the bonding of many smaller, simpler units, repeated over and over again. The same bonding principle, polymerisation, underlies the creation of a huge range of plastics by the chemical industry. The first plastic was developed as a result of a competition in the USA. In the 1860s, $10,000 was offered to anybody who could replace ivory, supplies of which were declining, with something equally good as a material for making billiard balls. The prize was won by John Wesley Hyatt with a material called celluloid. Celluloid was made by dissolving cellulose, a carbohydrate derived from plants, in a solution of camphor dissolved in ethanol. This new material rapidly found uses in the manufacture of products such as knife handles, detachable collars and cuffs, spectacle frames and photographic film. Without celluloid, the film industry could never have got off the ground at the end of the 19th century. Celluloid can be repeatedly softened and reshaped by heat and is known as a thermoplastic. In 1907 Leo Baekeland, a Belgian chemist working in the USA, invented a different kind of plastic by causing phenol and formaldehyde to react together. Baekeland called the material Bakelite, and it was the first of the thermosets: plastics that can be cast and moulded while hot but cannot be softened by heat and reshaped once they have set. Bakelite was a good insulator and was resistant to water, acids and moderate heat. With these properties, it was soon being used in the manufacture of switches, household items, such as knife handles, and electrical components for cars. Soon chemists began looking for other small molecules that could be strung together to make polymers. In the 1930s, British chemists discovered that the gas ethylene would polymerize under heat and pressure to form a thermoplastic they called polythene. Polypropylene followed in the 1950s. Both were used to make bottles, pipes and plastic bags. A small change in the starting material, replacing a hydrogen atom in ethylene with a chlorine atom, produced PVC (polyvinyl chloride), a hard, fireproof plastic suitable for drains and gutters. And by adding certain chemicals, a soft form of PVC could be produced, suitable as a substitute for rubber in items such as waterproof clothing. A closely related plastic was Teflon, or PTFE (polytetrafluoroethylene). This had a very low coefficient of friction, making it ideal for bearings, rollers, and non-stick frying pans. Polystyrene, developed during the 1930s in Germany, was a clear, glass-like material, used in food containers, domestic appliances, and toys. Expanded polystyrene, a white, rigid foam, was widely used in packaging and insulation. Polyurethanes, also developed in Germany, found uses as adhesives, coatings, and in the form of rigid foams as insulation materials. They are all produced from chemicals derived from crude oil, which contains exactly the same elements, carbon and hydrogen, as many plastics. The first of the man-made fibres, nylon, was also created in the 1930s.
Its inventor was a chemist called Wallace Carothers, who worked for the Du Pont Company in the USA. He found that under the right conditions, two chemicals, hexamethylenediamine and adipic acid, would form a polymer that could be pumped out through holes and then stretched to form long glossy threads that could be woven like silk. Its first use was to make parachutes for the US armed forces in World War II. In the post-war years, nylon completely replaced silk in the manufacture of stockings. Subsequently, many other synthetic fibres joined nylon, including Orion, Acrilan, and Terylene. Today most garments are made of a blend of natural fibres, such as cotton and wool, and man-made fibres that make fabrics easier to look after. The great strength of plastic is its indestructibility. However, this quality is also something of a drawback: beaches all over the world, even on the remotest islands, are littered with plastic bottles that nothing can destroy. Nor is it very easy to recycle plastics, as different types of plastic are often used in the same items and call for different treatments. Plastics can be made biodegradable by incorporating into their structure a material such as starch, which is attacked by bacteria and causes the plastic to fall apart. Other materials can be incorporated that gradually decay in sunlight, although bottles made of such materials have to be stored in the dark, to ensure that they do not disintegrate before they have been used.
Adding starch into plastic does not necessarily make plastic more durable.
entailment
id_6069
The Development Of Plastics When rubber was first commercially produced in Europe during the nineteenth century, it rapidly became a very important commodity, particularly in the fields of transportation and electricity. However, during the twentieth century a number of new synthetic materials, called plastics, superseded natural rubber in all but a few applications. Rubber is a polymer, a compound containing large molecules that are formed by the bonding of many smaller, simpler units, repeated over and over again. The same bonding principle, polymerisation, underlies the creation of a huge range of plastics by the chemical industry. The first plastic was developed as a result of a competition in the USA. In the 1860s, $10,000 was offered to anybody who could replace ivory, supplies of which were declining, with something equally good as a material for making billiard balls. The prize was won by John Wesley Hyatt with a material called celluloid. Celluloid was made by dissolving cellulose, a carbohydrate derived from plants, in a solution of camphor dissolved in ethanol. This new material rapidly found uses in the manufacture of products such as knife handles, detachable collars and cuffs, spectacle frames and photographic film. Without celluloid, the film industry could never have got off the ground at the end of the 19th century. Celluloid can be repeatedly softened and reshaped by heat and is known as a thermoplastic. In 1907 Leo Baekeland, a Belgian chemist working in the USA, invented a different kind of plastic by causing phenol and formaldehyde to react together. Baekeland called the material Bakelite, and it was the first of the thermosets: plastics that can be cast and moulded while hot but cannot be softened by heat and reshaped once they have set. Bakelite was a good insulator and was resistant to water, acids and moderate heat. With these properties, it was soon being used in the manufacture of switches, household items, such as knife handles, and electrical components for cars. Soon chemists began looking for other small molecules that could be strung together to make polymers. In the 1930s, British chemists discovered that the gas ethylene would polymerize under heat and pressure to form a thermoplastic they called polythene. Polypropylene followed in the 1950s. Both were used to make bottles, pipes and plastic bags. A small change in the starting material, replacing a hydrogen atom in ethylene with a chlorine atom, produced PVC (polyvinyl chloride), a hard, fireproof plastic suitable for drains and gutters. And by adding certain chemicals, a soft form of PVC could be produced, suitable as a substitute for rubber in items such as waterproof clothing. A closely related plastic was Teflon, or PTFE (polytetrafluoroethylene). This had a very low coefficient of friction, making it ideal for bearings, rollers, and non-stick frying pans. Polystyrene, developed during the 1930s in Germany, was a clear, glass-like material, used in food containers, domestic appliances, and toys. Expanded polystyrene, a white, rigid foam, was widely used in packaging and insulation. Polyurethanes, also developed in Germany, found uses as adhesives, coatings, and in the form of rigid foams as insulation materials. They are all produced from chemicals derived from crude oil, which contains exactly the same elements, carbon and hydrogen, as many plastics. The first of the man-made fibres, nylon, was also created in the 1930s.
Its inventor was a chemist called Wallace Carothers, who worked for the Du Pont Company in the USA. He found that under the right conditions, two chemicals, hexamethylenediamine and adipic acid, would form a polymer that could be pumped out through holes and then stretched to form long glossy threads that could be woven like silk. Its first use was to make parachutes for the US armed forces in World War II. In the post-war years, nylon completely replaced silk in the manufacture of stockings. Subsequently, many other synthetic fibres joined nylon, including Orion, Acrilan, and Terylene. Today most garments are made of a blend of natural fibres, such as cotton and wool, and man-made fibres that make fabrics easier to look after. The great strength of plastic is its indestructibility. However, this quality is also something of a drawback: beaches all over the world, even on the remotest islands, are littered with plastic bottles that nothing can destroy. Nor is it very easy to recycle plastics, as different types of plastic are often used in the same items and call for different treatments. Plastics can be made biodegradable by incorporating into their structure a material such as starch, which is attacked by bacteria and causes the plastic to fall apart. Other materials can be incorporated that gradually decay in sunlight, although bottles made of such materials have to be stored in the dark, to ensure that they do not disintegrate before they have been used.
Some plastic containers have to be preserved in special conditions.
entailment
id_6070
The Development of Museums The conviction that historical relics provide infallible testimony about the past is rooted in the nineteenth and early twentieth centuries, when science was regarded as objective and value free. As one writer observes: 'Although it is now evident that artefacts are as easily altered as chronicles, public faith in their veracity endures: a tangible relic seems ipso facto real.' Such conviction was, until recently, reflected in museum displays. Museums used to look - and some still do - much like storage rooms of objects packed together in showcases: good for scholars who wanted to study the subtle differences in design, but not for the ordinary visitor, to whom it all looked alike. Similarly, the information accompanying the objects often made little sense to the lay visitor. The content and format of explanations dated back to a time when the museum was the exclusive domain of the scientific researcher. Recently, however, attitudes towards history and the way it should be presented have altered. The key word in heritage display is now 'experience', the more exciting the better and, if possible, involving all the senses. Good examples of this approach in the UK are the Jorvik Centre in York; the National Museum of Photography, Film and Television in Bradford; and the Imperial War Museum in London. In the US the trend emerged much earlier: Williamsburg has been a prototype for many heritage developments in other parts of the world. No one can predict where the process will end. On so-called heritage sites the re-enactment of historical events is increasingly popular, and computers will soon provide virtual reality experiences, which will present visitors with a vivid image of the period of their choice, in which they themselves can act as if part of the historical environment. Such developments have been criticised as an intolerable vulgarisation, but the success of many historical theme parks and similar locations suggests that the majority of the public does not share this opinion. In a related development, the sharp distinction between museum and heritage sites on the one hand, and theme parks on the other, is gradually evaporating. They already borrow ideas and concepts from one another. For example, museums have adopted story lines for exhibitions, sites have accepted 'theming' as a relevant tool, and theme parks are moving towards more authenticity and research-based presentations. In zoos, animals are no longer kept in cages, but in great spaces, either in the open air or in enormous greenhouses, such as the jungle and desert environments in Burgers' Zoo in Holland. This particular trend is regarded as one of the major developments in the presentation of natural history in the twentieth century. Theme parks are undergoing other changes, too, as they try to present more serious social and cultural issues, and move away from fantasy. This development is a response to market forces and, although museums and heritage sites have a special, rather distinct, role to fulfil, they are also operating in a very competitive environment, where visitors make choices on how and where to spend their free time. Heritage and museum experts do not have to invent stories and recreate historical environments to attract their visitors: their assets are already in place. However, exhibits must be both based on artefacts and facts as we know them, and attractively presented.
Those who are professionally engaged in the art of interpreting history are thus in a difficult position, as they must steer a narrow course between the demands of 'evidence' and 'attractiveness', especially given the increasing need in the heritage industry for income-generating activities. It could be claimed that in order to make everything in heritage more 'real', historical accuracy must be increasingly altered. For example, Pithecanthropus erectus is depicted in an Indonesian museum with Malay facial features, because this corresponds to public perceptions. Similarly, in the Museum of Natural History in Washington, Neanderthal man is shown making a dominant gesture to his wife. Such presentations tell us more about contemporary perceptions of the world than about our ancestors. There is one compensation, however, for the professionals who make these interpretations: if they did not provide the interpretation, visitors would do it for themselves, based on their own ideas, misconceptions and prejudices. And no matter how exciting the result, it would contain a lot more bias than the presentations provided by experts. Human bias is inevitable, but another source of bias in the representation of history has to do with the transitory nature of the materials themselves. The simple fact is that not everything from history survives the historical process. Castles, palaces and cathedrals have a longer lifespan than the dwellings of ordinary people. The same applies to the furnishings and other contents of the premises. In a town like Leyden in Holland, which in the seventeenth century was occupied by approximately the same number of inhabitants as today, people lived within the walled town, an area more than five times smaller than modern Leyden. In most of the houses several families lived together in circumstances beyond our imagination. Yet in museums, fine period rooms give only an image of the lifestyle of the upper class of that era. No wonder that people who stroll around exhibitions are filled with nostalgia; the evidence in museums indicates that life was so much better in the past. This notion is induced by the bias in its representation in museums and heritage centres.
More people visit museums than theme parks.
contradiction
id_6071
The Development of Museums The conviction that historical relics provide infallible testimony about the past is rooted in the nineteenth and early twentieth centuries, when science was regarded as objective and value free. As one writer observes: 'Although it is now evident that artefacts are as easily altered as chronicles, public faith in their veracity endures: a tangible relic seems ipso facto real.' Such conviction was, until recently, reflected in museum displays. Museums used to look - and some still do - much like storage rooms of objects packed together in showcases: good for scholars who wanted to study the subtle differences in design, but not for the ordinary visitor, to whom it all looked alike. Similarly, the information accompanying the objects often made little sense to the lay visitor. The content and format of explanations dated back to a time when the museum was the exclusive domain of the scientific researcher. Recently, however, attitudes towards history and the way it should be presented have altered. The key word in heritage display is now 'experience', the more exciting the better and, if possible, involving all the senses. Good examples of this approach in the UK are the Jorvik Centre in York; the National Museum of Photography, Film and Television in Bradford; and the Imperial War Museum in London. In the US the trend emerged much earlier: Williamsburg has been a prototype for many heritage developments in other parts of the world. No one can predict where the process will end. On so-called heritage sites the re-enactment of historical events is increasingly popular, and computers will soon provide virtual reality experiences, which will present visitors with a vivid image of the period of their choice, in which they themselves can act as if part of the historical environment. Such developments have been criticised as an intolerable vulgarisation, but the success of many historical theme parks and similar locations suggests that the majority of the public does not share this opinion. In a related development, the sharp distinction between museum and heritage sites on the one hand, and theme parks on the other, is gradually evaporating. They already borrow ideas and concepts from one another. For example, museums have adopted story lines for exhibitions, sites have accepted 'theming' as a relevant tool, and theme parks are moving towards more authenticity and research-based presentations. In zoos, animals are no longer kept in cages, but in great spaces, either in the open air or in enormous greenhouses, such as the jungle and desert environments in Burgers' Zoo in Holland. This particular trend is regarded as one of the major developments in the presentation of natural history in the twentieth century. Theme parks are undergoing other changes, too, as they try to present more serious social and cultural issues, and move away from fantasy. This development is a response to market forces and, although museums and heritage sites have a special, rather distinct, role to fulfil, they are also operating in a very competitive environment, where visitors make choices on how and where to spend their free time. Heritage and museum experts do not have to invent stories and recreate historical environments to attract their visitors: their assets are already in place. However, exhibits must be both based on artefacts and facts as we know them, and attractively presented.
Those who are professionally engaged in the art of interpreting history are thus in a difficult position, as they must steer a narrow course between the demands of 'evidence' and 'attractiveness', especially given the increasing need in the heritage industry for income-generating activities. It could be claimed that in order to make everything in heritage more 'real', historical accuracy must be increasingly altered. For example, Pithecanthropus erectus is depicted in an Indonesian museum with Malay facial features, because this corresponds to public perceptions. Similarly, in the Museum of Natural History in Washington, Neanderthal man is shown making a dominant gesture to his wife. Such presentations tell us more about contemporary perceptions of the world than about our ancestors. There is one compensation, however, for the professionals who make these interpretations: if they did not provide the interpretation, visitors would do it for themselves, based on their own ideas, misconceptions and prejudices. And no matter how exciting the result, it would contain a lot more bias than the presentations provided by experts. Human bias is inevitable, but another source of bias in the representation of history has to do with the transitory nature of the materials themselves. The simple fact is that not everything from history survives the historical process. Castles, palaces and cathedrals have a longer lifespan than the dwellings of ordinary people. The same applies to the furnishings and other contents of the premises. In a town like Leyden in Holland, which in the seventeenth century was occupied by approximately the same number of inhabitants as today, people lived within the walled town, an area more than five times smaller than modern Leyden. In most of the houses several families lived together in circumstances beyond our imagination. Yet in museums, fine period rooms give only an image of the lifestyle of the upper class of that era. No wonder that people who stroll around exhibitions are filled with nostalgia; the evidence in museums indicates that life was so much better in the past. This notion is induced by the bias in its representation in museums and heritage centres.
The boundaries of Leyden have changed little since the seventeenth century.
neutral
id_6072
The Development of Museums The conviction that historical relics provide infallible testimony about the past is rooted in the nineteenth and early twentieth centuries, when science was regarded as objective and value free. As one writer observes: 'Although it is now evident that artefacts are as easily altered as chronicles, public faith in their veracity endures: a tangible relic seems ipso facto real.' Such conviction was, until recently, reflected in museum displays. Museums used to look - and some still do - much like storage rooms of objects packed together in showcases: good for scholars who wanted to study the subtle differences in design, but not for the ordinary visitor, to whom it all looked alike. Similarly, the information accompanying the objects often made little sense to the lay visitor. The content and format of explanations dated back to a time when the museum was the exclusive domain of the scientific researcher. Recently, however, attitudes towards history and the way it should be presented have altered. The key word in heritage display is now 'experience', the more exciting the better and, if possible, involving all the senses. Good examples of this approach in the UK are the Jorvik Centre in York; the National Museum of Photography, Film and Television in Bradford; and the Imperial War Museum in London. In the US the trend emerged much earlier: Williamsburg has been a prototype for many heritage developments in other parts of the world. No one can predict where the process will end. On so-called heritage sites the re-enactment of historical events is increasingly popular, and computers will soon provide virtual reality experiences, which will present visitors with a vivid image of the period of their choice, in which they themselves can act as if part of the historical environment. Such developments have been criticised as an intolerable vulgarisation, but the success of many historical theme parks and similar locations suggests that the majority of the public does not share this opinion. In a related development, the sharp distinction between museum and heritage sites on the one hand, and theme parks on the other, is gradually evaporating. They already borrow ideas and concepts from one another. For example, museums have adopted story lines for exhibitions, sites have accepted 'theming' as a relevant tool, and theme parks are moving towards more authenticity and research-based presentations. In zoos, animals are no longer kept in cages, but in great spaces, either in the open air or in enormous greenhouses, such as the jungle and desert environments in Burgers' Zoo in Holland. This particular trend is regarded as one of the major developments in the presentation of natural history in the twentieth century. Theme parks are undergoing other changes, too, as they try to present more serious social and cultural issues, and move away from fantasy. This development is a response to market forces and, although museums and heritage sites have a special, rather distinct, role to fulfil, they are also operating in a very competitive environment, where visitors make choices on how and where to spend their free time. Heritage and museum experts do not have to invent stories and recreate historical environments to attract their visitors: their assets are already in place. However, exhibits must be both based on artefacts and facts as we know them, and attractively presented.
Those who are professionally engaged in the art of interpreting history are thus in a difficult position, as they must steer a narrow course between the demands of 'evidence' and 'attractiveness', especially given the increasing need in the heritage industry for income-generating activities. It could be claimed that in order to make everything in heritage more 'real', historical accuracy must be increasingly altered. For example, Pithecanthropus erectus is depicted in an Indonesian museum with Malay facial features, because this corresponds to public perceptions. Similarly, in the Museum of Natural History in Washington, Neanderthal man is shown making a dominant gesture to his wife. Such presentations tell us more about contemporary perceptions of the world than about our ancestors. There is one compensation, however, for the professionals who make these interpretations: if they did not provide the interpretation, visitors would do it for themselves, based on their own ideas, misconceptions and prejudices. And no matter how exciting the result, it would contain a lot more bias than the presentations provided by experts. Human bias is inevitable, but another source of bias in the representation of history has to do with the transitory nature of the materials themselves. The simple fact is that not everything from history survives the historical process. Castles, palaces and cathedrals have a longer lifespan than the dwellings of ordinary people. The same applies to the furnishings and other contents of the premises. In a town like Leyden in Holland, which in the seventeenth century was occupied by approximately the same number of inhabitants as today, people lived within the walled town, an area more than five times smaller than modern Leyden. In most of the houses several families lived together in circumstances beyond our imagination. Yet in museums, fine period rooms give only an image of the lifestyle of the upper class of that era. No wonder that people who stroll around exhibitions are filled with nostalgia; the evidence in museums indicates that life was so much better in the past. This notion is induced by the bias in its representation in museums and heritage centres.
Museums can give a false impression of how life used to be.
contradiction
id_6073
The Development of Museums The conviction that historical relics provide infallible testimony about the past is rooted in the nineteenth and early twentieth centuries, when science was regarded as objective and value free. As one writer observes: 'Although it is now evident that artefacts are as easily altered as chronicles, public faith in their veracity endures: a tangible relic seems ipso facto real.' Such conviction was, until recently, reflected in museum displays. Museums used to look - and some still do - much like storage rooms of objects packed together in showcases: good for scholars who wanted to study the subtle differences in design, but not for the ordinary visitor, to whom it all looked alike. Similarly, the information accompanying the objects often made little sense to the lay visitor. The content and format of explanations dated back to a time when the museum was the exclusive domain of the scientific researcher. Recently, however, attitudes towards history and the way it should be presented have altered. The key word in heritage display is now 'experience', the more exciting the better and, if possible, involving all the senses. Good examples of this approach in the UK are the Jorvik Centre in York; the National Museum of Photography, Film and Television in Bradford; and the Imperial War Museum in London. In the US the trend emerged much earlier: Williamsburg has been a prototype for many heritage developments in other parts of the world. No one can predict where the process will end. On so-called heritage sites the re-enactment of historical events is increasingly popular, and computers will soon provide virtual reality experiences, which will present visitors with a vivid image of the period of their choice, in which they themselves can act as if part of the historical environment. Such developments have been criticised as an intolerable vulgarisation, but the success of many historical theme parks and similar locations suggests that the majority of the public does not share this opinion. In a related development, the sharp distinction between museum and heritage sites on the one hand, and theme parks on the other, is gradually evaporating. They already borrow ideas and concepts from one another. For example, museums have adopted story lines for exhibitions, sites have accepted 'theming' as a relevant tool, and theme parks are moving towards more authenticity and research-based presentations. In zoos, animals are no longer kept in cages, but in great spaces, either in the open air or in enormous greenhouses, such as the jungle and desert environments in Burgers' Zoo in Holland. This particular trend is regarded as one of the major developments in the presentation of natural history in the twentieth century. Theme parks are undergoing other changes, too, as they try to present more serious social and cultural issues, and move away from fantasy. This development is a response to market forces and, although museums and heritage sites have a special, rather distinct, role to fulfil, they are also operating in a very competitive environment, where visitors make choices on how and where to spend their free time. Heritage and museum experts do not have to invent stories and recreate historical environments to attract their visitors: their assets are already in place. However, exhibits must be both based on artefacts and facts as we know them, and attractively presented.
Those who are professionally engaged in the art of interpreting history are thus in a difficult position, as they must steer a narrow course between the demands of 'evidence' and 'attractiveness', especially given the increasing need in the heritage industry for income-generating activities. It could be claimed that in order to make everything in heritage more 'real', historical accuracy must be increasingly altered. For example, Pithecanthropus erectus is depicted in an Indonesian museum with Malay facial features, because this corresponds to public perceptions. Similarly, in the Museum of Natural History in Washington, Neanderthal man is shown making a dominant gesture to his wife. Such presentations tell us more about contemporary perceptions of the world than about our ancestors. There is one compensation, however, for the professionals who make these interpretations: if they did not provide the interpretation, visitors would do it for themselves, based on their own ideas, misconceptions and prejudices. And no matter how exciting the result, it would contain a lot more bias than the presentations provided by experts. Human bias is inevitable, but another source of bias in the representation of history has to do with the transitory nature of the materials themselves. The simple fact is that not everything from history survives the historical process. Castles, palaces and cathedrals have a longer lifespan than the dwellings of ordinary people. The same applies to the furnishings and other contents of the premises. In a town like Leyden in Holland, which in the seventeenth century was occupied by approximately the same number of inhabitants as today, people lived within the walled town, an area more than five times smaller than modern Leyden. In most of the houses several families lived together in circumstances beyond our imagination. Yet in museums, fine period rooms give only an image of the lifestyle of the upper class of that era. No wonder that people who stroll around exhibitions are filled with nostalgia; the evidence in museums indicates that life was so much better in the past. This notion is induced by the bias in its representation in museums and heritage centres.
Consumers prefer theme parks which avoid serious issues.
neutral
id_6074
The Development of Museums. The conviction that historical relics provide infallible testimony about the past is rooted in the nineteenth and early twentieth centuries, when science was regarded as objective and value free. As one writer observes: Although it is now evident that artefacts are as easily altered as chronicles, public faith in their veracity endures: a tangible relic seems ipso facto real. Such conviction was, until recently, reflected in museum displays. Museums used to look and some still do much like storage rooms of objects packed together in showcases: good for scholars who wanted to study the subtle differences in design, but not for the ordinary visitor, to whom it all looked alike. Similarly, the information accompanying the objects often made little sense to the lay visitor. The content and format of explanations dated back to a time when the museum was the exclusive domain of the scientific researcher. Recently, however, attitudes towards history and the way it should be presented have altered. The key word in heritage display is now experience, the more exciting the better and, if possible, involving all the senses. Good examples of this approach in the UK are the Jorvik Centre in York; the National Museum of Photography, Film and Television in Bradford; and the Imperial War Museum in London. In the US the trend emerged much earlier: Williamsburg has been a prototype for many heritage developments in other parts of the world. No one can predict where the process will end. On so-called heritage sites the re-enactment of historical events is increasingly popular, and computers will soon provide virtual reality experiences, which will present visitors with a vivid image of the period of their choice, in which they themselves can act as if part of the historical environment. Such developments have been criticized as an intolerable vulgarization, but the success of many historical theme parks and similar locations suggests that the majority of the public does not share this opinion. In a related development, the sharp distinction between museum and heritage sites on the one hand, and theme parks on the other, is gradually evaporating. They already borrow ideas and concepts from one another. For example, museums have adopted story lines for exhibitions, sites have accepted theming as a relevant tool, and theme parks are moving towards more authenticity and research-based presentations. In zoos, animals are no longer kept in cages, but in great spaces, either in the open air or in enormous greenhouses, such as the jungle and desert environments in Burgers Zoo in Holland. This particular trend is regarded as one of the major developments in the presentation of natural history in the twentieth century. Theme parks are undergoing other changes, too, as they try to present more serious social and cultural issues, and move away from fantasy. This development is a response to market forces and, although museums and heritage sites have a special, rather distinct, role to fulfil, they are also operating in a very competitive environment, where visitors make choices on how and where to spend their free time. Heritage and museum experts do not have to invent stories and recreate historical environments to attract their visitors: their assets are already in place. However, exhibits must be both based on artefacts and facts as we know them, and attractively presented. 
Those who are professionally engaged in the art of interpreting history are thus in a difficult position, as they must steer a narrow course between the demands of evidence and attractiveness, especially given the increasing need in the heritage industry for income-generating activities. It could be claimed that in order to make everything in heritage more real, historical accuracy must be increasingly altered. For example, Pithecanthropus erectus is depicted in an Indonesian museum with Malay facial features, because this corresponds to public perceptions. Similarly, in the Museum of Natural History in Washington, Neanderthal man is shown making a dominant gesture to his wife. Such presentations tell us more about contemporary perceptions of the world than about our ancestors. There is one compensation, however, for the professionals who make these interpretations: if they did not provide the interpretation, visitors would do it for themselves, based on their own ideas, misconceptions and prejudices. And no matter how exciting the result, it would contain a lot more bias than the presentations provided by experts. Human bias is inevitable, but another source of bias in the representation of history has to do with the transitory nature of the materials themselves. The simple fact is that not everything from history survives the historical process. Castles, palaces and cathedrals have a longer lifespan than the dwellings of ordinary people. The same applies to the furnishings and other contents of the premises. In a town like Leyden in Holland, which in the seventeenth century was occupied by approximately the same number of inhabitants as today, people lived within the walled town, an area more than five times smaller than modern Leyden. In most of the houses several families lived together in circumstances beyond our imagination. Yet in museums, fine period rooms give only an image of the lifestyle of the upper class of that era. No wonder that people who stroll around exhibitions are filled with nostalgia; the evidence in museums indicates that life was so much better in the past. This notion is induced by the bias in its representation in museums and heritage centres.
Consumers prefer theme parks which avoid serious issues.
contradiction
id_6075
The Development of Museums. The conviction that historical relics provide infallible testimony about the past is rooted in the nineteenth and early twentieth centuries, when science was regarded as objective and value free. As one writer observes: Although it is now evident that artefacts are as easily altered as chronicles, public faith in their veracity endures: a tangible relic seems ipso facto real. Such conviction was, until recently, reflected in museum displays. Museums used to look and some still do much like storage rooms of objects packed together in showcases: good for scholars who wanted to study the subtle differences in design, but not for the ordinary visitor, to whom it all looked alike. Similarly, the information accompanying the objects often made little sense to the lay visitor. The content and format of explanations dated back to a time when the museum was the exclusive domain of the scientific researcher. Recently, however, attitudes towards history and the way it should be presented have altered. The key word in heritage display is now experience, the more exciting the better and, if possible, involving all the senses. Good examples of this approach in the UK are the Jorvik Centre in York; the National Museum of Photography, Film and Television in Bradford; and the Imperial War Museum in London. In the US the trend emerged much earlier: Williamsburg has been a prototype for many heritage developments in other parts of the world. No one can predict where the process will end. On so-called heritage sites the re-enactment of historical events is increasingly popular, and computers will soon provide virtual reality experiences, which will present visitors with a vivid image of the period of their choice, in which they themselves can act as if part of the historical environment. Such developments have been criticized as an intolerable vulgarization, but the success of many historical theme parks and similar locations suggests that the majority of the public does not share this opinion. In a related development, the sharp distinction between museum and heritage sites on the one hand, and theme parks on the other, is gradually evaporating. They already borrow ideas and concepts from one another. For example, museums have adopted story lines for exhibitions, sites have accepted theming as a relevant tool, and theme parks are moving towards more authenticity and research-based presentations. In zoos, animals are no longer kept in cages, but in great spaces, either in the open air or in enormous greenhouses, such as the jungle and desert environments in Burgers Zoo in Holland. This particular trend is regarded as one of the major developments in the presentation of natural history in the twentieth century. Theme parks are undergoing other changes, too, as they try to present more serious social and cultural issues, and move away from fantasy. This development is a response to market forces and, although museums and heritage sites have a special, rather distinct, role to fulfil, they are also operating in a very competitive environment, where visitors make choices on how and where to spend their free time. Heritage and museum experts do not have to invent stories and recreate historical environments to attract their visitors: their assets are already in place. However, exhibits must be both based on artefacts and facts as we know them, and attractively presented. 
Those who are professionally engaged in the art of interpreting history are thus in a difficult position, as they must steer a narrow course between the demands of evidence and attractiveness, especially given the increasing need in the heritage industry for income-generating activities. It could be claimed that in order to make everything in heritage more real, historical accuracy must be increasingly altered. For example, Pithecanthropus erectus is depicted in an Indonesian museum with Malay facial features, because this corresponds to public perceptions. Similarly, in the Museum of Natural History in Washington, Neanderthal man is shown making a dominant gesture to his wife. Such presentations tell us more about contemporary perceptions of the world than about our ancestors. There is one compensation, however, for the professionals who make these interpretations: if they did not provide the interpretation, visitors would do it for themselves, based on their own ideas, misconceptions and prejudices. And no matter how exciting the result, it would contain a lot more bias than the presentations provided by experts. Human bias is inevitable, but another source of bias in the representation of history has to do with the transitory nature of the materials themselves. The simple fact is that not everything from history survives the historical process. Castles, palaces and cathedrals have a longer lifespan than the dwellings of ordinary people. The same applies to the furnishings and other contents of the premises. In a town like Leyden in Holland, which in the seventeenth century was occupied by approximately the same number of inhabitants as today, people lived within the walled town, an area more than five times smaller than modern Leyden. In most of the houses several families lived together in circumstances beyond our imagination. Yet in museums, fine period rooms give only an image of the lifestyle of the upper class of that era. No wonder that people who stroll around exhibitions are filled with nostalgia; the evidence in museums indicates that life was so much better in the past. This notion is induced by the bias in its representation in museums and heritage centres.
Museums can give a false impression of how life used to be.
entailment
id_6076
The Development of Museums. The conviction that historical relics provide infallible testimony about the past is rooted in the nineteenth and early twentieth centuries, when science was regarded as objective and value free. As one writer observes: Although it is now evident that artefacts are as easily altered as chronicles, public faith in their veracity endures: a tangible relic seems ipso facto real. Such conviction was, until recently, reflected in museum displays. Museums used to look and some still do much like storage rooms of objects packed together in showcases: good for scholars who wanted to study the subtle differences in design, but not for the ordinary visitor, to whom it all looked alike. Similarly, the information accompanying the objects often made little sense to the lay visitor. The content and format of explanations dated back to a time when the museum was the exclusive domain of the scientific researcher. Recently, however, attitudes towards history and the way it should be presented have altered. The key word in heritage display is now experience, the more exciting the better and, if possible, involving all the senses. Good examples of this approach in the UK are the Jorvik Centre in York; the National Museum of Photography, Film and Television in Bradford; and the Imperial War Museum in London. In the US the trend emerged much earlier: Williamsburg has been a prototype for many heritage developments in other parts of the world. No one can predict where the process will end. On so-called heritage sites the re-enactment of historical events is increasingly popular, and computers will soon provide virtual reality experiences, which will present visitors with a vivid image of the period of their choice, in which they themselves can act as if part of the historical environment. Such developments have been criticized as an intolerable vulgarization, but the success of many historical theme parks and similar locations suggests that the majority of the public does not share this opinion. In a related development, the sharp distinction between museum and heritage sites on the one hand, and theme parks on the other, is gradually evaporating. They already borrow ideas and concepts from one another. For example, museums have adopted story lines for exhibitions, sites have accepted theming as a relevant tool, and theme parks are moving towards more authenticity and research-based presentations. In zoos, animals are no longer kept in cages, but in great spaces, either in the open air or in enormous greenhouses, such as the jungle and desert environments in Burgers Zoo in Holland. This particular trend is regarded as one of the major developments in the presentation of natural history in the twentieth century. Theme parks are undergoing other changes, too, as they try to present more serious social and cultural issues, and move away from fantasy. This development is a response to market forces and, although museums and heritage sites have a special, rather distinct, role to fulfil, they are also operating in a very competitive environment, where visitors make choices on how and where to spend their free time. Heritage and museum experts do not have to invent stories and recreate historical environments to attract their visitors: their assets are already in place. However, exhibits must be both based on artefacts and facts as we know them, and attractively presented. 
Those who are professionally engaged in the art of interpreting history are thus in a difficult position, as they must steer a narrow course between the demands of evidence and attractiveness, especially given the increasing need in the heritage industry for income-generating activities. It could be claimed that in order to make everything in heritage more real, historical accuracy must be increasingly altered. For example, Pithecanthropus erectus is depicted in an Indonesian museum with Malay facial features, because this corresponds to public perceptions. Similarly, in the Museum of Natural History in Washington, Neanderthal man is shown making a dominant gesture to his wife. Such presentations tell us more about contemporary perceptions of the world than about our ancestors. There is one compensation, however, for the professionals who make these interpretations: if they did not provide the interpretation, visitors would do it for themselves, based on their own ideas, misconceptions and prejudices. And no matter how exciting the result, it would contain a lot more bias than the presentations provided by experts. Human bias is inevitable, but another source of bias in the representation of history has to do with the transitory nature of the materials themselves. The simple fact is that not everything from history survives the historical process. Castles, palaces and cathedrals have a longer lifespan than the dwellings of ordinary people. The same applies to the furnishing and other contents of the premises. In a town like Leyden in Holland, which in the seventeenth century was occupied by approximately the same number of inhabitants as today, people lived within the walled town, an area more than five times smaller than modern Leyden. In most of the houses several families lived together in circumstances beyond our imagination. Yet in museums, fine period rooms give only an image of the lifestyle of the upper class of that era. No wonder that people who stroll around exhibitions are filled with nostalgia; the evidence in museums indicates that life was so much better in the past. This notion is induced by the bias in its representation in museums and heritage centres.
More people visit museums than theme parks.
neutral
id_6077
The Development of Museums. The conviction that historical relics provide infallible testimony about the past is rooted in the nineteenth and early twentieth centuries, when science was regarded as objective and value free. As one writer observes: Although it is now evident that artefacts are as easily altered as chronicles, public faith in their veracity endures: a tangible relic seems ipso facto real. Such conviction was, until recently, reflected in museum displays. Museums used to look and some still do much like storage rooms of objects packed together in showcases: good for scholars who wanted to study the subtle differences in design, but not for the ordinary visitor, to whom it all looked alike. Similarly, the information accompanying the objects often made little sense to the lay visitor. The content and format of explanations dated back to a time when the museum was the exclusive domain of the scientific researcher. Recently, however, attitudes towards history and the way it should be presented have altered. The key word in heritage display is now experience, the more exciting the better and, if possible, involving all the senses. Good examples of this approach in the UK are the Jorvik Centre in York; the National Museum of Photography, Film and Television in Bradford; and the Imperial War Museum in London. In the US the trend emerged much earlier: Williamsburg has been a prototype for many heritage developments in other parts of the world. No one can predict where the process will end. On so-called heritage sites the re-enactment of historical events is increasingly popular, and computers will soon provide virtual reality experiences, which will present visitors with a vivid image of the period of their choice, in which they themselves can act as if part of the historical environment. Such developments have been criticized as an intolerable vulgarization, but the success of many historical theme parks and similar locations suggests that the majority of the public does not share this opinion. In a related development, the sharp distinction between museum and heritage sites on the one hand, and theme parks on the other, is gradually evaporating. They already borrow ideas and concepts from one another. For example, museums have adopted story lines for exhibitions, sites have accepted theming as a relevant tool, and theme parks are moving towards more authenticity and research-based presentations. In zoos, animals are no longer kept in cages, but in great spaces, either in the open air or in enormous greenhouses, such as the jungle and desert environments in Burgers Zoo in Holland. This particular trend is regarded as one of the major developments in the presentation of natural history in the twentieth century. Theme parks are undergoing other changes, too, as they try to present more serious social and cultural issues, and move away from fantasy. This development is a response to market forces and, although museums and heritage sites have a special, rather distinct, role to fulfil, they are also operating in a very competitive environment, where visitors make choices on how and where to spend their free time. Heritage and museum experts do not have to invent stories and recreate historical environments to attract their visitors: their assets are already in place. However, exhibits must be both based on artefacts and facts as we know them, and attractively presented. 
Those who are professionally engaged in the art of interpreting history are thus in a difficult position, as they must steer a narrow course between the demands of evidence and attractiveness, especially given the increasing need in the heritage industry for income-generating activities. It could be claimed that in order to make everything in heritage more real, historical accuracy must be increasingly altered. For example, Pithecanthropus erectus is depicted in an Indonesian museum with Malay facial features, because this corresponds to public perceptions. Similarly, in the Museum of Natural History in Washington, Neanderthal man is shown making a dominant gesture to his wife. Such presentations tell us more about contemporary perceptions of the world than about our ancestors. There is one compensation, however, for the professionals who make these interpretations: if they did not provide the interpretation, visitors would do it for themselves, based on their own ideas, misconceptions and prejudices. And no matter how exciting the result, it would contain a lot more bias than the presentations provided by experts. Human bias is inevitable, but another source of bias in the representation of history has to do with the transitory nature of the materials themselves. The simple fact is that not everything from history survives the historical process. Castles, palaces and cathedrals have a longer lifespan than the dwellings of ordinary people. The same applies to the furnishing and other contents of the premises. In a town like Leyden in Holland, which in the seventeenth century was occupied by approximately the same number of inhabitants as today, people lived within the walled town, an area more than five times smaller than modern Leyden. In most of the houses several families lived together in circumstances beyond our imagination. Yet in museums, fine period rooms give only an image of the lifestyle of the upper class of that era. No wonder that people who stroll around exhibitions are filled with nostalgia; the evidence in museums indicates that life was so much better in the past. This notion is induced by the bias in its representation in museums and heritage centres.
The boundaries of Leyden have changed little since the seventeenth century.
contradiction
id_6078
The Dinosaurs' Footprints and Extinction A. EVERYBODY knows that the dinosaurs were killed by an asteroid. Something big hit the earth 65 million years ago and, when the dust had fallen, so had the great reptiles. There is thus a nice, if ironic, symmetry in the idea that a similar impact brought about the dinosaurs' rise. That is the thesis proposed by Paul Olsen, of Columbia University, and his colleagues in this week's Science. B. Dinosaurs first appear in the fossil record 230m years ago, during the Triassic period. But they were mostly small, and they shared the earth with lots of other sorts of reptile. It was in the subsequent Jurassic, which began 202 million years ago, that they overran the planet and turned into the monsters depicted in the book and movie Jurassic Park. (Actually, though, the dinosaurs that appeared on screen were from the still more recent Cretaceous period.) Dr Olsen and his colleagues are not the first to suggest that the dinosaurs inherited the earth as the result of an asteroid strike. But they are the first to show that the takeover did, indeed, happen in a geological eyeblink. C. Dinosaur skeletons are rare. Dinosaur footprints are, however, surprisingly abundant. And the sizes of the prints are as good an indication of the sizes of the beasts as are the skeletons themselves. Dr Olsen and his colleagues therefore concentrated on prints, not bones. D. The prints in question were made in eastern North America, a part of the world then full of rift valleys similar to those in East Africa today. Like the modern African rift valleys, the Triassic/Jurassic American ones contained lakes, and these lakes grew and shrank at regular intervals because of climatic changes caused by periodic shifts in the earth's orbit. (A similar phenomenon is responsible for modern ice ages.) That regularity, combined with reversals in the earth's magnetic field, which are detectable in the tiny fields of certain magnetic minerals, means that rocks from this place and period can be dated to within a few thousand years. As a bonus, squishy lake-edge sediments are just the things for recording the tracks of passing animals. By dividing the labour between themselves, the ten authors of the paper were able to study such tracks at 80 sites. E. The researchers looked at 18 so-called ichnotaxa. These are recognisable types of footprint that cannot be matched precisely with the species of animal that left them. But they can be matched with a general sort of animal, and thus act as an indicator of the fate of that group, even when there are no bones to tell the story. Five of the ichnotaxa disappear before the end of the Triassic, and four march confidently across the boundary into the Jurassic. Six, however, vanish at the boundary, or only just splutter across it; and three appear from nowhere, almost as soon as the Jurassic begins. F. That boundary itself is suggestive. The first geological indication of the impact that killed the dinosaurs was an unusually high level of iridium in rocks at the end of the Cretaceous, when the beasts disappear from the fossil record. Iridium is normally rare at the earth's surface, but it is more abundant in meteorites. When people began to believe the impact theory, they started looking for other Cretaceous-end anomalies. One that turned up was a surprising abundance of fern spores in rocks just above the boundary layer, a phenomenon known as a fern spike. G. That matched the theory nicely. Many modern ferns are opportunists.
They cannot compete against plants with leaves, but if a piece of land is cleared by, say, a volcanic eruption, they are often the first things to set up shop there. An asteroid strike would have scoured much of the earth of its vegetable cover, and provided a paradise for ferns. A fern spike in the rocks is thus a good indication that something terrible has happened. H. Both an iridium anomaly and a fern spike appear in rocks at the end of the Triassic, too. That accounts for the disappearing ichnotaxa: the creatures that made them did not survive the holocaust. The surprise is how rapidly the new ichnotaxa appear. I. Dr Olsen and his colleagues suggest that the explanation for this rapid increase in size may be a phenomenon called ecological release. This is seen today when reptiles (which, in modern times, tend to be small creatures) reach islands where they face no competitors. The most spectacular example is on the Indonesian island of Komodo, where local lizards have grown so large that they are often referred to as dragons. The dinosaurs, in other words, could flourish only when the competition had been knocked out. J. That leaves the question of where the impact happened. No large hole in the earth's crust seems to be 202m years old. It may, of course, have been overlooked. Old craters are eroded and buried, and not always easy to find. Alternatively, it may have vanished. Although continental crust is more or less permanent, the ocean floor is constantly recycled by the tectonic processes that bring about continental drift. There is no ocean floor left that is more than 200m years old, so a crater that formed in the ocean would have been swallowed up by now. K. There is a third possibility, however. This is that the crater is known, but has been misdated. The Manicouagan structure, a crater in Quebec, is thought to be 214m years old. It is huge, some 100km across, and seems to be the largest of between three and five craters that formed within a few hours of each other as the lumps of a disintegrated comet hit the earth one by one.
Ichnotaxa show that dinosaur footprints offer exact information about the individual species that left them.
contradiction
id_6079
The Dinosaurs' Footprints and Extinction A. EVERYBODY knows that the dinosaurs were killed by an asteroid. Something big hit the earth 65 million years ago and, when the dust had fallen, so had the great reptiles. There is thus a nice, if ironic, symmetry in the idea that a similar impact brought about the dinosaurs' rise. That is the thesis proposed by Paul Olsen, of Columbia University, and his colleagues in this week's Science. B. Dinosaurs first appear in the fossil record 230m years ago, during the Triassic period. But they were mostly small, and they shared the earth with lots of other sorts of reptile. It was in the subsequent Jurassic, which began 202 million years ago, that they overran the planet and turned into the monsters depicted in the book and movie Jurassic Park. (Actually, though, the dinosaurs that appeared on screen were from the still more recent Cretaceous period.) Dr Olsen and his colleagues are not the first to suggest that the dinosaurs inherited the earth as the result of an asteroid strike. But they are the first to show that the takeover did, indeed, happen in a geological eyeblink. C. Dinosaur skeletons are rare. Dinosaur footprints are, however, surprisingly abundant. And the sizes of the prints are as good an indication of the sizes of the beasts as are the skeletons themselves. Dr Olsen and his colleagues therefore concentrated on prints, not bones. D. The prints in question were made in eastern North America, a part of the world then full of rift valleys similar to those in East Africa today. Like the modern African rift valleys, the Triassic/Jurassic American ones contained lakes, and these lakes grew and shrank at regular intervals because of climatic changes caused by periodic shifts in the earth's orbit. (A similar phenomenon is responsible for modern ice ages.) That regularity, combined with reversals in the earth's magnetic field, which are detectable in the tiny fields of certain magnetic minerals, means that rocks from this place and period can be dated to within a few thousand years. As a bonus, squishy lake-edge sediments are just the things for recording the tracks of passing animals. By dividing the labour between themselves, the ten authors of the paper were able to study such tracks at 80 sites. E. The researchers looked at 18 so-called ichnotaxa. These are recognisable types of footprint that cannot be matched precisely with the species of animal that left them. But they can be matched with a general sort of animal, and thus act as an indicator of the fate of that group, even when there are no bones to tell the story. Five of the ichnotaxa disappear before the end of the Triassic, and four march confidently across the boundary into the Jurassic. Six, however, vanish at the boundary, or only just splutter across it; and three appear from nowhere, almost as soon as the Jurassic begins. F. That boundary itself is suggestive. The first geological indication of the impact that killed the dinosaurs was an unusually high level of iridium in rocks at the end of the Cretaceous, when the beasts disappear from the fossil record. Iridium is normally rare at the earth's surface, but it is more abundant in meteorites. When people began to believe the impact theory, they started looking for other Cretaceous-end anomalies. One that turned up was a surprising abundance of fern spores in rocks just above the boundary layer, a phenomenon known as a fern spike. G. That matched the theory nicely. Many modern ferns are opportunists.
They cannot compete against plants with leaves, but if a piece of land is cleared by, say, a volcanic eruption, they are often the first things to set up shop there. An asteroid strike would have scoured much of the earth of its vegetable cover, and provided a paradise for ferns. A fern spike in the rocks is thus a good indication that something terrible has happened. H. Both an iridium anomaly and a fern spike appear in rocks at the end of the Triassic, too. That accounts for the disappearing ichnotaxa: the creatures that made them did not survive the holocaust. The surprise is how rapidly the new ichnotaxa appear. I. Dr Olsen and his colleagues suggest that the explanation for this rapid increase in size may be a phenomenon called ecological release. This is seen today when reptiles (which, in modern times, tend to be small creatures) reach islands where they face no competitors. The most spectacular example is on the Indonesian island of Komodo, where local lizards have grown so large that they are often referred to as dragons. The dinosaurs, in other words, could flourish only when the competition had been knocked out. J. That leaves the question of where the impact happened. No large hole in the earth's crust seems to be 202m years old. It may, of course, have been overlooked. Old craters are eroded and buried, and not always easy to find. Alternatively, it may have vanished. Although continental crust is more or less permanent, the ocean floor is constantly recycled by the tectonic processes that bring about continental drift. There is no ocean floor left that is more than 200m years old, so a crater that formed in the ocean would have been swallowed up by now. K. There is a third possibility, however. This is that the crater is known, but has been misdated. The Manicouagan structure, a crater in Quebec, is thought to be 214m years old. It is huge, some 100km across, and seems to be the largest of between three and five craters that formed within a few hours of each other as the lumps of a disintegrated comet hit the earth one by one.
Dr Paul Olsen and his colleagues believe that an asteroid strike may also have led to the rise of the dinosaurs.
entailment
id_6080
The Dinosaurs' Footprints and Extinction A. EVERYBODY knows that the dinosaurs were killed by an asteroid. Something big hit the earth 65 million years ago and, when the dust had fallen, so had the great reptiles. There is thus a nice, if ironic, symmetry in the idea that a similar impact brought about the dinosaurs' rise. That is the thesis proposed by Paul Olsen, of Columbia University, and his colleagues in this week's Science. B. Dinosaurs first appear in the fossil record 230m years ago, during the Triassic period. But they were mostly small, and they shared the earth with lots of other sorts of reptile. It was in the subsequent Jurassic, which began 202 million years ago, that they overran the planet and turned into the monsters depicted in the book and movie Jurassic Park. (Actually, though, the dinosaurs that appeared on screen were from the still more recent Cretaceous period.) Dr Olsen and his colleagues are not the first to suggest that the dinosaurs inherited the earth as the result of an asteroid strike. But they are the first to show that the takeover did, indeed, happen in a geological eyeblink. C. Dinosaur skeletons are rare. Dinosaur footprints are, however, surprisingly abundant. And the sizes of the prints are as good an indication of the sizes of the beasts as are the skeletons themselves. Dr Olsen and his colleagues therefore concentrated on prints, not bones. D. The prints in question were made in eastern North America, a part of the world then full of rift valleys similar to those in East Africa today. Like the modern African rift valleys, the Triassic/Jurassic American ones contained lakes, and these lakes grew and shrank at regular intervals because of climatic changes caused by periodic shifts in the earth's orbit. (A similar phenomenon is responsible for modern ice ages.) That regularity, combined with reversals in the earth's magnetic field, which are detectable in the tiny fields of certain magnetic minerals, means that rocks from this place and period can be dated to within a few thousand years. As a bonus, squishy lake-edge sediments are just the things for recording the tracks of passing animals. By dividing the labour between themselves, the ten authors of the paper were able to study such tracks at 80 sites. E. The researchers looked at 18 so-called ichnotaxa. These are recognisable types of footprint that cannot be matched precisely with the species of animal that left them. But they can be matched with a general sort of animal, and thus act as an indicator of the fate of that group, even when there are no bones to tell the story. Five of the ichnotaxa disappear before the end of the Triassic, and four march confidently across the boundary into the Jurassic. Six, however, vanish at the boundary, or only just splutter across it; and three appear from nowhere, almost as soon as the Jurassic begins. F. That boundary itself is suggestive. The first geological indication of the impact that killed the dinosaurs was an unusually high level of iridium in rocks at the end of the Cretaceous, when the beasts disappear from the fossil record. Iridium is normally rare at the earth's surface, but it is more abundant in meteorites. When people began to believe the impact theory, they started looking for other Cretaceous-end anomalies. One that turned up was a surprising abundance of fern spores in rocks just above the boundary layer, a phenomenon known as a fern spike. G. That matched the theory nicely. Many modern ferns are opportunists.
They cannot compete against plants with leaves, but if a piece of land is cleared by, say, a volcanic eruption, they are often the first things to set up shop there. An asteroid strike would have scoured much of the earth of its vegetable cover, and provided a paradise for ferns. A fern spike in the rocks is thus a good indication that something terrible has happened. H. Both an iridium anomaly and a fern spike appear in rocks at the end of the Triassic, too. That accounts for the disappearing ichnotaxa: the creatures that made them did not survive the holocaust. The surprise is how rapidly the new ichnotaxa appear. I. Dr Olsen and his colleagues suggest that the explanation for this rapid increase in size may be a phenomenon called ecological release. This is seen today when reptiles (which, in modern times, tend to be small creatures) reach islands where they face no competitors. The most spectacular example is on the Indonesian island of Komodo, where local lizards have grown so large that they are often referred to as dragons. The dinosaurs, in other words, could flourish only when the competition had been knocked out. J. That leaves the question of where the impact happened. No large hole in the earth's crust seems to be 202m years old. It may, of course, have been overlooked. Old craters are eroded and buried, and not always easy to find. Alternatively, it may have vanished. Although continental crust is more or less permanent, the ocean floor is constantly recycled by the tectonic processes that bring about continental drift. There is no ocean floor left that is more than 200m years old, so a crater that formed in the ocean would have been swallowed up by now. K. There is a third possibility, however. This is that the crater is known, but has been misdated. The Manicouagan structure, a crater in Quebec, is thought to be 214m years old. It is huge, some 100km across, and seems to be the largest of between three and five craters that formed within a few hours of each other as the lumps of a disintegrated comet hit the earth one by one.
We can find more iridium in the earth's surface than in meteorites.
contradiction
id_6081
The Dinosaurs' Footprints and Extinction A. EVERYBODY knows that the dinosaurs were killed by an asteroid. Something big hit the earth 65 million years ago and, when the dust had fallen, so had the great reptiles. There is thus a nice, if ironic, symmetry in the idea that a similar impact brought about the dinosaurs' rise. That is the thesis proposed by Paul Olsen, of Columbia University, and his colleagues in this week's Science. B. Dinosaurs first appear in the fossil record 230m years ago, during the Triassic period. But they were mostly small, and they shared the earth with lots of other sorts of reptile. It was in the subsequent Jurassic, which began 202 million years ago, that they overran the planet and turned into the monsters depicted in the book and movie Jurassic Park. (Actually, though, the dinosaurs that appeared on screen were from the still more recent Cretaceous period.) Dr Olsen and his colleagues are not the first to suggest that the dinosaurs inherited the earth as the result of an asteroid strike. But they are the first to show that the takeover did, indeed, happen in a geological eyeblink. C. Dinosaur skeletons are rare. Dinosaur footprints are, however, surprisingly abundant. And the sizes of the prints are as good an indication of the sizes of the beasts as are the skeletons themselves. Dr Olsen and his colleagues therefore concentrated on prints, not bones. D. The prints in question were made in eastern North America, a part of the world then full of rift valleys similar to those in East Africa today. Like the modern African rift valleys, the Triassic/Jurassic American ones contained lakes, and these lakes grew and shrank at regular intervals because of climatic changes caused by periodic shifts in the earth's orbit. (A similar phenomenon is responsible for modern ice ages.) That regularity, combined with reversals in the earth's magnetic field, which are detectable in the tiny fields of certain magnetic minerals, means that rocks from this place and period can be dated to within a few thousand years. As a bonus, squishy lake-edge sediments are just the things for recording the tracks of passing animals. By dividing the labour between themselves, the ten authors of the paper were able to study such tracks at 80 sites. E. The researchers looked at 18 so-called ichnotaxa. These are recognisable types of footprint that cannot be matched precisely with the species of animal that left them. But they can be matched with a general sort of animal, and thus act as an indicator of the fate of that group, even when there are no bones to tell the story. Five of the ichnotaxa disappear before the end of the Triassic, and four march confidently across the boundary into the Jurassic. Six, however, vanish at the boundary, or only just splutter across it; and three appear from nowhere, almost as soon as the Jurassic begins. F. That boundary itself is suggestive. The first geological indication of the impact that killed the dinosaurs was an unusually high level of iridium in rocks at the end of the Cretaceous, when the beasts disappear from the fossil record. Iridium is normally rare at the earth's surface, but it is more abundant in meteorites. When people began to believe the impact theory, they started looking for other Cretaceous-end anomalies. One that turned up was a surprising abundance of fern spores in rocks just above the boundary layer, a phenomenon known as a fern spike. G. That matched the theory nicely. Many modern ferns are opportunists.
They cannot compete against plants with leaves, but if a piece of land is cleared by, say, a volcanic eruption, they are often the first things to set up shop there. An asteroid strike would have scoured much of the earth of its vegetable cover, and provided a paradise for ferns. A fern spike in the rocks is thus a good indication that something terrible has happened. H. Both an iridium anomaly and a fern spike appear in rocks at the end of the Triassic, too. That accounts for the disappearing ichnotaxa: the creatures that made them did not survive the holocaust. The surprise is how rapidly the new ichnotaxa appear. I. Dr Olsen and his colleagues suggest that the explanation for this rapid increase in size may be a phenomenon called ecological release. This is seen today when reptiles (which, in modern times, tend to be small creatures) reach islands where they face no competitors. The most spectacular example is on the Indonesian island of Komodo, where local lizards have grown so large that they are often referred to as dragons. The dinosaurs, in other words, could flourish only when the competition had been knocked out. J. That leaves the question of where the impact happened. No large hole in the earth's crust seems to be 202m years old. It may, of course, have been overlooked. Old craters are eroded and buried, and not always easy to find. Alternatively, it may have vanished. Although continental crust is more or less permanent, the ocean floor is constantly recycled by the tectonic processes that bring about continental drift. There is no ocean floor left that is more than 200m years old, so a crater that formed in the ocean would have been swallowed up by now. K. There is a third possibility, however. This is that the crater is known, but has been misdated. The Manicouagan structure, a crater in Quebec, is thought to be 214m years old. It is huge, some 100km across, and seems to be the largest of between three and five craters that formed within a few hours of each other as the lumps of a disintegrated comet hit the earth one by one.
Books and movies like Jurassic Park often exaggerate the size of the dinosaurs.
neutral
id_6082
The Dinosaurs' Footprints and Extinction A. EVERYBODY knows that the dinosaurs were killed by an asteroid. Something big hit the earth 65 million years ago and, when the dust had fallen, so had the great reptiles. There is thus a nice, if ironic, symmetry in the idea that a similar impact brought about the dinosaurs' rise. That is the thesis proposed by Paul Olsen, of Columbia University, and his colleagues in this week's Science. B. Dinosaurs first appear in the fossil record 230m years ago, during the Triassic period. But they were mostly small, and they shared the earth with lots of other sorts of reptile. It was in the subsequent Jurassic, which began 202 million years ago, that they overran the planet and turned into the monsters depicted in the book and movie Jurassic Park. (Actually, though, the dinosaurs that appeared on screen were from the still more recent Cretaceous period.) Dr Olsen and his colleagues are not the first to suggest that the dinosaurs inherited the earth as the result of an asteroid strike. But they are the first to show that the takeover did, indeed, happen in a geological eyeblink. C. Dinosaur skeletons are rare. Dinosaur footprints are, however, surprisingly abundant. And the sizes of the prints are as good an indication of the sizes of the beasts as are the skeletons themselves. Dr Olsen and his colleagues therefore concentrated on prints, not bones. D. The prints in question were made in eastern North America, a part of the world then full of rift valleys similar to those in East Africa today. Like the modern African rift valleys, the Triassic/Jurassic American ones contained lakes, and these lakes grew and shrank at regular intervals because of climatic changes caused by periodic shifts in the earth's orbit. (A similar phenomenon is responsible for modern ice ages.) That regularity, combined with reversals in the earth's magnetic field, which are detectable in the tiny fields of certain magnetic minerals, means that rocks from this place and period can be dated to within a few thousand years. As a bonus, squishy lake-edge sediments are just the things for recording the tracks of passing animals. By dividing the labour between themselves, the ten authors of the paper were able to study such tracks at 80 sites. E. The researchers looked at 18 so-called ichnotaxa. These are recognisable types of footprint that cannot be matched precisely with the species of animal that left them. But they can be matched with a general sort of animal, and thus act as an indicator of the fate of that group, even when there are no bones to tell the story. Five of the ichnotaxa disappear before the end of the Triassic, and four march confidently across the boundary into the Jurassic. Six, however, vanish at the boundary, or only just splutter across it; and three appear from nowhere, almost as soon as the Jurassic begins. F. That boundary itself is suggestive. The first geological indication of the impact that killed the dinosaurs was an unusually high level of iridium in rocks at the end of the Cretaceous, when the beasts disappear from the fossil record. Iridium is normally rare at the earth's surface, but it is more abundant in meteorites. When people began to believe the impact theory, they started looking for other Cretaceous-end anomalies. One that turned up was a surprising abundance of fern spores in rocks just above the boundary layer, a phenomenon known as a fern spike. G. That matched the theory nicely. Many modern ferns are opportunists.
They cannot compete against plants with leaves, but if a piece of land is cleared by, say, a volcanic eruption, they are often the first things to set up shop there. An asteroid strike would have scoured much of the earth of its vegetable cover, and provided a paradise for ferns. A fern spike in the rocks is thus a good indication that something terrible has happened. H. Both an iridium anomaly and a fern spike appear in rocks at the end of the Triassic, too. That accounts for the disappearing ichnotaxa: the creatures that made them did not survive the holocaust. The surprise is how rapidly the new ichnotaxa appear. I. Dr Olsen and his colleagues suggest that the explanation for this rapid increase in size may be a phenomenon called ecological release. This is seen today when reptiles (which, in modern times, tend to be small creatures) reach islands where they face no competitors. The most spectacular example is on the Indonesian island of Komodo, where local lizards have grown so large that they are often referred to as dragons. The dinosaurs, in other words, could flourish only when the competition had been knocked out. J. That leaves the question of where the impact happened. No large hole in the earth's crust seems to be 202m years old. It may, of course, have been overlooked. Old craters are eroded and buried, and not always easy to find. Alternatively, it may have vanished. Although continental crust is more or less permanent, the ocean floor is constantly recycled by the tectonic processes that bring about continental drift. There is no ocean floor left that is more than 200m years old, so a crater that formed in the ocean would have been swallowed up by now. K. There is a third possibility, however. This is that the crater is known, but has been misdated. The Manicouagan structure, a crater in Quebec, is thought to be 214m years old. It is huge, some 100km across, and seems to be the largest of between three and five craters that formed within a few hours of each other as the lumps of a disintegrated comet hit the earth one by one.
Dinosaur footprints are more abundant than dinosaur skeletons.
entailment
id_6083
The Dinosaurs' Footprints and Extinction A. EVERYBODY knows that the dinosaurs were killed by an asteroid. Something big hit the earth 65 million years ago and, when the dust had fallen, so had the great reptiles. There is thus a nice, if ironic, symmetry in the idea that a similar impact brought about the dinosaurs' rise. That is the thesis proposed by Paul Olsen, of Columbia University, and his colleagues in this week's Science. B. Dinosaurs first appear in the fossil record 230m years ago, during the Triassic period. But they were mostly small, and they shared the earth with lots of other sorts of reptile. It was in the subsequent Jurassic, which began 202 million years ago, that they overran the planet and turned into the monsters depicted in the book and movie Jurassic Park. (Actually, though, the dinosaurs that appeared on screen were from the still more recent Cretaceous period.) Dr Olsen and his colleagues are not the first to suggest that the dinosaurs inherited the earth as the result of an asteroid strike. But they are the first to show that the takeover did, indeed, happen in a geological eyeblink. C. Dinosaur skeletons are rare. Dinosaur footprints are, however, surprisingly abundant. And the sizes of the prints are as good an indication of the sizes of the beasts as are the skeletons themselves. Dr Olsen and his colleagues therefore concentrated on prints, not bones. D. The prints in question were made in eastern North America, a part of the world then full of rift valleys similar to those in East Africa today. Like the modern African rift valleys, the Triassic/Jurassic American ones contained lakes, and these lakes grew and shrank at regular intervals because of climatic changes caused by periodic shifts in the earth's orbit. (A similar phenomenon is responsible for modern ice ages.) That regularity, combined with reversals in the earth's magnetic field, which are detectable in the tiny fields of certain magnetic minerals, means that rocks from this place and period can be dated to within a few thousand years. As a bonus, squishy lake-edge sediments are just the things for recording the tracks of passing animals. By dividing the labour between themselves, the ten authors of the paper were able to study such tracks at 80 sites. E. The researchers looked at 18 so-called ichnotaxa. These are recognisable types of footprint that cannot be matched precisely with the species of animal that left them. But they can be matched with a general sort of animal, and thus act as an indicator of the fate of that group, even when there are no bones to tell the story. Five of the ichnotaxa disappear before the end of the Triassic, and four march confidently across the boundary into the Jurassic. Six, however, vanish at the boundary, or only just splutter across it; and three appear from nowhere, almost as soon as the Jurassic begins. F. That boundary itself is suggestive. The first geological indication of the impact that killed the dinosaurs was an unusually high level of iridium in rocks at the end of the Cretaceous, when the beasts disappear from the fossil record. Iridium is normally rare at the earth's surface, but it is more abundant in meteorites. When people began to believe the impact theory, they started looking for other Cretaceous-end anomalies. One that turned up was a surprising abundance of fern spores in rocks just above the boundary layer, a phenomenon known as a fern spike. G. That matched the theory nicely. Many modern ferns are opportunists.
They cannot compete against plants with leaves, but if a piece of land is cleared by, say, a volcanic eruption, they are often the first things to set up shop there. An asteroid strike would have scoured much of the earth of its vegetable cover, and provided a paradise for ferns. A fern spike in the rocks is thus a good indication that something terrible has happened. H. Both an iridium anomaly and a fern spike appear in rocks at the end of the Triassic, too. That accounts for the disappearing ichnotaxa: the creatures that made them did not survive the holocaust. The surprise is how rapidly the new ichnotaxa appear. I. Dr Olsen and his colleagues suggest that the explanation for this rapid increase in size may be a phenomenon called ecological release. This is seen today when reptiles (which, in modern times, tend to be small creatures) reach islands where they face no competitors. The most spectacular example is on the Indonesian island of Komodo, where local lizards have grown so large that they are often referred to as dragons. The dinosaurs, in other words, could flourish only when the competition had been knocked out. J. That leaves the question of where the impact happened. No large hole in the earth's crust seems to be 202m years old. It may, of course, have been overlooked. Old craters are eroded and buried, and not always easy to find. Alternatively, it may have vanished. Although continental crust is more or less permanent, the ocean floor is constantly recycled by the tectonic processes that bring about continental drift. There is no ocean floor left that is more than 200m years old, so a crater that formed in the ocean would have been swallowed up by now. K. There is a third possibility, however. This is that the crater is known, but has been misdated. The Manicouagan structure, a crater in Quebec, is thought to be 214m years old. It is huge, some 100km across, and seems to be the largest of between three and five craters that formed within a few hours of each other as the lumps of a disintegrated comet hit the earth one by one.
Dr Olsen chose to study the prints because they are easier to detect than the earth's magnetic field when dating rocks precisely to within a few thousand years.
neutral
id_6084
The Discovery of Uranus Someone once put forward an attractive though unlikely theory. Throughout the Earths annual revolution around the sun, there is one point of space always hidden from our eyes. This point is the opposite part of the Earths orbit, which is always hidden by the sun. Could there be another planet there, essentially similar to our own, but always invisible? If a space probe today sent back evidence that such a world existed it would cause not much more sensation than Sir William Herschels discovery of a new planet, Uranus, in 1781. Herschel was an extraordinary man no other astronomer has ever covered so vast a field of work and his career deserves study. He was born in Hanover in Germany in 1738, left the German army in 1757, and arrived in England the same year with no money but quite exceptional music ability. He played the violin and oboe and at one time was organist in the Octagon Chapel in the city of Bath. Herschels was an active mind, and deep inside he was conscious that music was not his destiny; he therefore, read widely in science and the arts, but not until 1772 did he come across a book on astronomy. He was then 34, middle-aged by the standards of the time, but without hesitation he embarked on his new career, financing it by his professional work as a musician. He spent years mastering the art of telescope construction, and even by present-day standards his instruments are comparable with the best. Serious observation began 1774. He set himself the astonishing task of reviewing the heavens, in other words, pointing his telescope to every accessible part of the sky and recording what he saw. The first review was made in 1775; the second, and most momentous, in 1780-81. It was during the latter part of this that he discovered Uranus. Afterwards, supported by the royal grant in recognition of his work, he was able to devote himself entirely to astronomy. His final achievements spread from the sun and moon to remote galaxies (of which he discovered hundreds), and papers flooded from his pen until his death in 1822. Among these, there was one sent to the Royal Society in 1781, entitled An Account of a Comet. In his own words: On Tuesday the 13th of March, between ten and eleven in the evening, while I was examining the small stars in the neighbourhood of H Geminorum, I perceived one that appeared visibly larger than the rest; being struck with its uncommon magnitude, I compared it to H Geminorum and the small star in the quartile between Auriga and Gemini, and finding it to be much larger than either of them, suspected it to be a comet. Herschels care was the hallmark of a great observer; he was not prepared to jump any conclusions. Also, to be fair, the discovery of a new planet was the last thought in anybodys mind. But further observation by other astronomers besides Herschel revealed two curious facts. For the comet, it showed a remarkably sharp disc; furthermore, it was moving so slowly that it was thought to be a great distance from the sun, and comets are only normally visible in the immediate vicinity of the sun. As its orbit came to be worked out the truth dawned that it was a new planet far beyond Saturns realm, and that the reviewer of the heavens had stumbled across an unprecedented prize. Herschel wanted to call it georgium sidus (Star of George) in honour of his royal patron King George III of Great Britain. The planet was later for a time called Herschel in honour of its discoverer. 
The name Uranus, which was first proposed by the German astronomer Johann Elert Bode, was in use by the late 19th century. Uranus is a giant in construction, but not so much in size; its diameter compares unfavourably with that of Jupiter and Saturn, though on the terrestrial scale it is still colossal. Uranus's atmosphere consists largely of hydrogen and helium, with a trace of methane. Through a telescope the planet appears as a small bluish-green disc with a faint green periphery. In 1977, while recording the occultation of a star behind the planet, the American astronomer James L. Elliot discovered the presence of five rings encircling the equator of Uranus. Four more rings were discovered in January 1986 during the exploratory flight of Voyager 2. In addition to its rings, Uranus has 15 satellites (moons), the last 10 discovered by Voyager 2 on the same flight; all revolve about its equator and move with the planet in an east-west direction. The two largest moons, Titania and Oberon, were discovered by Herschel in 1787. The next two, Umbriel and Ariel, were found in 1851 by the British astronomer William Lassell. Miranda, thought before 1986 to be the innermost moon, was discovered in 1948 by the American astronomer Gerard Peter Kuiper.
Herschel was multi-talented.
entailment
id_6085
The Discovery of Uranus Someone once put forward an attractive though unlikely theory. Throughout the Earths annual revolution around the sun, there is one point of space always hidden from our eyes. This point is the opposite part of the Earths orbit, which is always hidden by the sun. Could there be another planet there, essentially similar to our own, but always invisible? If a space probe today sent back evidence that such a world existed it would cause not much more sensation than Sir William Herschels discovery of a new planet, Uranus, in 1781. Herschel was an extraordinary man no other astronomer has ever covered so vast a field of work and his career deserves study. He was born in Hanover in Germany in 1738, left the German army in 1757, and arrived in England the same year with no money but quite exceptional music ability. He played the violin and oboe and at one time was organist in the Octagon Chapel in the city of Bath. Herschels was an active mind, and deep inside he was conscious that music was not his destiny; he therefore, read widely in science and the arts, but not until 1772 did he come across a book on astronomy. He was then 34, middle-aged by the standards of the time, but without hesitation he embarked on his new career, financing it by his professional work as a musician. He spent years mastering the art of telescope construction, and even by present-day standards his instruments are comparable with the best. Serious observation began 1774. He set himself the astonishing task of reviewing the heavens, in other words, pointing his telescope to every accessible part of the sky and recording what he saw. The first review was made in 1775; the second, and most momentous, in 1780-81. It was during the latter part of this that he discovered Uranus. Afterwards, supported by the royal grant in recognition of his work, he was able to devote himself entirely to astronomy. His final achievements spread from the sun and moon to remote galaxies (of which he discovered hundreds), and papers flooded from his pen until his death in 1822. Among these, there was one sent to the Royal Society in 1781, entitled An Account of a Comet. In his own words: On Tuesday the 13th of March, between ten and eleven in the evening, while I was examining the small stars in the neighbourhood of H Geminorum, I perceived one that appeared visibly larger than the rest; being struck with its uncommon magnitude, I compared it to H Geminorum and the small star in the quartile between Auriga and Gemini, and finding it to be much larger than either of them, suspected it to be a comet. Herschels care was the hallmark of a great observer; he was not prepared to jump any conclusions. Also, to be fair, the discovery of a new planet was the last thought in anybodys mind. But further observation by other astronomers besides Herschel revealed two curious facts. For the comet, it showed a remarkably sharp disc; furthermore, it was moving so slowly that it was thought to be a great distance from the sun, and comets are only normally visible in the immediate vicinity of the sun. As its orbit came to be worked out the truth dawned that it was a new planet far beyond Saturns realm, and that the reviewer of the heavens had stumbled across an unprecedented prize. Herschel wanted to call it georgium sidus (Star of George) in honour of his royal patron King George III of Great Britain. The planet was later for a time called Herschel in honour of its discoverer. 
The name Uranus, which was first proposed by the German astronomer Johann Elert Bode, was in use by the late 19th century. Uranus is a giant in construction, but not so much in size; its diameter compares unfavourably with that of Jupiter and Saturn, though on the terrestrial scale it is still colossal. Uranus's atmosphere consists largely of hydrogen and helium, with a trace of methane. Through a telescope the planet appears as a small bluish-green disc with a faint green periphery. In 1977, while recording the occultation of a star behind the planet, the American astronomer James L. Elliot discovered the presence of five rings encircling the equator of Uranus. Four more rings were discovered in January 1986 during the exploratory flight of Voyager 2. In addition to its rings, Uranus has 15 satellites (moons), the last 10 discovered by Voyager 2 on the same flight; all revolve about its equator and move with the planet in an east-west direction. The two largest moons, Titania and Oberon, were discovered by Herschel in 1787. The next two, Umbriel and Ariel, were found in 1851 by the British astronomer William Lassell. Miranda, thought before 1986 to be the innermost moon, was discovered in 1948 by the American astronomer Gerard Peter Kuiper.
Herschel's discovery was the most important find of the last three hundred years.
neutral
id_6086
The Discovery of Uranus Someone once put forward an attractive though unlikely theory. Throughout the Earths annual revolution around the sun, there is one point of space always hidden from our eyes. This point is the opposite part of the Earths orbit, which is always hidden by the sun. Could there be another planet there, essentially similar to our own, but always invisible? If a space probe today sent back evidence that such a world existed it would cause not much more sensation than Sir William Herschels discovery of a new planet, Uranus, in 1781. Herschel was an extraordinary man no other astronomer has ever covered so vast a field of work and his career deserves study. He was born in Hanover in Germany in 1738, left the German army in 1757, and arrived in England the same year with no money but quite exceptional music ability. He played the violin and oboe and at one time was organist in the Octagon Chapel in the city of Bath. Herschels was an active mind, and deep inside he was conscious that music was not his destiny; he therefore, read widely in science and the arts, but not until 1772 did he come across a book on astronomy. He was then 34, middle-aged by the standards of the time, but without hesitation he embarked on his new career, financing it by his professional work as a musician. He spent years mastering the art of telescope construction, and even by present-day standards his instruments are comparable with the best. Serious observation began 1774. He set himself the astonishing task of reviewing the heavens, in other words, pointing his telescope to every accessible part of the sky and recording what he saw. The first review was made in 1775; the second, and most momentous, in 1780-81. It was during the latter part of this that he discovered Uranus. Afterwards, supported by the royal grant in recognition of his work, he was able to devote himself entirely to astronomy. His final achievements spread from the sun and moon to remote galaxies (of which he discovered hundreds), and papers flooded from his pen until his death in 1822. Among these, there was one sent to the Royal Society in 1781, entitled An Account of a Comet. In his own words: On Tuesday the 13th of March, between ten and eleven in the evening, while I was examining the small stars in the neighbourhood of H Geminorum, I perceived one that appeared visibly larger than the rest; being struck with its uncommon magnitude, I compared it to H Geminorum and the small star in the quartile between Auriga and Gemini, and finding it to be much larger than either of them, suspected it to be a comet. Herschels care was the hallmark of a great observer; he was not prepared to jump any conclusions. Also, to be fair, the discovery of a new planet was the last thought in anybodys mind. But further observation by other astronomers besides Herschel revealed two curious facts. For the comet, it showed a remarkably sharp disc; furthermore, it was moving so slowly that it was thought to be a great distance from the sun, and comets are only normally visible in the immediate vicinity of the sun. As its orbit came to be worked out the truth dawned that it was a new planet far beyond Saturns realm, and that the reviewer of the heavens had stumbled across an unprecedented prize. Herschel wanted to call it georgium sidus (Star of George) in honour of his royal patron King George III of Great Britain. The planet was later for a time called Herschel in honour of its discoverer. 
The name Uranus, which was first proposed by the German astronomer Johann Elert Bode, was in use by the late 19th century. Uranus is a giant in construction, but not so much in size; its diameter compares unfavourably with that of Jupiter and Saturn, though on the terrestrial scale it is still colossal. Uranus's atmosphere consists largely of hydrogen and helium, with a trace of methane. Through a telescope the planet appears as a small bluish-green disc with a faint green periphery. In 1977, while recording the occultation of a star behind the planet, the American astronomer James L. Elliot discovered the presence of five rings encircling the equator of Uranus. Four more rings were discovered in January 1986 during the exploratory flight of Voyager 2. In addition to its rings, Uranus has 15 satellites (moons), the last 10 discovered by Voyager 2 on the same flight; all revolve about its equator and move with the planet in an east-west direction. The two largest moons, Titania and Oberon, were discovered by Herschel in 1787. The next two, Umbriel and Ariel, were found in 1851 by the British astronomer William Lassell. Miranda, thought before 1986 to be the innermost moon, was discovered in 1948 by the American astronomer Gerard Peter Kuiper.
Herschel's newly discovered object was considered to be too far from the sun to be a comet.
entailment
id_6087
The Discovery of Uranus Someone once put forward an attractive though unlikely theory. Throughout the Earths annual revolution around the sun, there is one point of space always hidden from our eyes. This point is the opposite part of the Earths orbit, which is always hidden by the sun. Could there be another planet there, essentially similar to our own, but always invisible? If a space probe today sent back evidence that such a world existed it would cause not much more sensation than Sir William Herschels discovery of a new planet, Uranus, in 1781. Herschel was an extraordinary man no other astronomer has ever covered so vast a field of work and his career deserves study. He was born in Hanover in Germany in 1738, left the German army in 1757, and arrived in England the same year with no money but quite exceptional music ability. He played the violin and oboe and at one time was organist in the Octagon Chapel in the city of Bath. Herschels was an active mind, and deep inside he was conscious that music was not his destiny; he therefore, read widely in science and the arts, but not until 1772 did he come across a book on astronomy. He was then 34, middle-aged by the standards of the time, but without hesitation he embarked on his new career, financing it by his professional work as a musician. He spent years mastering the art of telescope construction, and even by present-day standards his instruments are comparable with the best. Serious observation began 1774. He set himself the astonishing task of reviewing the heavens, in other words, pointing his telescope to every accessible part of the sky and recording what he saw. The first review was made in 1775; the second, and most momentous, in 1780-81. It was during the latter part of this that he discovered Uranus. Afterwards, supported by the royal grant in recognition of his work, he was able to devote himself entirely to astronomy. His final achievements spread from the sun and moon to remote galaxies (of which he discovered hundreds), and papers flooded from his pen until his death in 1822. Among these, there was one sent to the Royal Society in 1781, entitled An Account of a Comet. In his own words: On Tuesday the 13th of March, between ten and eleven in the evening, while I was examining the small stars in the neighbourhood of H Geminorum, I perceived one that appeared visibly larger than the rest; being struck with its uncommon magnitude, I compared it to H Geminorum and the small star in the quartile between Auriga and Gemini, and finding it to be much larger than either of them, suspected it to be a comet. Herschels care was the hallmark of a great observer; he was not prepared to jump any conclusions. Also, to be fair, the discovery of a new planet was the last thought in anybodys mind. But further observation by other astronomers besides Herschel revealed two curious facts. For the comet, it showed a remarkably sharp disc; furthermore, it was moving so slowly that it was thought to be a great distance from the sun, and comets are only normally visible in the immediate vicinity of the sun. As its orbit came to be worked out the truth dawned that it was a new planet far beyond Saturns realm, and that the reviewer of the heavens had stumbled across an unprecedented prize. Herschel wanted to call it georgium sidus (Star of George) in honour of his royal patron King George III of Great Britain. The planet was later for a time called Herschel in honour of its discoverer. 
The name Uranus, which was first proposed by the German astronomer Johann Elert Bode, was in use by the late 19th century. Uranus is a giant in construction, but not so much in size; its diameter compares unfavourably with that of Jupiter and Saturn, though on the terrestrial scale it is still colossal. Uranus' atmosphere consists largely of hydrogen and helium, with a trace of methane. Through a telescope the planet appears as a small bluish-green disc with a faint green periphery. In 1977, while recording the occultation of a star behind the planet, the American astronomer James L. Elliot discovered the presence of five rings encircling the equator of Uranus. Four more rings were discovered in January 1986 during the exploratory flight of Voyager 2. In addition to its rings, Uranus has 15 satellites (moons), the last 10 discovered by Voyager 2 on the same flight; all revolve about its equator and move with the planet in an east-west direction. The two largest moons, Titania and Oberon, were discovered by Herschel in 1787. The next two, Umbriel and Ariel, were found in 1851 by the British astronomer William Lassell. Miranda, thought before 1986 to be the innermost moon, was discovered in 1948 by the American astronomer Gerard Peter Kuiper.
Herschel collaborated with other astronomers of his time.
neutral
id_6088
The Discovery of Uranus Someone once put forward an attractive though unlikely theory. Throughout the Earth's annual revolution around the sun, there is one point of space always hidden from our eyes. This point is the opposite part of the Earth's orbit, which is always hidden by the sun. Could there be another planet there, essentially similar to our own, but always invisible? If a space probe today sent back evidence that such a world existed it would cause not much more sensation than Sir William Herschel's discovery of a new planet, Uranus, in 1781. Herschel was an extraordinary man: no other astronomer has ever covered so vast a field of work, and his career deserves study. He was born in Hanover in Germany in 1738, left the German army in 1757, and arrived in England the same year with no money but quite exceptional musical ability. He played the violin and oboe and at one time was organist in the Octagon Chapel in the city of Bath. Herschel's was an active mind, and deep inside he was conscious that music was not his destiny; he therefore read widely in science and the arts, but not until 1772 did he come across a book on astronomy. He was then 34, middle-aged by the standards of the time, but without hesitation he embarked on his new career, financing it by his professional work as a musician. He spent years mastering the art of telescope construction, and even by present-day standards his instruments are comparable with the best. Serious observation began in 1774. He set himself the astonishing task of reviewing the heavens, in other words, pointing his telescope to every accessible part of the sky and recording what he saw. The first review was made in 1775; the second, and most momentous, in 1780-81. It was during the latter part of this that he discovered Uranus. Afterwards, supported by the royal grant in recognition of his work, he was able to devote himself entirely to astronomy. His final achievements spread from the sun and moon to remote galaxies (of which he discovered hundreds), and papers flooded from his pen until his death in 1822. Among these, there was one sent to the Royal Society in 1781, entitled An Account of a Comet. In his own words: On Tuesday the 13th of March, between ten and eleven in the evening, while I was examining the small stars in the neighbourhood of H Geminorum, I perceived one that appeared visibly larger than the rest; being struck with its uncommon magnitude, I compared it to H Geminorum and the small star in the quartile between Auriga and Gemini, and finding it to be much larger than either of them, suspected it to be a comet. Herschel's care was the hallmark of a great observer; he was not prepared to jump to any conclusions. Also, to be fair, the discovery of a new planet was the last thought in anybody's mind. But further observation by other astronomers besides Herschel revealed two curious facts. For a comet, it showed a remarkably sharp disc; furthermore, it was moving so slowly that it was thought to be a great distance from the sun, and comets are normally only visible in the immediate vicinity of the sun. As its orbit came to be worked out the truth dawned that it was a new planet far beyond Saturn's realm, and that the reviewer of the heavens had stumbled across an unprecedented prize. Herschel wanted to call it Georgium Sidus (Star of George) in honour of his royal patron King George III of Great Britain. The planet was later for a time called Herschel in honour of its discoverer.
The name Uranus, which was first proposed by the German astronomer Johann Elert Bode, was in use by the late 19th century. Uranus is a giant in construction, but not so much in size; its diameter compares unfavourably with that of Jupiter and Saturn, though on the terrestrial scale it is still colossal. Uranus' atmosphere consists largely of hydrogen and helium, with a trace of methane. Through a telescope the planet appears as a small bluish-green disc with a faint green periphery. In 1977, while recording the occultation of a star behind the planet, the American astronomer James L. Elliot discovered the presence of five rings encircling the equator of Uranus. Four more rings were discovered in January 1986 during the exploratory flight of Voyager 2. In addition to its rings, Uranus has 15 satellites (moons), the last 10 discovered by Voyager 2 on the same flight; all revolve about its equator and move with the planet in an east-west direction. The two largest moons, Titania and Oberon, were discovered by Herschel in 1787. The next two, Umbriel and Ariel, were found in 1851 by the British astronomer William Lassell. Miranda, thought before 1986 to be the innermost moon, was discovered in 1948 by the American astronomer Gerard Peter Kuiper.
Herschel knew immediately that he had found a new planet.
contradiction
id_6089
The Discovery of Uranus Someone once put forward an attractive though unlikely theory. Throughout the Earth's annual revolution around the sun, there is one point of space always hidden from our eyes. This point is the opposite part of the Earth's orbit, which is always hidden by the sun. Could there be another planet there, essentially similar to our own, but always invisible? If a space probe today sent back evidence that such a world existed it would cause not much more sensation than Sir William Herschel's discovery of a new planet, Uranus, in 1781. Herschel was an extraordinary man: no other astronomer has ever covered so vast a field of work, and his career deserves study. He was born in Hanover in Germany in 1738, left the German army in 1757, and arrived in England the same year with no money but quite exceptional musical ability. He played the violin and oboe and at one time was organist in the Octagon Chapel in the city of Bath. Herschel's was an active mind, and deep inside he was conscious that music was not his destiny; he therefore read widely in science and the arts, but not until 1772 did he come across a book on astronomy. He was then 34, middle-aged by the standards of the time, but without hesitation he embarked on his new career, financing it by his professional work as a musician. He spent years mastering the art of telescope construction, and even by present-day standards his instruments are comparable with the best. Serious observation began in 1774. He set himself the astonishing task of reviewing the heavens, in other words, pointing his telescope to every accessible part of the sky and recording what he saw. The first review was made in 1775; the second, and most momentous, in 1780-81. It was during the latter part of this that he discovered Uranus. Afterwards, supported by the royal grant in recognition of his work, he was able to devote himself entirely to astronomy. His final achievements spread from the sun and moon to remote galaxies (of which he discovered hundreds), and papers flooded from his pen until his death in 1822. Among these, there was one sent to the Royal Society in 1781, entitled An Account of a Comet. In his own words: On Tuesday the 13th of March, between ten and eleven in the evening, while I was examining the small stars in the neighbourhood of H Geminorum, I perceived one that appeared visibly larger than the rest; being struck with its uncommon magnitude, I compared it to H Geminorum and the small star in the quartile between Auriga and Gemini, and finding it to be much larger than either of them, suspected it to be a comet. Herschel's care was the hallmark of a great observer; he was not prepared to jump to any conclusions. Also, to be fair, the discovery of a new planet was the last thought in anybody's mind. But further observation by other astronomers besides Herschel revealed two curious facts. For a comet, it showed a remarkably sharp disc; furthermore, it was moving so slowly that it was thought to be a great distance from the sun, and comets are normally only visible in the immediate vicinity of the sun. As its orbit came to be worked out the truth dawned that it was a new planet far beyond Saturn's realm, and that the reviewer of the heavens had stumbled across an unprecedented prize. Herschel wanted to call it Georgium Sidus (Star of George) in honour of his royal patron King George III of Great Britain. The planet was later for a time called Herschel in honour of its discoverer.
The name Uranus, which was first proposed by the German astronomer Johann Elert Bode, was in use by the late 19th century. Uranus is a giant in construction, but not so much in size; its diameter compares unfavourably with that of Jupiter and Saturn, though on the terrestrial scale it is still colossal. Uranus' atmosphere consists largely of hydrogen and helium, with a trace of methane. Through a telescope the planet appears as a small bluish-green disc with a faint green periphery. In 1977, while recording the occultation of a star behind the planet, the American astronomer James L. Elliot discovered the presence of five rings encircling the equator of Uranus. Four more rings were discovered in January 1986 during the exploratory flight of Voyager 2. In addition to its rings, Uranus has 15 satellites (moons), the last 10 discovered by Voyager 2 on the same flight; all revolve about its equator and move with the planet in an east-west direction. The two largest moons, Titania and Oberon, were discovered by Herschel in 1787. The next two, Umbriel and Ariel, were found in 1851 by the British astronomer William Lassell. Miranda, thought before 1986 to be the innermost moon, was discovered in 1948 by the American astronomer Gerard Peter Kuiper.
It is improbable that there is a planet hidden behind the sun.
entailment
id_6090
The Dover Bronze Age Boat It was 1992. In England, workmen were building a new road through the heart of Dover, to connect the ancient port and the Channel Tunnel, which, when it opened just two years later, was to be the first land link between Britain and Europe for over 10,000 years. A small team from the Canterbury Archaeological Trust (CAT) worked alongside the workmen, recording new discoveries brought to light by the machines. At the base of a deep shaft six metres below the modern streets a wooden structure was revealed. Cleaning away the waterlogged silt overlying the timbers, archaeologists realised its true nature. They had found a prehistoric boat, preserved by the type of sediment in which it was buried. It was then named the Dover Bronze-Age Boat. About nine metres of the boat's length was recovered; one end lay beyond the excavation and had to be left. What survived consisted essentially of four intricately carved oak planks: two on the bottom, joined along a central seam by a complicated system of wedges and timbers, and two at the side, curved and stitched to the others. The seams had been made watertight by pads of moss, fixed by wedges and yew stitches. The timbers that closed the recovered end of the boat had been removed in antiquity when it was abandoned, but much about its original shape could be deduced. There was also evidence for missing upper side planks. The boat was not a wreck, but had been deliberately discarded, dismantled and broken. Perhaps it had been ritually killed at the end of its life, like other Bronze-Age objects. With hindsight, it was significant that the boat was found and studied by mainstream archaeologists who naturally focused on its cultural context. At the time, ancient boats were often considered only from a narrower technological perspective, but news about the Dover boat reached a broad audience. In 2002, on the tenth anniversary of the discovery, the Dover Bronze-Age Boat Trust hosted a conference, where this meeting of different traditions became apparent. Alongside technical papers about the boat, other speakers explored its social and economic contexts, and the religious perceptions of boats in Bronze-Age societies. Many speakers came from overseas, and debate about cultural connections was renewed. Within seven years of excavation, the Dover boat had been conserved and displayed, but it was apparent that there were issues that could not be resolved simply by studying the old wood. Experimental archaeology seemed to be the solution: a boat reconstruction, half-scale or full-sized, would permit assessment of the different hypotheses regarding its build and the missing end. The possibility of returning to Dover to search for the boat's unexcavated northern end was explored, but practical and financial difficulties were insurmountable and there was no guarantee that the timbers had survived the previous decade in the changed environment. Detailed proposals to reconstruct the boat were drawn up in 2004. Archaeological evidence was beginning to suggest a Bronze-Age community straddling the Channel, brought together by the sea, rather than separated by it. In a region today divided by languages and borders, archaeologists had a duty to inform the general public about their common cultural heritage. The boat project began in England but it was conceived from the start as a European collaboration. Reconstruction was only part of a scheme that would include a major exhibition and an extensive educational and outreach programme.
Discussions began early in 2005 with archaeological bodies, universities and heritage organisations either side of the Channel. There was much enthusiasm and support, and an official launch of the project was held at an international seminar in France in 2007. Financial support was confirmed in 2008 and the project, then named BOAT 1550BC, got under way in June 2011. A small team began to make the boat at the start of 2012 on the Roman Lawn outside Dover Museum. A full-scale reconstruction of a mid-section had been made in 1996, primarily to see how Bronze-Age replica tools performed. In 2012, however, the hull shape was at the centre of the work, so modern power tools were used to carve the oak planks, before turning to prehistoric tools for finishing. It was decided to make the replica half-scale for reasons of cost and time, and synthetic materials were used for the stitching, owing to doubts about the seeding and tight timetable. Meanwhile, the exhibition was being prepared ready for opening in July 2012 at the Castle Museum in Boulogne-sur-Mer. Entitled Beyond the Horizon: Societies of the Channel & North Sea 3,500 years ago, it brought together for the first time a remarkable collection of Bronze-Age objects, including many new discoveries from commercial archaeology and some of the great treasures of the past. The reconstructed boat, as a symbol of the maritime connections that bound together the communities either side of the Channel, was the centrepiece.
Archaeologists went back to the site to try and find the missing northern end of the boat.
contradiction
id_6091
The Dover Bronze Age Boat It was 1992. In England, workmen were building a new road through the heart of Dover, to connect the ancient port and the Channel Tunnel, which, when it opened just two years later, was to be the first land link between Britain and Europe for over 10,000 years. A small team from the Canterbury Archaeological Trust (CAT) worked alongside the workmen, recording new discoveries brought to light by the machines. At the base of a deep shaft six metres below the modern streets a wooden structure was revealed. Cleaning away the waterlogged silt overlying the timbers, archaeologists realised its true nature. They had found a prehistoric boat, preserved by the type of sediment in which it was buried. It was then named the Dover Bronze-Age Boat. About nine metres of the boat's length was recovered; one end lay beyond the excavation and had to be left. What survived consisted essentially of four intricately carved oak planks: two on the bottom, joined along a central seam by a complicated system of wedges and timbers, and two at the side, curved and stitched to the others. The seams had been made watertight by pads of moss, fixed by wedges and yew stitches. The timbers that closed the recovered end of the boat had been removed in antiquity when it was abandoned, but much about its original shape could be deduced. There was also evidence for missing upper side planks. The boat was not a wreck, but had been deliberately discarded, dismantled and broken. Perhaps it had been ritually killed at the end of its life, like other Bronze-Age objects. With hindsight, it was significant that the boat was found and studied by mainstream archaeologists who naturally focused on its cultural context. At the time, ancient boats were often considered only from a narrower technological perspective, but news about the Dover boat reached a broad audience. In 2002, on the tenth anniversary of the discovery, the Dover Bronze-Age Boat Trust hosted a conference, where this meeting of different traditions became apparent. Alongside technical papers about the boat, other speakers explored its social and economic contexts, and the religious perceptions of boats in Bronze-Age societies. Many speakers came from overseas, and debate about cultural connections was renewed. Within seven years of excavation, the Dover boat had been conserved and displayed, but it was apparent that there were issues that could not be resolved simply by studying the old wood. Experimental archaeology seemed to be the solution: a boat reconstruction, half-scale or full-sized, would permit assessment of the different hypotheses regarding its build and the missing end. The possibility of returning to Dover to search for the boat's unexcavated northern end was explored, but practical and financial difficulties were insurmountable and there was no guarantee that the timbers had survived the previous decade in the changed environment. Detailed proposals to reconstruct the boat were drawn up in 2004. Archaeological evidence was beginning to suggest a Bronze-Age community straddling the Channel, brought together by the sea, rather than separated by it. In a region today divided by languages and borders, archaeologists had a duty to inform the general public about their common cultural heritage. The boat project began in England but it was conceived from the start as a European collaboration. Reconstruction was only part of a scheme that would include a major exhibition and an extensive educational and outreach programme.
Discussions began early in 2005 with archaeological bodies, universities and heritage organisations either side of the Channel. There was much enthusiasm and support, and an official launch of the project was held at an international seminar in France in 2007. Financial support was confirmed in 2008 and the project, then named BOAT 1550BC, got under way in June 2011. A small team began to make the boat at the start of 2012 on the Roman Lawn outside Dover Museum. A full-scale reconstruction of a mid-section had been made in 1996, primarily to see how Bronze-Age replica tools performed. In 2012, however, the hull shape was at the centre of the work, so modern power tools were used to carve the oak planks, before turning to prehistoric tools for finishing. It was decided to make the replica half-scale for reasons of cost and time, and synthetic materials were used for the stitching, owing to doubts about the seeding and tight timetable. Meanwhile, the exhibition was being prepared ready for opening in July 2012 at the Castle Museum in Boulogne-sur-Mer. Entitled Beyond the Horizon: Societies of the Channel & North Sea 3,500 years ago, it brought together for the first time a remarkable collection of Bronze-Age objects, including many new discoveries from commercial archaeology and some of the great treasures of the past. The reconstructed boat, as a symbol of the maritime connections that bound together the communities either side of the Channel, was the centrepiece.
Initially, only the technological aspects of the boat were examined.
contradiction
id_6092
The Dover Bronze Age Boat It was 1992. In England, workmen were building a new road through the heart of Dover, to connect the ancient port and the Channel Tunnel, which, when it opened just two years later, was to be the first land link between Britain and Europe for over 10,000 years. A small team from the Canterbury Archaeological Trust (CAT) worked alongside the workmen, recording new discoveries brought to light by the machines. At the base of a deep shaft six metres below the modern streets a wooden structure was revealed. Cleaning away the waterlogged silt overlying the timbers, archaeologists realised its true nature. They had found a prehistoric boat, preserved by the type of sediment in which it was buried. It was then named the Dover Bronze-Age Boat. About nine metres of the boat's length was recovered; one end lay beyond the excavation and had to be left. What survived consisted essentially of four intricately carved oak planks: two on the bottom, joined along a central seam by a complicated system of wedges and timbers, and two at the side, curved and stitched to the others. The seams had been made watertight by pads of moss, fixed by wedges and yew stitches. The timbers that closed the recovered end of the boat had been removed in antiquity when it was abandoned, but much about its original shape could be deduced. There was also evidence for missing upper side planks. The boat was not a wreck, but had been deliberately discarded, dismantled and broken. Perhaps it had been ritually killed at the end of its life, like other Bronze-Age objects. With hindsight, it was significant that the boat was found and studied by mainstream archaeologists who naturally focused on its cultural context. At the time, ancient boats were often considered only from a narrower technological perspective, but news about the Dover boat reached a broad audience. In 2002, on the tenth anniversary of the discovery, the Dover Bronze-Age Boat Trust hosted a conference, where this meeting of different traditions became apparent. Alongside technical papers about the boat, other speakers explored its social and economic contexts, and the religious perceptions of boats in Bronze-Age societies. Many speakers came from overseas, and debate about cultural connections was renewed. Within seven years of excavation, the Dover boat had been conserved and displayed, but it was apparent that there were issues that could not be resolved simply by studying the old wood. Experimental archaeology seemed to be the solution: a boat reconstruction, half-scale or full-sized, would permit assessment of the different hypotheses regarding its build and the missing end. The possibility of returning to Dover to search for the boat's unexcavated northern end was explored, but practical and financial difficulties were insurmountable and there was no guarantee that the timbers had survived the previous decade in the changed environment. Detailed proposals to reconstruct the boat were drawn up in 2004. Archaeological evidence was beginning to suggest a Bronze-Age community straddling the Channel, brought together by the sea, rather than separated by it. In a region today divided by languages and borders, archaeologists had a duty to inform the general public about their common cultural heritage. The boat project began in England but it was conceived from the start as a European collaboration. Reconstruction was only part of a scheme that would include a major exhibition and an extensive educational and outreach programme.
Discussions began early in 2005 with archaeological bodies, universities and heritage organisations either side of the Channel. There was much enthusiasm and support, and an official launch of the project was held at an international seminar in France in 2007. Financial support was confirmed in 2008 and the project, then named BOAT 1550BC, got under way in June 2011. A small team began to make the boat at the start of 2012 on the Roman Lawn outside Dover Museum. A full-scale reconstruction of a mid-section had been made in 1996, primarily to see how Bronze-Age replica tools performed. In 2012, however, the hull shape was at the centre of the work, so modern power tools were used to carve the oak planks, before turning to prehistoric tools for finishing. It was decided to make the replica half-scale for reasons of cost and time, and synthetic materials were used for the stitching, owing to doubts about the seeding and tight timetable. Meanwhile, the exhibition was being prepared ready for opening in July 2012 at the Castle Museum in Boulogne-sur-Mer. Entitled Beyond the Horizon: Societies of the Channel & North Sea 3,500 years ago, it brought together for the first time a remarkable collection of Bronze-Age objects, including many new discoveries from commercial archaeology and some of the great treasures of the past. The reconstructed boat, as a symbol of the maritime connections that bound together the communities either side of the Channel, was the centrepiece.
Evidence found in 2004 suggested that the Bronze-Age Boat had been used for trade.
neutral
id_6093
The Dover Bronze Age Boat It was 1992. In England, workmen were building a new road through the heart of Dover, to connect the ancient port and the Channel Tunnel, which, when it opened just two years later, was to be the first land link between Britain and Europe for over 10,000 years. A small team from the Canterbury Archaeological Trust (CAT) worked alongside the workmen, recording new discoveries brought to light by the machines. At the base of a deep shaft six metres below the modern streets a wooden structure was revealed. Cleaning away the waterlogged silt overlying the timbers, archaeologists realised its true nature. They had found a prehistoric boat, preserved by the type of sediment in which it was buried. It was then named the Dover Bronze-Age Boat. About nine metres of the boat's length was recovered; one end lay beyond the excavation and had to be left. What survived consisted essentially of four intricately carved oak planks: two on the bottom, joined along a central seam by a complicated system of wedges and timbers, and two at the side, curved and stitched to the others. The seams had been made watertight by pads of moss, fixed by wedges and yew stitches. The timbers that closed the recovered end of the boat had been removed in antiquity when it was abandoned, but much about its original shape could be deduced. There was also evidence for missing upper side planks. The boat was not a wreck, but had been deliberately discarded, dismantled and broken. Perhaps it had been ritually killed at the end of its life, like other Bronze-Age objects. With hindsight, it was significant that the boat was found and studied by mainstream archaeologists who naturally focused on its cultural context. At the time, ancient boats were often considered only from a narrower technological perspective, but news about the Dover boat reached a broad audience. In 2002, on the tenth anniversary of the discovery, the Dover Bronze-Age Boat Trust hosted a conference, where this meeting of different traditions became apparent. Alongside technical papers about the boat, other speakers explored its social and economic contexts, and the religious perceptions of boats in Bronze-Age societies. Many speakers came from overseas, and debate about cultural connections was renewed. Within seven years of excavation, the Dover boat had been conserved and displayed, but it was apparent that there were issues that could not be resolved simply by studying the old wood. Experimental archaeology seemed to be the solution: a boat reconstruction, half-scale or full-sized, would permit assessment of the different hypotheses regarding its build and the missing end. The possibility of returning to Dover to search for the boat's unexcavated northern end was explored, but practical and financial difficulties were insurmountable and there was no guarantee that the timbers had survived the previous decade in the changed environment. Detailed proposals to reconstruct the boat were drawn up in 2004. Archaeological evidence was beginning to suggest a Bronze-Age community straddling the Channel, brought together by the sea, rather than separated by it. In a region today divided by languages and borders, archaeologists had a duty to inform the general public about their common cultural heritage. The boat project began in England but it was conceived from the start as a European collaboration. Reconstruction was only part of a scheme that would include a major exhibition and an extensive educational and outreach programme.
Discussions began early in 2005 with archaeological bodies, universities and heritage organisations either side of the Channel. There was much enthusiasm and support, and an official launch of the project was held at an international seminar in France in 2007. Financial support was confirmed in 2008 and the project, then named BOAT 1550BC, got under way in June 2011. A small team began to make the boat at the start of 2012 on the Roman Lawn outside Dover Museum. A full-scale reconstruction of a mid-section had been made in 1996, primarily to see how Bronze-Age replica tools performed. In 2012, however, the hull shape was at the centre of the work, so modern power tools were used to carve the oak planks, before turning to prehistoric tools for finishing. It was decided to make the replica half-scale for reasons of cost and time, and synthetic materials were used for the stitching, owing to doubts about the seeding and tight timetable. Meanwhile, the exhibition was being prepared ready for opening in July 2012 at the Castle Museum in Boulogne-sur-Mer. Entitled Beyond the Horizon: Societies of the Channel & North Sea 3,500 years ago, it brought together for the first time a remarkable collection of Bronze-Age objects, including many new discoveries from commercial archaeology and some of the great treasures of the past. The reconstructed boat, as a symbol of the maritime connections that bound together the communities either side of the Channel, was the centrepiece.
Archaeologists realised that the boat had been damaged on purpose.
entailment
id_6094
The Environmental Commissioner of the European Commission wants to introduce tough new limits for the emissions of carbon dioxide for all new vehicles. She wants mandatory maximum levels of emissions for all new cars by 2012. Manufacturers are lobbying against a mandatory limit and prefer a voluntary target for average emissions that is lowered annually, year on year. The luxury brand manufacturers are lobbying hardest, as they consider a mandatory limit to represent the greatest threat to their operations. The Industrial Commissioner has proposed a compromise that favours voluntary targets but will also commit manufacturers to realizing improvements in tyre performance, the introduction of emission-reducing speed management systems, and greener manufacturing and recycling of vehicles. European car makers believe that many jobs will be lost if the Environmental Commissioner gets her way. The 20 Commissioners who make up the Commission will have to decide.
The author sees the issue as a test of the Commission's green credentials.
neutral
id_6095
The Environmental Commissioner of the European Commission wants to introduce tough new limits for the emissions of carbon dioxide for all new vehicles. She wants mandatory maximum levels of emissions for all new cars by 2012. Manufacturers are lobbying against a mandatory limit and prefer a voluntary target for average emissions that is lowered annually, year on year. The luxury brand manufacturers are lobbying hardest, as they consider a mandatory limit to represent the greatest threat to their operations. The Industrial Commissioner has proposed a compromise that favours voluntary targets but will also commit manufacturers to realizing improvements in tyre performance, the introduction of emission-reducing speed management systems, and greener manufacturing and recycling of vehicles. European car makers believe that many jobs will be lost if the Environmental Commissioner gets her way. The 20 Commissioners who make up the Commission will have to decide.
Members of the Commission are split over the decision.
entailment
id_6096
The Environmental Commissioner of the European Commission wants to introduce tough new limits for the emissions of carbon dioxide for all new vehicles. She wants mandatory maximum levels of emissions for all new cars by 2012. Manufacturers are lobbying against a mandatory limit and prefer a voluntary target for average emissions that is lowered annually, year on year. The luxury brand manufacturers are lobbying hardest, as they consider a mandatory limit to represent the greatest threat to their operations. The Industrial Commissioner has proposed a compromise that favours voluntary targets but will also commit manufacturers to realizing improvements in tyre performance, the introduction of emission-reducing speed management systems, and greener manufacturing and recycling of vehicles. European car makers believe that many jobs will be lost if the Environmental Commissioner gets her way. The 20 Commissioners who make up the Commission will have to decide.
The passage contains a tautology.
entailment
id_6097
The Etruscan civilization is the name given today to the culture and way of life of a people of ancient Italy whom ancient Romans called Etrusci, ancient Greeks called Tyrrhenoi and who called themselves Rasenna, syncopated to Rasna. As distinguished by its own language, the civilization endured from an unknown prehistoric time prior to the foundation of Rome until its complete assimilation to Italic Rome in the Roman Republic. At its maximum extent during the foundation period of Rome and the Roman kingdom, it flourished in three confederacies: of Etruria, the Po valley and Latium and Campania. Rome was placed in its territory. There is considerable evidence that early Rome was founded and dominated by Etruscans.
The Etruscans called the Greeks the Tyrrhenoi.
contradiction
id_6098
The Etruscan civilization is the name given today to the culture and way of life of a people of ancient Italy whom ancient Romans called Etrusci, ancient Greeks called Tyrrhenoi and who called themselves Rasenna, syncopated to Rasna. As distinguished by its own language, the civilization endured from an unknown prehistoric time prior to the foundation of Rome until its complete assimilation to Italic Rome in the Roman Republic. At its maximum extent during the foundation period of Rome and the Roman kingdom, it flourished in three confederacies: of Etruria, the Po valley and Latium and Campania. Rome was placed in its territory. There is considerable evidence that early Rome was founded and dominated by Etruscans.
The Po valley is in Italy.
neutral
id_6099
The Etruscan civilization is the name given today to the culture and way of life of a people of ancient Italy whom ancient Romans called Etrusci, ancient Greeks called Tyrrhenoi and who called themselves Rasenna, syncopated to Rasna. As distinguished by its own language, the civilization endured from an unknown prehistoric time prior to the foundation of Rome until its complete assimilation to Italic Rome in the Roman Republic. At its maximum extent during the foundation period of Rome and the Roman kingdom, it flourished in three confederacies: of Etruria, the Po valley and Latium and Campania. Rome was placed in its territory. There is considerable evidence that early Rome was founded and dominated by Etruscans.
The Etruscan civilization became part of the Roman Republic.
entailment