diff --git "a/test.jsonl" "b/test.jsonl" new file mode 100644--- /dev/null +++ "b/test.jsonl" @@ -0,0 +1,805 @@ +{"uid": "id_803", "premise": "some time on the night of October 1st, the Copacabana Club was burnt to the ground. The police are treating the fire as suspicious. The only facts known at this stage are: The club was insured for more than its real value. The club belonged to John Hodges. Les Braithwaite was known to dislike John Hodges. Between October 1st and October 2nd, Les Braithwaite was away from home on a business trip. There were no fatalities. A plan of the club was found in Les Braithwaite's flat.", "hypothesis": "If the insurance company pays out in full, John Hodges stands to profit from the fire.", "label": "n"} +{"uid": "id_0", "premise": "This passage provides information on the subsidising of renewable energy and its effect on the usage of fossil fuels. The issue of subsidising sources of renewable energy came to the forefront of global politics as record emissions levels continue to be reached despite caps on carbon emissions being agreed up by several global powers. However, renewable energy sources tend more expensive than their fossil-fuel counter parts. In this way, renewable energy cannot be seen as a realistic alternative to fossil-fuel until it is at a price universally achievable. On the opposite side of the spectrum, commentators note that the average temperature is expected to rise by four degrees by the end of the decade. In order to prevent this, they suggest carbon emissions must be reduced by seventy per cent by 2050. Such commentators advocate government subsidised renewable energy forms as a way to achieve this target.", "hypothesis": "Government subsidiary could reduce renewable energy cost", "label": "e"} +{"uid": "id_1", "premise": "This passage provides information on the subsidising of renewable energy and its effect on the usage of fossil fuels. The issue of subsidising sources of renewable energy came to the forefront of global politics as record emissions levels continue to be reached despite caps on carbon emissions being agreed up by several global powers. However, renewable energy sources tend more expensive than their fossil-fuel counter parts. In this way, renewable energy cannot be seen as a realistic alternative to fossil-fuel until it is at a price universally achievable. On the opposite side of the spectrum, commentators note that the average temperature is expected to rise by four degrees by the end of the decade. In order to prevent this, they suggest carbon emissions must be reduced by seventy per cent by 2050. Such commentators advocate government subsidised renewable energy forms as a way to achieve this target.", "hypothesis": "Fossil-fuels are currently cheaper than forms of renewable energy.", "label": "e"} +{"uid": "id_2", "premise": "This passage provides information on the subsidising of renewable energy and its effect on the usage of fossil fuels. The issue of subsidising sources of renewable energy came to the forefront of global politics as record emissions levels continue to be reached despite caps on carbon emissions being agreed up by several global powers. However, renewable energy sources tend more expensive than their fossil-fuel counter parts. In this way, renewable energy cannot be seen as a realistic alternative to fossil-fuel until it is at a price universally achievable. 
On the opposite side of the spectrum, commentators note that the average temperature is expected to rise by four degrees by the end of the decade. In order to prevent this, they suggest carbon emissions must be reduced by seventy per cent by 2050. Such commentators advocate government subsidised renewable energy forms as a way to achieve this target.", "hypothesis": "The average temperature in the UK is set to rise by 4% by 2050.", "label": "c"} +{"uid": "id_3", "premise": "This year most of the shops and departmental stores are offering prizes and discounts on purchases to attract customers", "hypothesis": "Lots of goods are available but the sale is not shooting up. There is no cheer for the customers.", "label": "e"} +{"uid": "id_4", "premise": "This year most of the shops and departmental stores are offering prizes and discounts on purchases to attract customers", "hypothesis": "The shops and departmental stores have so far earned a lot of profit, so now they have started sharing it with the customers.", "label": "n"} +{"uid": "id_5", "premise": "Thomas Young The Last True Know-It-All Thomas Young (1773-1829) contributed 63 articles to the Encyclopedia Britannica, including 46 biographical entries (mostly on scientists and classicists) and substantial essays on Bridge, Chromatics, Egypt, Languages and Tides. Was someone who could write authoritatively about so many subjects a polymath, a genius or a dilettante? In an ambitious new biography, Andrew Robinson argues that Young is a good contender for the epitaph the last man who knew everything. Young has competition, however: The phrase, which Robinson takes for his title, also serves as the subtitle of two other recent biographies: Leonard Warrens 1998 life of paleontologist Joseph Leidy (1823-1891) and Paula Findlens 2004 book on Athanasius Kircher (1602-1680), another polymath. Young, of course, did more than write encyclopedia entries. He presented his first paper to the Royal Society of London at the age of 20 and was elected a Fellow a week after his 21st birthday. In the paper, Young explained the process of accommodation in the human eye on how the eye focuses properly on objects at varying distances. Young hypothesised that this was achieved by changes in the shape of the lens. Young also theorised that light traveled in waves and ho believed that, to account for the ability to see in color, there must be three receptors in the eye corresponding to the three principal colors to which the retina could respond: red, green, violet. All these hypotheses Were subsequently proved to be correct. Later in his life, when he was in his forties, Young was instrumental in cracking the code that unlocked the unknown script on the Rosetta Stone, a tablet that was found in Egypt by the Napoleonic army in 1799. The stone contains text in three alphabets: Greek, something unrecognisable and Egyptian hieroglyphs. The unrecognisable script is now known as demotic and, as Young deduced, is related directly to hieroglyphic. His initial work on this appeared in his Britannica entry on Egypt. In another entry, he coined the term Indo-European to describe the family of languages spoken throughout most of Europe and northern India. These are the landmark achievements of a man who was a child prodigy and who, unlike many remarkable children, did not disappear into oblivion as an adult. Bom in 1773 in Somerset in England, Young lived from an early age with his maternal grandfather, eventually leaving to attend boarding school. 
He had devoured books from the age of two, and through his own initiative he excelled at Latin, Greek, mathematics and natural philosophy. After leaving school, he was greatly encouraged by his mothers uncle, Richard Brocklesby, a physician and Fellow of the Royal Society. Following Brocklesbys lead, Young decided to pursue a career in medicine. He studied in London, following the medical circuit, and then moved on to more formal education in Edinburgh, Gottingen and Cambridge. After completing his medical training at the University of Cambridge in 1808, Young set up practice as a physician in London. He soon became a Fellow of the Royal College of Physicians and a few years later was appointed physician at St. Georges Hospital. Youngs skill as a physician, however, did not equal his skill as a scholar of natural philosophy or linguistics. Earlier, in 1801, he had been appointed to a professorship of natural philosophy at the Royal Institution, where he delivered as many as 60 lectures in a year. These were published in two volumes in 1807. In 1804 Young had become secretary to the Royal Society, a post he would hold until his death. His opinions were sought on civic and national matters, such as the introduction of gas lighting to London and methods of ship construction. From 1819 he was superintendent of the Nautical Almanac and secretary to the Board of Longitude. From 1824 to 1829 he was physician to and inspector of calculations for the Palladian Insurance Company. Between 1816 and 1825 he contributed his many and various entries to the Encyclopedia Britannica, and throughout his career he authored numerous books, essays and papers. Young is a perfect subject for a biography perfect, but daunting. Few men contributed so much to so many technical fields. Robinsons aim is to introduce non-scientists to Youngs work and life. He succeeds, providing clear expositions of the technical material (especially that on optics and Egyptian hieroglyphs). Some readers of this book will, like Robinson, find Youngs accomplishments impressive; others will see him as some historians have as a dilettante. Yet despite the rich material presented in this book, readers will not end up knowing Young personally. We catch glimpses of a playful Young, doodling Greek and Latin phrases in his notes on medical lectures and translating the verses that a young lady had written on the walls of a summerhouse into Greek elegiacs. Young was introduced into elite society, attended the theatre and learned to dance and play the flute. In addition, he was an accomplished horseman. However, his personal life looks pale next to his vibrant career and studies. Young married Eliza Maxwell in 1804, and according to Robinson, their marriage was a happy one and she appreciated his work, Almost all we know about her is that she sustained her husband through some rancorous disputes about optics and that she worried about money when his medical career was slow to take off. Very little evidence survives about the complexities of Youngs relationships with his mother and father. Robinson does not credit them, or anyone else, with shaping Youngs extraordinary mind. 
Despite the lack of details concerning Youngs relationships, however, anyone interested in what it means to be a genius should read this book.", "hypothesis": "All Youngs articles were published in Encyclopedia Britannica.", "label": "c"} +{"uid": "id_6", "premise": "Thomas Young The Last True Know-It-All Thomas Young (1773-1829) contributed 63 articles to the Encyclopedia Britannica, including 46 biographical entries (mostly on scientists and classicists) and substantial essays on Bridge, Chromatics, Egypt, Languages and Tides. Was someone who could write authoritatively about so many subjects a polymath, a genius or a dilettante? In an ambitious new biography, Andrew Robinson argues that Young is a good contender for the epitaph the last man who knew everything. Young has competition, however: The phrase, which Robinson takes for his title, also serves as the subtitle of two other recent biographies: Leonard Warrens 1998 life of paleontologist Joseph Leidy (1823-1891) and Paula Findlens 2004 book on Athanasius Kircher (1602-1680), another polymath. Young, of course, did more than write encyclopedia entries. He presented his first paper to the Royal Society of London at the age of 20 and was elected a Fellow a week after his 21st birthday. In the paper, Young explained the process of accommodation in the human eye on how the eye focuses properly on objects at varying distances. Young hypothesised that this was achieved by changes in the shape of the lens. Young also theorised that light traveled in waves and ho believed that, to account for the ability to see in color, there must be three receptors in the eye corresponding to the three principal colors to which the retina could respond: red, green, violet. All these hypotheses Were subsequently proved to be correct. Later in his life, when he was in his forties, Young was instrumental in cracking the code that unlocked the unknown script on the Rosetta Stone, a tablet that was found in Egypt by the Napoleonic army in 1799. The stone contains text in three alphabets: Greek, something unrecognisable and Egyptian hieroglyphs. The unrecognisable script is now known as demotic and, as Young deduced, is related directly to hieroglyphic. His initial work on this appeared in his Britannica entry on Egypt. In another entry, he coined the term Indo-European to describe the family of languages spoken throughout most of Europe and northern India. These are the landmark achievements of a man who was a child prodigy and who, unlike many remarkable children, did not disappear into oblivion as an adult. Bom in 1773 in Somerset in England, Young lived from an early age with his maternal grandfather, eventually leaving to attend boarding school. He had devoured books from the age of two, and through his own initiative he excelled at Latin, Greek, mathematics and natural philosophy. After leaving school, he was greatly encouraged by his mothers uncle, Richard Brocklesby, a physician and Fellow of the Royal Society. Following Brocklesbys lead, Young decided to pursue a career in medicine. He studied in London, following the medical circuit, and then moved on to more formal education in Edinburgh, Gottingen and Cambridge. After completing his medical training at the University of Cambridge in 1808, Young set up practice as a physician in London. He soon became a Fellow of the Royal College of Physicians and a few years later was appointed physician at St. Georges Hospital. 
Youngs skill as a physician, however, did not equal his skill as a scholar of natural philosophy or linguistics. Earlier, in 1801, he had been appointed to a professorship of natural philosophy at the Royal Institution, where he delivered as many as 60 lectures in a year. These were published in two volumes in 1807. In 1804 Young had become secretary to the Royal Society, a post he would hold until his death. His opinions were sought on civic and national matters, such as the introduction of gas lighting to London and methods of ship construction. From 1819 he was superintendent of the Nautical Almanac and secretary to the Board of Longitude. From 1824 to 1829 he was physician to and inspector of calculations for the Palladian Insurance Company. Between 1816 and 1825 he contributed his many and various entries to the Encyclopedia Britannica, and throughout his career he authored numerous books, essays and papers. Young is a perfect subject for a biography perfect, but daunting. Few men contributed so much to so many technical fields. Robinsons aim is to introduce non-scientists to Youngs work and life. He succeeds, providing clear expositions of the technical material (especially that on optics and Egyptian hieroglyphs). Some readers of this book will, like Robinson, find Youngs accomplishments impressive; others will see him as some historians have as a dilettante. Yet despite the rich material presented in this book, readers will not end up knowing Young personally. We catch glimpses of a playful Young, doodling Greek and Latin phrases in his notes on medical lectures and translating the verses that a young lady had written on the walls of a summerhouse into Greek elegiacs. Young was introduced into elite society, attended the theatre and learned to dance and play the flute. In addition, he was an accomplished horseman. However, his personal life looks pale next to his vibrant career and studies. Young married Eliza Maxwell in 1804, and according to Robinson, their marriage was a happy one and she appreciated his work, Almost all we know about her is that she sustained her husband through some rancorous disputes about optics and that she worried about money when his medical career was slow to take off. Very little evidence survives about the complexities of Youngs relationships with his mother and father. Robinson does not credit them, or anyone else, with shaping Youngs extraordinary mind. Despite the lack of details concerning Youngs relationships, however, anyone interested in what it means to be a genius should read this book.", "hypothesis": "The last man who knew everything has also been claimed to other people.", "label": "e"} +{"uid": "id_7", "premise": "Thomas Young The Last True Know-It-All Thomas Young (1773-1829) contributed 63 articles to the Encyclopedia Britannica, including 46 biographical entries (mostly on scientists and classicists) and substantial essays on Bridge, Chromatics, Egypt, Languages and Tides. Was someone who could write authoritatively about so many subjects a polymath, a genius or a dilettante? In an ambitious new biography, Andrew Robinson argues that Young is a good contender for the epitaph the last man who knew everything. Young has competition, however: The phrase, which Robinson takes for his title, also serves as the subtitle of two other recent biographies: Leonard Warrens 1998 life of paleontologist Joseph Leidy (1823-1891) and Paula Findlens 2004 book on Athanasius Kircher (1602-1680), another polymath. 
Young, of course, did more than write encyclopedia entries. He presented his first paper to the Royal Society of London at the age of 20 and was elected a Fellow a week after his 21st birthday. In the paper, Young explained the process of accommodation in the human eye on how the eye focuses properly on objects at varying distances. Young hypothesised that this was achieved by changes in the shape of the lens. Young also theorised that light traveled in waves and ho believed that, to account for the ability to see in color, there must be three receptors in the eye corresponding to the three principal colors to which the retina could respond: red, green, violet. All these hypotheses Were subsequently proved to be correct. Later in his life, when he was in his forties, Young was instrumental in cracking the code that unlocked the unknown script on the Rosetta Stone, a tablet that was found in Egypt by the Napoleonic army in 1799. The stone contains text in three alphabets: Greek, something unrecognisable and Egyptian hieroglyphs. The unrecognisable script is now known as demotic and, as Young deduced, is related directly to hieroglyphic. His initial work on this appeared in his Britannica entry on Egypt. In another entry, he coined the term Indo-European to describe the family of languages spoken throughout most of Europe and northern India. These are the landmark achievements of a man who was a child prodigy and who, unlike many remarkable children, did not disappear into oblivion as an adult. Bom in 1773 in Somerset in England, Young lived from an early age with his maternal grandfather, eventually leaving to attend boarding school. He had devoured books from the age of two, and through his own initiative he excelled at Latin, Greek, mathematics and natural philosophy. After leaving school, he was greatly encouraged by his mothers uncle, Richard Brocklesby, a physician and Fellow of the Royal Society. Following Brocklesbys lead, Young decided to pursue a career in medicine. He studied in London, following the medical circuit, and then moved on to more formal education in Edinburgh, Gottingen and Cambridge. After completing his medical training at the University of Cambridge in 1808, Young set up practice as a physician in London. He soon became a Fellow of the Royal College of Physicians and a few years later was appointed physician at St. Georges Hospital. Youngs skill as a physician, however, did not equal his skill as a scholar of natural philosophy or linguistics. Earlier, in 1801, he had been appointed to a professorship of natural philosophy at the Royal Institution, where he delivered as many as 60 lectures in a year. These were published in two volumes in 1807. In 1804 Young had become secretary to the Royal Society, a post he would hold until his death. His opinions were sought on civic and national matters, such as the introduction of gas lighting to London and methods of ship construction. From 1819 he was superintendent of the Nautical Almanac and secretary to the Board of Longitude. From 1824 to 1829 he was physician to and inspector of calculations for the Palladian Insurance Company. Between 1816 and 1825 he contributed his many and various entries to the Encyclopedia Britannica, and throughout his career he authored numerous books, essays and papers. Young is a perfect subject for a biography perfect, but daunting. Few men contributed so much to so many technical fields. Robinsons aim is to introduce non-scientists to Youngs work and life. 
He succeeds, providing clear expositions of the technical material (especially that on optics and Egyptian hieroglyphs). Some readers of this book will, like Robinson, find Youngs accomplishments impressive; others will see him as some historians have as a dilettante. Yet despite the rich material presented in this book, readers will not end up knowing Young personally. We catch glimpses of a playful Young, doodling Greek and Latin phrases in his notes on medical lectures and translating the verses that a young lady had written on the walls of a summerhouse into Greek elegiacs. Young was introduced into elite society, attended the theatre and learned to dance and play the flute. In addition, he was an accomplished horseman. However, his personal life looks pale next to his vibrant career and studies. Young married Eliza Maxwell in 1804, and according to Robinson, their marriage was a happy one and she appreciated his work, Almost all we know about her is that she sustained her husband through some rancorous disputes about optics and that she worried about money when his medical career was slow to take off. Very little evidence survives about the complexities of Youngs relationships with his mother and father. Robinson does not credit them, or anyone else, with shaping Youngs extraordinary mind. Despite the lack of details concerning Youngs relationships, however, anyone interested in what it means to be a genius should read this book.", "hypothesis": "Young suffered from a disease in his later years.", "label": "n"} +{"uid": "id_8", "premise": "Thomas Young The Last True Know-It-All Thomas Young (1773-1829) contributed 63 articles to the Encyclopedia Britannica, including 46 biographical entries (mostly on scientists and classicists) and substantial essays on Bridge, Chromatics, Egypt, Languages and Tides. Was someone who could write authoritatively about so many subjects a polymath, a genius or a dilettante? In an ambitious new biography, Andrew Robinson argues that Young is a good contender for the epitaph the last man who knew everything. Young has competition, however: The phrase, which Robinson takes for his title, also serves as the subtitle of two other recent biographies: Leonard Warrens 1998 life of paleontologist Joseph Leidy (1823-1891) and Paula Findlens 2004 book on Athanasius Kircher (1602-1680), another polymath. Young, of course, did more than write encyclopedia entries. He presented his first paper to the Royal Society of London at the age of 20 and was elected a Fellow a week after his 21st birthday. In the paper, Young explained the process of accommodation in the human eye on how the eye focuses properly on objects at varying distances. Young hypothesised that this was achieved by changes in the shape of the lens. Young also theorised that light traveled in waves and ho believed that, to account for the ability to see in color, there must be three receptors in the eye corresponding to the three principal colors to which the retina could respond: red, green, violet. All these hypotheses Were subsequently proved to be correct. Later in his life, when he was in his forties, Young was instrumental in cracking the code that unlocked the unknown script on the Rosetta Stone, a tablet that was found in Egypt by the Napoleonic army in 1799. The stone contains text in three alphabets: Greek, something unrecognisable and Egyptian hieroglyphs. The unrecognisable script is now known as demotic and, as Young deduced, is related directly to hieroglyphic. 
His initial work on this appeared in his Britannica entry on Egypt. In another entry, he coined the term Indo-European to describe the family of languages spoken throughout most of Europe and northern India. These are the landmark achievements of a man who was a child prodigy and who, unlike many remarkable children, did not disappear into oblivion as an adult. Bom in 1773 in Somerset in England, Young lived from an early age with his maternal grandfather, eventually leaving to attend boarding school. He had devoured books from the age of two, and through his own initiative he excelled at Latin, Greek, mathematics and natural philosophy. After leaving school, he was greatly encouraged by his mothers uncle, Richard Brocklesby, a physician and Fellow of the Royal Society. Following Brocklesbys lead, Young decided to pursue a career in medicine. He studied in London, following the medical circuit, and then moved on to more formal education in Edinburgh, Gottingen and Cambridge. After completing his medical training at the University of Cambridge in 1808, Young set up practice as a physician in London. He soon became a Fellow of the Royal College of Physicians and a few years later was appointed physician at St. Georges Hospital. Youngs skill as a physician, however, did not equal his skill as a scholar of natural philosophy or linguistics. Earlier, in 1801, he had been appointed to a professorship of natural philosophy at the Royal Institution, where he delivered as many as 60 lectures in a year. These were published in two volumes in 1807. In 1804 Young had become secretary to the Royal Society, a post he would hold until his death. His opinions were sought on civic and national matters, such as the introduction of gas lighting to London and methods of ship construction. From 1819 he was superintendent of the Nautical Almanac and secretary to the Board of Longitude. From 1824 to 1829 he was physician to and inspector of calculations for the Palladian Insurance Company. Between 1816 and 1825 he contributed his many and various entries to the Encyclopedia Britannica, and throughout his career he authored numerous books, essays and papers. Young is a perfect subject for a biography perfect, but daunting. Few men contributed so much to so many technical fields. Robinsons aim is to introduce non-scientists to Youngs work and life. He succeeds, providing clear expositions of the technical material (especially that on optics and Egyptian hieroglyphs). Some readers of this book will, like Robinson, find Youngs accomplishments impressive; others will see him as some historians have as a dilettante. Yet despite the rich material presented in this book, readers will not end up knowing Young personally. We catch glimpses of a playful Young, doodling Greek and Latin phrases in his notes on medical lectures and translating the verses that a young lady had written on the walls of a summerhouse into Greek elegiacs. Young was introduced into elite society, attended the theatre and learned to dance and play the flute. In addition, he was an accomplished horseman. However, his personal life looks pale next to his vibrant career and studies. Young married Eliza Maxwell in 1804, and according to Robinson, their marriage was a happy one and she appreciated his work, Almost all we know about her is that she sustained her husband through some rancorous disputes about optics and that she worried about money when his medical career was slow to take off. 
Very little evidence survives about the complexities of Youngs relationships with his mother and father. Robinson does not credit them, or anyone else, with shaping Youngs extraordinary mind. Despite the lack of details concerning Youngs relationships, however, anyone interested in what it means to be a genius should read this book.", "hypothesis": "Young took part in various social pastimes.", "label": "e"} +{"uid": "id_9", "premise": "Thomas Young The Last True Know-It-All Thomas Young (1773-1829) contributed 63 articles to the Encyclopedia Britannica, including 46 biographical entries (mostly on scientists and classicists) and substantial essays on Bridge, Chromatics, Egypt, Languages and Tides. Was someone who could write authoritatively about so many subjects a polymath, a genius or a dilettante? In an ambitious new biography, Andrew Robinson argues that Young is a good contender for the epitaph the last man who knew everything. Young has competition, however: The phrase, which Robinson takes for his title, also serves as the subtitle of two other recent biographies: Leonard Warrens 1998 life of paleontologist Joseph Leidy (1823-1891) and Paula Findlens 2004 book on Athanasius Kircher (1602-1680), another polymath. Young, of course, did more than write encyclopedia entries. He presented his first paper to the Royal Society of London at the age of 20 and was elected a Fellow a week after his 21st birthday. In the paper, Young explained the process of accommodation in the human eye on how the eye focuses properly on objects at varying distances. Young hypothesised that this was achieved by changes in the shape of the lens. Young also theorised that light traveled in waves and ho believed that, to account for the ability to see in color, there must be three receptors in the eye corresponding to the three principal colors to which the retina could respond: red, green, violet. All these hypotheses Were subsequently proved to be correct. Later in his life, when he was in his forties, Young was instrumental in cracking the code that unlocked the unknown script on the Rosetta Stone, a tablet that was found in Egypt by the Napoleonic army in 1799. The stone contains text in three alphabets: Greek, something unrecognisable and Egyptian hieroglyphs. The unrecognisable script is now known as demotic and, as Young deduced, is related directly to hieroglyphic. His initial work on this appeared in his Britannica entry on Egypt. In another entry, he coined the term Indo-European to describe the family of languages spoken throughout most of Europe and northern India. These are the landmark achievements of a man who was a child prodigy and who, unlike many remarkable children, did not disappear into oblivion as an adult. Bom in 1773 in Somerset in England, Young lived from an early age with his maternal grandfather, eventually leaving to attend boarding school. He had devoured books from the age of two, and through his own initiative he excelled at Latin, Greek, mathematics and natural philosophy. After leaving school, he was greatly encouraged by his mothers uncle, Richard Brocklesby, a physician and Fellow of the Royal Society. Following Brocklesbys lead, Young decided to pursue a career in medicine. He studied in London, following the medical circuit, and then moved on to more formal education in Edinburgh, Gottingen and Cambridge. After completing his medical training at the University of Cambridge in 1808, Young set up practice as a physician in London. 
He soon became a Fellow of the Royal College of Physicians and a few years later was appointed physician at St. Georges Hospital. Youngs skill as a physician, however, did not equal his skill as a scholar of natural philosophy or linguistics. Earlier, in 1801, he had been appointed to a professorship of natural philosophy at the Royal Institution, where he delivered as many as 60 lectures in a year. These were published in two volumes in 1807. In 1804 Young had become secretary to the Royal Society, a post he would hold until his death. His opinions were sought on civic and national matters, such as the introduction of gas lighting to London and methods of ship construction. From 1819 he was superintendent of the Nautical Almanac and secretary to the Board of Longitude. From 1824 to 1829 he was physician to and inspector of calculations for the Palladian Insurance Company. Between 1816 and 1825 he contributed his many and various entries to the Encyclopedia Britannica, and throughout his career he authored numerous books, essays and papers. Young is a perfect subject for a biography perfect, but daunting. Few men contributed so much to so many technical fields. Robinsons aim is to introduce non-scientists to Youngs work and life. He succeeds, providing clear expositions of the technical material (especially that on optics and Egyptian hieroglyphs). Some readers of this book will, like Robinson, find Youngs accomplishments impressive; others will see him as some historians have as a dilettante. Yet despite the rich material presented in this book, readers will not end up knowing Young personally. We catch glimpses of a playful Young, doodling Greek and Latin phrases in his notes on medical lectures and translating the verses that a young lady had written on the walls of a summerhouse into Greek elegiacs. Young was introduced into elite society, attended the theatre and learned to dance and play the flute. In addition, he was an accomplished horseman. However, his personal life looks pale next to his vibrant career and studies. Young married Eliza Maxwell in 1804, and according to Robinson, their marriage was a happy one and she appreciated his work, Almost all we know about her is that she sustained her husband through some rancorous disputes about optics and that she worried about money when his medical career was slow to take off. Very little evidence survives about the complexities of Youngs relationships with his mother and father. Robinson does not credit them, or anyone else, with shaping Youngs extraordinary mind. Despite the lack of details concerning Youngs relationships, however, anyone interested in what it means to be a genius should read this book.", "hypothesis": "Youngs advice was sought by people responsible for local and national issues.", "label": "e"} +{"uid": "id_10", "premise": "Thomas Young The Last True Know-It-All Thomas Young (1773-1829) contributed 63 articles to the Encyclopedia Britannica, including 46 biographical entries (mostly on scientists and classicists) and substantial essays on Bridge, Chromatics, Egypt, Languages and Tides. Was someone who could write authoritatively about so many subjects a polymath, a genius or a dilettante? In an ambitious new biography, Andrew Robinson argues that Young is a good contender for the epitaph the last man who knew everything. 
Young has competition, however: The phrase, which Robinson takes for his title, also serves as the subtitle of two other recent biographies: Leonard Warrens 1998 life of paleontologist Joseph Leidy (1823-1891) and Paula Findlens 2004 book on Athanasius Kircher (1602-1680), another polymath. Young, of course, did more than write encyclopedia entries. He presented his first paper to the Royal Society of London at the age of 20 and was elected a Fellow a week after his 21st birthday. In the paper, Young explained the process of accommodation in the human eye on how the eye focuses properly on objects at varying distances. Young hypothesised that this was achieved by changes in the shape of the lens. Young also theorised that light traveled in waves and ho believed that, to account for the ability to see in color, there must be three receptors in the eye corresponding to the three principal colors to which the retina could respond: red, green, violet. All these hypotheses Were subsequently proved to be correct. Later in his life, when he was in his forties, Young was instrumental in cracking the code that unlocked the unknown script on the Rosetta Stone, a tablet that was found in Egypt by the Napoleonic army in 1799. The stone contains text in three alphabets: Greek, something unrecognisable and Egyptian hieroglyphs. The unrecognisable script is now known as demotic and, as Young deduced, is related directly to hieroglyphic. His initial work on this appeared in his Britannica entry on Egypt. In another entry, he coined the term Indo-European to describe the family of languages spoken throughout most of Europe and northern India. These are the landmark achievements of a man who was a child prodigy and who, unlike many remarkable children, did not disappear into oblivion as an adult. Bom in 1773 in Somerset in England, Young lived from an early age with his maternal grandfather, eventually leaving to attend boarding school. He had devoured books from the age of two, and through his own initiative he excelled at Latin, Greek, mathematics and natural philosophy. After leaving school, he was greatly encouraged by his mothers uncle, Richard Brocklesby, a physician and Fellow of the Royal Society. Following Brocklesbys lead, Young decided to pursue a career in medicine. He studied in London, following the medical circuit, and then moved on to more formal education in Edinburgh, Gottingen and Cambridge. After completing his medical training at the University of Cambridge in 1808, Young set up practice as a physician in London. He soon became a Fellow of the Royal College of Physicians and a few years later was appointed physician at St. Georges Hospital. Youngs skill as a physician, however, did not equal his skill as a scholar of natural philosophy or linguistics. Earlier, in 1801, he had been appointed to a professorship of natural philosophy at the Royal Institution, where he delivered as many as 60 lectures in a year. These were published in two volumes in 1807. In 1804 Young had become secretary to the Royal Society, a post he would hold until his death. His opinions were sought on civic and national matters, such as the introduction of gas lighting to London and methods of ship construction. From 1819 he was superintendent of the Nautical Almanac and secretary to the Board of Longitude. From 1824 to 1829 he was physician to and inspector of calculations for the Palladian Insurance Company. 
Between 1816 and 1825 he contributed his many and various entries to the Encyclopedia Britannica, and throughout his career he authored numerous books, essays and papers. Young is a perfect subject for a biography perfect, but daunting. Few men contributed so much to so many technical fields. Robinsons aim is to introduce non-scientists to Youngs work and life. He succeeds, providing clear expositions of the technical material (especially that on optics and Egyptian hieroglyphs). Some readers of this book will, like Robinson, find Youngs accomplishments impressive; others will see him as some historians have as a dilettante. Yet despite the rich material presented in this book, readers will not end up knowing Young personally. We catch glimpses of a playful Young, doodling Greek and Latin phrases in his notes on medical lectures and translating the verses that a young lady had written on the walls of a summerhouse into Greek elegiacs. Young was introduced into elite society, attended the theatre and learned to dance and play the flute. In addition, he was an accomplished horseman. However, his personal life looks pale next to his vibrant career and studies. Young married Eliza Maxwell in 1804, and according to Robinson, their marriage was a happy one and she appreciated his work, Almost all we know about her is that she sustained her husband through some rancorous disputes about optics and that she worried about money when his medical career was slow to take off. Very little evidence survives about the complexities of Youngs relationships with his mother and father. Robinson does not credit them, or anyone else, with shaping Youngs extraordinary mind. Despite the lack of details concerning Youngs relationships, however, anyone interested in what it means to be a genius should read this book.", "hypothesis": "Youngs talent as a doctor surpassed his other skills.", "label": "c"} +{"uid": "id_11", "premise": "Thomas Young The Last True Know-It-All Thomas Young (1773-1829) contributed 63 articles to the Encyclopedia Britannica, including 46 biographical entries (mostly on scientists and classicists) and substantial essays on Bridge, Chromatics, Egypt, Languages and Tides. Was someone who could write authoritatively about so many subjects a polymath, a genius or a dilettante? In an ambitious new biography, Andrew Robinson argues that Young is a good contender for the epitaph the last man who knew everything. Young has competition, however: The phrase, which Robinson takes for his title, also serves as the subtitle of two other recent biographies: Leonard Warrens 1998 life of paleontologist Joseph Leidy (1823-1891) and Paula Findlens 2004 book on Athanasius Kircher (1602-1680), another polymath. Young, of course, did more than write encyclopedia entries. He presented his first paper to the Royal Society of London at the age of 20 and was elected a Fellow a week after his 21st birthday. In the paper, Young explained the process of accommodation in the human eye on how the eye focuses properly on objects at varying distances. Young hypothesised that this was achieved by changes in the shape of the lens. Young also theorised that light traveled in waves and ho believed that, to account for the ability to see in color, there must be three receptors in the eye corresponding to the three principal colors to which the retina could respond: red, green, violet. All these hypotheses Were subsequently proved to be correct. 
Later in his life, when he was in his forties, Young was instrumental in cracking the code that unlocked the unknown script on the Rosetta Stone, a tablet that was found in Egypt by the Napoleonic army in 1799. The stone contains text in three alphabets: Greek, something unrecognisable and Egyptian hieroglyphs. The unrecognisable script is now known as demotic and, as Young deduced, is related directly to hieroglyphic. His initial work on this appeared in his Britannica entry on Egypt. In another entry, he coined the term Indo-European to describe the family of languages spoken throughout most of Europe and northern India. These are the landmark achievements of a man who was a child prodigy and who, unlike many remarkable children, did not disappear into oblivion as an adult. Bom in 1773 in Somerset in England, Young lived from an early age with his maternal grandfather, eventually leaving to attend boarding school. He had devoured books from the age of two, and through his own initiative he excelled at Latin, Greek, mathematics and natural philosophy. After leaving school, he was greatly encouraged by his mothers uncle, Richard Brocklesby, a physician and Fellow of the Royal Society. Following Brocklesbys lead, Young decided to pursue a career in medicine. He studied in London, following the medical circuit, and then moved on to more formal education in Edinburgh, Gottingen and Cambridge. After completing his medical training at the University of Cambridge in 1808, Young set up practice as a physician in London. He soon became a Fellow of the Royal College of Physicians and a few years later was appointed physician at St. Georges Hospital. Youngs skill as a physician, however, did not equal his skill as a scholar of natural philosophy or linguistics. Earlier, in 1801, he had been appointed to a professorship of natural philosophy at the Royal Institution, where he delivered as many as 60 lectures in a year. These were published in two volumes in 1807. In 1804 Young had become secretary to the Royal Society, a post he would hold until his death. His opinions were sought on civic and national matters, such as the introduction of gas lighting to London and methods of ship construction. From 1819 he was superintendent of the Nautical Almanac and secretary to the Board of Longitude. From 1824 to 1829 he was physician to and inspector of calculations for the Palladian Insurance Company. Between 1816 and 1825 he contributed his many and various entries to the Encyclopedia Britannica, and throughout his career he authored numerous books, essays and papers. Young is a perfect subject for a biography perfect, but daunting. Few men contributed so much to so many technical fields. Robinsons aim is to introduce non-scientists to Youngs work and life. He succeeds, providing clear expositions of the technical material (especially that on optics and Egyptian hieroglyphs). Some readers of this book will, like Robinson, find Youngs accomplishments impressive; others will see him as some historians have as a dilettante. Yet despite the rich material presented in this book, readers will not end up knowing Young personally. We catch glimpses of a playful Young, doodling Greek and Latin phrases in his notes on medical lectures and translating the verses that a young lady had written on the walls of a summerhouse into Greek elegiacs. Young was introduced into elite society, attended the theatre and learned to dance and play the flute. In addition, he was an accomplished horseman. 
However, his personal life looks pale next to his vibrant career and studies. Young married Eliza Maxwell in 1804, and according to Robinson, their marriage was a happy one and she appreciated his work, Almost all we know about her is that she sustained her husband through some rancorous disputes about optics and that she worried about money when his medical career was slow to take off. Very little evidence survives about the complexities of Youngs relationships with his mother and father. Robinson does not credit them, or anyone else, with shaping Youngs extraordinary mind. Despite the lack of details concerning Youngs relationships, however, anyone interested in what it means to be a genius should read this book.", "hypothesis": "Like others, Young wasnt so brilliant when growing up.", "label": "c"} +{"uid": "id_12", "premise": "Though no heavy rain has been received in the city and water is receding from most areas in Chennai and massive relief operations are underway, the city is staring at an outbreak of epidemics with tones of stinking garbage littering the streets as bright sunshine further eased the situation.", "hypothesis": "Garbage in the city is the major concern of epidemics.", "label": "e"} +{"uid": "id_13", "premise": "Though no heavy rain has been received in the city and water is receding from most areas in Chennai and massive relief operations are underway, the city is staring at an outbreak of epidemics with tones of stinking garbage littering the streets as bright sunshine further eased the situation.", "hypothesis": "Chennai needs proper planning to overcome heavy rains in the city.", "label": "c"} +{"uid": "id_14", "premise": "Though no heavy rain has been received in the city and water is receding from most areas in Chennai and massive relief operations are underway, the city is staring at an outbreak of epidemics with tones of stinking garbage littering the streets as bright sunshine further eased the situation.", "hypothesis": "Improper drainage system in Chennai is the major cause of flood in the city.", "label": "c"} +{"uid": "id_15", "premise": "Though no heavy rain has been received in the city and water is receding from most areas in Chennai and massive relief operations are underway, the city is staring at an outbreak of epidemics with tones of stinking garbage littering the streets as bright sunshine further eased the situation.", "hypothesis": "Massive rains are the major cause of epidemics in the cities.", "label": "c"} +{"uid": "id_16", "premise": "Three athletes each receive a first, second and third prize for a different sporting event. Either Anne or Josie got the second prize for Tennis. Anne got the same prize for throwing the javelin as Josie got for swimming. Tanya got the first prize for swimming, and her prize for the javelin was the same as Josies for tennis and Annes for swimming.", "hypothesis": "Anne got the first prize for tennis", "label": "e"} +{"uid": "id_17", "premise": "Three athletes each receive a first, second and third prize for a different sporting event. Either Anne or Josie got the second prize for Tennis. Anne got the same prize for throwing the javelin as Josie got for swimming. Tanya got the first prize for swimming, and her prize for the javelin was the same as Josies for tennis and Annes for swimming.", "hypothesis": "Josie was best with the javelin", "label": "e"} +{"uid": "id_18", "premise": "Three athletes each receive a first, second and third prize for a different sporting event. 
Either Anne or Josie got the second prize for Tennis. Anne got the same prize for throwing the javelin as Josie got for swimming. Tanya got the first prize for swimming, and her prize for the javelin was the same as Josies for tennis and Annes for swimming.", "hypothesis": "Anne got the second prize for swimming", "label": "e"} +{"uid": "id_19", "premise": "Three pencils cost the same as two erasers. Four erasers cost the same as one ruler.", "hypothesis": "Pencils are more expensive than rulers.", "label": "c"} +{"uid": "id_20", "premise": "Throughout Europe, the three most prevalent ways of punishing an individual for a drug possession offence are warning, fine and suspended prison sentence; with the exception of a few countries, community work orders are rarely used. Those convicted of supply offences are more likely to receive a prison sentence. Notwithstanding, certain offenders receive long sentences and these are often brought up in public debate. Considerable differences between countries exist regarding where to draw the line between users as individuals needing treatment, and traffickers needing criminal punishment as deterrence. Each countrys criminal justice system recognises some people as sick, and thus tries to divert them to treatment, while punishing others.", "hypothesis": "Differentiation between drug criminals and people requiring treatment is made on a country-by-country basis.", "label": "e"} +{"uid": "id_21", "premise": "Throughout Europe, the three most prevalent ways of punishing an individual for a drug possession offence are warning, fine and suspended prison sentence; with the exception of a few countries, community work orders are rarely used. Those convicted of supply offences are more likely to receive a prison sentence. Notwithstanding, certain offenders receive long sentences and these are often brought up in public debate. Considerable differences between countries exist regarding where to draw the line between users as individuals needing treatment, and traffickers needing criminal punishment as deterrence. Each countrys criminal justice system recognises some people as sick, and thus tries to divert them to treatment, while punishing others.", "hypothesis": "Most countries in Europe do not tend to punish individuals for drug offences; rather they recognise them as sick and send them for treatment.", "label": "n"} +{"uid": "id_22", "premise": "Throughout Europe, the three most prevalent ways of punishing an individual for a drug possession offence are warning, fine and suspended prison sentence; with the exception of a few countries, community work orders are rarely used. Those convicted of supply offences are more likely to receive a prison sentence. Notwithstanding, certain offenders receive long sentences and these are often brought up in public debate. Considerable differences between countries exist regarding where to draw the line between users as individuals needing treatment, and traffickers needing criminal punishment as deterrence. 
Each countrys criminal justice system recognises some people as sick, and thus tries to divert them to treatment, while punishing others.", "hypothesis": "A person caught possessing drugs could face a warning, a fine, or a suspended prison sentence depending on the country where the arrest takes place.", "label": "n"} +{"uid": "id_23", "premise": "Tickled pink In 1973, the Australian fruit breeder John Cripps created a new variety of apple tree by crossing a red Australian Lady Williams variety with a pale-green American Golden Delicious. The offspring first fruited in 1979 and combined the best features of its parents in an apple that had an attractive pink hue on a yellow undertone. The new, improved apple was named the Cripps Pink after its inventor. Today the Cripps Pink is one of the most popular varieties of apple and is grown extensively in Australia, New Zealand, Canada, France and in California and Washington in the USA. By switching from northern hemisphere fruit to southern hemisphere fruit the apple is available at its seasonal best all year round. The highest-quality apples are marketed worldwide under the trademark Pink LadyTM. To preserve the premium price and appeal of the Pink Lady, apples that fail to meet the highest standards are sold under the name Cripps PinkTM. These standards are based on colour and flavour, in particular, the extent of the pink coverage and the sugar/acid balance. Consumers who buy a Pink Lady apple are ensured a product that is of consistently high quality. To earn the name Pink Lady the skin of a Cripps Pink apple must be at least 40% pink. Strong sunlight increases the pink coloration and it may be necessary to remove the uppermost leaves of a tree to let the light through. The extra work required to cultivate Cripps Pink trees is offset by its advantages, which include: vigorous trees; fruit that has tolerance to sunburn; a thin skin that does not crack; flesh that is resistant to browning after being cut and exposed to air; a cold-storage life of up to six months and a retail shelf-life of about four weeks. However, the main advantage for apple growers is the premium price that the Pink Lady brand is able to command. The Cripps Red variety, also known as Cripps II, is related to the Pink Lady and was developed at the same time. The premium grade is marketed as the Sun downerTM. Unlike the genuinely pink Pink Lady, the SundownerTM is a classic bi- coloured apple, with a skin that is 45% red from Lady Williams and 55% green from Golden Delicious. Apples that fall outside of this colour ratio are rejected at the packing station and used for juice, whilst the smaller apples are retained for the home market. The Sundowner is harvested after Cripps Pink in late May or early June, and a few weeks before Lady Williams. It has better cold-storage properties than Cripps Pink and it retains an excellent shelf life. Cripps Red apples have a coarser texture than Cripps Pink, are less sweet and have a stronger flavour. Both apples are sweeter than Lady Williams but neither is as sweet as Golden Delicious. The advantage of the Pink LadyTM brand is that it is a trademark of a premium product, not just a Cripps Pink apple. This means that new and improved strains of the Cripps Pink can use the Pink Lady brand name as long as they meet the minimum quality requirement of being 40% pink. Three such strains are the Rosy Glow, The Ruby Pink and the Lady in Red. 
The Rosy Glow apple was discovered in an orchard of Cripps Pink trees that had been planted in South Australia in 1996. One limb of a Cripps Pink tree had red-coloured apples while the rest of the limbs bore mostly green fruit. A bud was taken from the mutated branch and grafted onto rootstock to produce the new variety. The fruit from the new Rosy Glow tree was the same colour over the entire tree and a patent for this unique apple was granted in 2003. The Rosy Glow apple benefits from a larger area of pink than the Pink Lady and it ripens earlier in the season in climates that have less hours of sunshine. As a consequence, the Cripps Pink is likely to be phased out in favour of the Rosy Glow, with the apples branded as Pink LadyTM if they have 40% or more pink coverage. Ruby Pink and Lady in Red are two mutations of the Cripps Pink that were dis covered in New Zealand. Like the Rosy Glow, these improved varieties develop a larger area of pink than the Cripps Pink, which allows more apples to meet the quality requirements of the Pink LadyTM brand. Planting of these trees may need to be controlled otherwise the supply of Pink Lady apples will exceed the demand, to then threaten the price premium. Overproduction apart, the future of what has become possibly the worlds best-known modern apple and fruit brand, looks secure.", "hypothesis": "Colour is an important factor in the selection of both of the premium grades of Cripps apples referred to.", "label": "e"} +{"uid": "id_24", "premise": "Tickled pink In 1973, the Australian fruit breeder John Cripps created a new variety of apple tree by crossing a red Australian Lady Williams variety with a pale-green American Golden Delicious. The offspring first fruited in 1979 and combined the best features of its parents in an apple that had an attractive pink hue on a yellow undertone. The new, improved apple was named the Cripps Pink after its inventor. Today the Cripps Pink is one of the most popular varieties of apple and is grown extensively in Australia, New Zealand, Canada, France and in California and Washington in the USA. By switching from northern hemisphere fruit to southern hemisphere fruit the apple is available at its seasonal best all year round. The highest-quality apples are marketed worldwide under the trademark Pink LadyTM. To preserve the premium price and appeal of the Pink Lady, apples that fail to meet the highest standards are sold under the name Cripps PinkTM. These standards are based on colour and flavour, in particular, the extent of the pink coverage and the sugar/acid balance. Consumers who buy a Pink Lady apple are ensured a product that is of consistently high quality. To earn the name Pink Lady the skin of a Cripps Pink apple must be at least 40% pink. Strong sunlight increases the pink coloration and it may be necessary to remove the uppermost leaves of a tree to let the light through. The extra work required to cultivate Cripps Pink trees is offset by its advantages, which include: vigorous trees; fruit that has tolerance to sunburn; a thin skin that does not crack; flesh that is resistant to browning after being cut and exposed to air; a cold-storage life of up to six months and a retail shelf-life of about four weeks. However, the main advantage for apple growers is the premium price that the Pink Lady brand is able to command. The Cripps Red variety, also known as Cripps II, is related to the Pink Lady and was developed at the same time. The premium grade is marketed as the Sun downerTM. 
Unlike the genuinely pink Pink Lady, the SundownerTM is a classic bi- coloured apple, with a skin that is 45% red from Lady Williams and 55% green from Golden Delicious. Apples that fall outside of this colour ratio are rejected at the packing station and used for juice, whilst the smaller apples are retained for the home market. The Sundowner is harvested after Cripps Pink in late May or early June, and a few weeks before Lady Williams. It has better cold-storage properties than Cripps Pink and it retains an excellent shelf life. Cripps Red apples have a coarser texture than Cripps Pink, are less sweet and have a stronger flavour. Both apples are sweeter than Lady Williams but neither is as sweet as Golden Delicious. The advantage of the Pink LadyTM brand is that it is a trademark of a premium product, not just a Cripps Pink apple. This means that new and improved strains of the Cripps Pink can use the Pink Lady brand name as long as they meet the minimum quality requirement of being 40% pink. Three such strains are the Rosy Glow, The Ruby Pink and the Lady in Red. The Rosy Glow apple was discovered in an orchard of Cripps Pink trees that had been planted in South Australia in 1996. One limb of a Cripps Pink tree had red-coloured apples while the rest of the limbs bore mostly green fruit. A bud was taken from the mutated branch and grafted onto rootstock to produce the new variety. The fruit from the new Rosy Glow tree was the same colour over the entire tree and a patent for this unique apple was granted in 2003. The Rosy Glow apple benefits from a larger area of pink than the Pink Lady and it ripens earlier in the season in climates that have less hours of sunshine. As a consequence, the Cripps Pink is likely to be phased out in favour of the Rosy Glow, with the apples branded as Pink LadyTM if they have 40% or more pink coverage. Ruby Pink and Lady in Red are two mutations of the Cripps Pink that were dis covered in New Zealand. Like the Rosy Glow, these improved varieties develop a larger area of pink than the Cripps Pink, which allows more apples to meet the quality requirements of the Pink LadyTM brand. Planting of these trees may need to be controlled otherwise the supply of Pink Lady apples will exceed the demand, to then threaten the price premium. Overproduction apart, the future of what has become possibly the worlds best-known modern apple and fruit brand, looks secure.", "hypothesis": "One advantage of Cripps Pink trees is that they grow well.", "label": "e"} +{"uid": "id_25", "premise": "Tickled pink In 1973, the Australian fruit breeder John Cripps created a new variety of apple tree by crossing a red Australian Lady Williams variety with a pale-green American Golden Delicious. The offspring first fruited in 1979 and combined the best features of its parents in an apple that had an attractive pink hue on a yellow undertone. The new, improved apple was named the Cripps Pink after its inventor. Today the Cripps Pink is one of the most popular varieties of apple and is grown extensively in Australia, New Zealand, Canada, France and in California and Washington in the USA. By switching from northern hemisphere fruit to southern hemisphere fruit the apple is available at its seasonal best all year round. The highest-quality apples are marketed worldwide under the trademark Pink LadyTM. To preserve the premium price and appeal of the Pink Lady, apples that fail to meet the highest standards are sold under the name Cripps PinkTM. 
These standards are based on colour and flavour, in particular, the extent of the pink coverage and the sugar/acid balance. Consumers who buy a Pink Lady apple are ensured a product that is of consistently high quality. To earn the name Pink Lady the skin of a Cripps Pink apple must be at least 40% pink. Strong sunlight increases the pink coloration and it may be necessary to remove the uppermost leaves of a tree to let the light through. The extra work required to cultivate Cripps Pink trees is offset by its advantages, which include: vigorous trees; fruit that has tolerance to sunburn; a thin skin that does not crack; flesh that is resistant to browning after being cut and exposed to air; a cold-storage life of up to six months and a retail shelf-life of about four weeks. However, the main advantage for apple growers is the premium price that the Pink Lady brand is able to command. The Cripps Red variety, also known as Cripps II, is related to the Pink Lady and was developed at the same time. The premium grade is marketed as the Sun downerTM. Unlike the genuinely pink Pink Lady, the SundownerTM is a classic bi- coloured apple, with a skin that is 45% red from Lady Williams and 55% green from Golden Delicious. Apples that fall outside of this colour ratio are rejected at the packing station and used for juice, whilst the smaller apples are retained for the home market. The Sundowner is harvested after Cripps Pink in late May or early June, and a few weeks before Lady Williams. It has better cold-storage properties than Cripps Pink and it retains an excellent shelf life. Cripps Red apples have a coarser texture than Cripps Pink, are less sweet and have a stronger flavour. Both apples are sweeter than Lady Williams but neither is as sweet as Golden Delicious. The advantage of the Pink LadyTM brand is that it is a trademark of a premium product, not just a Cripps Pink apple. This means that new and improved strains of the Cripps Pink can use the Pink Lady brand name as long as they meet the minimum quality requirement of being 40% pink. Three such strains are the Rosy Glow, The Ruby Pink and the Lady in Red. The Rosy Glow apple was discovered in an orchard of Cripps Pink trees that had been planted in South Australia in 1996. One limb of a Cripps Pink tree had red-coloured apples while the rest of the limbs bore mostly green fruit. A bud was taken from the mutated branch and grafted onto rootstock to produce the new variety. The fruit from the new Rosy Glow tree was the same colour over the entire tree and a patent for this unique apple was granted in 2003. The Rosy Glow apple benefits from a larger area of pink than the Pink Lady and it ripens earlier in the season in climates that have less hours of sunshine. As a consequence, the Cripps Pink is likely to be phased out in favour of the Rosy Glow, with the apples branded as Pink LadyTM if they have 40% or more pink coverage. Ruby Pink and Lady in Red are two mutations of the Cripps Pink that were dis covered in New Zealand. Like the Rosy Glow, these improved varieties develop a larger area of pink than the Cripps Pink, which allows more apples to meet the quality requirements of the Pink LadyTM brand. Planting of these trees may need to be controlled otherwise the supply of Pink Lady apples will exceed the demand, to then threaten the price premium. 
Overproduction apart, the future of what has become possibly the worlds best-known modern apple and fruit brand, looks secure.", "hypothesis": "Pink Lady apples are the highest grade of Cripps Pink apples.", "label": "e"} +{"uid": "id_26", "premise": "Tickled pink In 1973, the Australian fruit breeder John Cripps created a new variety of apple tree by crossing a red Australian Lady Williams variety with a pale-green American Golden Delicious. The offspring first fruited in 1979 and combined the best features of its parents in an apple that had an attractive pink hue on a yellow undertone. The new, improved apple was named the Cripps Pink after its inventor. Today the Cripps Pink is one of the most popular varieties of apple and is grown extensively in Australia, New Zealand, Canada, France and in California and Washington in the USA. By switching from northern hemisphere fruit to southern hemisphere fruit the apple is available at its seasonal best all year round. The highest-quality apples are marketed worldwide under the trademark Pink LadyTM. To preserve the premium price and appeal of the Pink Lady, apples that fail to meet the highest standards are sold under the name Cripps PinkTM. These standards are based on colour and flavour, in particular, the extent of the pink coverage and the sugar/acid balance. Consumers who buy a Pink Lady apple are ensured a product that is of consistently high quality. To earn the name Pink Lady the skin of a Cripps Pink apple must be at least 40% pink. Strong sunlight increases the pink coloration and it may be necessary to remove the uppermost leaves of a tree to let the light through. The extra work required to cultivate Cripps Pink trees is offset by its advantages, which include: vigorous trees; fruit that has tolerance to sunburn; a thin skin that does not crack; flesh that is resistant to browning after being cut and exposed to air; a cold-storage life of up to six months and a retail shelf-life of about four weeks. However, the main advantage for apple growers is the premium price that the Pink Lady brand is able to command. The Cripps Red variety, also known as Cripps II, is related to the Pink Lady and was developed at the same time. The premium grade is marketed as the Sun downerTM. Unlike the genuinely pink Pink Lady, the SundownerTM is a classic bi- coloured apple, with a skin that is 45% red from Lady Williams and 55% green from Golden Delicious. Apples that fall outside of this colour ratio are rejected at the packing station and used for juice, whilst the smaller apples are retained for the home market. The Sundowner is harvested after Cripps Pink in late May or early June, and a few weeks before Lady Williams. It has better cold-storage properties than Cripps Pink and it retains an excellent shelf life. Cripps Red apples have a coarser texture than Cripps Pink, are less sweet and have a stronger flavour. Both apples are sweeter than Lady Williams but neither is as sweet as Golden Delicious. The advantage of the Pink LadyTM brand is that it is a trademark of a premium product, not just a Cripps Pink apple. This means that new and improved strains of the Cripps Pink can use the Pink Lady brand name as long as they meet the minimum quality requirement of being 40% pink. Three such strains are the Rosy Glow, The Ruby Pink and the Lady in Red. The Rosy Glow apple was discovered in an orchard of Cripps Pink trees that had been planted in South Australia in 1996. 
One limb of a Cripps Pink tree had red-coloured apples while the rest of the limbs bore mostly green fruit. A bud was taken from the mutated branch and grafted onto rootstock to produce the new variety. The fruit from the new Rosy Glow tree was the same colour over the entire tree and a patent for this unique apple was granted in 2003. The Rosy Glow apple benefits from a larger area of pink than the Pink Lady and it ripens earlier in the season in climates that have less hours of sunshine. As a consequence, the Cripps Pink is likely to be phased out in favour of the Rosy Glow, with the apples branded as Pink LadyTM if they have 40% or more pink coverage. Ruby Pink and Lady in Red are two mutations of the Cripps Pink that were dis covered in New Zealand. Like the Rosy Glow, these improved varieties develop a larger area of pink than the Cripps Pink, which allows more apples to meet the quality requirements of the Pink LadyTM brand. Planting of these trees may need to be controlled otherwise the supply of Pink Lady apples will exceed the demand, to then threaten the price premium. Overproduction apart, the future of what has become possibly the worlds best-known modern apple and fruit brand, looks secure.", "hypothesis": "Cripps Pink trees produce an abundance of fruit.", "label": "n"} +{"uid": "id_27", "premise": "Tickled pink In 1973, the Australian fruit breeder John Cripps created a new variety of apple tree by crossing a red Australian Lady Williams variety with a pale-green American Golden Delicious. The offspring first fruited in 1979 and combined the best features of its parents in an apple that had an attractive pink hue on a yellow undertone. The new, improved apple was named the Cripps Pink after its inventor. Today the Cripps Pink is one of the most popular varieties of apple and is grown extensively in Australia, New Zealand, Canada, France and in California and Washington in the USA. By switching from northern hemisphere fruit to southern hemisphere fruit the apple is available at its seasonal best all year round. The highest-quality apples are marketed worldwide under the trademark Pink LadyTM. To preserve the premium price and appeal of the Pink Lady, apples that fail to meet the highest standards are sold under the name Cripps PinkTM. These standards are based on colour and flavour, in particular, the extent of the pink coverage and the sugar/acid balance. Consumers who buy a Pink Lady apple are ensured a product that is of consistently high quality. To earn the name Pink Lady the skin of a Cripps Pink apple must be at least 40% pink. Strong sunlight increases the pink coloration and it may be necessary to remove the uppermost leaves of a tree to let the light through. The extra work required to cultivate Cripps Pink trees is offset by its advantages, which include: vigorous trees; fruit that has tolerance to sunburn; a thin skin that does not crack; flesh that is resistant to browning after being cut and exposed to air; a cold-storage life of up to six months and a retail shelf-life of about four weeks. However, the main advantage for apple growers is the premium price that the Pink Lady brand is able to command. The Cripps Red variety, also known as Cripps II, is related to the Pink Lady and was developed at the same time. The premium grade is marketed as the Sun downerTM. Unlike the genuinely pink Pink Lady, the SundownerTM is a classic bi- coloured apple, with a skin that is 45% red from Lady Williams and 55% green from Golden Delicious. 
Apples that fall outside of this colour ratio are rejected at the packing station and used for juice, whilst the smaller apples are retained for the home market. The Sundowner is harvested after Cripps Pink in late May or early June, and a few weeks before Lady Williams. It has better cold-storage properties than Cripps Pink and it retains an excellent shelf life. Cripps Red apples have a coarser texture than Cripps Pink, are less sweet and have a stronger flavour. Both apples are sweeter than Lady Williams but neither is as sweet as Golden Delicious. The advantage of the Pink LadyTM brand is that it is a trademark of a premium product, not just a Cripps Pink apple. This means that new and improved strains of the Cripps Pink can use the Pink Lady brand name as long as they meet the minimum quality requirement of being 40% pink. Three such strains are the Rosy Glow, The Ruby Pink and the Lady in Red. The Rosy Glow apple was discovered in an orchard of Cripps Pink trees that had been planted in South Australia in 1996. One limb of a Cripps Pink tree had red-coloured apples while the rest of the limbs bore mostly green fruit. A bud was taken from the mutated branch and grafted onto rootstock to produce the new variety. The fruit from the new Rosy Glow tree was the same colour over the entire tree and a patent for this unique apple was granted in 2003. The Rosy Glow apple benefits from a larger area of pink than the Pink Lady and it ripens earlier in the season in climates that have less hours of sunshine. As a consequence, the Cripps Pink is likely to be phased out in favour of the Rosy Glow, with the apples branded as Pink LadyTM if they have 40% or more pink coverage. Ruby Pink and Lady in Red are two mutations of the Cripps Pink that were dis covered in New Zealand. Like the Rosy Glow, these improved varieties develop a larger area of pink than the Cripps Pink, which allows more apples to meet the quality requirements of the Pink LadyTM brand. Planting of these trees may need to be controlled otherwise the supply of Pink Lady apples will exceed the demand, to then threaten the price premium. Overproduction apart, the future of what has become possibly the worlds best-known modern apple and fruit brand, looks secure.", "hypothesis": "Pink Lady apples are less expensive to buy than Cripps Pink apples.", "label": "c"} +{"uid": "id_28", "premise": "Tickled pink In 1973, the Australian fruit breeder John Cripps created a new variety of apple tree by crossing a red Australian Lady Williams variety with a pale-green American Golden Delicious. The offspring first fruited in 1979 and combined the best features of its parents in an apple that had an attractive pink hue on a yellow undertone. The new, improved apple was named the Cripps Pink after its inventor. Today the Cripps Pink is one of the most popular varieties of apple and is grown extensively in Australia, New Zealand, Canada, France and in California and Washington in the USA. By switching from northern hemisphere fruit to southern hemisphere fruit the apple is available at its seasonal best all year round. The highest-quality apples are marketed worldwide under the trademark Pink LadyTM. To preserve the premium price and appeal of the Pink Lady, apples that fail to meet the highest standards are sold under the name Cripps PinkTM. These standards are based on colour and flavour, in particular, the extent of the pink coverage and the sugar/acid balance. 
Consumers who buy a Pink Lady apple are ensured a product that is of consistently high quality. To earn the name Pink Lady the skin of a Cripps Pink apple must be at least 40% pink. Strong sunlight increases the pink coloration and it may be necessary to remove the uppermost leaves of a tree to let the light through. The extra work required to cultivate Cripps Pink trees is offset by its advantages, which include: vigorous trees; fruit that has tolerance to sunburn; a thin skin that does not crack; flesh that is resistant to browning after being cut and exposed to air; a cold-storage life of up to six months and a retail shelf-life of about four weeks. However, the main advantage for apple growers is the premium price that the Pink Lady brand is able to command. The Cripps Red variety, also known as Cripps II, is related to the Pink Lady and was developed at the same time. The premium grade is marketed as the Sun downerTM. Unlike the genuinely pink Pink Lady, the SundownerTM is a classic bi- coloured apple, with a skin that is 45% red from Lady Williams and 55% green from Golden Delicious. Apples that fall outside of this colour ratio are rejected at the packing station and used for juice, whilst the smaller apples are retained for the home market. The Sundowner is harvested after Cripps Pink in late May or early June, and a few weeks before Lady Williams. It has better cold-storage properties than Cripps Pink and it retains an excellent shelf life. Cripps Red apples have a coarser texture than Cripps Pink, are less sweet and have a stronger flavour. Both apples are sweeter than Lady Williams but neither is as sweet as Golden Delicious. The advantage of the Pink LadyTM brand is that it is a trademark of a premium product, not just a Cripps Pink apple. This means that new and improved strains of the Cripps Pink can use the Pink Lady brand name as long as they meet the minimum quality requirement of being 40% pink. Three such strains are the Rosy Glow, The Ruby Pink and the Lady in Red. The Rosy Glow apple was discovered in an orchard of Cripps Pink trees that had been planted in South Australia in 1996. One limb of a Cripps Pink tree had red-coloured apples while the rest of the limbs bore mostly green fruit. A bud was taken from the mutated branch and grafted onto rootstock to produce the new variety. The fruit from the new Rosy Glow tree was the same colour over the entire tree and a patent for this unique apple was granted in 2003. The Rosy Glow apple benefits from a larger area of pink than the Pink Lady and it ripens earlier in the season in climates that have less hours of sunshine. As a consequence, the Cripps Pink is likely to be phased out in favour of the Rosy Glow, with the apples branded as Pink LadyTM if they have 40% or more pink coverage. Ruby Pink and Lady in Red are two mutations of the Cripps Pink that were dis covered in New Zealand. Like the Rosy Glow, these improved varieties develop a larger area of pink than the Cripps Pink, which allows more apples to meet the quality requirements of the Pink LadyTM brand. Planting of these trees may need to be controlled otherwise the supply of Pink Lady apples will exceed the demand, to then threaten the price premium. 
Overproduction apart, the future of what has become possibly the worlds best-known modern apple and fruit brand, looks secure.", "hypothesis": "Lady Williams apples are sweeter than Golden Delicious.", "label": "c"} +{"uid": "id_29", "premise": "Time and temperature Food poisoning is still prevalent in the UK, with more than 90,000 reported cases in 2007, though unreported cases could be as much as 10 times higher, because most people with mild symptoms fail to report the incident. Millions of bacteria are needed to produce food poisoning. Under favourable conditions, rapid multiplication takes place by binary fission every 10 to 20 minutes. Pathogenic bacteria can grow at temperatures as low as 5 C and as high as 63 C; food kept in this danger zone should never be reheated. Fridges and cold stores at 1 to 4 C stop the multiplication of pathogenic bacteria but not of food spoilage bacteria. The latter can continue to grow at temperatures as low as minus 18 C, below which they remain dormant. Bacteria are not destroyed by freezing and can multiply again after the food thaws out. Campylobacter is responsible for most of the food poisoning in the UK, with about four times as many cases as occur with Salmonella. Campylobacter is also referred to as a food-borne disease because it remains dormant at room temperature but multiplies rapidly at body temperature (37 C); it is destroyed at temperatures above 48 C. Most cases of Salmonella food poisoning are caused by storing prepared food at room temperature. Salmonella is quickly destroyed at temperatures above 74 C. Other food-borne pathogens include Listeria, E. coli and Clostridium perfringens, which is spore forming and can survive cooking. Both Campylobacter and Salmonella are associated with raw meat, poultry, eggs and unpasteurised milk. Examples of cross-contamination include kitchen staff failing to wash their hands when taking eggs out of the fridge, a drop of juice from a fresh chicken at the top of the fridge contaminating cooked foods below, and using the same chopping board to prepare meat and vegetables. Spread is not normally from person to person.", "hypothesis": "A single cell of Campylobacter can multiply to more than 1,000 bacteria in less than two hours on food at room temperature.", "label": "c"} +{"uid": "id_30", "premise": "Time and temperature Food poisoning is still prevalent in the UK, with more than 90,000 reported cases in 2007, though unreported cases could be as much as 10 times higher, because most people with mild symptoms fail to report the incident. Millions of bacteria are needed to produce food poisoning. Under favourable conditions, rapid multiplication takes place by binary fission every 10 to 20 minutes. Pathogenic bacteria can grow at temperatures as low as 5 C and as high as 63 C; food kept in this danger zone should never be reheated. Fridges and cold stores at 1 to 4 C stop the multiplication of pathogenic bacteria but not of food spoilage bacteria. The latter can continue to grow at temperatures as low as minus 18 C, below which they remain dormant. Bacteria are not destroyed by freezing and can multiply again after the food thaws out. Campylobacter is responsible for most of the food poisoning in the UK, with about four times as many cases as occur with Salmonella. Campylobacter is also referred to as a food-borne disease because it remains dormant at room temperature but multiplies rapidly at body temperature (37 C); it is destroyed at temperatures above 48 C. 
Most cases of Salmonella food poisoning are caused by storing prepared food at room temperature. Salmonella is quickly destroyed at temperatures above 74 C. Other food-borne pathogens include Listeria, E. coli and Clostridium perfringens, which is spore forming and can survive cooking. Both Campylobacter and Salmonella are associated with raw meat, poultry, eggs and unpasteurised milk. Examples of cross-contamination include kitchen staff failing to wash their hands when taking eggs out of the fridge, a drop of juice from a fresh chicken at the top of the fridge contaminating cooked foods below, and using the same chopping board to prepare meat and vegetables. Spread is not normally from person to person.", "hypothesis": "The ingestion of a small number of Campylobacter cells could make you ill.", "label": "e"} +{"uid": "id_31", "premise": "Time and temperature Food poisoning is still prevalent in the UK, with more than 90,000 reported cases in 2007, though unreported cases could be as much as 10 times higher, because most people with mild symptoms fail to report the incident. Millions of bacteria are needed to produce food poisoning. Under favourable conditions, rapid multiplication takes place by binary fission every 10 to 20 minutes. Pathogenic bacteria can grow at temperatures as low as 5 C and as high as 63 C; food kept in this danger zone should never be reheated. Fridges and cold stores at 1 to 4 C stop the multiplication of pathogenic bacteria but not of food spoilage bacteria. The latter can continue to grow at temperatures as low as minus 18 C, below which they remain dormant. Bacteria are not destroyed by freezing and can multiply again after the food thaws out. Campylobacter is responsible for most of the food poisoning in the UK, with about four times as many cases as occur with Salmonella. Campylobacter is also referred to as a food-borne disease because it remains dormant at room temperature but multiplies rapidly at body temperature (37 C); it is destroyed at temperatures above 48 C. Most cases of Salmonella food poisoning are caused by storing prepared food at room temperature. Salmonella is quickly destroyed at temperatures above 74 C. Other food-borne pathogens include Listeria, E. coli and Clostridium perfringens, which is spore forming and can survive cooking. Both Campylobacter and Salmonella are associated with raw meat, poultry, eggs and unpasteurised milk. Examples of cross-contamination include kitchen staff failing to wash their hands when taking eggs out of the fridge, a drop of juice from a fresh chicken at the top of the fridge contaminating cooked foods below, and using the same chopping board to prepare meat and vegetables. Spread is not normally from person to person.", "hypothesis": "Heating food to 75 C will destroy most bacteria responsible for food poisoning in the UK.", "label": "e"} +{"uid": "id_32", "premise": "Time and temperature Food poisoning is still prevalent in the UK, with more than 90,000 reported cases in 2007, though unreported cases could be as much as 10 times higher, because most people with mild symptoms fail to report the incident. Millions of bacteria are needed to produce food poisoning. Under favourable conditions, rapid multiplication takes place by binary fission every 10 to 20 minutes. Pathogenic bacteria can grow at temperatures as low as 5 C and as high as 63 C; food kept in this danger zone should never be reheated. 
Fridges and cold stores at 1 to 4 C stop the multiplication of pathogenic bacteria but not of food spoilage bacteria. The latter can continue to grow at temperatures as low as minus 18 C, below which they remain dormant. Bacteria are not destroyed by freezing and can multiply again after the food thaws out. Campylobacter is responsible for most of the food poisoning in the UK, with about four times as many cases as occur with Salmonella. Campylobacter is also referred to as a food-borne disease because it remains dormant at room temperature but multiplies rapidly at body temperature (37 C); it is destroyed at temperatures above 48 C. Most cases of Salmonella food poisoning are caused by storing prepared food at room temperature. Salmonella is quickly destroyed at temperatures above 74 C. Other food-borne pathogens include Listeria, E. coli and Clostridium perfringens, which is spore forming and can survive cooking. Both Campylobacter and Salmonella are associated with raw meat, poultry, eggs and unpasteurised milk. Examples of cross-contamination include kitchen staff failing to wash their hands when taking eggs out of the fridge, a drop of juice from a fresh chicken at the top of the fridge contaminating cooked foods below, and using the same chopping board to prepare meat and vegetables. Spread is not normally from person to person.", "hypothesis": "Pathogenic and food spoilage bacteria remain dormant below minus 18 C.", "label": "e"} +{"uid": "id_33", "premise": "Time to cool it REFRIGERATORS are the epitome of clunky technology: solid, reliable and just a little bit dull. They have not changed much over the past century, but then they have not needed to. They are based on a robust and effective idea--draw heat from the thing you want to cool by evaporating a liquid next to it, and then dump that heat by pumping the vapour elsewhere and condensing it. This method of pumping heat from one place to another served mankind well when refrigerators' main jobs were preserving food and, as air conditioners, cooling buildings. Today's high-tech world, however, demands high-tech refrigeration. Heat pumps are no longer up to the job. The search is on for something to replace them. One set of candidates are known as paraelectric materials. These act like batteries when they undergo a temperature change: attach electrodes to them and they generate a current. This effect is used in infra-red cameras. An array of tiny pieces of paraelectric material can sense the heat radiated by, for example, a person, and the pattern of the array's electrical outputs can then be used to construct an image. But until recently no one had bothered much with the inverse of this process. That inverse exists, however. Apply an appropriate current to a paraelectric material and it will cool down. Someone who is looking at this inverse effect is Alex Mischenko, of Cambridge University. Using commercially available paraelectric film, he and his colleagues have generated temperature drops five times bigger than any previously recorded. That may be enough to change the phenomenon from a laboratory curiosity to something with commercial applications. As to what those applications might be, Dr Mischenko is still a little hazy. He has, nevertheless, set up a company to pursue them. He foresees putting his discovery to use in more efficient domestic fridges and air conditioners. The real money, though, may be in cooling computers. Gadgets containing microprocessors have been getting hotter for a long time. 
One consequence of Moore's Law, which describes the doubling of the number of transistors on a chip every 18 months, is that the amount of heat produced doubles as well. In fact, it more than doubles, because besides increasing in number, the components are getting faster. Heat is released every time a logical operation is performed inside a microprocessor, so the faster the processor is, the more heat it generates. Doubling the frequency quadruples the heat output. And the frequency has doubled a lot. The first Pentium chips sold by Dr Moore's company, Intel, in 1993, ran at 60m cycles a second. The Pentium 4--the last \"single-core\" desktop processor--clocked up 3.2 billion cycles a second. Disposing of this heat is a big obstruction to further miniaturisation and higher speeds. The innards of a desktop computer commonly hit 80C. At 85C, they stop working. Tweaking the processor's heat sinks (copper or aluminium boxes designed to radiate heat away) has reached its limit. So has tweaking the fans that circulate air over those heat sinks. And the idea of shifting from single-core processors to systems that divided processing power between first two, and then four, subunits, in order to spread the thermal load, also seems to have the end of the road in sight. One way out of this may be a second curious physical phenomenon, the thermoelectric effect. Like paraelectric materials, this generates electricity from a heat source and produces cooling from an electrical source. Unlike paraelectrics, a significant body of researchers is already working on it. The trick to a good thermoelectric material is a crystal structure in which electrons can flow freely, but the path of phonons--heat-carrying vibrations that are larger than electrons--is constantly interrupted. In practice, this trick is hard to pull off, and thermoelectric materials are thus less efficient than paraelectric ones (or, at least, than those examined by Dr Mischenko). Nevertheless, Rama Venkatasubramanian, of Nextreme Thermal Solutions in North Carolina, claims to have made thermoelectric refrigerators that can sit on the back of computer chips and cool hotspots by 10C. Ali Shakouri, of the University of California, Santa Cruz, says his are even smaller--so small that they can go inside the chip. The last word in computer cooling, though, may go to a system even less techy than a heat pump--a miniature version of a car radiator. Last year Apple launched a personal computer that is cooled by liquid that is pumped through little channels in the processor, and thence to a radiator, where it gives up its heat to the atmosphere. To improve on this, IBM's research laboratory in Zurich is experimenting with tiny jets that stir the liquid up and thus make sure all of it eventually touches the outside of the channel--the part where the heat exchange takes place. In the future, therefore, a combination of microchannels and either thermoelectrics or paraelectrics might cool computers. The old, as it were, hand in hand with the new.", "hypothesis": "IBM will achieve better computer cooling by combining microchannels with paraelectrics.", "label": "n"} +{"uid": "id_34", "premise": "Time to cool it REFRIGERATORS are the epitome of clunky technology: solid, reliable and just a little bit dull. They have not changed much over the past century, but then they have not needed to. 
They are based on a robust and effective idea--draw heat from the thing you want to cool by evaporating a liquid next to it, and then dump that heat by pumping the vapour elsewhere and condensing it. This method of pumping heat from one place to another served mankind well when refrigerators' main jobs were preserving food and, as air conditioners, cooling buildings. Today's high-tech world, however, demands high-tech refrigeration. Heat pumps are no longer up to the job. The search is on for something to replace them. One set of candidates are known as paraelectric materials. These act like batteries when they undergo a temperature change: attach electrodes to them and they generate a current. This effect is used in infra-red cameras. An array of tiny pieces of paraelectric material can sense the heat radiated by, for example, a person, and the pattern of the array's electrical outputs can then be used to construct an image. But until recently no one had bothered much with the inverse of this process. That inverse exists, however. Apply an appropriate current to a paraelectric material and it will cool down. Someone who is looking at this inverse effect is Alex Mischenko, of Cambridge University. Using commercially available paraelectric film, he and his colleagues have generated temperature drops five times bigger than any previously recorded. That may be enough to change the phenomenon from a laboratory curiosity to something with commercial applications. As to what those applications might be, Dr Mischenko is still a little hazy. He has, nevertheless, set up a company to pursue them. He foresees putting his discovery to use in more efficient domestic fridges and air conditioners. The real money, though, may be in cooling computers. Gadgets containing microprocessors have been getting hotter for a long time. One consequence of Moore's Law, which describes the doubling of the number of transistors on a chip every 18 months, is that the amount of heat produced doubles as well. In fact, it more than doubles, because besides increasing in number, the components are getting faster. Heat is released every time a logical operation is performed inside a microprocessor, so the faster the processor is, the more heat it generates. Doubling the frequency quadruples the heat output. And the frequency has doubled a lot. The first Pentium chips sold by Dr Moore's company, Intel, in 1993, ran at 60m cycles a second. The Pentium 4--the last \"single-core\" desktop processor--clocked up 3.2 billion cycles a second. Disposing of this heat is a big obstruction to further miniaturisation and higher speeds. The innards of a desktop computer commonly hit 80C. At 85C, they stop working. Tweaking the processor's heat sinks (copper or aluminium boxes designed to radiate heat away) has reached its limit. So has tweaking the fans that circulate air over those heat sinks. And the idea of shifting from single-core processors to systems that divided processing power between first two, and then four, subunits, in order to spread the thermal load, also seems to have the end of the road in sight. One way out of this may be a second curious physical phenomenon, the thermoelectric effect. Like paraelectric materials, this generates electricity from a heat source and produces cooling from an electrical source. Unlike paraelectrics, a significant body of researchers is already working on it. 
The trick to a good thermoelectric material is a crystal structure in which electrons can flow freely, but the path of phonons--heat-carrying vibrations that are larger than electrons--is constantly interrupted. In practice, this trick is hard to pull off, and thermoelectric materials are thus less efficient than paraelectric ones (or, at least, than those examined by Dr Mischenko). Nevertheless, Rama Venkatasubramanian, of Nextreme Thermal Solutions in North Carolina, claims to have made thermoelectric refrigerators that can sit on the back of computer chips and cool hotspots by 10C. Ali Shakouri, of the University of California, Santa Cruz, says his are even smaller--so small that they can go inside the chip. The last word in computer cooling, though, may go to a system even less techy than a heat pump--a miniature version of a car radiator. Last year Apple launched a personal computer that is cooled by liquid that is pumped through little channels in the processor, and thence to a radiator, where it gives up its heat to the atmosphere. To improve on this, IBM's research laboratory in Zurich is experimenting with tiny jets that stir the liquid up and thus make sure all of it eventually touches the outside of the channel--the part where the heat exchange takes place. In the future, therefore, a combination of microchannels and either thermoelectrics or paraelectrics might cool computers. The old, as it were, hand in hand with the new.", "hypothesis": "Dr. Mischenko has successfully applied his laboratory discovery to manufacturing more efficient referigerators.", "label": "c"} +{"uid": "id_35", "premise": "Time to cool it REFRIGERATORS are the epitome of clunky technology: solid, reliable and just a little bit dull. They have not changed much over the past century, but then they have not needed to. They are based on a robust and effective idea--draw heat from the thing you want to cool by evaporating a liquid next to it, and then dump that heat by pumping the vapour elsewhere and condensing it. This method of pumping heat from one place to another served mankind well when refrigerators' main jobs were preserving food and, as air conditioners, cooling buildings. Today's high-tech world, however, demands high-tech refrigeration. Heat pumps are no longer up to the job. The search is on for something to replace them. One set of candidates are known as paraelectric materials. These act like batteries when they undergo a temperature change: attach electrodes to them and they generate a current. This effect is used in infra-red cameras. An array of tiny pieces of paraelectric material can sense the heat radiated by, for example, a person, and the pattern of the array's electrical outputs can then be used to construct an image. But until recently no one had bothered much with the inverse of this process. That inverse exists, however. Apply an appropriate current to a paraelectric material and it will cool down. Someone who is looking at this inverse effect is Alex Mischenko, of Cambridge University. Using commercially available paraelectric film, he and his colleagues have generated temperature drops five times bigger than any previously recorded. That may be enough to change the phenomenon from a laboratory curiosity to something with commercial applications. As to what those applications might be, Dr Mischenko is still a little hazy. He has, nevertheless, set up a company to pursue them. He foresees putting his discovery to use in more efficient domestic fridges and air conditioners. 
The real money, though, may be in cooling computers. Gadgets containing microprocessors have been getting hotter for a long time. One consequence of Moore's Law, which describes the doubling of the number of transistors on a chip every 18 months, is that the amount of heat produced doubles as well. In fact, it more than doubles, because besides increasing in number, the components are getting faster. Heat is released every time a logical operation is performed inside a microprocessor, so the faster the processor is, the more heat it generates. Doubling the frequency quadruples the heat output. And the frequency has doubled a lot. The first Pentium chips sold by Dr Moore's company, Intel, in 1993, ran at 60m cycles a second. The Pentium 4--the last \"single-core\" desktop processor--clocked up 3.2 billion cycles a second. Disposing of this heat is a big obstruction to further miniaturisation and higher speeds. The innards of a desktop computer commonly hit 80C. At 85C, they stop working. Tweaking the processor's heat sinks (copper or aluminium boxes designed to radiate heat away) has reached its limit. So has tweaking the fans that circulate air over those heat sinks. And the idea of shifting from single-core processors to systems that divided processing power between first two, and then four, subunits, in order to spread the thermal load, also seems to have the end of the road in sight. One way out of this may be a second curious physical phenomenon, the thermoelectric effect. Like paraelectric materials, this generates electricity from a heat source and produces cooling from an electrical source. Unlike paraelectrics, a significant body of researchers is already working on it. The trick to a good thermoelectric material is a crystal structure in which electrons can flow freely, but the path of phonons--heat-carrying vibrations that are larger than electrons--is constantly interrupted. In practice, this trick is hard to pull off, and thermoelectric materials are thus less efficient than paraelectric ones (or, at least, than those examined by Dr Mischenko). Nevertheless, Rama Venkatasubramanian, of Nextreme Thermal Solutions in North Carolina, claims to have made thermoelectric refrigerators that can sit on the back of computer chips and cool hotspots by 10C. Ali Shakouri, of the University of California, Santa Cruz, says his are even smaller--so small that they can go inside the chip. The last word in computer cooling, though, may go to a system even less techy than a heat pump--a miniature version of a car radiator. Last year Apple launched a personal computer that is cooled by liquid that is pumped through little channels in the processor, and thence to a radiator, where it gives up its heat to the atmosphere. To improve on this, IBM's research laboratory in Zurich is experimenting with tiny jets that stir the liquid up and thus make sure all of it eventually touches the outside of the channel--the part where the heat exchange takes place. In the future, therefore, a combination of microchannels and either thermoelectrics or paraelectrics might cool computers. The old, as it were, hand in hand with the new.", "hypothesis": "Doubling the frequency of logical operations inside a microprocessor doubles the heat output.", "label": "c"} +{"uid": "id_36", "premise": "Time to cool it REFRIGERATORS are the epitome of clunky technology: solid, reliable and just a little bit dull. They have not changed much over the past century, but then they have not needed to. 
They are based on a robust and effective idea--draw heat from the thing you want to cool by evaporating a liquid next to it, and then dump that heat by pumping the vapour elsewhere and condensing it. This method of pumping heat from one place to another served mankind well when refrigerators' main jobs were preserving food and, as air conditioners, cooling buildings. Today's high-tech world, however, demands high-tech refrigeration. Heat pumps are no longer up to the job. The search is on for something to replace them. One set of candidates are known as paraelectric materials. These act like batteries when they undergo a temperature change: attach electrodes to them and they generate a current. This effect is used in infra-red cameras. An array of tiny pieces of paraelectric material can sense the heat radiated by, for example, a person, and the pattern of the array's electrical outputs can then be used to construct an image. But until recently no one had bothered much with the inverse of this process. That inverse exists, however. Apply an appropriate current to a paraelectric material and it will cool down. Someone who is looking at this inverse effect is Alex Mischenko, of Cambridge University. Using commercially available paraelectric film, he and his colleagues have generated temperature drops five times bigger than any previously recorded. That may be enough to change the phenomenon from a laboratory curiosity to something with commercial applications. As to what those applications might be, Dr Mischenko is still a little hazy. He has, nevertheless, set up a company to pursue them. He foresees putting his discovery to use in more efficient domestic fridges and air conditioners. The real money, though, may be in cooling computers. Gadgets containing microprocessors have been getting hotter for a long time. One consequence of Moore's Law, which describes the doubling of the number of transistors on a chip every 18 months, is that the amount of heat produced doubles as well. In fact, it more than doubles, because besides increasing in number, the components are getting faster. Heat is released every time a logical operation is performed inside a microprocessor, so the faster the processor is, the more heat it generates. Doubling the frequency quadruples the heat output. And the frequency has doubled a lot. The first Pentium chips sold by Dr Moore's company, Intel, in 1993, ran at 60m cycles a second. The Pentium 4--the last \"single-core\" desktop processor--clocked up 3.2 billion cycles a second. Disposing of this heat is a big obstruction to further miniaturisation and higher speeds. The innards of a desktop computer commonly hit 80C. At 85C, they stop working. Tweaking the processor's heat sinks (copper or aluminium boxes designed to radiate heat away) has reached its limit. So has tweaking the fans that circulate air over those heat sinks. And the idea of shifting from single-core processors to systems that divided processing power between first two, and then four, subunits, in order to spread the thermal load, also seems to have the end of the road in sight. One way out of this may be a second curious physical phenomenon, the thermoelectric effect. Like paraelectric materials, this generates electricity from a heat source and produces cooling from an electrical source. Unlike paraelectrics, a significant body of researchers is already working on it. 
The trick to a good thermoelectric material is a crystal structure in which electrons can flow freely, but the path of phonons--heat-carrying vibrations that are larger than electrons--is constantly interrupted. In practice, this trick is hard to pull off, and thermoelectric materials are thus less efficient than paraelectric ones (or, at least, than those examined by Dr Mischenko). Nevertheless, Rama Venkatasubramanian, of Nextreme Thermal Solutions in North Carolina, claims to have made thermoelectric refrigerators that can sit on the back of computer chips and cool hotspots by 10C. Ali Shakouri, of the University of California, Santa Cruz, says his are even smaller--so small that they can go inside the chip. The last word in computer cooling, though, may go to a system even less techy than a heat pump--a miniature version of a car radiator. Last year Apple launched a personal computer that is cooled by liquid that is pumped through little channels in the processor, and thence to a radiator, where it gives up its heat to the atmosphere. To improve on this, IBM's research laboratory in Zurich is experimenting with tiny jets that stir the liquid up and thus make sure all of it eventually touches the outside of the channel--the part where the heat exchange takes place. In the future, therefore, a combination of microchannels and either thermoelectrics or paraelectrics might cool computers. The old, as it were, hand in hand with the new.", "hypothesis": "Paraelectric materials can generate a current when electrodes are attached to them.", "label": "e"} +{"uid": "id_37", "premise": "Timekeeper 2 Invention of Marine Chronometer It was, as Dava Sobel has described a phenomenon: the greatest scientific problem of the age. The reality was that in the 18th century no one had ever made a clock that could suffer the great rolling and pitching of a ship and the large changes in temperature whilst still keeping time accurately enough to be of any use. Indeed, most of the scientific community thought such clock impossibility. Knowing one's position on the earth requires two very simple but essential coordinates; rather like using a street map where one thinks in terms of how far one is up/down and how far side to side. The longitude is a measure of how far around the world one has come from home and has no naturally occurring base line like the equator. The crew of a given ship was naturally only concerned with how far round they were from their own particular home base. Even when in the middle of the ocean, with no land in sight, knowing this longitude position is very simple in theory. The key to knowing how far around the world you are from home is to know, at that very moment, what time it is back home. A comparison with your local time (easily found by checking the position of the Sim) will then tell you the time difference between you and home, and thus how far round the Earth you are from home. Up until the middle of the 18th century, navigators hadbeen unable to determine their position at sea with accuracy and they faced the huge attendant risks of shipwreck or running out of supplies before reaching then destination. The angular position of Moon and other bright stars was recorded in three-hour intervals of Greenwich Time. In order to determine longitude, sailors had to measure the angle between Moon centre and a given star - lunar distance - together with height of both planets using the naval sextant. The sailors also had to calculate the Moons position if seen form the centre of Earth. 
Time corresponding to Greenwich Time was determined using the nautical almanac. Then the difference between the obtained time and local time served for calculation in longitude from Greenwich. The great flaw in this simple theory was - how does the sailor know time back home when he is in the middle of an ocean? The obvious and again simple answer is that he takes an accurate clock with him, which he sets to home time before leaving. All he has to do is keep it wound up and running, and he must never reset the hands throughout the voyage This clock then provides home time, so if, for example, it is midday on board your ship and your home time clock says that at that same moment it is midnight at home, you know immediately there is a twelve hour time-difference and you must be exactly round the other side of the world, 180 degrees of longitude from home. After 1714 when the British government offered the huge sum of 20,000 for a solution to the problem, with the prize to be administered by die splendidly titled Board of Longitude. The Government prize of 20,000 was the highest of three sums on offer for varying degrees of accuracy, the full prize only payable for a method that could find the longitude at sea within half a degree. If the solution was to be by timekeeper (and there were other methods since the prize was offered for any solution to the problem), then the timekeeping required to achieve this goal would have to be within 2.8 seconds a day, a performance considered impossible for any clock at sea and unthinkable for a watch, even under the very best conditions. It was this prize, worth about 2 million today, which inspired the self-taught Yorkshfre carpenter, John Harrison, to attempt a design for a practical marineclock. During the latter part of his early career, he worked with his younger brother James. Their first major project was a revolutionary turret clock for the stables at Brocklesby Park, seat of the Pelham family. The clock was revolutionary because it required no lubrication. 18th century clock oils were uniformly poor and one of the major causes of failure in clocks of the period. Rather than concentrating on improvements to the oil, Harrison designed a clock which didn't need it. In 1730 Harrison created a description and drawings for a proposed marine clock to compete for the Longitude Prize and went to London seeking financial assistance. He presented his ideas to Edmond Halley, the Astronomer Royal. Halley referred him to George Graham, the country's foremost clockmaker. He must have been impressed by Harrison, for Graham personally loaned Harrison money to build a model of his marine clock. It took Harrison five years to build Harrison Number One or HI. He demonstrated it to members of the Royal Society who spoke on his behalf to the Board of Longitude. The clock was the first proposal that the Board considered to be worthy of a sea trial. In 1736, After several attempts to design a betterment of HI, Harrison believed that the ' solution to the longitude problem lay in an entirely different design. H4 is completely different from the other three timekeepers. It looks like a very large pocket watch. Harrison's son William set sail for the West Indies, with H4, aboard the ship Deptford on 18 November 1761. It was a remarkable achievement but it would be some time before the Board of Longitude was sufficiently satisfied to award Harrison the prize. 
John Hadley, an English mathematician, developed sextant, who was a competitor of Harrison at that time for the luring prize. A sextant is an instrument used for measuring angles, for example between the sun and the horizon, so that the position of a ship or aeroplane can be calculated. Making this measurement is known as sighting the object, shooting the object, or taking a sight and it is an essential part of celestial navigation. The angle, and the time when it was measured, can be used to calculate a position line on a nautical or aeronautical chart. A sextant can also be used to measure the Lunar distance between the moon and another celestial object (e. g. , star, planet) in order to determine Greenwich time which is important because it can then be used to determine the longitude. The majority within this next generation of chronometer pioneers were English, but the story is by no means wholly that of English achievement. One French name, Pierre Le Roy of Paris, stands out as a major presence in the early history of the chronometer. Another great name in the story is that of theLancastrian, Thomas Eamshaw, a slightly younger contemporary of John Arnold's. It was Eamshaw who created the final form of chronometer escapement, the spring detent escapement, and finalized the format and the production system for the marine chronometer, making it truly an article of commerce, and a practical means of safer navigation at sea over the next century and half.", "hypothesis": "In theory, by calculating the longitude degrees covered by a sail journey, the distance between the start and the end points can be obtained.", "label": "n"} +{"uid": "id_38", "premise": "Timekeeper 2 Invention of Marine Chronometer It was, as Dava Sobel has described a phenomenon: the greatest scientific problem of the age. The reality was that in the 18th century no one had ever made a clock that could suffer the great rolling and pitching of a ship and the large changes in temperature whilst still keeping time accurately enough to be of any use. Indeed, most of the scientific community thought such clock impossibility. Knowing one's position on the earth requires two very simple but essential coordinates; rather like using a street map where one thinks in terms of how far one is up/down and how far side to side. The longitude is a measure of how far around the world one has come from home and has no naturally occurring base line like the equator. The crew of a given ship was naturally only concerned with how far round they were from their own particular home base. Even when in the middle of the ocean, with no land in sight, knowing this longitude position is very simple in theory. The key to knowing how far around the world you are from home is to know, at that very moment, what time it is back home. A comparison with your local time (easily found by checking the position of the Sim) will then tell you the time difference between you and home, and thus how far round the Earth you are from home. Up until the middle of the 18th century, navigators hadbeen unable to determine their position at sea with accuracy and they faced the huge attendant risks of shipwreck or running out of supplies before reaching then destination. The angular position of Moon and other bright stars was recorded in three-hour intervals of Greenwich Time. In order to determine longitude, sailors had to measure the angle between Moon centre and a given star - lunar distance - together with height of both planets using the naval sextant. 
The sailors also had to calculate the Moons position if seen form the centre of Earth. Time corresponding to Greenwich Time was determined using the nautical almanac. Then the difference between the obtained time and local time served for calculation in longitude from Greenwich. The great flaw in this simple theory was - how does the sailor know time back home when he is in the middle of an ocean? The obvious and again simple answer is that he takes an accurate clock with him, which he sets to home time before leaving. All he has to do is keep it wound up and running, and he must never reset the hands throughout the voyage This clock then provides home time, so if, for example, it is midday on board your ship and your home time clock says that at that same moment it is midnight at home, you know immediately there is a twelve hour time-difference and you must be exactly round the other side of the world, 180 degrees of longitude from home. After 1714 when the British government offered the huge sum of 20,000 for a solution to the problem, with the prize to be administered by die splendidly titled Board of Longitude. The Government prize of 20,000 was the highest of three sums on offer for varying degrees of accuracy, the full prize only payable for a method that could find the longitude at sea within half a degree. If the solution was to be by timekeeper (and there were other methods since the prize was offered for any solution to the problem), then the timekeeping required to achieve this goal would have to be within 2.8 seconds a day, a performance considered impossible for any clock at sea and unthinkable for a watch, even under the very best conditions. It was this prize, worth about 2 million today, which inspired the self-taught Yorkshfre carpenter, John Harrison, to attempt a design for a practical marineclock. During the latter part of his early career, he worked with his younger brother James. Their first major project was a revolutionary turret clock for the stables at Brocklesby Park, seat of the Pelham family. The clock was revolutionary because it required no lubrication. 18th century clock oils were uniformly poor and one of the major causes of failure in clocks of the period. Rather than concentrating on improvements to the oil, Harrison designed a clock which didn't need it. In 1730 Harrison created a description and drawings for a proposed marine clock to compete for the Longitude Prize and went to London seeking financial assistance. He presented his ideas to Edmond Halley, the Astronomer Royal. Halley referred him to George Graham, the country's foremost clockmaker. He must have been impressed by Harrison, for Graham personally loaned Harrison money to build a model of his marine clock. It took Harrison five years to build Harrison Number One or HI. He demonstrated it to members of the Royal Society who spoke on his behalf to the Board of Longitude. The clock was the first proposal that the Board considered to be worthy of a sea trial. In 1736, After several attempts to design a betterment of HI, Harrison believed that the ' solution to the longitude problem lay in an entirely different design. H4 is completely different from the other three timekeepers. It looks like a very large pocket watch. Harrison's son William set sail for the West Indies, with H4, aboard the ship Deptford on 18 November 1761. It was a remarkable achievement but it would be some time before the Board of Longitude was sufficiently satisfied to award Harrison the prize. 
John Hadley, an English mathematician, developed sextant, who was a competitor of Harrison at that time for the luring prize. A sextant is an instrument used for measuring angles, for example between the sun and the horizon, so that the position of a ship or aeroplane can be calculated. Making this measurement is known as sighting the object, shooting the object, or taking a sight and it is an essential part of celestial navigation. The angle, and the time when it was measured, can be used to calculate a position line on a nautical or aeronautical chart. A sextant can also be used to measure the Lunar distance between the moon and another celestial object (e. g. , star, planet) in order to determine Greenwich time which is important because it can then be used to determine the longitude. The majority within this next generation of chronometer pioneers were English, but the story is by no means wholly that of English achievement. One French name, Pierre Le Roy of Paris, stands out as a major presence in the early history of the chronometer. Another great name in the story is that of theLancastrian, Thomas Eamshaw, a slightly younger contemporary of John Arnold's. It was Eamshaw who created the final form of chronometer escapement, the spring detent escapement, and finalized the format and the production system for the marine chronometer, making it truly an article of commerce, and a practical means of safer navigation at sea over the next century and half.", "hypothesis": "To determine the longitude, a measurement of distance from moon to a given star is a must.", "label": "c"} +{"uid": "id_39", "premise": "Timekeeper 2 Invention of Marine Chronometer It was, as Dava Sobel has described a phenomenon: the greatest scientific problem of the age. The reality was that in the 18th century no one had ever made a clock that could suffer the great rolling and pitching of a ship and the large changes in temperature whilst still keeping time accurately enough to be of any use. Indeed, most of the scientific community thought such clock impossibility. Knowing one's position on the earth requires two very simple but essential coordinates; rather like using a street map where one thinks in terms of how far one is up/down and how far side to side. The longitude is a measure of how far around the world one has come from home and has no naturally occurring base line like the equator. The crew of a given ship was naturally only concerned with how far round they were from their own particular home base. Even when in the middle of the ocean, with no land in sight, knowing this longitude position is very simple in theory. The key to knowing how far around the world you are from home is to know, at that very moment, what time it is back home. A comparison with your local time (easily found by checking the position of the Sim) will then tell you the time difference between you and home, and thus how far round the Earth you are from home. Up until the middle of the 18th century, navigators hadbeen unable to determine their position at sea with accuracy and they faced the huge attendant risks of shipwreck or running out of supplies before reaching then destination. The angular position of Moon and other bright stars was recorded in three-hour intervals of Greenwich Time. In order to determine longitude, sailors had to measure the angle between Moon centre and a given star - lunar distance - together with height of both planets using the naval sextant. 
The sailors also had to calculate the Moons position if seen form the centre of Earth. Time corresponding to Greenwich Time was determined using the nautical almanac. Then the difference between the obtained time and local time served for calculation in longitude from Greenwich. The great flaw in this simple theory was - how does the sailor know time back home when he is in the middle of an ocean? The obvious and again simple answer is that he takes an accurate clock with him, which he sets to home time before leaving. All he has to do is keep it wound up and running, and he must never reset the hands throughout the voyage This clock then provides home time, so if, for example, it is midday on board your ship and your home time clock says that at that same moment it is midnight at home, you know immediately there is a twelve hour time-difference and you must be exactly round the other side of the world, 180 degrees of longitude from home. After 1714 when the British government offered the huge sum of 20,000 for a solution to the problem, with the prize to be administered by die splendidly titled Board of Longitude. The Government prize of 20,000 was the highest of three sums on offer for varying degrees of accuracy, the full prize only payable for a method that could find the longitude at sea within half a degree. If the solution was to be by timekeeper (and there were other methods since the prize was offered for any solution to the problem), then the timekeeping required to achieve this goal would have to be within 2.8 seconds a day, a performance considered impossible for any clock at sea and unthinkable for a watch, even under the very best conditions. It was this prize, worth about 2 million today, which inspired the self-taught Yorkshfre carpenter, John Harrison, to attempt a design for a practical marineclock. During the latter part of his early career, he worked with his younger brother James. Their first major project was a revolutionary turret clock for the stables at Brocklesby Park, seat of the Pelham family. The clock was revolutionary because it required no lubrication. 18th century clock oils were uniformly poor and one of the major causes of failure in clocks of the period. Rather than concentrating on improvements to the oil, Harrison designed a clock which didn't need it. In 1730 Harrison created a description and drawings for a proposed marine clock to compete for the Longitude Prize and went to London seeking financial assistance. He presented his ideas to Edmond Halley, the Astronomer Royal. Halley referred him to George Graham, the country's foremost clockmaker. He must have been impressed by Harrison, for Graham personally loaned Harrison money to build a model of his marine clock. It took Harrison five years to build Harrison Number One or HI. He demonstrated it to members of the Royal Society who spoke on his behalf to the Board of Longitude. The clock was the first proposal that the Board considered to be worthy of a sea trial. In 1736, After several attempts to design a betterment of HI, Harrison believed that the ' solution to the longitude problem lay in an entirely different design. H4 is completely different from the other three timekeepers. It looks like a very large pocket watch. Harrison's son William set sail for the West Indies, with H4, aboard the ship Deptford on 18 November 1761. It was a remarkable achievement but it would be some time before the Board of Longitude was sufficiently satisfied to award Harrison the prize. 
John Hadley, an English mathematician, developed sextant, who was a competitor of Harrison at that time for the luring prize. A sextant is an instrument used for measuring angles, for example between the sun and the horizon, so that the position of a ship or aeroplane can be calculated. Making this measurement is known as sighting the object, shooting the object, or taking a sight and it is an essential part of celestial navigation. The angle, and the time when it was measured, can be used to calculate a position line on a nautical or aeronautical chart. A sextant can also be used to measure the Lunar distance between the moon and another celestial object (e. g. , star, planet) in order to determine Greenwich time which is important because it can then be used to determine the longitude. The majority within this next generation of chronometer pioneers were English, but the story is by no means wholly that of English achievement. One French name, Pierre Le Roy of Paris, stands out as a major presence in the early history of the chronometer. Another great name in the story is that of theLancastrian, Thomas Eamshaw, a slightly younger contemporary of John Arnold's. It was Eamshaw who created the final form of chronometer escapement, the spring detent escapement, and finalized the format and the production system for the marine chronometer, making it truly an article of commerce, and a practical means of safer navigation at sea over the next century and half.", "hypothesis": "It is with no great effort by sailors to calculate the position when in the center of the ocean theoretically.", "label": "e"} +{"uid": "id_40", "premise": "Timekeeper: Invention of Marine Chronometer Up to the middle of the 18th century, the navigators were still unable to exactly identify the position at sea, so they might face a great number of risks such as the shipwreck or running out of supplies before arriving at the destination. Knowing ones position on the earth requires two simple but essential coordinates, one of which is the longitude. The longitude is a term that can be used to measure the distance that one has covered from ones home to another place around the world without the limitations of naturally occurring baseline like the equator. To determine longitude, navigators had no choice but to measure the angle with the naval sextant between Moon centre and a specific star lunar distancealong with the height of both heavenly bodies. Together with the nautical almanac, Greenwich Mean Time (GMT) was determined, which could be adopted to calculate longitude because one hour in GMT means 15-degree longitude. Unfortunately, this approach laid great reliance on the weather conditions, which brought great inconvenience to the crew members. Therefore, another method was proposed, that is, the time difference between the home time and the local time served for the measurement. Theoretically, knowing the longitude position was quite simple, even for the people in the middle of the sea with no land in sight. The key element for calculating the distance travelled was to know, at the very moment, the accurate home time. But the greatest problem is: how can a sailor know the home time at sea? The simple and again obvious answer is that one takes an accurate clock with him, which he sets to the home time before leaving. 
A comparison with the local time (easily identified by checking the position of the Sun) would indicate the time difference between the home time and the local time, and thus the distance from home was obtained. The truth was that nobody in the 18th century had ever managed to create a clock that could endure the violent shaking of a ship and the fluctuating temperature while still maintaining the accuracy of time for navigation. After 1714, as an attempt to find a solution to the problem, the British government offered a tremendous amount of £20,000, which was to be managed by the magnificently named Board of Longitude. If a timekeeper was the answer (and there could be other proposed solutions, since the money wasn't only offered for a timekeeper), then the error of the required timekeeping for achieving this goal needed to be within 2.8 seconds a day, which was considered impossible for any clock or watch at sea, even when they were in their finest conditions. This award, worth about £2 million today, inspired the self-taught Yorkshire carpenter John Harrison to attempt a design for a practical marine clock. In the later stage of his early career, he worked alongside his younger brother James. The first big project of theirs was to build a turret clock for the stables at Brockelsby Park, which was revolutionary because it required no lubrication. Harrison designed a marine clock in 1730, and he travelled to London in search of financial aid. He explained his ideas to Edmond Halley, the Astronomer Royal, who then introduced him to George Graham, Britain's first-class clockmaker. Graham provided him with financial aid for his early-stage work on sea clocks. It took Harrison five years to build Harrison Number One or H1. Later, he sought improvement through an alternative design and produced H4, which had the appearance of a giant clock. Remarkable as it was, the Board of Longitude wouldn't grant him the prize for some time until it was adequately satisfied. Harrison had a principal contestant for the tempting prize at that time, an English mathematician called John Hadley, who developed the sextant. The sextant is the tool that people adopt to measure angles, such as the one between the Sun and the horizon, for a calculation of the location of ships or planes. In addition, his invention is significant since it can help determine longitude. Most chronometer forerunners of that particular generation were English, but that doesn't mean every achievement was made by them. One wonderful figure in the history is the Lancastrian Thomas Earnshaw, who created the ultimate form of chronometer escapement, the spring detent escapement, and made the final decision on format and production system for the marine chronometer, which turned it into a genuine modern commercial product, as well as a safe and pragmatic way of navigation at sea over the next century and a half.", "hypothesis": "Greenwich Mean Time was set up by the English navigators.", "label": "n"} +{"uid": "id_41", "premise": "Timekeeper: Invention of Marine Chronometer Up to the middle of the 18th century, the navigators were still unable to exactly identify the position at sea, so they might face a great number of risks such as shipwreck or running out of supplies before arriving at the destination. Knowing one's position on the earth requires two simple but essential coordinates, one of which is the longitude. 
The longitude is a term that can be used to measure the distance that one has covered from ones home to another place around the world without the limitations of naturally occurring baseline like the equator. To determine longitude, navigators had no choice but to measure the angle with the naval sextant between Moon centre and a specific star lunar distancealong with the height of both heavenly bodies. Together with the nautical almanac, Greenwich Mean Time (GMT) was determined, which could be adopted to calculate longitude because one hour in GMT means 15-degree longitude. Unfortunately, this approach laid great reliance on the weather conditions, which brought great inconvenience to the crew members. Therefore, another method was proposed, that is, the time difference between the home time and the local time served for the measurement. Theoretically, knowing the longitude position was quite simple, even for the people in the middle of the sea with no land in sight. The key element for calculating the distance travelled was to know, at the very moment, the accurate home time. But the greatest problem is: how can a sailor know the home time at sea? The simple and again obvious answer is that one takes an accurate clock with him, which he sets to the home time before leaving. A comparison with the local time (easily identified by checking the position of the Sun) would indicate the time difference between the home time and the local time, and thus the distance from home was obtained. The truth was that nobody in the 18th century had ever managed to create a clock that could endure the violent shaking of a ship and the fluctuating temperature while still maintaining the accuracy of time for navigation. After 1714, as an attempt to find a solution to the problem, the British government offered a tremendous amount of 20,000, which were to be managed by the magnificently named Board of Longitude. If timekeeper was the answer (and there could be other proposed solutions, since the money wasnt only offered for timekeeper), then the error of the required timekeeping for achieving this goal needed to be within 2.8 seconds a day, which was considered impossible for any clock or watch at sea, even when they were in their finest conditions. This award, worth about 2 million today, inspired the self-taught Yorkshire carpenter John Harrison to attempt a design for a practical marine clock. In the later stage of his early career, he worked alongside his younger brother James. The first big project of theirs was to build a turret clock for the stables at Brockelsby Park, which was revolutionary because it required no lubrication. Harrison designed a marine clock in 1730, and he travelled to London in seek of financial aid. He explained his ideas to Edmond Halley, the Astronomer Royal, who then introduced him to George Graham, Britains first-class clockmaker. Graham provided him with financial aid for his early-stage work on sea clocks. It took Harrison five years to build Harrison Number One or HI. Later, he sought the improvement from alternate design and produced H4 with the giant clock appearance. Remarkable as it was, the Board of Longitude wouldnt grant him the prize for some time until it was adequately satisfied. Harrison had a principal contestant for the tempting prize at that time, an English mathematician called John Hadley, who developed the sextant. 
The sextant is the tool that people adopt to measure angles, such as the one between the Sun and the horizon, for a calculation of the location of ships or planes. In addition, his invention is significant since it can help determine longitude. Most chronometer forerunners of that particular generation were English, but that doesn't mean every achievement was made by them. One wonderful figure in the history is the Lancastrian Thomas Earnshaw, who created the ultimate form of chronometer escapement, the spring detent escapement, and made the final decision on format and production system for the marine chronometer, which turned it into a genuine modern commercial product, as well as a safe and pragmatic way of navigation at sea over the next century and a half.", "hypothesis": "To determine longitude, the measurement of the distance from the Moon to a given star is essential.", "label": "e"} +{"uid": "id_42", "premise": "Timekeeper: Invention of Marine Chronometer Up to the middle of the 18th century, the navigators were still unable to exactly identify the position at sea, so they might face a great number of risks such as shipwreck or running out of supplies before arriving at the destination. Knowing one's position on the earth requires two simple but essential coordinates, one of which is the longitude. The longitude is a term that can be used to measure the distance that one has covered from one's home to another place around the world without the limitation of a naturally occurring baseline like the equator. To determine longitude, navigators had no choice but to measure the angle with the naval sextant between the Moon's centre and a specific star, the lunar distance, along with the height of both heavenly bodies. Together with the nautical almanac, Greenwich Mean Time (GMT) was determined, which could be adopted to calculate longitude because one hour in GMT means 15 degrees of longitude. Unfortunately, this approach placed great reliance on the weather conditions, which brought great inconvenience to the crew members. Therefore, another method was proposed, that is, the time difference between the home time and the local time served for the measurement. Theoretically, knowing the longitude position was quite simple, even for the people in the middle of the sea with no land in sight. The key element for calculating the distance travelled was to know, at the very moment, the accurate home time. But the greatest problem is: how can a sailor know the home time at sea? The simple and again obvious answer is that one takes an accurate clock with him, which he sets to the home time before leaving. A comparison with the local time (easily identified by checking the position of the Sun) would indicate the time difference between the home time and the local time, and thus the distance from home was obtained. The truth was that nobody in the 18th century had ever managed to create a clock that could endure the violent shaking of a ship and the fluctuating temperature while still maintaining the accuracy of time for navigation. After 1714, as an attempt to find a solution to the problem, the British government offered a tremendous amount of £20,000, which was to be managed by the magnificently named Board of Longitude. 
If timekeeper was the answer (and there could be other proposed solutions, since the money wasnt only offered for timekeeper), then the error of the required timekeeping for achieving this goal needed to be within 2.8 seconds a day, which was considered impossible for any clock or watch at sea, even when they were in their finest conditions. This award, worth about 2 million today, inspired the self-taught Yorkshire carpenter John Harrison to attempt a design for a practical marine clock. In the later stage of his early career, he worked alongside his younger brother James. The first big project of theirs was to build a turret clock for the stables at Brockelsby Park, which was revolutionary because it required no lubrication. Harrison designed a marine clock in 1730, and he travelled to London in seek of financial aid. He explained his ideas to Edmond Halley, the Astronomer Royal, who then introduced him to George Graham, Britains first-class clockmaker. Graham provided him with financial aid for his early-stage work on sea clocks. It took Harrison five years to build Harrison Number One or HI. Later, he sought the improvement from alternate design and produced H4 with the giant clock appearance. Remarkable as it was, the Board of Longitude wouldnt grant him the prize for some time until it was adequately satisfied. Harrison had a principal contestant for the tempting prize at that time, an English mathematician called John Hadley, who developed the sextant. The sextant is the tool that people adopt to measure angles, such as the one between the Sun and the horizon, for a calculation of the location of ships or planes. In addition, his invention is significant since it can help determine longitude. Most chronometer forerunners of that particular generation were English, but that doesnt mean every achievement was made by them. One wonderful figure in the history is the Lancastrian Thomas Earnshaw, who created the ultimate form of chronometer escapementthe spring detent escapementand made the final decision on format and productions system for the marine chronometer, which turns it into a genuine modem commercial product, as well as a safe and pragmatic way of navigation at sea over the next century and half.", "hypothesis": "In theory, sailors can easily calculate their longitude position at sea.", "label": "e"} +{"uid": "id_43", "premise": "To determine whether interbreeding took place among Homo species before the populations that became modern humans left Africa, evolutionary biologists studied DNA from two African hunter-gatherer groups, the Biaka Pygmies and the San, and from a West African agricultural population, the Mandenka. Each of these groups is descended from populations thought to have remained in Africa, meaning they would have avoided the genetic bottleneck effect that usually occurs with migration. This means the groups show particularly high genetic diversity, which makes their genomes more likely to have retained evidence of ancient genetic mixing. The researchers looked at 61 non-coding DNA regions in all three groups. Because direct comparison to archaic specimens wasn't possible, the authors used computer models to simulate how infiltration from different populations might have affected patterns of variation within modern genomes. 
On chromosomes 4, 13 and 18 of the three African populations, the researchers found genetic regions that were more divergent on average than known modern sequences at the same locations, hinting at a different origin.", "hypothesis": "When population groups migrate, they apparently do not breed much with different groups at first.", "label": "e"} +{"uid": "id_44", "premise": "To determine whether interbreeding took place among Homo species before the populations that became modern humans left Africa, evolutionary biologists studied DNA from two African hunter-gatherer groups, the Biaka Pygmies and the San, and from a West African agricultural population, the Mandenka. Each of these groups is descended from populations thought to have remained in Africa, meaning they would have avoided the genetic bottleneck effect that usually occurs with migration. This means the groups show particularly high genetic diversity, which makes their genomes more likely to have retained evidence of ancient genetic mixing. The researchers looked at 61 non-coding DNA regions in all three groups. Because direct comparison to archaic specimens wasn't possible, the authors used computer models to simulate how infiltration from different populations might have affected patterns of variation within modern genomes. On chromosomes 4, 13 and 18 of the three African populations, the researchers found genetic regions that were more divergent on average than known modern sequences at the same locations, hinting at a different origin.", "hypothesis": "Since the genetic diversity of the three African populations was high, while that of the indigenous population was low, researchers concluded that the three African populations had interbred.", "label": "c"} +{"uid": "id_45", "premise": "To determine whether interbreeding took place among Homo species before the populations that became modern humans left Africa, evolutionary biologists studied DNA from two African hunter-gatherer groups, the Biaka Pygmies and the San, and from a West African agricultural population, the Mandenka. Each of these groups is descended from populations thought to have remained in Africa, meaning they would have avoided the genetic bottleneck effect that usually occurs with migration. This means the groups show particularly high genetic diversity, which makes their genomes more likely to have retained evidence of ancient genetic mixing. The researchers looked at 61 non-coding DNA regions in all three groups. Because direct comparison to archaic specimens wasn't possible, the authors used computer models to simulate how infiltration from different populations might have affected patterns of variation within modern genomes. On chromosomes 4, 13 and 18 of the three African populations, the researchers found genetic regions that were more divergent on average than known modern sequences at the same locations, hinting at a different origin.", "hypothesis": "These African groups were selected for the study because they represent both the early hunter-gatherers as well as the later farmers.", "label": "c"} +{"uid": "id_46", "premise": "To enjoy a comfortable retirement, many retired people recommend retiring on two- thirds of final salary and around 4 million workers have paid into pension schemes for the bulk of their working lives in order to realize this goal. Those who have contributed to a final salary pension scheme will reach that standard and in fact exceed it when the persons state pension is added to the equation. 
Those workers who have contributed to a pension scheme that lacks the final salary guarantee and instead depend on the investment value of their total contributions to purchase their pension on retirement are less fortunate. Even when their state pension is included the bulk of these people will retire on an income of around 40 per cent of their final salary. As for the remaining 11 million workers who have made little or no contribution to any other pension scheme than the compulsory state scheme, it is feared that they will find themselves dependent on means-tested benefits.", "hypothesis": "Workers with pension schemes without the final salary guarantee will have to manage on a lot less than the amount thought to be needed for a secure retirement.", "label": "n"} +{"uid": "id_47", "premise": "To enjoy a comfortable retirement, many retired people recommend retiring on two-thirds of final salary and around 4 million workers have paid into pension schemes for the bulk of their working lives in order to realize this goal. Those who have contributed to a final salary pension scheme will reach that standard and in fact exceed it when the person's state pension is added to the equation. Those workers who have contributed to a pension scheme that lacks the final salary guarantee and instead depend on the investment value of their total contributions to purchase their pension on retirement are less fortunate. Even when their state pension is included the bulk of these people will retire on an income of around 40 per cent of their final salary. As for the remaining 11 million workers who have made little or no contribution to any other pension scheme than the compulsory state scheme, it is feared that they will find themselves dependent on means-tested benefits.", "hypothesis": "The country to which the passage refers has a total population of 15 million.", "label": "c"} +{"uid": "id_48", "premise": "To enjoy a comfortable retirement, many retired people recommend retiring on two-thirds of final salary and around 4 million workers have paid into pension schemes for the bulk of their working lives in order to realize this goal. Those who have contributed to a final salary pension scheme will reach that standard and in fact exceed it when the person's state pension is added to the equation. Those workers who have contributed to a pension scheme that lacks the final salary guarantee and instead depend on the investment value of their total contributions to purchase their pension on retirement are less fortunate. Even when their state pension is included the bulk of these people will retire on an income of around 40 per cent of their final salary. As for the remaining 11 million workers who have made little or no contribution to any other pension scheme than the compulsory state scheme, it is feared that they will find themselves dependent on means-tested benefits.", "hypothesis": "Four million workers will reach or exceed the standard where they retire on two-thirds of the final salary.", "label": "c"} +{"uid": "id_49", "premise": "To get to his home at Tranton Park, Geoff takes the 17.45 train from Central Station. Rona avoids public transport whenever possible, but walks with him to the station, where she has left her car. Her drive to her home in Hampton takes 15 minutes, although it would have taken exactly the same time by train. Like Geoff, Sam takes the train, but avoids the rush by taking the 17.15 from Central Station. 
Bella, who works in the same office as the rest and who prefers the train, always makes the journey with Sam as far as Hampton, where she lives. Sam continues to Nately, which is his hometown, a journey that is three times as long as hers. Geoff arrives at Tranton Park an hour and a quarter after Bella gets to Hampton.", "hypothesis": "Sam never travels by train", "label": "n"} +{"uid": "id_50", "premise": "To get to his home at Tranton Park, Geoff takes the 17.45 train from Central Station. Rona avoids public transport whenever possible, but walks with him to the station, where she has left her car. Her drive to her home in Hampton takes 15 minutes, although it would have taken exactly the same time by train. Like Geoff, Sam takes the train, but avoids the rush by taking the 17.15 from Central Station. Bella, who works in the same office as the rest and who prefers the train, always makes the journey with Sam as far as Hampton, where she lives. Sam continues to Nately, which is his hometown, a journey that is three times as long as hers. Geoff arrives at Tranton Park an hour and a quarter after Bella gets to Hampton.", "hypothesis": "Bella, apart from Geoff, is most likely to travel by train.", "label": "e"} +{"uid": "id_51", "premise": "To get to his home at Tranton Park, Geoff takes the 17.45 train from Central Station. Rona avoids public transport whenever possible, but walks with him to the station, where she has left her car. Her drive to her home in Hampton takes 15 minutes, although it would have taken exactly the same time by train. Like Geoff, Sam takes the train, but avoids the rush by taking the 17.15 from Central Station. Bella, who works in the same office as the rest and who prefers the train, always makes the journey with Sam as far as Hampton, where she lives. Sam continues to Nately, which is his hometown, a journey that is three times as long as hers. Geoff arrives at Tranton Park an hour and a quarter after Bella gets to Hampton.", "hypothesis": "Bella is most likely to arrive home first", "label": "e"} +{"uid": "id_52", "premise": "To get to his home at Tranton Park, Geoff takes the 17.45 train from Central Station. Rona avoids public transport whenever possible, but walks with him to the station, where she has left her car. Her drive to her home in Hampton takes 15 minutes, although it would have taken exactly the same time by train. Like Geoff, Sam takes the train, but avoids the rush by taking the 17.15 from Central Station. Bella, who works in the same office as the rest and who prefers the train, always makes the journey with Sam as far as Hampton, where she lives. Sam continues to Nately, which is his hometown, a journey that is three times as long as hers. Geoff arrives at Tranton Park an hour and a quarter after Bella gets to Hampton.", "hypothesis": "the journey time between Nately and Tranton Park is 15 minutes.", "label": "e"} +{"uid": "id_53", "premise": "To get to his home at Tranton Park, Geoff takes the 17.45 train from Central Station. Rona avoids public transport whenever possible, but walks with him to the station, where she has left her car. Her drive to her home in Hampton takes 15 minutes, although it would have taken exactly the same time by train. Like Geoff, Sam takes the train, but avoids the rush by taking the 17.15 from Central Station. Bella, who works in the same office as the rest and who prefers the train, always makes the journey with Sam as far as Hampton, where she lives. 
Sam continues to Nately, which is his hometown, a journey that is three times as long as hers. Geoff arrives at Tranton Park an hour and a quarter after Bella gets to Hampton.", "hypothesis": "Geoff probably has the longest journey.", "label": "e"} +{"uid": "id_54", "premise": "To keep myself up to date, i always listen to 9:00 p. m. news on radio. ---- A candidate tells the interview board.", "hypothesis": "Recent news are broadcast only on radio.", "label": "n"} +{"uid": "id_55", "premise": "To keep myself up to date, i always listen to 9:00 p. m. news on radio. ---- A candidate tells the interview board.", "hypothesis": "The candidate does not read newspaper", "label": "n"} +{"uid": "id_56", "premise": "To save the environment enforce total ban on illegal mining throughout the country.", "hypothesis": "Mining is one of the factors responsible for environment degradation. Syndicate Bank (PO)", "label": "e"} +{"uid": "id_57", "premise": "To save the environment enforce total ban on illegal mining throughout the country.", "hypothesis": "Mining which is done legally does not cause any harm to the environment", "label": "n"} +{"uid": "id_58", "premise": "To what extent does advertising a product at a sporting event increase sales? In light of the London Olympics, the relationship between sporting events and advertising is under greater scrutiny by British companies than ever before. Research suggests that in the year prior to the Games, twelve percent of adults talked about the Olympics on a typical day. With this in mind, it is estimated that more than one billion pounds have been invested in the Games in the form of sponsorship from companies. In return for their investment, the exposure gained by sponsors is now legally protected by statute to prevent non-official sponsors from profiting.", "hypothesis": "As a result of the London Olympics sporting events and advertising is receiving more attention from adults.", "label": "n"} +{"uid": "id_59", "premise": "To what extent does advertising a product at a sporting event increase sales? In light of the London Olympics, the relationship between sporting events and advertising is under greater scrutiny by British companies than ever before. Research suggests that in the year prior to the Games, twelve percent of adults talked about the Olympics on a typical day. With this in mind, it is estimated that more than one billion pounds have been invested in the Games in the form of sponsorship from companies. In return for their investment, the exposure gained by sponsors is now legally protected by statute to prevent non-official sponsors from profiting.", "hypothesis": "As a result of the London Olympics sporting events and advertising has been researched for the first time.", "label": "n"} +{"uid": "id_60", "premise": "To what extent does advertising a product at a sporting event increase sales? In light of the London Olympics, the relationship between sporting events and advertising is under greater scrutiny by British companies than ever before. Research suggests that in the year prior to the Games, twelve percent of adults talked about the Olympics on a typical day. With this in mind, it is estimated that more than one billion pounds have been invested in the Games in the form of sponsorship from companies. 
In return for their investment, the exposure gained by sponsors is now legally protected by statute to prevent non-official sponsors from profiting.", "hypothesis": "As a result of the London Olympics sporting events and advertising is now protected by statute.", "label": "c"} +{"uid": "id_61", "premise": "To what extent does advertising a product at a sporting event increase sales? In light of the London Olympics, the relationship between sporting events and advertising is under greater scrutiny by British companies than ever before. Research suggests that in the year prior to the Games, twelve percent of adults talked about the Olympics on a typical day. With this in mind, it is estimated that more than one billion pounds have been invested in the Games in the form of sponsorship from companies. In return for their investment, the exposure gained by sponsors is now legally protected by statute to prevent non-official sponsors from profiting.", "hypothesis": "As a result of the London Olympics sporting events and advertising is receiving more attention from companies", "label": "e"} +{"uid": "id_62", "premise": "Toby, Rob and Frank all take a holiday by the sea, whilst Sam, Jo and Tony go hiking in the mountains. Frank, Sam and Jo travel by air. Jo, Rob and Tony do not enjoy their holiday.", "hypothesis": "Tony does not travel by air and goes hiking", "label": "e"} +{"uid": "id_63", "premise": "Toby, Rob and Frank all take a holiday by the sea, whilst Sam, Jo and Tony go hiking in the mountains. Frank, Sam and Jo travel by air. Jo, Rob and Tony do not enjoy their holiday.", "hypothesis": "Rob goes to the sea and does not enjoy the holiday", "label": "e"} +{"uid": "id_64", "premise": "Today, the term surreal is used to denote a curious imaginative effect. The words provenance can be traced back to the revolutionary surrealism movement which grew out of Dadaism in the mid-1920s. Surrealism spread quite quickly across European arts and literature, particularly in France, between the two world wars. The movements founder French poet Andre Breton was influenced heavily by Freuds theories, as he reacted against reason and logic in order to free the imagination from the unconscious mind. Surrealist works, both visual and oral, juxtaposed seemingly unrelated everyday objects and placed these in dreamlike settings. Thus, the popularity of surrealist paintings, including Salvador Dalis, lies in the unconventional positioning of powerful images such as leaping tigers, melting watches and metronomes. Surrealist art is widely known today, unlike the less easily accessible works of the French surrealist writers who, ignoring the literal meanings of words, focused instead on word associations and implications. That said, the literary surrealist tradition still survives in modern-day proponents of experimental writing.", "hypothesis": "Salvador Dalis work is more popular than Andre Bretons output.", "label": "e"} +{"uid": "id_65", "premise": "Today, the term surreal is used to denote a curious imaginative effect. The words provenance can be traced back to the revolutionary surrealism movement which grew out of Dadaism in the mid-1920s. Surrealism spread quite quickly across European arts and literature, particularly in France, between the two world wars. The movements founder French poet Andre Breton was influenced heavily by Freuds theories, as he reacted against reason and logic in order to free the imagination from the unconscious mind. 
Surrealist works, both visual and oral, juxtaposed seemingly unrelated everyday objects and placed these in dreamlike settings. Thus, the popularity of surrealist paintings, including Salvador Dalis, lies in the unconventional positioning of powerful images such as leaping tigers, melting watches and metronomes. Surrealist art is widely known today, unlike the less easily accessible works of the French surrealist writers who, ignoring the literal meanings of words, focused instead on word associations and implications. That said, the literary surrealist tradition still survives in modern-day proponents of experimental writing.", "hypothesis": "At one time Dadaism and Surrealism were closely affiliated.", "label": "e"} +{"uid": "id_66", "premise": "Today, the term surreal is used to denote a curious imaginative effect. The words provenance can be traced back to the revolutionary surrealism movement which grew out of Dadaism in the mid-1920s. Surrealism spread quite quickly across European arts and literature, particularly in France, between the two world wars. The movements founder French poet Andre Breton was influenced heavily by Freuds theories, as he reacted against reason and logic in order to free the imagination from the unconscious mind. Surrealist works, both visual and oral, juxtaposed seemingly unrelated everyday objects and placed these in dreamlike settings. Thus, the popularity of surrealist paintings, including Salvador Dalis, lies in the unconventional positioning of powerful images such as leaping tigers, melting watches and metronomes. Surrealist art is widely known today, unlike the less easily accessible works of the French surrealist writers who, ignoring the literal meanings of words, focused instead on word associations and implications. That said, the literary surrealist tradition still survives in modern-day proponents of experimental writing.", "hypothesis": "Some experimental writing is surreal.", "label": "e"} +{"uid": "id_67", "premise": "Today, the term surreal is used to denote a curious imaginative effect. The words provenance can be traced back to the revolutionary surrealism movement which grew out of Dadaism in the mid-1920s. Surrealism spread quite quickly across European arts and literature, particularly in France, between the two world wars. The movements founder French poet Andre Breton was influenced heavily by Freuds theories, as he reacted against reason and logic in order to free the imagination from the unconscious mind. Surrealist works, both visual and oral, juxtaposed seemingly unrelated everyday objects and placed these in dreamlike settings. Thus, the popularity of surrealist paintings, including Salvador Dalis, lies in the unconventional positioning of powerful images such as leaping tigers, melting watches and metronomes. Surrealist art is widely known today, unlike the less easily accessible works of the French surrealist writers who, ignoring the literal meanings of words, focused instead on word associations and implications. That said, the literary surrealist tradition still survives in modern-day proponents of experimental writing.", "hypothesis": "Surrealist painting is renowned for the arbitrary portrayal of everyday objects.", "label": "e"} +{"uid": "id_68", "premise": "Today, the term surreal is used to denote a curious imaginative effect. The words provenance can be traced back to the revolutionary surrealism movement which grew out of Dadaism in the mid-1920s. 
Surrealism spread quite quickly across European arts and literature, particularly in France, between the two world wars. The movements founder French poet Andre Breton was influenced heavily by Freuds theories, as he reacted against reason and logic in order to free the imagination from the unconscious mind. Surrealist works, both visual and oral, juxtaposed seemingly unrelated everyday objects and placed these in dreamlike settings. Thus, the popularity of surrealist paintings, including Salvador Dalis, lies in the unconventional positioning of powerful images such as leaping tigers, melting watches and metronomes. Surrealist art is widely known today, unlike the less easily accessible works of the French surrealist writers who, ignoring the literal meanings of words, focused instead on word associations and implications. That said, the literary surrealist tradition still survives in modern-day proponents of experimental writing.", "hypothesis": "Salvador Dali was a French surrealist painter.", "label": "n"} +{"uid": "id_69", "premise": "Todays historians aim to construct a record of human activities and to use this record to achieve a more profound understanding of humanity. This conception of their task is quite recent, dating from the development from 18th and early 19th centuries of scientific history, and cultivated largely by professional historians who adopted the assumption that the study of natural, inevitable human activity. Before the late 18th century, history was taught in virtually no schools, and it did not attempt to provide an interpretation of human life as a whole. This is more appropriately the function of religion, of philosophy, or even perhaps of poetry.", "hypothesis": "That which constitutes the study of history has changed over time.", "label": "n"} +{"uid": "id_70", "premise": "Todays historians aim to construct a record of human activities and to use this record to achieve a more profound understanding of humanity. This conception of their task is quite recent, dating from the development from 18th and early 19th centuries of scientific history, and cultivated largely by professional historians who adopted the assumption that the study of natural, inevitable human activity. Before the late 18th century, history was taught in virtually no schools, and it did not attempt to provide an interpretation of human life as a whole. This is more appropriately the function of religion, of philosophy, or even perhaps of poetry.", "hypothesis": "In the 17th century, history would not have been thought of as a way of understanding humanity.", "label": "e"} +{"uid": "id_71", "premise": "Todays historians aim to construct a record of human activities and to use this record to achieve a more profound understanding of humanity. This conception of their task is quite recent, dating from the development from 18th and early 19th centuries of scientific history, and cultivated largely by professional historians who adopted the assumption that the study of natural, inevitable human activity. Before the late 18th century, history was taught in virtually no schools, and it did not attempt to provide an interpretation of human life as a whole. 
This is more appropriately the function of religion, of philosophy, or even perhaps of poetry.", "hypothesis": "Professional historians did not exist before 18th century.", "label": "n"} +{"uid": "id_72", "premise": "Todays historians aim to construct a record of human activities and to use this record to achieve a more profound understanding of humanity. This conception of their task is quite recent, dating from the development from 18th and early 19th centuries of scientific history, and cultivated largely by professional historians who adopted the assumption that the study of natural, inevitable human activity. Before the late 18th century, history was taught in virtually no schools, and it did not attempt to provide an interpretation of human life as a whole. This is more appropriately the function of religion, of philosophy, or even perhaps of poetry.", "hypothesis": "That which constitutes the study of history has changed over time.", "label": "e"} +{"uid": "id_73", "premise": "Tom puts on his socks before he puts on his shoes. He puts on his shirt before he puts on his jacket.", "hypothesis": "Tom puts on his shoes before he puts on his shirt.", "label": "n"} +{"uid": "id_74", "premise": "Total stocks of most minerals in the earths crust are still large in relation to the current rates of use, and a high proportion of the minerals that are consumed in the production process could, in principle, be recycled. The technological and financial constraints on recycling such concentrations of minerals are considerable, however, and there is no guarantee that these constraints could be overcome. Substitution of abundant for scarce resources would avoid the problem, but such substitution is not always technologically feasible.", "hypothesis": "The technical constraints of recovering any mineral are considerable.", "label": "c"} +{"uid": "id_75", "premise": "Total stocks of most minerals in the earths crust are still large in relation to the current rates of use, and a high proportion of the minerals that are consumed in the production process could, in principle, be recycled. The technological and financial constraints on recycling such concentrations of minerals are considerable, however, and there is no guarantee that these constraints could be overcome. Substitution of abundant for scarce resources would avoid the problem, but such substitution is not always technologically feasible.", "hypothesis": "It is wrong to assume that the substitution of abundant for scarce resources will create insurmountable technical problems on every occasion.", "label": "e"} +{"uid": "id_76", "premise": "Total stocks of most minerals in the earths crust are still large in relation to the current rates of use, and a high proportion of the minerals that are consumed in the production process could, in principle, be recycled. The technological and financial constraints on recycling such concentrations of minerals are considerable, however, and there is no guarantee that these constraints could be overcome. Substitution of abundant for scarce resources would avoid the problem, but such substitution is not always technologically feasible.", "hypothesis": "Most of the minerals consumed in the production process can be economically recycled.", "label": "n"} +{"uid": "id_77", "premise": "Tourism in Mexico They appear out of nowhere like a heat-addled mirage on the flat, straight, mangrove-fringed road. 
The first sign of humanity in 40 miles, the tourists have ripened to pink under the glare of the tropical sun, with their legs wrapped around shiny red all-terrain vehicles buzzing down the asphalt like one giant invasive insect. It's a strange sight, all right. But it's eclipsed moments later by an even stranger one. Looming on the Caribbean just beyond the end of the road is the world's largest cruise ship, the Independence of the Seas, harboring a bounty of 3,811 passengers. Thanks to cruise ships like this one, Mexico's Costa Maya (not to be confused with the Riviera Maya farther north), set along a once mostly deserted stretch of the Yucatan Peninsula, is becoming one of the most visited, albeit least known, tourist regions in the nation. In 2006, just five years after the opening of the cruise ship facility here, 850,000 passengers sailed into port. By then, the once tiny fishing village of Mahahual had exploded from 80 souls dependent on the sea, to 3,500 dependent on tourism. The region begins about 80 miles south of Cancun and stretches from the vast Sian Ka'an Biosphere Reserve almost to the Belize border. It encompasses huge swaths of protected jungle, a number of lesser-known Maya archaeological sites, indigenous villages, pristine lagoons and top-notch diving. Plans call for low-rise, low-density development emphasizing small, eco-friendly hotels that cater to adventure seekers and cultural travelers. South of Tulum, a lengthy stretch of almost uninterrupted resort development comes to an abrupt halt at the northern edge of the Sian Ka'an Reserve. The UNESCO World Heritage site (whose name is Maya for \"where the sky is born\") is a 1.3-million-acre haven of tropical forest and wetlands. It's alive with more than 300 bird species, pig-like peccaries, monkeys, puma and jaguar. It harbors turquoise lagoons where orchids and bromeliads cling to mangroves whose spiny roots grasp the earth like gnarled fingers. Save for a few fishing lodges, Sian Ka'an isn't set up for overnight visitors. But day trips are organized by a number of tour operators, including Community Tours of Sian Ka'an, a cooperative formed in an attempt to keep profits - and residents - in the small Maya town of Muyil.", "hypothesis": "Costa Maya is still not well-known by tourists.", "label": "c"} +{"uid": "id_78", "premise": "Tourism in Mexico They appear out of nowhere like a heat-addled mirage on the flat, straight, mangrove-fringed road. The first sign of humanity in 40 miles, the tourists have ripened to pink under the glare of the tropical sun, with their legs wrapped around shiny red all-terrain vehicles buzzing down the asphalt like one giant invasive insect. It's a strange sight, all right. But it's eclipsed moments later by an even stranger one. Looming on the Caribbean just beyond the end of the road is the world's largest cruise ship, the Independence of the Seas, harboring a bounty of 3,811 passengers. Thanks to cruise ships like this one, Mexico's Costa Maya (not to be confused with the Riviera Maya farther north), set along a once mostly deserted stretch of the Yucatan Peninsula, is becoming one of the most visited, albeit least known, tourist regions in the nation. In 2006, just five years after the opening of the cruise ship facility here, 850,000 passengers sailed into port. By then, the once tiny fishing village of Mahahual had exploded from 80 souls dependent on the sea, to 3,500 dependent on tourism. 
The region begins about 80 miles south of Cancun and stretches from the vast Sian Ka'an Biosphere Reserve almost to the Belize border. It encompasses huge swaths of protected jungle, a number of lesser-known Maya archaeological sites, indigenous villages, pristine lagoons and top-notch diving. Plans call for low-rise, low-density development emphasizing small, eco-friendly hotels that cater to adventure seekers and cultural travelers. South of Tulum, a lengthy stretch of almost uninterrupted resort development comes to an abrupt halt at the northern edge of the Sian Ka'an Reserve. The UNESCO World Heritage site (whose name is Maya for \"where the sky is born\") is a 1.3-million-acre haven of tropical forest and wetlands. It's alive with more than 300 bird species, pig-like peccaries, monkeys, puma and jaguar. It harbors turquoise lagoons where orchids and bromeliads cling to mangroves whose spiny roots grasp the earth like gnarled fingers. Save for a few fishing lodges, Sian Ka'an isn't set up for overnight visitors. But day trips are organized by a number of tour operators, including Community Tours of Sian Ka'an, a cooperative formed in an attempt to keep profits - and residents - in the small Maya town of Muyil.", "hypothesis": "The UNESCO site has a larger area of tropical forest than any other area of Mexico.", "label": "n"} +{"uid": "id_79", "premise": "Tourism in Mexico They appear out of nowhere like a heat-addled mirage on the flat, straight, mangrove-fringed road. The first sign of humanity in 40 miles, the tourists have ripened to pink under the glare of the tropical sun, with their legs wrapped around shiny red all-terrain vehicles buzzing down the asphalt like one giant invasive insect. It's a strange sight, all right. But it's eclipsed moments later by an even stranger one. Looming on the Caribbean just beyond the end of the road is the world's largest cruise ship, the Independence of the Seas, harboring a bounty of 3,811 passengers. Thanks to cruise ships like this one, Mexico's Costa Maya (not to be confused with the Riviera Maya farther north), set along a once mostly deserted stretch of the Yucatan Peninsula, is becoming one of the most visited, albeit least known, tourist regions in the nation. In 2006, just five years after the opening of the cruise ship facility here, 850,000 passengers sailed into port. By then, the once tiny fishing village of Mahahual had exploded from 80 souls dependent on the sea, to 3,500 dependent on tourism. The region begins about 80 miles south of Cancun and stretches from the vast Sian Ka'an Biosphere Reserve almost to the Belize border. It encompasses huge swaths of protected jungle, a number of lesser-known Maya archaeological sites, indigenous villages, pristine lagoons and top-notch diving. Plans call for low-rise, low-density development emphasizing small, eco-friendly hotels that cater to adventure seekers and cultural travelers. South of Tulum, a lengthy stretch of almost uninterrupted resort development comes to an abrupt halt at the northern edge of the Sian Ka'an Reserve. The UNESCO World Heritage site (whose name is Maya for \"where the sky is born\") is a 1.3-million-acre haven of tropical forest and wetlands. It's alive with more than 300 bird species, pig-like peccaries, monkeys, puma and jaguar. It harbors turquoise lagoons where orchids and bromeliads cling to mangroves whose spiny roots grasp the earth like gnarled fingers. Save for a few fishing lodges, Sian Ka'an isn't set up for overnight visitors. 
But day trips are organized by a number of tour operators, including Community Tours of Sian Ka'an, a cooperative formed in an attempt to keep profits - and residents - in the small Maya town of Muyil.", "hypothesis": "It's difficult to find a hotel with vacancies in Sian Ka'an.", "label": "e"} +{"uid": "id_80", "premise": "Tourism in Mexico They appear out of nowhere like a heat-addled mirage on the flat, straight, mangrove-fringed road. The first sign of humanity in 40 miles, the tourists have ripened to pink under the glare of the tropical sun, with their legs wrapped around shiny red all-terrain vehicles buzzing down the asphalt like one giant invasive insect. It's a strange sight, all right. But it's eclipsed moments later by an even stranger one. Looming on the Caribbean just beyond the end of the road is the world's largest cruise ship, the Independence of the Seas, harboring a bounty of 3,811 passengers. Thanks to cruise ships like this one, Mexico's Costa Maya (not to be confused with the Riviera Maya farther north), set along a once mostly deserted stretch of the Yucatan Peninsula, is becoming one of the most visited, albeit least known, tourist regions in the nation. In 2006, just five years after the opening of the cruise ship facility here, 850,000 passengers sailed into port. By then, the once tiny fishing village of Mahahual had exploded from 80 souls dependent on the sea, to 3,500 dependent on tourism. The region begins about 80 miles south of Cancun and stretches from the vast Sian Ka'an Biosphere Reserve almost to the Belize border. It encompasses huge swaths of protected jungle, a number of lesser-known Maya archaeological sites, indigenous villages, pristine lagoons and top-notch diving. Plans call for low-rise, low-density development emphasizing small, eco-friendly hotels that cater to adventure seekers and cultural travelers. South of Tulum, a lengthy stretch of almost uninterrupted resort development comes to an abrupt halt at the northern edge of the Sian Ka'an Reserve. The UNESCO World Heritage site (whose name is Maya for \"where the sky is born\") is a 1.3-million-acre haven of tropical forest and wetlands. It's alive with more than 300 bird species, pig-like peccaries, monkeys, puma and jaguar. It harbors turquoise lagoons where orchids and bromeliads cling to mangroves whose spiny roots grasp the earth like gnarled fingers. Save for a few fishing lodges, Sian Ka'an isn't set up for overnight visitors. But day trips are organized by a number of tour operators, including Community Tours of Sian Ka'an, a cooperative formed in an attempt to keep profits - and residents - in the small Maya town of Muyil.", "hypothesis": "The Independence of the Seas is currently the largest ship in the Caribbean.", "label": "n"} +{"uid": "id_81", "premise": "Tourism in Mexico They appear out of nowhere like a heat-addled mirage on the flat, straight, mangrove-fringed road. The first sign of humanity in 40 miles, the tourists have ripened to pink under the glare of the tropical sun, with their legs wrapped around shiny red all-terrain vehicles buzzing down the asphalt like one giant invasive insect. It's a strange sight, all right. But it's eclipsed moments later by an even stranger one. Looming on the Caribbean just beyond the end of the road is the world's largest cruise ship, the Independence of the Seas, harboring a bounty of 3,811 passengers. 
Thanks to cruise ships like this one, Mexico's Costa Maya (not to be confused with the Riviera Maya farther north), set along a once mostly deserted stretch of the Yucatan Peninsula, is becoming one of the most visited, albeit least known, tourist regions in the nation. In 2006, just five years after the opening of the cruise ship facility here, 850,000 passengers sailed into port. By then, the once tiny fishing village of Mahahual had exploded from 80 souls dependent on the sea, to 3,500 dependent on tourism. The region begins about 80 miles south of Cancun and stretches from the vast Sian Ka'an Biosphere Reserve almost to the Belize border. It encompasses huge swaths of protected jungle, a number of lesser-known Maya archaeological sites, indigenous villages, pristine lagoons and top-notch diving. Plans call for low-rise, low-density development emphasizing small, eco-friendly hotels that cater to adventure seekers and cultural travelers. South of Tulum, a lengthy stretch of almost uninterrupted resort development comes to an abrupt halt at the northern edge of the Sian Ka'an Reserve. The UNESCO World Heritage site (whose name is Maya for \"where the sky is born\") is a 1.3-million-acre haven of tropical forest and wetlands. It's alive with more than 300 bird species, pig-like peccaries, monkeys, puma and jaguar. It harbors turquoise lagoons where orchids and bromeliads cling to mangroves whose spiny roots grasp the earth like gnarled fingers. Save for a few fishing lodges, Sian Ka'an isn't set up for overnight visitors. But day trips are organized by a number of tour operators, including Community Tours of Sian Ka'an, a cooperative formed in an attempt to keep profits - and residents - in the small Maya town of Muyil.", "hypothesis": "Costa Maya is a great place for tourists who enjoy diving.", "label": "e"} +{"uid": "id_82", "premise": "Tourism in Mexico They appear out of nowhere like a heat-addled mirage on the flat, straight, mangrove-fringed road. The first sign of humanity in 40 miles, the tourists have ripened to pink under the glare of the tropical sun, with their legs wrapped around shiny red all-terrain vehicles buzzing down the asphalt like one giant invasive insect. It's a strange sight, all right. But it's eclipsed moments later by an even stranger one. Looming on the Caribbean just beyond the end of the road is the world's largest cruise ship, the Independence of the Seas, harboring a bounty of 3,811 passengers. Thanks to cruise ships like this one, Mexico's Costa Maya (not to be confused with the Riviera Maya farther north), set along a once mostly deserted stretch of the Yucatan Peninsula, is becoming one of the most visited, albeit least known, tourist regions in the nation. In 2006, just five years after the opening of the cruise ship facility here, 850,000 passengers sailed into port. By then, the once tiny fishing village of Mahahual had exploded from 80 souls dependent on the sea, to 3,500 dependent on tourism. The region begins about 80 miles south of Cancun and stretches from the vast Sian Ka'an Biosphere Reserve almost to the Belize border. It encompasses huge swaths of protected jungle, a number of lesser-known Maya archaeological sites, indigenous villages, pristine lagoons and top-notch diving. Plans call for low-rise, low-density development emphasizing small, eco-friendly hotels that cater to adventure seekers and cultural travelers. 
South of Tulum, a lengthy stretch of almost uninterrupted resort development comes to an abrupt halt at the northern edge of the Sian Ka'an Reserve. The UNESCO World Heritage site (whose name is Maya for \"where the sky is born\") is a 1.3-million-acre haven of tropical forest and wetlands. It's alive with more than 300 bird species, pig-like peccaries, monkeys, puma and jaguar. It harbors turquoise lagoons where orchids and bromeliads cling to mangroves whose spiny roots grasp the earth like gnarled fingers. Save for a few fishing lodges, Sian Ka'an isn't set up for overnight visitors. But day trips are organized by a number of tour operators, including Community Tours of Sian Ka'an, a cooperative formed in an attempt to keep profits - and residents - in the small Maya town of Muyil.", "hypothesis": "Mahahual now has a population of 3,500.", "label": "n"} +{"uid": "id_83", "premise": "Tourism is big business. The annual profit and popularity of several top tourist attractions in the United Kingdom has been researched and presented by visitengland. com. Almost 30 million international visitors travel to London every year, marking the city as the most popular international travel destination in the world. In 2011, Londons most popular tourist attraction was the British Museum. The second most popular destination was Madame Tussauds. Outside of the capital, popular tourist destinations include Alton Towers and the Cadburys Factory. Tourist attractions contribute over two billion pounds to the UK economy and can be seen as one of the most profitable sectors. This information has not bypassed local authorities keen to bolster their income; some are spending hundreds of thousands of pounds on publicity drives. Whilst Essex County Council wont receive a penny from ticket sales, they are part-funding a new stadium in the hope that the increased spending by visitors will filter through to them in the form of business rates and local taxes.", "hypothesis": "In 2011 the British Museum received the most visits of any tourist attraction in the UK.", "label": "n"} +{"uid": "id_84", "premise": "Tourism is big business. The annual profit and popularity of several top tourist attractions in the United Kingdom has been researched and presented by visitengland. com. Almost 30 million international visitors travel to London every year, marking the city as the most popular international travel destination in the world. In 2011, Londons most popular tourist attraction was the British Museum. The second most popular destination was Madame Tussauds. Outside of the capital, popular tourist destinations include Alton Towers and the Cadburys Factory. Tourist attractions contribute over two billion pounds to the UK economy and can be seen as one of the most profitable sectors. This information has not bypassed local authorities keen to bolster their income; some are spending hundreds of thousands of pounds on publicity drives. Whilst Essex County Council wont receive a penny from ticket sales, they are part-funding a new stadium in the hope that the increased spending by visitors will filter through to them in the form of business rates and local taxes.", "hypothesis": "New York typically receives less than 30 million international visitors each year.", "label": "e"} +{"uid": "id_85", "premise": "Tourism is big business. The annual profit and popularity of several top tourist attractions in the United Kingdom has been researched and presented by visitengland. com. 
Almost 30 million international visitors travel to London every year, marking the city as the most popular international travel destination in the world. In 2011, Londons most popular tourist attraction was the British Museum. The second most popular destination was Madame Tussauds. Outside of the capital, popular tourist destinations include Alton Towers and the Cadburys Factory. Tourist attractions contribute over two billion pounds to the UK economy and can be seen as one of the most profitable sectors. This information has not bypassed local authorities keen to bolster their income; some are spending hundreds of thousands of pounds on publicity drives. Whilst Essex County Council wont receive a penny from ticket sales, they are part-funding a new stadium in the hope that the increased spending by visitors will filter through to them in the form of business rates and local taxes.", "hypothesis": "Whilst tourism brings in lots of money, the industry is one of the least profitable.", "label": "c"} +{"uid": "id_86", "premise": "Towns that have become commuter and second-home hotspots are valued for their housing stock, schools and unspoilt civic centres. It is now possible for working families to relocate away from cities without affecting their earning power. Commuting three days a week and working from home the rest has meant that many more people are willing to give up the city life and move to more rural areas to fulfil their dream of homes with gardens and cricket on the green. So many metropolitan dwellers have made the move that property prices in the more popular locations have become amongst the most expensive in the country.", "hypothesis": "New technology is the reason why it is possible for working families to relocate without affecting their earning power.", "label": "n"} +{"uid": "id_87", "premise": "Towns that have become commuter and second-home hotspots are valued for their housing stock, schools and unspoilt civic centres. It is now possible for working families to relocate away from cities without affecting their earning power. Commuting three days a week and working from home the rest has meant that many more people are willing to give up the city life and move to more rural areas to fulfil their dream of homes with gardens and cricket on the green. So many metropolitan dwellers have made the move that property prices in the more popular locations have become amongst the most expensive in the country.", "hypothesis": "An idea of an unspoilt civic centre could include, along with cricket on the green, a traditional high street with local shops.", "label": "e"} +{"uid": "id_88", "premise": "Towns that have become commuter and second-home hotspots are valued for their housing stock, schools and unspoilt civic centres. It is now possible for working families to relocate away from cities without affecting their earning power. Commuting three days a week and working from home the rest has meant that many more people are willing to give up the city life and move to more rural areas to fulfil their dream of homes with gardens and cricket on the green. 
So many metropolitan dwellers have made the move that property prices in the more popular locations have become amongst the most expensive in the country.", "hypothesis": "The only reason for these locations becoming so popular is only due to commuting even if for just part of the week.", "label": "c"} +{"uid": "id_89", "premise": "Trade and Early State Formation Bartering was a basic trade mechanism for many thousands of years; often sporadic and usually based on notions of reciprocity, it involved the mutual exchange of commodities or objects between individuals or groups. Redistribution of these goods through society lay in the hands of chiefs, religious leaders, or kin groups. Such redistribution was a basic element in chiefdoms. The change from redistribution to formal trade often based on regulated commerce that perhaps involved fixed prices and even currency was closely tied to growing political and social complexity and hence to the development of the state in the ancient world. In the 1970s, a number of archaeologists gave trade a primary role in the rise of ancient states. British archaeologist Colin Renfrew attributed the dramatic flowering of the Minoan civilization on Crete and through the Aegean to intensified trading contacts and to the impact of olive and vine cultivation on local communities. As agricultural economies became more diversified and local food supplies could be purchased both locally and over longer distances, a far-reaching economic interdependence resulted. Eventually, this led to redistribution systems for luxuries and basic commodities, systems that were organized and controlled by Minoan rulers from their palaces. As time went on, the self-sufficiency of communities was replaced by mutual dependence. Interest in long-distance trade brought about some cultural homogeneity from trade and gift exchange, and perhaps even led to piracy. Thus, intensified trade and interaction, and the flowering of specialist crafts, in a complex process of positive feedback, led to much more complex societies based on palaces, which were the economic hubs of a new Minoan civilization. Renfrew's model made some assumptions that are now discounted. For example, he argued that the introduction of domesticated vines and olives allowed a substantial expansion of land under cultivation and helped to power the emergence of complex society. Many archaeologists and paleobotanists now question this view, pointing out that the available evidence for cultivated vines and olives suggests that they were present only in the later Bronze Age. Trade, nevertheless, was probably one of many variables that led to the emergence of palace economies in Minoan Crete. American archaeologist William Rathje developed a hypothesis that considered an explosion in long-distance exchange a fundamental cause of Mayan civilization in Mesoamerica. He suggested that the lowland Mayan environment was deficient in many vital resources, among them obsidian, salt, stone for grinding maize, and many luxury materials. All these could be obtained from the nearby highlands, from the Valley of Mexico, and from other regions, if the necessary trading networks came into being. Such connections, and the trading expeditions to maintain them, could not be organized by individual villages. The Maya lived in a relatively uniform environment, where every community suffered from the same resource deficiencies. Thus, argued Rathje, long-distance trade networks were organized through local ceremonial centers and their leaders. 
In time, this organization became a state, and knowledge of its functioning was exportable, as were pottery, tropical bird feathers, specialized stone materials, and other local commodities. Rathje's hypothesis probably explains part of the complex process of Mayan state formation, but it suffers from the objection that suitable alternative raw materials can be found in the lowlands. It could be, too, that warfare became a competitive response to population growth and to the increasing scarcity of prime agricultural land, and that it played an important role in the emergence of the Mayan states.", "hypothesis": "The regulation of profits provided incentives for future trade.", "label": "e"} +{"uid": "id_90", "premise": "Trade and Early State Formation Bartering was a basic trade mechanism for many thousands of years; often sporadic and usually based on notions of reciprocity, it involved the mutual exchange of commodities or objects between individuals or groups. Redistribution of these goods through society lay in the hands of chiefs, religious leaders, or kin groups. Such redistribution was a basic element in chiefdoms. The change from redistribution to formal trade often based on regulated commerce that perhaps involved fixed prices and even currency was closely tied to growing political and social complexity and hence to the development of the state in the ancient world. In the 1970s, a number of archaeologists gave trade a primary role in the rise of ancient states. British archaeologist Colin Renfrew attributed the dramatic flowering of the Minoan civilization on Crete and through the Aegean to intensified trading contacts and to the impact of olive and vine cultivation on local communities. As agricultural economies became more diversified and local food supplies could be purchased both locally and over longer distances, a far-reaching economic interdependence resulted. Eventually, this led to redistribution systems for luxuries and basic commodities, systems that were organized and controlled by Minoan rulers from their palaces. As time went on, the self-sufficiency of communities was replaced by mutual dependence. Interest in long-distance trade brought about some cultural homogeneity from trade and gift exchange, and perhaps even led to piracy. Thus, intensified trade and interaction, and the flowering of specialist crafts, in a complex process of positive feedback, led to much more complex societies based on palaces, which were the economic hubs of a new Minoan civilization. Renfrew's model made some assumptions that are now discounted. For example, he argued that the introduction of domesticated vines and olives allowed a substantial expansion of land under cultivation and helped to power the emergence of complex society. Many archaeologists and paleobotanists now question this view, pointing out that the available evidence for cultivated vines and olives suggests that they were present only in the later Bronze Age. Trade, nevertheless, was probably one of many variables that led to the emergence of palace economies in Minoan Crete. American archaeologist William Rathje developed a hypothesis that considered an explosion in long-distance exchange a fundamental cause of Mayan civilization in Mesoamerica. He suggested that the lowland Mayan environment was deficient in many vital resources, among them obsidian, salt, stone for grinding maize, and many luxury materials. 
All these could be obtained from the nearby highlands, from the Valley of Mexico, and from other regions, if the necessary trading networks came into being. Such connections, and the trading expeditions to maintain them, could not be organized by individual villages. The Maya lived in a relatively uniform environment, where every community suffered from the same resource deficiencies. Thus, argued Rathje, long-distance trade networks were organized through local ceremonial centers and their leaders. In time, this organization became a state, and knowledge of its functioning was exportable, as were pottery, tropical bird feathers, specialized stone materials, and other local commodities. Rathje's hypothesis probably explains part of the complex process of Mayan state formation, but it suffers from the objection that suitable alternative raw materials can be found in the lowlands. It could be, too, that warfare became a competitive response to population growth and to the increasing scarcity of prime agricultural land, and that it played an important role in the emergence of the Mayan states.", "hypothesis": "Some markets had clearly established trading routes.", "label": "e"} +{"uid": "id_91", "premise": "Trade and Early State Formation Bartering was a basic trade mechanism for many thousands of years; often sporadic and usually based on notions of reciprocity, it involved the mutual exchange of commodities or objects between individuals or groups. Redistribution of these goods through society lay in the hands of chiefs, religious leaders, or kin groups. Such redistribution was a basic element in chiefdoms. The change from redistribution to formal trade often based on regulated commerce that perhaps involved fixed prices and even currency was closely tied to growing political and social complexity and hence to the development of the state in the ancient world. In the 1970s, a number of archaeologists gave trade a primary role in the rise of ancient states. British archaeologist Colin Renfrew attributed the dramatic flowering of the Minoan civilization on Crete and through the Aegean to intensified trading contacts and to the impact of olive and vine cultivation on local communities. As agricultural economies became more diversified and local food supplies could be purchased both locally and over longer distances, a far-reaching economic interdependence resulted. Eventually, this led to redistribution systems for luxuries and basic commodities, systems that were organized and controlled by Minoan rulers from their palaces. As time went on, the self-sufficiency of communities was replaced by mutual dependence. Interest in long-distance trade brought about some cultural homogeneity from trade and gift exchange, and perhaps even led to piracy. Thus, intensified trade and interaction, and the flowering of specialist crafts, in a complex process of positive feedback, led to much more complex societies based on palaces, which were the economic hubs of a new Minoan civilization. Renfrew's model made some assumptions that are now discounted. For example, he argued that the introduction of domesticated vines and olives allowed a substantial expansion of land under cultivation and helped to power the emergence of complex society. Many archaeologists and paleobotanists now question this view, pointing out that the available evidence for cultivated vines and olives suggests that they were present only in the later Bronze Age. 
Trade, nevertheless, was probably one of many variables that led to the emergence of palace economies in Minoan Crete. American archaeologist William Rathje developed a hypothesis that considered an explosion in long-distance exchange a fundamental cause of Mayan civilization in Mesoamerica. He suggested that the lowland Mayan environment was deficient in many vital resources, among them obsidian, salt, stone for grinding maize, and many luxury materials. All these could be obtained from the nearby highlands, from the Valley of Mexico, and from other regions, if the necessary trading networks came into being. Such connections, and the trading expeditions to maintain them, could not be organized by individual villages. The Maya lived in a relatively uniform environment, where every community suffered from the same resource deficiencies. Thus, argued Rathje, long-distance trade networks were organized through local ceremonial centers and their leaders. In time, this organization became a state, and knowledge of its functioning was exportable, as were pottery, tropical bird feathers, specialized stone materials, and other local commodities. Rathje's hypothesis probably explains part of the complex process of Mayan state formation, but it suffers from the objection that suitable alternative raw materials can be found in the lowlands. It could be, too, that warfare became a competitive response to population growth and to the increasing scarcity of prime agricultural land, and that it played an important role in the emergence of the Mayan states.", "hypothesis": "Political conditions were more important than demand for goods in the development of trade.", "label": "n"} +{"uid": "id_92", "premise": "Trade and Early State Formation Bartering was a basic trade mechanism for many thousands of years; often sporadic and usually based on notions of reciprocity, it involved the mutual exchange of commodities or objects between individuals or groups. Redistribution of these goods through society lay in the hands of chiefs, religious leaders, or kin groups. Such redistribution was a basic element in chiefdoms. The change from redistribution to formal trade often based on regulated commerce that perhaps involved fixed prices and even currency was closely tied to growing political and social complexity and hence to the development of the state in the ancient world. In the 1970s, a number of archaeologists gave trade a primary role in the rise of ancient states. British archaeologist Colin Renfrew attributed the dramatic flowering of the Minoan civilization on Crete and through the Aegean to intensified trading contacts and to the impact of olive and vine cultivation on local communities. As agricultural economies became more diversified and local food supplies could be purchased both locally and over longer distances, a far-reaching economic interdependence resulted. Eventually, this led to redistribution systems for luxuries and basic commodities, systems that were organized and controlled by Minoan rulers from their palaces. As time went on, the self-sufficiency of communities was replaced by mutual dependence. Interest in long-distance trade brought about some cultural homogeneity from trade and gift exchange, and perhaps even led to piracy. Thus, intensified trade and interaction, and the flowering of specialist crafts, in a complex process of positive feedback, led to much more complex societies based on palaces, which were the economic hubs of a new Minoan civilization. 
Renfrew's model made some assumptions that are now discounted. For example, he argued that the introduction of domesticated vines and olives allowed a substantial expansion of land under cultivation and helped to power the emergence of complex society. Many archaeologists and paleobotanists now question this view, pointing out that the available evidence for cultivated vines and olives suggests that they were present only in the later Bronze Age. Trade, nevertheless, was probably one of many variables that led to the emergence of palace economies in Minoan Crete. American archaeologist William Rathje developed a hypothesis that considered an explosion in long-distance exchange a fundamental cause of Mayan civilization in Mesoamerica. He suggested that the lowland Mayan environment was deficient in many vital resources, among them obsidian, salt, stone for grinding maize, and many luxury materials. All these could be obtained from the nearby highlands, from the Valley of Mexico, and from other regions, if the necessary trading networks came into being. Such connections, and the trading expeditions to maintain them, could not be organized by individual villages. The Maya lived in a relatively uniform environment, where every community suffered from the same resource deficiencies. Thus, argued Rathje, long-distance trade networks were organized through local ceremonial centers and their leaders. In time, this organization became a state, and knowledge of its functioning was exportable, as were pottery, tropical bird feathers, specialized stone materials, and other local commodities. Rathje's hypothesis probably explains part of the complex process of Mayan state formation, but it suffers from the objection that suitable alternative raw materials can be found in the lowlands. It could be, too, that warfare became a competitive response to population growth and to the increasing scarcity of prime agricultural land, and that it played an important role in the emergence of the Mayan states.", "hypothesis": "The spread of trade was influenced by many variables, none of which was the main cause.", "label": "e"} +{"uid": "id_93", "premise": "Traditional Farming System in Africa A. By tradition land in Luapula is not owned by individuals, but as in many other parts of Africa is allocated by the headman or headwoman of a village to people of either sex, according to need. Since land is generally prepared by hand, one ulupwa cannot take on a very large area; in this sense land has not been a limiting resource over large parts of the province. The situation has already changed near the main townships, and there has long been a scarcity of land for cultivation in the Valley. In these areas registered ownership patterns are becoming prevalent. B. Most of the traditional cropping in Luapula, as in the Bemba area to the east, is based on citemene, a system whereby crops are grown on the ashes of tree branches. As a rule, entire trees are not felled, but are pollarded so that they can regenerate. Branches are cut over an area of varying size early in the dry season, and stacked to dry over a rough circle about a fifth to a tenth of the pollarded area. The wood is fired before the rains and in the first year planted with the African cereal finger millet (Eleusine coracana). C. 
During the second season, and possibly for a few seasons more the area is planted to variously mixed combinations of annuals such as maize, pumpkins (Telfiria occidentalis) and other cucurbits, sweet potatoes, groundnuts, Phaseolus beans and various leafy vegetables, grown with a certain amount of rotation. The diverse sequence ends with vegetable cassava, which is often planted into the developing last-but-one crop as a relay. D. Richards (1969) observed that the practice of citemene entails a definite division of labour between men and women. A man stakes out a plot in an unobtrusive manner, since it is considered provocative towards one's neighbours to mark boundaries in an explicit way. The dangerous work of felling branches is the men's province, and involves much pride. Branches are stacke by the women, and fired by the men. Formerly women and men cooperated in the planting work, but the harvesting was always done by the women. At the beginning of thecycle little weeding is necessary, since the firing of the branches effectively destroys weeds. As the cycle progresses weeds increase and nutrients eventually become depleted to a point where further effort with annual crops is judged to be not worthwhile: at this point the cassava is planted, since it can produce a crop on nearly exhausted soil. Thereafter the plot is abandoned, and a new area pollarded for the next citemene cycle. E. When forest is not available - this is increasingly the case nowadays - various ridging systems (ibala) are built on small areas, to be planted with combinations of maize, beans, groundnuts and sweet potatoes, usually relayed with cassava. These plots are usually tended by women, and provide subsistence. Where their roots have year-round access to water tables mango, guava and oil-palm trees often grow around houses, forming a traditional agroforestry system. In season some of the fruit is sold by the roadside or in local markets. F. The margins of dambos are sometimes planted to local varieties of rice during the rainy season, and areas adjacent to vegetables irrigated with water from the dambo during the dry season. The extent of cultivation is very limited, no doubt because the growing of crops under dambo conditions calls for a great deal of skill. Near towns some of the vegetable produce is sold in local markets. G. Fishing has long provided a much needed protein supplement to the diet of Luapulans, as well as being the one substantial source of cash. Much fish is dried for sale to areas away from the main waterways. The Mweru and Bangweulu Lake Basins are the main areas of year-round fishing, but the Luapula River is also exploited during the latter part of the dry season. Several previously abundant and desirable species, such as the Luapula salmon or mpumbu (Labeoaltivelis) and pale (Sarotherodon machochir) have all but disappeared from Lake Mweru, apparently due to mismanagement. H. Fishing has always been a far more remunerative activity in Luapula that crop husbandry. A fisherman may earn more in a week than a bean or maize grower in a whole season. I sometimes heard claims that the relatively high earnings to be obtained from fishing induced an easy come, easy go outlook among Luapulan men. On the other hand, someone who secures good but erratic earnings may feel that their investment in an economically productive activity is not worthwhile because Luapulans fail to cooperate well in such activities. 
Besides, a fisherman with spare cash will find little in the way of working equipment to spend his money on. Better spend one's money in the bars and have a good time! I. Only small numbers of cattle or oxen are kept in the province owing to the prevalence of the tse-tse fly. For the few herds, the dambos provide subsistence grazing during the dry season. The absence of animal draft power greatly limits peoples' ability to plough and cultivate land: a married couple can rarely manage to prepare by hand-hoeing. Most people keep freely roaming chickens and goats. These act as a reserve for bartering, but may also be occasionally slaughtered for ceremonies or for entertaining important visitors. These animals are not a regular part of most peoples' diet. J. Citemene has been an ingenious system for providing people with seasonal production of high quality cereals and vegetables in regions of acid, heavily leached soils. Nutritionally, the most serious deficiency was that of protein. This could at times be alleviated when fish was available, provided that cultivators lived near the Valley and could find the means of bartering for dried fish. The citemene/fishing system was well adapted to the ecology of the miombo regions and sustainable for long periods, but only as long as human population densities stayed at low levels. Although population densities are still much lower than in several countries of South-East Asia, neither the fisheries nor the forests and woodlands of Luapula are capable, with unmodified traditional practices, of supporting the people in a sustainable manner. Overall, people must learn to intensify and diversify their productive systems while yet ensuring that these systems will remain productive in the future, when even more people will need food. Increasing overall production offood, though a vast challenge in itself, will not be enough, however. At the same time storage and distribution systems must allow everyone access to at least a moderate share of the total.", "hypothesis": "People rarely use animals to cultivate land.", "label": "e"} +{"uid": "id_94", "premise": "Traditional Farming System in Africa A. By tradition land in Luapula is not owned by individuals, but as in many other parts of Africa is allocated by the headman or headwoman of a village to people of either sex, according to need. Since land is generally prepared by hand, one ulupwa cannot take on a very large area; in this sense land has not been a limiting resource over large parts of the province. The situation has already changed near the main townships, and there has long been a scarcity of land for cultivation in the Valley. In these areas registered ownership patterns are becoming prevalent. B. Most of the traditional cropping in Luapula, as in the Bemba area to the east, is based on citemene, a system whereby crops are grown on the ashes of tree branches. As a rule, entire trees are not felled, but are pollarded so that they can regenerate. Branches are cut over an area of varying size early in the dry season, and stacked to dry over a rough circle about a fifth to a tenth of the pollarded area. The wood is fired before the rains and in the first year planted with the African cereal finger millet (Eleusine coracana). C. 
During the second season, and possibly for a few seasons more the area is planted to variously mixed combinations of annuals such as maize, pumpkins (Telfiria occidentalis) and other cucurbits, sweet potatoes, groundnuts, Phaseolus beans and various leafy vegetables, grown with a certain amount of rotation. The diverse sequence ends with vegetable cassava, which is often planted into the developing last-but-one crop as a relay. D. Richards (1969) observed that the practice of citemene entails a definite division of labour between men and women. A man stakes out a plot in an unobtrusive manner, since it is considered provocative towards one's neighbours to mark boundaries in an explicit way. The dangerous work of felling branches is the men's province, and involves much pride. Branches are stacke by the women, and fired by the men. Formerly women and men cooperated in the planting work, but the harvesting was always done by the women. At the beginning of thecycle little weeding is necessary, since the firing of the branches effectively destroys weeds. As the cycle progresses weeds increase and nutrients eventually become depleted to a point where further effort with annual crops is judged to be not worthwhile: at this point the cassava is planted, since it can produce a crop on nearly exhausted soil. Thereafter the plot is abandoned, and a new area pollarded for the next citemene cycle. E. When forest is not available - this is increasingly the case nowadays - various ridging systems (ibala) are built on small areas, to be planted with combinations of maize, beans, groundnuts and sweet potatoes, usually relayed with cassava. These plots are usually tended by women, and provide subsistence. Where their roots have year-round access to water tables mango, guava and oil-palm trees often grow around houses, forming a traditional agroforestry system. In season some of the fruit is sold by the roadside or in local markets. F. The margins of dambos are sometimes planted to local varieties of rice during the rainy season, and areas adjacent to vegetables irrigated with water from the dambo during the dry season. The extent of cultivation is very limited, no doubt because the growing of crops under dambo conditions calls for a great deal of skill. Near towns some of the vegetable produce is sold in local markets. G. Fishing has long provided a much needed protein supplement to the diet of Luapulans, as well as being the one substantial source of cash. Much fish is dried for sale to areas away from the main waterways. The Mweru and Bangweulu Lake Basins are the main areas of year-round fishing, but the Luapula River is also exploited during the latter part of the dry season. Several previously abundant and desirable species, such as the Luapula salmon or mpumbu (Labeoaltivelis) and pale (Sarotherodon machochir) have all but disappeared from Lake Mweru, apparently due to mismanagement. H. Fishing has always been a far more remunerative activity in Luapula that crop husbandry. A fisherman may earn more in a week than a bean or maize grower in a whole season. I sometimes heard claims that the relatively high earnings to be obtained from fishing induced an easy come, easy go outlook among Luapulan men. On the other hand, someone who secures good but erratic earnings may feel that their investment in an economically productive activity is not worthwhile because Luapulans fail to cooperate well in such activities. 
Besides, a fisherman with spare cash will find little in the way of working equipment to spend his money on. Better spend one's money in the bars and have a good time! I. Only small numbers of cattle or oxen are kept in the province owing to the prevalence of the tse-tse fly. For the few herds, the dambos provide subsistence grazing during the dry season. The absence of animal draft power greatly limits peoples' ability to plough and cultivate land: a married couple can rarely manage to prepare by hand-hoeing. Most people keep freely roaming chickens and goats. These act as a reserve for bartering, but may also be occasionally slaughtered for ceremonies or for entertaining important visitors. These animals are not a regular part of most peoples' diet. J. Citemene has been an ingenious system for providing people with seasonal production of high quality cereals and vegetables in regions of acid, heavily leached soils. Nutritionally, the most serious deficiency was that of protein. This could at times be alleviated when fish was available, provided that cultivators lived near the Valley and could find the means of bartering for dried fish. The citemene/fishing system was well adapted to the ecology of the miombo regions and sustainable for long periods, but only as long as human population densities stayed at low levels. Although population densities are still much lower than in several countries of South-East Asia, neither the fisheries nor the forests and woodlands of Luapula are capable, with unmodified traditional practices, of supporting the people in a sustainable manner. Overall, people must learn to intensify and diversify their productive systems while yet ensuring that these systems will remain productive in the future, when even more people will need food. Increasing overall production offood, though a vast challenge in itself, will not be enough, however. At the same time storage and distribution systems must allow everyone access to at least a moderate share of the total.", "hypothesis": "Though citemene has been a sophisticated system, it could not provide enough protein.", "label": "e"} +{"uid": "id_95", "premise": "Traditional Farming System in Africa A. By tradition land in Luapula is not owned by individuals, but as in many other parts of Africa is allocated by the headman or headwoman of a village to people of either sex, according to need. Since land is generally prepared by hand, one ulupwa cannot take on a very large area; in this sense land has not been a limiting resource over large parts of the province. The situation has already changed near the main townships, and there has long been a scarcity of land for cultivation in the Valley. In these areas registered ownership patterns are becoming prevalent. B. Most of the traditional cropping in Luapula, as in the Bemba area to the east, is based on citemene, a system whereby crops are grown on the ashes of tree branches. As a rule, entire trees are not felled, but are pollarded so that they can regenerate. Branches are cut over an area of varying size early in the dry season, and stacked to dry over a rough circle about a fifth to a tenth of the pollarded area. The wood is fired before the rains and in the first year planted with the African cereal finger millet (Eleusine coracana). C. 
During the second season, and possibly for a few seasons more the area is planted to variously mixed combinations of annuals such as maize, pumpkins (Telfiria occidentalis) and other cucurbits, sweet potatoes, groundnuts, Phaseolus beans and various leafy vegetables, grown with a certain amount of rotation. The diverse sequence ends with vegetable cassava, which is often planted into the developing last-but-one crop as a relay. D. Richards (1969) observed that the practice of citemene entails a definite division of labour between men and women. A man stakes out a plot in an unobtrusive manner, since it is considered provocative towards one's neighbours to mark boundaries in an explicit way. The dangerous work of felling branches is the men's province, and involves much pride. Branches are stacke by the women, and fired by the men. Formerly women and men cooperated in the planting work, but the harvesting was always done by the women. At the beginning of thecycle little weeding is necessary, since the firing of the branches effectively destroys weeds. As the cycle progresses weeds increase and nutrients eventually become depleted to a point where further effort with annual crops is judged to be not worthwhile: at this point the cassava is planted, since it can produce a crop on nearly exhausted soil. Thereafter the plot is abandoned, and a new area pollarded for the next citemene cycle. E. When forest is not available - this is increasingly the case nowadays - various ridging systems (ibala) are built on small areas, to be planted with combinations of maize, beans, groundnuts and sweet potatoes, usually relayed with cassava. These plots are usually tended by women, and provide subsistence. Where their roots have year-round access to water tables mango, guava and oil-palm trees often grow around houses, forming a traditional agroforestry system. In season some of the fruit is sold by the roadside or in local markets. F. The margins of dambos are sometimes planted to local varieties of rice during the rainy season, and areas adjacent to vegetables irrigated with water from the dambo during the dry season. The extent of cultivation is very limited, no doubt because the growing of crops under dambo conditions calls for a great deal of skill. Near towns some of the vegetable produce is sold in local markets. G. Fishing has long provided a much needed protein supplement to the diet of Luapulans, as well as being the one substantial source of cash. Much fish is dried for sale to areas away from the main waterways. The Mweru and Bangweulu Lake Basins are the main areas of year-round fishing, but the Luapula River is also exploited during the latter part of the dry season. Several previously abundant and desirable species, such as the Luapula salmon or mpumbu (Labeoaltivelis) and pale (Sarotherodon machochir) have all but disappeared from Lake Mweru, apparently due to mismanagement. H. Fishing has always been a far more remunerative activity in Luapula that crop husbandry. A fisherman may earn more in a week than a bean or maize grower in a whole season. I sometimes heard claims that the relatively high earnings to be obtained from fishing induced an easy come, easy go outlook among Luapulan men. On the other hand, someone who secures good but erratic earnings may feel that their investment in an economically productive activity is not worthwhile because Luapulans fail to cooperate well in such activities. 
Besides, a fisherman with spare cash will find little in the way of working equipment to spend his money on. Better spend one's money in the bars and have a good time! I. Only small numbers of cattle or oxen are kept in the province owing to the prevalence of the tse-tse fly. For the few herds, the dambos provide subsistence grazing during the dry season. The absence of animal draft power greatly limits peoples' ability to plough and cultivate land: a married couple can rarely manage to prepare by hand-hoeing. Most people keep freely roaming chickens and goats. These act as a reserve for bartering, but may also be occasionally slaughtered for ceremonies or for entertaining important visitors. These animals are not a regular part of most peoples' diet. J. Citemene has been an ingenious system for providing people with seasonal production of high quality cereals and vegetables in regions of acid, heavily leached soils. Nutritionally, the most serious deficiency was that of protein. This could at times be alleviated when fish was available, provided that cultivators lived near the Valley and could find the means of bartering for dried fish. The citemene/fishing system was well adapted to the ecology of the miombo regions and sustainable for long periods, but only as long as human population densities stayed at low levels. Although population densities are still much lower than in several countries of South-East Asia, neither the fisheries nor the forests and woodlands of Luapula are capable, with unmodified traditional practices, of supporting the people in a sustainable manner. Overall, people must learn to intensify and diversify their productive systems while yet ensuring that these systems will remain productive in the future, when even more people will need food. Increasing overall production offood, though a vast challenge in itself, will not be enough, however. At the same time storage and distribution systems must allow everyone access to at least a moderate share of the total.", "hypothesis": "When it is a busy time, children usually took part in the labor force.", "label": "n"} +{"uid": "id_96", "premise": "Traditional Farming System in Africa A. By tradition land in Luapula is not owned by individuals, but as in many other parts of Africa is allocated by the headman or headwoman of a village to people of either sex, according to need. Since land is generally prepared by hand, one ulupwa cannot take on a very large area; in this sense land has not been a limiting resource over large parts of the province. The situation has already changed near the main townships, and there has long been a scarcity of land for cultivation in the Valley. In these areas registered ownership patterns are becoming prevalent. B. Most of the traditional cropping in Luapula, as in the Bemba area to the east, is based on citemene, a system whereby crops are grown on the ashes of tree branches. As a rule, entire trees are not felled, but are pollarded so that they can regenerate. Branches are cut over an area of varying size early in the dry season, and stacked to dry over a rough circle about a fifth to a tenth of the pollarded area. The wood is fired before the rains and in the first year planted with the African cereal finger millet (Eleusine coracana). C. 
During the second season, and possibly for a few seasons more the area is planted to variously mixed combinations of annuals such as maize, pumpkins (Telfiria occidentalis) and other cucurbits, sweet potatoes, groundnuts, Phaseolus beans and various leafy vegetables, grown with a certain amount of rotation. The diverse sequence ends with vegetable cassava, which is often planted into the developing last-but-one crop as a relay. D. Richards (1969) observed that the practice of citemene entails a definite division of labour between men and women. A man stakes out a plot in an unobtrusive manner, since it is considered provocative towards one's neighbours to mark boundaries in an explicit way. The dangerous work of felling branches is the men's province, and involves much pride. Branches are stacke by the women, and fired by the men. Formerly women and men cooperated in the planting work, but the harvesting was always done by the women. At the beginning of thecycle little weeding is necessary, since the firing of the branches effectively destroys weeds. As the cycle progresses weeds increase and nutrients eventually become depleted to a point where further effort with annual crops is judged to be not worthwhile: at this point the cassava is planted, since it can produce a crop on nearly exhausted soil. Thereafter the plot is abandoned, and a new area pollarded for the next citemene cycle. E. When forest is not available - this is increasingly the case nowadays - various ridging systems (ibala) are built on small areas, to be planted with combinations of maize, beans, groundnuts and sweet potatoes, usually relayed with cassava. These plots are usually tended by women, and provide subsistence. Where their roots have year-round access to water tables mango, guava and oil-palm trees often grow around houses, forming a traditional agroforestry system. In season some of the fruit is sold by the roadside or in local markets. F. The margins of dambos are sometimes planted to local varieties of rice during the rainy season, and areas adjacent to vegetables irrigated with water from the dambo during the dry season. The extent of cultivation is very limited, no doubt because the growing of crops under dambo conditions calls for a great deal of skill. Near towns some of the vegetable produce is sold in local markets. G. Fishing has long provided a much needed protein supplement to the diet of Luapulans, as well as being the one substantial source of cash. Much fish is dried for sale to areas away from the main waterways. The Mweru and Bangweulu Lake Basins are the main areas of year-round fishing, but the Luapula River is also exploited during the latter part of the dry season. Several previously abundant and desirable species, such as the Luapula salmon or mpumbu (Labeoaltivelis) and pale (Sarotherodon machochir) have all but disappeared from Lake Mweru, apparently due to mismanagement. H. Fishing has always been a far more remunerative activity in Luapula that crop husbandry. A fisherman may earn more in a week than a bean or maize grower in a whole season. I sometimes heard claims that the relatively high earnings to be obtained from fishing induced an easy come, easy go outlook among Luapulan men. On the other hand, someone who secures good but erratic earnings may feel that their investment in an economically productive activity is not worthwhile because Luapulans fail to cooperate well in such activities. 
Besides, a fisherman with spare cash will find little in the way of working equipment to spend his money on. Better spend one's money in the bars and have a good time! I. Only small numbers of cattle or oxen are kept in the province owing to the prevalence of the tse-tse fly. For the few herds, the dambos provide subsistence grazing during the dry season. The absence of animal draft power greatly limits peoples' ability to plough and cultivate land: a married couple can rarely manage to prepare by hand-hoeing. Most people keep freely roaming chickens and goats. These act as a reserve for bartering, but may also be occasionally slaughtered for ceremonies or for entertaining important visitors. These animals are not a regular part of most peoples' diet. J. Citemene has been an ingenious system for providing people with seasonal production of high quality cereals and vegetables in regions of acid, heavily leached soils. Nutritionally, the most serious deficiency was that of protein. This could at times be alleviated when fish was available, provided that cultivators lived near the Valley and could find the means of bartering for dried fish. The citemene/fishing system was well adapted to the ecology of the miombo regions and sustainable for long periods, but only as long as human population densities stayed at low levels. Although population densities are still much lower than in several countries of South-East Asia, neither the fisheries nor the forests and woodlands of Luapula are capable, with unmodified traditional practices, of supporting the people in a sustainable manner. Overall, people must learn to intensify and diversify their productive systems while yet ensuring that these systems will remain productive in the future, when even more people will need food. Increasing overall production of food, though a vast challenge in itself, will not be enough, however. At the same time storage and distribution systems must allow everyone access to at least a moderate share of the total.", "hypothesis": "The local residents eat goats on a regular time.", "label": "e"} +{"uid": "id_97", "premise": "Traditionally medicine was the science of curing illness with treatments. For thousands of years people would have used plants and would have turned to priests for cures. In more recent times illness has been attributed less to the intervention of gods or magic and instead to natural causes. Medicine today is as much concerned with prevention as cure. Doctors use treatments of many types, including radiation and vaccination, both of which were unknown until very recent times. Other treatments have been known about and practised for centuries. Muslim doctors were skilled surgeons and treated pain with opium. When Europeans first reached the Americas they found healers who used many plants to cure illnesses. The Europeans adopted many of these treatments and some are still effective and in use today.", "hypothesis": "Vaccination is a relatively recent discovery.", "label": "e"} +{"uid": "id_98", "premise": "Traditionally medicine was the science of curing illness with treatments. For thousands of years people would have used plants and would have turned to priests for cures. In more recent times illness has been attributed less to the intervention of gods or magic and instead to natural causes. Medicine today is as much concerned with prevention as cure. Doctors use treatments of many types, including radiation and vaccination, both of which were unknown until very recent times. 
Other treatments have been known about and practised for centuries. Muslim doctors were skilled surgeons and treated pain with opium. When Europeans first reached the Americas they found healers who used many plants to cure illnesses. The Europeans adopted many of these treatments and some are still effective and in use today.", "hypothesis": "Practitioners of modern medicine make use of many techniques and technologies.", "label": "e"} +{"uid": "id_99", "premise": "Traditionally medicine was the science of curing illness with treatments. For thousands of years people would have used plants and would have turned to priests for cures. In more recent times illness has been attributed less to the intervention of gods or magic and instead to natural causes. Medicine today is as much concerned with prevention as cure. Doctors use treatments of many types, including radiation and vaccination, both of which were unknown until very recent times. Other treatments have been known about and practised for centuries. Muslim doctors were skilled surgeons and treated pain with opium. When Europeans first reached the Americas they found healers who used many plants to cure illnesses. The Europeans adopted many of these treatments and some are still effective and in use today.", "hypothesis": "The author of the passage believes that prevention is better than cure.", "label": "n"} +{"uid": "id_100", "premise": "Traditionally medicine was the science of curing illness with treatments. For thousands of years people would have used plants and would have turned to priests for cures. In more recent times illness has been attributed less to the intervention of gods or magic and instead to natural causes. Medicine today is as much concerned with prevention as cure. Doctors use treatments of many types, including radiation and vaccination, both of which were unknown until very recent times. Other treatments have been known about and practised for centuries. Muslim doctors were skilled surgeons and treated pain with opium. When Europeans first reached the Americas they found healers who used many plants to cure illnesses. The Europeans adopted many of these treatments and some are still effective and in use today.", "hypothesis": "Medicine is a science that owes its success to modern treatments.", "label": "c"} +{"uid": "id_101", "premise": "Traditionally medicine was the science of curing illness with treatments. For thousands of years people would have used plants and would have turned to priests for cures. In more recent times illness has been attributed less to the intervention of gods or magic and instead to natural causes. Medicine today is as much concerned with prevention as cure. Doctors use treatments of many types, including radiation and vaccination, both of which were unknown until very recent times. Other treatments have been known about and practised for centuries. Muslim doctors were skilled surgeons and treated pain with opium. When Europeans first reached the Americas they found healers who used many plants to cure illnesses. The Europeans adopted many of these treatments and some are still effective and in use today.", "hypothesis": "Modern medicine is the science of curing illness.", "label": "c"} +{"uid": "id_102", "premise": "Traditionally uniforms were and for some industries still are manufactured to protect the worker. 
When they were first designed, it is also likely that all uniforms made symbolic sense - those for the military, for example, were originally intended to impress and even terrify the enemy; other uniforms denoted a hierarchy - chefs wore white because they worked with flour, but the main chef wore a black hat to show he supervised. The last 30 years, however, have seen an increasing emphasis on their role in projecting the image of an organisation and in uniting the workforce into a homogeneous unit particularly in customer facing\" industries, and especially in financial services and retailing. From uniforms and workwear has emerged corporate clothing. \"The people you employ are your ambassadors, \" says Peter Griffin, managing director of a major retailer in the UK. \"What they say, how they look, and how they behave is terribly important. \" The result is a new way of looking at corporate workwear. From being a simple means of identifying who is a member of staff, the uniform is emerging as a new channel of marketing communication. Truly effective marketing through visual cues such as uniforms is a subtle art, however. Wittingly or unwittingly, how we look sends all sorts of powerful subliminal messages to other people. Dark colours give an aura of authority while lighter pastel shades suggest approachability. Certain dress style creates a sense of conservatism, others a sense of openness to new ideas. Neatness can suggest efficiency but, if it is overdone, it can spill over and indicate an obsession with power. \"If the company is selling quality, then it must have quality uniforms. If it is selling style, its uniforms must be stylish. If it wants to appear innovative, everybody cant look exactly the same. Subliminally we see all these things, \" says Lynn Elvy, a director of image consultants House of Colour. But translating corporate philosophies into the right mix of colour, style, degree of branding and uniformity can be a fraught process. And it is not always successful. According to Company Clothing magazine, there are 1000 companies supplying the workwear and corporate clothing market. Of these, 22 account for 85% of total sales - 380 million in 1994. A successful uniform needs to balance two key sets of needs. On the one hand, no uniform will work if staff feel uncomfortable or ugly. Giving the wearers a choice has become a key element in the way corporate clothing is introduced and managed. On the other, it is pointless if the look doesnt express the businesss marketing strategy. The greatest challenge in this respect is time. When it comes to human perceptions, first impressions count. Customers will size up the way staff look in just a few seconds, and that few seconds will colour their attitudes from then on. Those few seconds can beReading so important that big companies are prepared to invest years, and millions of pounds, getting them right. In addition, some uniform companies also offer rental services. \"There will be an increasing specialisation in the marketplace, \" predicts Mr Blyth, Customer Services Manager of a large UK bank. The past two or three years have seen consolidation. Increasingly, the big suppliers are becoming managing agents, which means they offer a total service to put together the whole complex operation of a companys corporate clothing package - which includes reliable sourcing, managing the inventory, budget control and distribution to either central locations or to each staff member individually. 
Huge investments have been made in new systems, information technology and amassing quality assurance accreditations. Corporate clothing does have potential for further growth. Some banks have yet to introduce a full corporate look; police forces are researching a complete new look for the 21st century. And many employees now welcome a company wardrobe. A recent survey of staff found that 90 per cent welcomed having clothing which reflected the corporate identity.", "hypothesis": "Clothing companies are planning to offer financial services in the future.", "label": "c"} +{"uid": "id_103", "premise": "Traditionally uniforms were and for some industries still are manufactured to protect the worker. When they were first designed, it is also likely that all uniforms made symbolic sense - those for the military, for example, were originally intended to impress and even terrify the enemy; other uniforms denoted a hierarchy - chefs wore white because they worked with flour, but the main chef wore a black hat to show he supervised. The last 30 years, however, have seen an increasing emphasis on their role in projecting the image of an organisation and in uniting the workforce into a homogeneous unit particularly in customer facing\" industries, and especially in financial services and retailing. From uniforms and workwear has emerged corporate clothing. \"The people you employ are your ambassadors, \" says Peter Griffin, managing director of a major retailer in the UK. \"What they say, how they look, and how they behave is terribly important. \" The result is a new way of looking at corporate workwear. From being a simple means of identifying who is a member of staff, the uniform is emerging as a new channel of marketing communication. Truly effective marketing through visual cues such as uniforms is a subtle art, however. Wittingly or unwittingly, how we look sends all sorts of powerful subliminal messages to other people. Dark colours give an aura of authority while lighter pastel shades suggest approachability. Certain dress style creates a sense of conservatism, others a sense of openness to new ideas. Neatness can suggest efficiency but, if it is overdone, it can spill over and indicate an obsession with power. \"If the company is selling quality, then it must have quality uniforms. If it is selling style, its uniforms must be stylish. If it wants to appear innovative, everybody cant look exactly the same. Subliminally we see all these things, \" says Lynn Elvy, a director of image consultants House of Colour. But translating corporate philosophies into the right mix of colour, style, degree of branding and uniformity can be a fraught process. And it is not always successful. According to Company Clothing magazine, there are 1000 companies supplying the workwear and corporate clothing market. Of these, 22 account for 85% of total sales - 380 million in 1994. A successful uniform needs to balance two key sets of needs. On the one hand, no uniform will work if staff feel uncomfortable or ugly. Giving the wearers a choice has become a key element in the way corporate clothing is introduced and managed. On the other, it is pointless if the look doesnt express the businesss marketing strategy. The greatest challenge in this respect is time. When it comes to human perceptions, first impressions count. Customers will size up the way staff look in just a few seconds, and that few seconds will colour their attitudes from then on. 
Those few seconds can beReading so important that big companies are prepared to invest years, and millions of pounds, getting them right. In addition, some uniform companies also offer rental services. \"There will be an increasing specialisation in the marketplace, \" predicts Mr Blyth, Customer Services Manager of a large UK bank. The past two or three years have seen consolidation. Increasingly, the big suppliers are becoming managing agents, which means they offer a total service to put together the whole complex operation of a companys corporate clothing package - which includes reliable sourcing, managing the inventory, budget control and distribution to either central locations or to each staff member individually. Huge investments have been made in new systems, information technology and amassing quality assurance accreditations. Corporate clothing does have potential for further growth. Some banks have yet to introduce a full corporate look; police forces are researching a complete new look for the 21st century. And many employees now welcome a company wardrobe. A recent survey of staff found that 90 per cent welcomed having clothing which reflected the corporate identity.", "hypothesis": "Uniforms are best selected by marketing consultants.", "label": "n"} +{"uid": "id_104", "premise": "Traditionally uniforms were and for some industries still are manufactured to protect the worker. When they were first designed, it is also likely that all uniforms made symbolic sense - those for the military, for example, were originally intended to impress and even terrify the enemy; other uniforms denoted a hierarchy - chefs wore white because they worked with flour, but the main chef wore a black hat to show he supervised. The last 30 years, however, have seen an increasing emphasis on their role in projecting the image of an organisation and in uniting the workforce into a homogeneous unit particularly in customer facing\" industries, and especially in financial services and retailing. From uniforms and workwear has emerged corporate clothing. \"The people you employ are your ambassadors, \" says Peter Griffin, managing director of a major retailer in the UK. \"What they say, how they look, and how they behave is terribly important. \" The result is a new way of looking at corporate workwear. From being a simple means of identifying who is a member of staff, the uniform is emerging as a new channel of marketing communication. Truly effective marketing through visual cues such as uniforms is a subtle art, however. Wittingly or unwittingly, how we look sends all sorts of powerful subliminal messages to other people. Dark colours give an aura of authority while lighter pastel shades suggest approachability. Certain dress style creates a sense of conservatism, others a sense of openness to new ideas. Neatness can suggest efficiency but, if it is overdone, it can spill over and indicate an obsession with power. \"If the company is selling quality, then it must have quality uniforms. If it is selling style, its uniforms must be stylish. If it wants to appear innovative, everybody cant look exactly the same. Subliminally we see all these things, \" says Lynn Elvy, a director of image consultants House of Colour. But translating corporate philosophies into the right mix of colour, style, degree of branding and uniformity can be a fraught process. And it is not always successful. According to Company Clothing magazine, there are 1000 companies supplying the workwear and corporate clothing market. 
Of these, 22 account for 85% of total sales - 380 million in 1994. A successful uniform needs to balance two key sets of needs. On the one hand, no uniform will work if staff feel uncomfortable or ugly. Giving the wearers a choice has become a key element in the way corporate clothing is introduced and managed. On the other, it is pointless if the look doesnt express the businesss marketing strategy. The greatest challenge in this respect is time. When it comes to human perceptions, first impressions count. Customers will size up the way staff look in just a few seconds, and that few seconds will colour their attitudes from then on. Those few seconds can beReading so important that big companies are prepared to invest years, and millions of pounds, getting them right. In addition, some uniform companies also offer rental services. \"There will be an increasing specialisation in the marketplace, \" predicts Mr Blyth, Customer Services Manager of a large UK bank. The past two or three years have seen consolidation. Increasingly, the big suppliers are becoming managing agents, which means they offer a total service to put together the whole complex operation of a companys corporate clothing package - which includes reliable sourcing, managing the inventory, budget control and distribution to either central locations or to each staff member individually. Huge investments have been made in new systems, information technology and amassing quality assurance accreditations. Corporate clothing does have potential for further growth. Some banks have yet to introduce a full corporate look; police forces are researching a complete new look for the 21st century. And many employees now welcome a company wardrobe. A recent survey of staff found that 90 per cent welcomed having clothing which reflected the corporate identity.", "hypothesis": "Most businesses that supply company clothing are successful.", "label": "c"} +{"uid": "id_105", "premise": "Traditionally uniforms were and for some industries still are manufactured to protect the worker. When they were first designed, it is also likely that all uniforms made symbolic sense - those for the military, for example, were originally intended to impress and even terrify the enemy; other uniforms denoted a hierarchy - chefs wore white because they worked with flour, but the main chef wore a black hat to show he supervised. The last 30 years, however, have seen an increasing emphasis on their role in projecting the image of an organisation and in uniting the workforce into a homogeneous unit particularly in customer facing\" industries, and especially in financial services and retailing. From uniforms and workwear has emerged corporate clothing. \"The people you employ are your ambassadors, \" says Peter Griffin, managing director of a major retailer in the UK. \"What they say, how they look, and how they behave is terribly important. \" The result is a new way of looking at corporate workwear. From being a simple means of identifying who is a member of staff, the uniform is emerging as a new channel of marketing communication. Truly effective marketing through visual cues such as uniforms is a subtle art, however. Wittingly or unwittingly, how we look sends all sorts of powerful subliminal messages to other people. Dark colours give an aura of authority while lighter pastel shades suggest approachability. Certain dress style creates a sense of conservatism, others a sense of openness to new ideas. 
Neatness can suggest efficiency but, if it is overdone, it can spill over and indicate an obsession with power. \"If the company is selling quality, then it must have quality uniforms. If it is selling style, its uniforms must be stylish. If it wants to appear innovative, everybody cant look exactly the same. Subliminally we see all these things, \" says Lynn Elvy, a director of image consultants House of Colour. But translating corporate philosophies into the right mix of colour, style, degree of branding and uniformity can be a fraught process. And it is not always successful. According to Company Clothing magazine, there are 1000 companies supplying the workwear and corporate clothing market. Of these, 22 account for 85% of total sales - 380 million in 1994. A successful uniform needs to balance two key sets of needs. On the one hand, no uniform will work if staff feel uncomfortable or ugly. Giving the wearers a choice has become a key element in the way corporate clothing is introduced and managed. On the other, it is pointless if the look doesnt express the businesss marketing strategy. The greatest challenge in this respect is time. When it comes to human perceptions, first impressions count. Customers will size up the way staff look in just a few seconds, and that few seconds will colour their attitudes from then on. Those few seconds can beReading so important that big companies are prepared to invest years, and millions of pounds, getting them right. In addition, some uniform companies also offer rental services. \"There will be an increasing specialisation in the marketplace, \" predicts Mr Blyth, Customer Services Manager of a large UK bank. The past two or three years have seen consolidation. Increasingly, the big suppliers are becoming managing agents, which means they offer a total service to put together the whole complex operation of a companys corporate clothing package - which includes reliable sourcing, managing the inventory, budget control and distribution to either central locations or to each staff member individually. Huge investments have been made in new systems, information technology and amassing quality assurance accreditations. Corporate clothing does have potential for further growth. Some banks have yet to introduce a full corporate look; police forces are researching a complete new look for the 21st century. And many employees now welcome a company wardrobe. A recent survey of staff found that 90 per cent welcomed having clothing which reflected the corporate identity.", "hypothesis": "Being too smart could have a negative impact on customers.", "label": "e"} +{"uid": "id_106", "premise": "Traditionally uniforms were and for some industries still are manufactured to protect the worker. When they were first designed, it is also likely that all uniforms made symbolic sense - those for the military, for example, were originally intended to impress and even terrify the enemy; other uniforms denoted a hierarchy - chefs wore white because they worked with flour, but the main chef wore a black hat to show he supervised. The last 30 years, however, have seen an increasing emphasis on their role in projecting the image of an organisation and in uniting the workforce into a homogeneous unit particularly in customer facing\" industries, and especially in financial services and retailing. From uniforms and workwear has emerged corporate clothing. \"The people you employ are your ambassadors, \" says Peter Griffin, managing director of a major retailer in the UK. 
\"What they say, how they look, and how they behave is terribly important. \" The result is a new way of looking at corporate workwear. From being a simple means of identifying who is a member of staff, the uniform is emerging as a new channel of marketing communication. Truly effective marketing through visual cues such as uniforms is a subtle art, however. Wittingly or unwittingly, how we look sends all sorts of powerful subliminal messages to other people. Dark colours give an aura of authority while lighter pastel shades suggest approachability. Certain dress style creates a sense of conservatism, others a sense of openness to new ideas. Neatness can suggest efficiency but, if it is overdone, it can spill over and indicate an obsession with power. \"If the company is selling quality, then it must have quality uniforms. If it is selling style, its uniforms must be stylish. If it wants to appear innovative, everybody cant look exactly the same. Subliminally we see all these things, \" says Lynn Elvy, a director of image consultants House of Colour. But translating corporate philosophies into the right mix of colour, style, degree of branding and uniformity can be a fraught process. And it is not always successful. According to Company Clothing magazine, there are 1000 companies supplying the workwear and corporate clothing market. Of these, 22 account for 85% of total sales - 380 million in 1994. A successful uniform needs to balance two key sets of needs. On the one hand, no uniform will work if staff feel uncomfortable or ugly. Giving the wearers a choice has become a key element in the way corporate clothing is introduced and managed. On the other, it is pointless if the look doesnt express the businesss marketing strategy. The greatest challenge in this respect is time. When it comes to human perceptions, first impressions count. Customers will size up the way staff look in just a few seconds, and that few seconds will colour their attitudes from then on. Those few seconds can beReading so important that big companies are prepared to invest years, and millions of pounds, getting them right. In addition, some uniform companies also offer rental services. \"There will be an increasing specialisation in the marketplace, \" predicts Mr Blyth, Customer Services Manager of a large UK bank. The past two or three years have seen consolidation. Increasingly, the big suppliers are becoming managing agents, which means they offer a total service to put together the whole complex operation of a companys corporate clothing package - which includes reliable sourcing, managing the inventory, budget control and distribution to either central locations or to each staff member individually. Huge investments have been made in new systems, information technology and amassing quality assurance accreditations. Corporate clothing does have potential for further growth. Some banks have yet to introduce a full corporate look; police forces are researching a complete new look for the 21st century. And many employees now welcome a company wardrobe. A recent survey of staff found that 90 per cent welcomed having clothing which reflected the corporate identity.", "hypothesis": "Uniforms were more carefully made in the past than they are today.", "label": "n"} +{"uid": "id_107", "premise": "Traditionally uniforms were and for some industries still are manufactured to protect the worker. 
When they were first designed, it is also likely that all uniforms made symbolic sense - those for the military, for example, were originally intended to impress and even terrify the enemy; other uniforms denoted a hierarchy - chefs wore white because they worked with flour, but the main chef wore a black hat to show he supervised. The last 30 years, however, have seen an increasing emphasis on their role in projecting the image of an organisation and in uniting the workforce into a homogeneous unit particularly in customer facing\" industries, and especially in financial services and retailing. From uniforms and workwear has emerged corporate clothing. \"The people you employ are your ambassadors, \" says Peter Griffin, managing director of a major retailer in the UK. \"What they say, how they look, and how they behave is terribly important. \" The result is a new way of looking at corporate workwear. From being a simple means of identifying who is a member of staff, the uniform is emerging as a new channel of marketing communication. Truly effective marketing through visual cues such as uniforms is a subtle art, however. Wittingly or unwittingly, how we look sends all sorts of powerful subliminal messages to other people. Dark colours give an aura of authority while lighter pastel shades suggest approachability. Certain dress style creates a sense of conservatism, others a sense of openness to new ideas. Neatness can suggest efficiency but, if it is overdone, it can spill over and indicate an obsession with power. \"If the company is selling quality, then it must have quality uniforms. If it is selling style, its uniforms must be stylish. If it wants to appear innovative, everybody cant look exactly the same. Subliminally we see all these things, \" says Lynn Elvy, a director of image consultants House of Colour. But translating corporate philosophies into the right mix of colour, style, degree of branding and uniformity can be a fraught process. And it is not always successful. According to Company Clothing magazine, there are 1000 companies supplying the workwear and corporate clothing market. Of these, 22 account for 85% of total sales - 380 million in 1994. A successful uniform needs to balance two key sets of needs. On the one hand, no uniform will work if staff feel uncomfortable or ugly. Giving the wearers a choice has become a key element in the way corporate clothing is introduced and managed. On the other, it is pointless if the look doesnt express the businesss marketing strategy. The greatest challenge in this respect is time. When it comes to human perceptions, first impressions count. Customers will size up the way staff look in just a few seconds, and that few seconds will colour their attitudes from then on. Those few seconds can beReading so important that big companies are prepared to invest years, and millions of pounds, getting them right. In addition, some uniform companies also offer rental services. \"There will be an increasing specialisation in the marketplace, \" predicts Mr Blyth, Customer Services Manager of a large UK bank. The past two or three years have seen consolidation. Increasingly, the big suppliers are becoming managing agents, which means they offer a total service to put together the whole complex operation of a companys corporate clothing package - which includes reliable sourcing, managing the inventory, budget control and distribution to either central locations or to each staff member individually. 
Huge investments have been made in new systems, information technology and amassing quality assurance accreditations. Corporate clothing does have potential for further growth. Some banks have yet to introduce a full corporate look; police forces are researching a complete new look for the 21st century. And many employees now welcome a company wardrobe. A recent survey of staff found that 90 per cent welcomed having clothing which reflected the corporate identity.", "hypothesis": "Uniforms make employees feel part of a team.", "label": "e"} +{"uid": "id_108", "premise": "Traditionally uniforms were and for some industries still are manufactured to protect the worker. When they were first designed, it is also likely that all uniforms made symbolic sense - those for the military, for example, were originally intended to impress and even terrify the enemy; other uniforms denoted a hierarchy - chefs wore white because they worked with flour, but the main chef wore a black hat to show he supervised. The last 30 years, however, have seen an increasing emphasis on their role in projecting the image of an organisation and in uniting the workforce into a homogeneous unit particularly in customer facing\" industries, and especially in financial services and retailing. From uniforms and workwear has emerged corporate clothing. \"The people you employ are your ambassadors, \" says Peter Griffin, managing director of a major retailer in the UK. \"What they say, how they look, and how they behave is terribly important. \" The result is a new way of looking at corporate workwear. From being a simple means of identifying who is a member of staff, the uniform is emerging as a new channel of marketing communication. Truly effective marketing through visual cues such as uniforms is a subtle art, however. Wittingly or unwittingly, how we look sends all sorts of powerful subliminal messages to other people. Dark colours give an aura of authority while lighter pastel shades suggest approachability. Certain dress style creates a sense of conservatism, others a sense of openness to new ideas. Neatness can suggest efficiency but, if it is overdone, it can spill over and indicate an obsession with power. \"If the company is selling quality, then it must have quality uniforms. If it is selling style, its uniforms must be stylish. If it wants to appear innovative, everybody cant look exactly the same. Subliminally we see all these things, \" says Lynn Elvy, a director of image consultants House of Colour. But translating corporate philosophies into the right mix of colour, style, degree of branding and uniformity can be a fraught process. And it is not always successful. According to Company Clothing magazine, there are 1000 companies supplying the workwear and corporate clothing market. Of these, 22 account for 85% of total sales - 380 million in 1994. A successful uniform needs to balance two key sets of needs. On the one hand, no uniform will work if staff feel uncomfortable or ugly. Giving the wearers a choice has become a key element in the way corporate clothing is introduced and managed. On the other, it is pointless if the look doesnt express the businesss marketing strategy. The greatest challenge in this respect is time. When it comes to human perceptions, first impressions count. Customers will size up the way staff look in just a few seconds, and that few seconds will colour their attitudes from then on. 
Those few seconds can beReading so important that big companies are prepared to invest years, and millions of pounds, getting them right. In addition, some uniform companies also offer rental services. \"There will be an increasing specialisation in the marketplace, \" predicts Mr Blyth, Customer Services Manager of a large UK bank. The past two or three years have seen consolidation. Increasingly, the big suppliers are becoming managing agents, which means they offer a total service to put together the whole complex operation of a companys corporate clothing package - which includes reliable sourcing, managing the inventory, budget control and distribution to either central locations or to each staff member individually. Huge investments have been made in new systems, information technology and amassing quality assurance accreditations. Corporate clothing does have potential for further growth. Some banks have yet to introduce a full corporate look; police forces are researching a complete new look for the 21st century. And many employees now welcome a company wardrobe. A recent survey of staff found that 90 per cent welcomed having clothing which reflected the corporate identity.", "hypothesis": "Using uniforms as a marketing tool requires great care.", "label": "e"} +{"uid": "id_109", "premise": "Traffic jams on most of the roads in the city have become a regular feature during monsoon.", "hypothesis": "Material used for road construction cannot withstand the fury of monsoon resulting into innumerable pot holes on the roads.", "label": "e"} +{"uid": "id_110", "premise": "Traffic jams on most of the roads in the city have become a regular feature during monsoon.", "hypothesis": "Number of vehicles coming on the roads is much more in monsoon as compared to other seasons.", "label": "n"} +{"uid": "id_111", "premise": "Traffic levels have fallen by 15% and congestion is down by a third. In August the Mayor of London announced a plan to extend the 5 charge for driving in central London during the working week westwards to Kensington and Chelsea. This was despite a consultative process in which almost 70,000 people and the vast majority of respondents said they did not want the scheme extended. However, the problem with the proposed extension is not only political. Extending the zone to a thickly populated area of London will mean that many people will qualify for the residents discount, allowing them to drive to the city without paying any extra. Extending the scheme therefore may mean that total revenues drop from the current 90 million a year.", "hypothesis": "The extended scheme may face continued public opposition.", "label": "e"} +{"uid": "id_112", "premise": "Traffic levels have fallen by 15% and congestion is down by a third. In August the Mayor of London announced a plan to extend the 5 charge for driving in central London during the working week westwards to Kensington and Chelsea. This was despite a consultative process in which almost 70,000 people and the vast majority of respondents said they did not want the scheme extended. However, the problem with the proposed extension is not only political. Extending the zone to a thickly populated area of London will mean that many people will qualify for the residents discount, allowing them to drive to the city without paying any extra. 
Extending the scheme therefore may mean that total revenues drop from the current 90 million a year.", "hypothesis": "Kensington and Chelsea have high residential populations.", "label": "e"} +{"uid": "id_113", "premise": "Traffic levels have fallen by 15% and congestion is down by a third. In August the Mayor of London announced a plan to extend the 5 charge for driving in central London during the working week westwards to Kensington and Chelsea. This was despite a consultative process in which almost 70,000 people and the vast majority of respondents said they did not want the scheme extended. However, the problem with the proposed extension is not only political. Extending the zone to a thickly populated area of London will mean that many people will qualify for the residents discount, allowing them to drive to the city without paying any extra. Extending the scheme therefore may mean that total revenues drop from the current 90 million a year.", "hypothesis": "Whilst the majority of respondents voted against the extension it is possible that they welcomed the fall in traffic levels and lower congestion.", "label": "e"} +{"uid": "id_114", "premise": "Traffic levels have fallen by 15% and congestion is down by a third. In August the Mayor of London announced a plan to extend the 5 charge for driving in central London during the working week westwards to Kensington and Chelsea. This was despite a consultative process in which almost 70,000 people and the vast majority of respondents said they did not want the scheme extended. However, the problem with the proposed extension is not only political. Extending the zone to a thickly populated area of London will mean that many people will qualify for the residents discount, allowing them to drive to the city without paying any extra. Extending the scheme therefore may mean that total revenues drop from the current 90 million a year.", "hypothesis": "People who live in Kensington currently do not have to pay 5 to drive to the city.", "label": "c"} +{"uid": "id_115", "premise": "Training Facilities The International College of Hospitality Management has more than 120 professional lecturers and international-standard, training facilities. These include three public restaurants, ten commercial training kitchens, simulated front office training facilities, four computer suites, a fully operational winery, and a food science laboratory. The Learning Resource Centre collection is extensive. The student support services provide professional counselling in the areas of health, learning support, language skills, accommodation and welfare. Childcare facilities are also available on campus. International Home The International College of Hospitality Management has students enrolled from more than 20 countries, some of whom stay on campus in International House. Built in 1999, International House is accommodation comprising villa-style units. Each student has their own bedroom, sharing en suite facilities with another student. An adjoining kitchenette and lounge area is shared by the four students in the villa. All meals are served in the College dining room which is next to the student common room. Student privacy and security are priorities. A computer outlet in each bedroom enables student to connect into the College network, providing 24 hour-a-day access. The residence is a two-minute walk to the Colleges sporting and training facilities, and is on a regular bus service to the city centre 10 km away. 
International House is also being used to enhance on-campus training, from Monday to Friday, Year 1 students, supervised by 2nd Years, are assigned kitchen, waiting, housekeeping and receptionist duties. Simulated check-in/check-out exercises, receptionist duties and breakfast service to a limited number of rooms are also part of the program.", "hypothesis": "The training facility has 10 kitchens", "label": "e"} +{"uid": "id_116", "premise": "Training Facilities The International College of Hospitality Management has more than 120 professional lecturers and international-standard, training facilities. These include three public restaurants, ten commercial training kitchens, simulated front office training facilities, four computer suites, a fully operational winery, and a food science laboratory. The Learning Resource Centre collection is extensive. The student support services provide professional counselling in the areas of health, learning support, language skills, accommodation and welfare. Childcare facilities are also available on campus. International Home The International College of Hospitality Management has students enrolled from more than 20 countries, some of whom stay on campus in International House. Built in 1999, International House is accommodation comprising villa-style units. Each student has their own bedroom, sharing en suite facilities with another student. An adjoining kitchenette and lounge area is shared by the four students in the villa. All meals are served in the College dining room which is next to the student common room. Student privacy and security are priorities. A computer outlet in each bedroom enables student to connect into the College network, providing 24 hour-a-day access. The residence is a two-minute walk to the Colleges sporting and training facilities, and is on a regular bus service to the city centre 10 km away. International House is also being used to enhance on-campus training, from Monday to Friday, Year 1 students, supervised by 2nd Years, are assigned kitchen, waiting, housekeeping and receptionist duties. Simulated check-in/check-out exercises, receptionist duties and breakfast service to a limited number of rooms are also part of the program.", "hypothesis": "All students in the program live at International House", "label": "c"} +{"uid": "id_117", "premise": "Training Facilities The International College of Hospitality Management has more than 120 professional lecturers and international-standard, training facilities. These include three public restaurants, ten commercial training kitchens, simulated front office training facilities, four computer suites, a fully operational winery, and a food science laboratory. The Learning Resource Centre collection is extensive. The student support services provide professional counselling in the areas of health, learning support, language skills, accommodation and welfare. Childcare facilities are also available on campus. International Home The International College of Hospitality Management has students enrolled from more than 20 countries, some of whom stay on campus in International House. Built in 1999, International House is accommodation comprising villa-style units. Each student has their own bedroom, sharing en suite facilities with another student. An adjoining kitchenette and lounge area is shared by the four students in the villa. All meals are served in the College dining room which is next to the student common room. Student privacy and security are priorities. 
A computer outlet in each bedroom enables student to connect into the College network, providing 24 hour-a-day access. The residence is a two-minute walk to the Colleges sporting and training facilities, and is on a regular bus service to the city centre 10 km away. International House is also being used to enhance on-campus training, from Monday to Friday, Year 1 students, supervised by 2nd Years, are assigned kitchen, waiting, housekeeping and receptionist duties. Simulated check-in/check-out exercises, receptionist duties and breakfast service to a limited number of rooms are also part of the program.", "hypothesis": "Four students share a unit in the residence", "label": "e"} +{"uid": "id_118", "premise": "Training Facilities The International College of Hospitality Management has more than 120 professional lecturers and international-standard, training facilities. These include three public restaurants, ten commercial training kitchens, simulated front office training facilities, four computer suites, a fully operational winery, and a food science laboratory. The Learning Resource Centre collection is extensive. The student support services provide professional counselling in the areas of health, learning support, language skills, accommodation and welfare. Childcare facilities are also available on campus. International Home The International College of Hospitality Management has students enrolled from more than 20 countries, some of whom stay on campus in International House. Built in 1999, International House is accommodation comprising villa-style units. Each student has their own bedroom, sharing en suite facilities with another student. An adjoining kitchenette and lounge area is shared by the four students in the villa. All meals are served in the College dining room which is next to the student common room. Student privacy and security are priorities. A computer outlet in each bedroom enables student to connect into the College network, providing 24 hour-a-day access. The residence is a two-minute walk to the Colleges sporting and training facilities, and is on a regular bus service to the city centre 10 km away. International House is also being used to enhance on-campus training, from Monday to Friday, Year 1 students, supervised by 2nd Years, are assigned kitchen, waiting, housekeeping and receptionist duties. Simulated check-in/check-out exercises, receptionist duties and breakfast service to a limited number of rooms are also part of the program.", "hypothesis": "The residence is used as part of the training program", "label": "e"} +{"uid": "id_119", "premise": "Training Facilities The International College of Hospitality Management has more than 120 professional lecturers and international-standard, training facilities. These include three public restaurants, ten commercial training kitchens, simulated front office training facilities, four computer suites, a fully operational winery, and a food science laboratory. The Learning Resource Centre collection is extensive. The student support services provide professional counselling in the areas of health, learning support, language skills, accommodation and welfare. Childcare facilities are also available on campus. International Home The International College of Hospitality Management has students enrolled from more than 20 countries, some of whom stay on campus in International House. Built in 1999, International House is accommodation comprising villa-style units. 
Each student has their own bedroom, sharing en suite facilities with another student. An adjoining kitchenette and lounge area is shared by the four students in the villa. All meals are served in the College dining room which is next to the student common room. Student privacy and security are priorities. A computer outlet in each bedroom enables student to connect into the College network, providing 24 hour-a-day access. The residence is a two-minute walk to the Colleges sporting and training facilities, and is on a regular bus service to the city centre 10 km away. International House is also being used to enhance on-campus training, from Monday to Friday, Year 1 students, supervised by 2nd Years, are assigned kitchen, waiting, housekeeping and receptionist duties. Simulated check-in/check-out exercises, receptionist duties and breakfast service to a limited number of rooms are also part of the program.", "hypothesis": "All meals in the residence are prepared by the students", "label": "n"} +{"uid": "id_120", "premise": "Trans Fatty Acids A recent editorial in the British Medical Journal (BMJ), written by researchers from the University of Oxford, has called for food labels to list trans fats as well as cholesterol and saturated fat. Trans fats (or trans fatty acids) are a type of unsaturated fatty acid. They occur naturally in small amounts in foods produced from ruminant animals e. g. milk, beef and lamb. However, most of the trans fatty acids in the diet are produced during the process of partial hydrogenation (hardening) of vegetable oils into semi-solid fats. They are therefore found in hard margarines, partially hydrogenated cooking oils, and in some bakery products, fried foods, and other processed foods that are made using these. Trans fatty acids have an adverse effect on certain chemicals, known as lipids, which are found in the blood and have been shown to increase the risk of heart disease. They also increase LDL-cholesterol (the bad cholesterol) and decrease HDL-cholesterol (the good cholesterol). They may also have adverse effects on cardiovascular disease risk that are independent of an effect on blood lipids (Mozaffarian et al. 2006). In a recent review of prospective studies investigating the effects of trans fatty acids, a 2% increase in energy intake from trans fatty acids was associated with a 23% increase in the incidence of heart disease. The authors also reported that the adverse effects of trans fatty acids were observed even at very low intakes (3% of total daily energy intake, or about 2-7g per day) (Mozaffarian et al. 2006). However, in this recent review it is only trans fatty acids produced during the hardening of vegetable oils that are found to be harmful to health. The public health implications of consuming trans fatty acids from ruminant products are considered to be relatively limited. Over the last decade, population intakes of trans fatty acids in the UK fell and are now, on average, well below the recommended 2% of total energy set by the Department of Health in 1991, at 1.2% of energy (Henderson et al. 2003). This is not to say that intakes of trans fatty acids are not still a problem, and dietary advice states that those individuals who are in the top end of the distribution of intake should still make efforts to reduce their intakes. Currently, trans fatty acids in foods are labelled in the USA, but not in the UK and Europe. 
The UK Food Standards Agency (FSA) is in favour of the revision of the European directive that governs the content and format of food labels so that trans fatty acids are labelled. This should enable consumers to make better food choices with regard to heart health (Clarke & Lewington 2006). Recognising the adverse health effects of trans fatty acids, many food manufacturers and retailers have been systematically removing them from their products in recent years. For example, they have been absent for some time from major brands of margarine and other fat spreads, which are now manufactured using a different technique. Also, many companies now have guidelines in place that are resulting in reformulation and reduction or elimination of trans fatty acids in products where they have in the past been found, such as snack products, fried products and baked goods. Consequently, the vast majority of savoury biscuits and crisps produced in the UK do not contain partially hydrogenated oils. Similarly, changes are being made to the way bakery products are manufactured. For example, a leading European manufacturer of major brands of biscuits, cakes and snacks has recently announced that these are now made without partially hydrogenated vegetable oils, a transition that began in 2004. Alongside these changes, the manufacturer has also reported a cut in the amount of saturates. It is clear that a major technical challenge in achieving such changes is to avoid simply exchanging trans fatty acids for saturated fatty acids, which also have damaging health effects. Foods that are labelled as containing partially-hydrogenated oils or fats are a source of trans fatty acids (sometimes partially-hydrogenated fats are just labelled as hydrogenated fats). These foods include hard margarines, some fried products and some manufactured bakery products e. g. biscuits, pastries and cakes. It is important to note that intake may have changed in the light of reformulation of foods that has taken place over the past six years in the UK, as referred to earlier. Furthermore, the average intake of trans fatty acids is lower in the UK than in the USA (where legislation has now been introduced). However, this does not mean there is room for complacency, as the intake in some sectors of the population is known to be higher than recommended.", "hypothesis": "The amount of saturated fats in processed meats is being reduced by some major producers.", "label": "n"} +{"uid": "id_121", "premise": "Trans Fatty Acids A recent editorial in the British Medical Journal (BMJ), written by researchers from the University of Oxford, has called for food labels to list trans fats as well as cholesterol and saturated fat. Trans fats (or trans fatty acids) are a type of unsaturated fatty acid. They occur naturally in small amounts in foods produced from ruminant animals e. g. milk, beef and lamb. However, most of the trans fatty acids in the diet are produced during the process of partial hydrogenation (hardening) of vegetable oils into semi-solid fats. They are therefore found in hard margarines, partially hydrogenated cooking oils, and in some bakery products, fried foods, and other processed foods that are made using these. Trans fatty acids have an adverse effect on certain chemicals, known as lipids, which are found in the blood and have been shown to increase the risk of heart disease. They also increase LDL-cholesterol (the bad cholesterol) and decrease HDL-cholesterol (the good cholesterol). 
They may also have adverse effects on cardiovascular disease risk that are independent of an effect on blood lipids (Mozaffarian et al. 2006). In a recent review of prospective studies investigating the effects of trans fatty acids, a 2% increase in energy intake from trans fatty acids was associated with a 23% increase in the incidence of heart disease. The authors also reported that the adverse effects of trans fatty acids were observed even at very low intakes (3% of total daily energy intake, or about 2-7g per day) (Mozaffarian et al. 2006). However, in this recent review it is only trans fatty acids produced during the hardening of vegetable oils that are found to be harmful to health. The public health implications of consuming trans fatty acids from ruminant products are considered to be relatively limited. Over the last decade, population intakes of trans fatty acids in the UK fell and are now, on average, well below the recommended 2% of total energy set by the Department of Health in 1991, at 1.2% of energy (Henderson et al. 2003). This is not to say that intakes of trans fatty acids are not still a problem, and dietary advice states that those individuals who are in the top end of the distribution of intake should still make efforts to reduce their intakes. Currently, trans fatty acids in foods are labelled in the USA, but not in the UK and Europe. The UK Food Standards Agency (FSA) is in favour of the revision of the European directive that governs the content and format of food labels so that trans fatty acids are labelled. This should enable consumers to make better food choices with regard to heart health (Clarke & Lewington 2006). Recognising the adverse health effects of trans fatty acids, many food manufacturers and retailers have been systematically removing them from their products in recent years. For example, they have been absent for some time from major brands of margarine and other fat spreads, which are now manufactured using a different technique. Also, many companies now have guidelines in place that are resulting in reformulation and reduction or elimination of trans fatty acids in products where they have in the past been found, such as snack products, fried products and baked goods. Consequently, the vast majority of savoury biscuits and crisps produced in the UK do not contain partially hydrogenated oils. Similarly, changes are being made to the way bakery products are manufactured. For example, a leading European manufacturer of major brands of biscuits, cakes and snacks has recently announced that these are now made without partially hydrogenated vegetable oils, a transition that began in 2004. Alongside these changes, the manufacturer has also reported a cut in the amount of saturates. It is clear that a major technical challenge in achieving such changes is to avoid simply exchanging trans fatty acids for saturated fatty acids, which also have damaging health effects. Foods that are labelled as containing partially-hydrogenated oils or fats are a source of trans fatty acids (sometimes partially-hydrogenated fats are just labelled as hydrogenated fats). These foods include hard margarines, some fried products and some manufactured bakery products e. g. biscuits, pastries and cakes. It is important to note that intake may have changed in the light of reformulation of foods that has taken place over the past six years in the UK, as referred to earlier. 
Furthermore, the average intake of trans fatty acids is lower in the UK than in the USA (where legislation has now been introduced). However, this does not mean there is room for complacency, as the intake in some sectors of the population is known to be higher than recommended.", "hypothesis": "In Britain, the intake of trans fatty acids is continuing to decline.", "label": "n"} +{"uid": "id_122", "premise": "Trans Fatty Acids A recent editorial in the British Medical Journal (BMJ), written by researchers from the University of Oxford, has called for food labels to list trans fats as well as cholesterol and saturated fat. Trans fats (or trans fatty acids) are a type of unsaturated fatty acid. They occur naturally in small amounts in foods produced from ruminant animals e. g. milk, beef and lamb. However, most of the trans fatty acids in the diet are produced during the process of partial hydrogenation (hardening) of vegetable oils into semi-solid fats. They are therefore found in hard margarines, partially hydrogenated cooking oils, and in some bakery products, fried foods, and other processed foods that are made using these. Trans fatty acids have an adverse effect on certain chemicals, known as lipids, which are found in the blood and have been shown to increase the risk of heart disease. They also increase LDL-cholesterol (the bad cholesterol) and decrease HDL-cholesterol (the good cholesterol). They may also have adverse effects on cardiovascular disease risk that are independent of an effect on blood lipids (Mozaffarian et al. 2006). In a recent review of prospective studies investigating the effects of trans fatty acids, a 2% increase in energy intake from trans fatty acids was associated with a 23% increase in the incidence of heart disease. The authors also reported that the adverse effects of trans fatty acids were observed even at very low intakes (3% of total daily energy intake, or about 2-7g per day) (Mozaffarian et al. 2006). However, in this recent review it is only trans fatty acids produced during the hardening of vegetable oils that are found to be harmful to health. The public health implications of consuming trans fatty acids from ruminant products are considered to be relatively limited. Over the last decade, population intakes of trans fatty acids in the UK fell and are now, on average, well below the recommended 2% of total energy set by the Department of Health in 1991, at 1.2% of energy (Henderson et al. 2003). This is not to say that intakes of trans fatty acids are not still a problem, and dietary advice states that those individuals who are in the top end of the distribution of intake should still make efforts to reduce their intakes. Currently, trans fatty acids in foods are labelled in the USA, but not in the UK and Europe. The UK Food Standards Agency (FSA) is in favour of the revision of the European directive that governs the content and format of food labels so that trans fatty acids are labelled. This should enable consumers to make better food choices with regard to heart health (Clarke & Lewington 2006). Recognising the adverse health effects of trans fatty acids, many food manufacturers and retailers have been systematically removing them from their products in recent years. For example, they have been absent for some time from major brands of margarine and other fat spreads, which are now manufactured using a different technique. 
Also, many companies now have guidelines in place that are resulting in reformulation and reduction or elimination of trans fatty acids in products where they have in the past been found, such as snack products, fried products and baked goods. Consequently, the vast majority of savoury biscuits and crisps produced in the UK do not contain partially hydrogenated oils. Similarly, changes are being made to the way bakery products are manufactured. For example, a leading European manufacturer of major brands of biscuits, cakes and snacks has recently announced that these are now made without partially hydrogenated vegetable oils, a transition that began in 2004. Alongside these changes, the manufacturer has also reported a cut in the amount of saturates. It is clear that a major technical challenge in achieving such changes is to avoid simply exchanging trans fatty acids for saturated fatty acids, which also have damaging health effects. Foods that are labelled as containing partially-hydrogenated oils or fats are a source of trans fatty acids (sometimes partially-hydrogenated fats are just labelled as hydrogenated fats). These foods include hard margarines, some fried products and some manufactured bakery products e. g. biscuits, pastries and cakes. It is important to note that intake may have changed in the light of reformulation of foods that has taken place over the past six years in the UK, as referred to earlier. Furthermore, the average intake of trans fatty acids is lower in the UK than in the USA (where legislation has now been introduced). However, this does not mean there is room for complacency, as the intake in some sectors of the population is known to be higher than recommended.", "hypothesis": "It is proving difficult to find a safe substitute for trans fatty acids.", "label": "e"} +{"uid": "id_123", "premise": "Trans Fatty Acids A recent editorial in the British Medical Journal (BMJ), written by researchers from the University of Oxford, has called for food labels to list trans fats as well as cholesterol and saturated fat. Trans fats (or trans fatty acids) are a type of unsaturated fatty acid. They occur naturally in small amounts in foods produced from ruminant animals e. g. milk, beef and lamb. However, most of the trans fatty acids in the diet are produced during the process of partial hydrogenation (hardening) of vegetable oils into semi-solid fats. They are therefore found in hard margarines, partially hydrogenated cooking oils, and in some bakery products, fried foods, and other processed foods that are made using these. Trans fatty acids have an adverse effect on certain chemicals, known as lipids, which are found in the blood and have been shown to increase the risk of heart disease. They also increase LDL-cholesterol (the bad cholesterol) and decrease HDL-cholesterol (the good cholesterol). They may also have adverse effects on cardiovascular disease risk that are independent of an effect on blood lipids (Mozaffarian et al. 2006). In a recent review of prospective studies investigating the effects of trans fatty acids, a 2% increase in energy intake from trans fatty acids was associated with a 23% increase in the incidence of heart disease. The authors also reported that the adverse effects of trans fatty acids were observed even at very low intakes (3% of total daily energy intake, or about 2-7g per day) (Mozaffarian et al. 2006). 
However, in this recent review it is only trans fatty acids produced during the hardening of vegetable oils that are found to be harmful to health. The public health implications of consuming trans fatty acids from ruminant products are considered to be relatively limited. Over the last decade, population intakes of trans fatty acids in the UK fell and are now, on average, well below the recommended 2% of total energy set by the Department of Health in 1991, at 1.2% of energy (Henderson et al. 2003). This is not to say that intakes of trans fatty acids are not still a problem, and dietary advice states that those individuals who are in the top end of the distribution of intake should still make efforts to reduce their intakes. Currently, trans fatty acids in foods are labelled in the USA, but not in the UK and Europe. The UK Food Standards Agency (FSA) is in favour of the revision of the European directive that governs the content and format of food labels so that trans fatty acids are labelled. This should enable consumers to make better food choices with regard to heart health (Clarke & Lewington 2006). Recognising the adverse health effects of trans fatty acids, many food manufacturers and retailers have been systematically removing them from their products in recent years. For example, they have been absent for some time from major brands of margarine and other fat spreads, which are now manufactured using a different technique. Also, many companies now have guidelines in place that are resulting in reformulation and reduction or elimination of trans fatty acids in products where they have in the past been found, such as snack products, fried products and baked goods. Consequently, the vast majority of savoury biscuits and crisps produced in the UK do not contain partially hydrogenated oils. Similarly, changes are being made to the way bakery products are manufactured. For example, a leading European manufacturer of major brands of biscuits, cakes and snacks has recently announced that these are now made without partially hydrogenated vegetable oils, a transition that began in 2004. Alongside these changes, the manufacturer has also reported a cut in the amount of saturates. It is clear that a major technical challenge in achieving such changes is to avoid simply exchanging trans fatty acids for saturated fatty acids, which also have damaging health effects. Foods that are labelled as containing partially-hydrogenated oils or fats are a source of trans fatty acids (sometimes partially-hydrogenated fats are just labelled as hydrogenated fats). These foods include hard margarines, some fried products and some manufactured bakery products e. g. biscuits, pastries and cakes. It is important to note that intake may have changed in the light of reformulation of foods that has taken place over the past six years in the UK, as referred to earlier. Furthermore, the average intake of trans fatty acids is lower in the UK than in the USA (where legislation has now been introduced). However, this does not mean there is room for complacency, as the intake in some sectors of the population is known to be higher than recommended.", "hypothesis": "Trans fatty acids are found in all types of meat.", "label": "c"} +{"uid": "id_124", "premise": "Trans Fatty Acids A recent editorial in the British Medical Journal (BMJ), written by researchers from the University of Oxford, has called for food labels to list trans fats as well as cholesterol and saturated fat. 
Trans fats (or trans fatty acids) are a type of unsaturated fatty acid. They occur naturally in small amounts in foods produced from ruminant animals e. g. milk, beef and lamb. However, most of the trans fatty acids in the diet are produced during the process of partial hydrogenation (hardening) of vegetable oils into semi-solid fats. They are therefore found in hard margarines, partially hydrogenated cooking oils, and in some bakery products, fried foods, and other processed foods that are made using these. Trans fatty acids have an adverse effect on certain chemicals, known as lipids, which are found in the blood and have been shown to increase the risk of heart disease. They also increase LDL-cholesterol (the bad cholesterol) and decrease HDL-cholesterol (the good cholesterol). They may also have adverse effects on cardiovascular disease risk that are independent of an effect on blood lipids (Mozaffarian et al. 2006). In a recent review of prospective studies investigating the effects of trans fatty acids, a 2% increase in energy intake from trans fatty acids was associated with a 23% increase in the incidence of heart disease. The authors also reported that the adverse effects of trans fatty acids were observed even at very low intakes (3% of total daily energy intake, or about 2-7g per day) (Mozaffarian et al. 2006). However, in this recent review it is only trans fatty acids produced during the hardening of vegetable oils that are found to be harmful to health. The public health implications of consuming trans fatty acids from ruminant products are considered to be relatively limited. Over the last decade, population intakes of trans fatty acids in the UK fell and are now, on average, well below the recommended 2% of total energy set by the Department of Health in 1991, at 1.2% of energy (Henderson et al. 2003). This is not to say that intakes of trans fatty acids are not still a problem, and dietary advice states that those individuals who are in the top end of the distribution of intake should still make efforts to reduce their intakes. Currently, trans fatty acids in foods are labelled in the USA, but not in the UK and Europe. The UK Food Standards Agency (FSA) is in favour of the revision of the European directive that governs the content and format of food labels so that trans fatty acids are labelled. This should enable consumers to make better food choices with regard to heart health (Clarke & Lewington 2006). Recognising the adverse health effects of trans fatty acids, many food manufacturers and retailers have been systematically removing them from their products in recent years. For example, they have been absent for some time from major brands of margarine and other fat spreads, which are now manufactured using a different technique. Also, many companies now have guidelines in place that are resulting in reformulation and reduction or elimination of trans fatty acids in products where they have in the past been found, such as snack products, fried products and baked goods. Consequently, the vast majority of savoury biscuits and crisps produced in the UK do not contain partially hydrogenated oils. Similarly, changes are being made to the way bakery products are manufactured. For example, a leading European manufacturer of major brands of biscuits, cakes and snacks has recently announced that these are now made without partially hydrogenated vegetable oils, a transition that began in 2004. 
Alongside these changes, the manufacturer has also reported a cut in the amount of saturates. It is clear that a major technical challenge in achieving such changes is to avoid simply exchanging trans fatty acids for saturated fatty acids, which also have damaging health effects. Foods that are labelled as containing partially-hydrogenated oils or fats are a source of trans fatty acids (sometimes partially-hydrogenated fats are just labelled as hydrogenated fats). These foods include hard margarines, some fried products and some manufactured bakery products e. g. biscuits, pastries and cakes. It is important to note that intake may have changed in the light of reformulation of foods that has taken place over the past six years in the UK, as referred to earlier. Furthermore, the average intake of trans fatty acids is lower in the UK than in the USA (where legislation has now been introduced). However, this does not mean there is room for complacency, as the intake in some sectors of the population is known to be higher than recommended.", "hypothesis": "Some people are still consuming larger quantities of trans fatty acids than the experts consider safe.", "label": "e"} +{"uid": "id_125", "premise": "Trans Fatty Acids A recent editorial in the British Medical Journal (BMJ), written by researchers from the University of Oxford, has called for food labels to list trans fats as well as cholesterol and saturated fat. Trans fats (or trans fatty acids) are a type of unsaturated fatty acid. They occur naturally in small amounts in foods produced from ruminant animals e. g. milk, beef and lamb. However, most of the trans fatty acids in the diet are produced during the process of partial hydrogenation (hardening) of vegetable oils into semi-solid fats. They are therefore found in hard margarines, partially hydrogenated cooking oils, and in some bakery products, fried foods, and other processed foods that are made using these. Trans fatty acids have an adverse effect on certain chemicals, known as lipids, which are found in the blood and have been shown to increase the risk of heart disease. They also increase LDL-cholesterol (the bad cholesterol) and decrease HDL-cholesterol (the good cholesterol). They may also have adverse effects on cardiovascular disease risk that are independent of an effect on blood lipids (Mozaffarian et al. 2006). In a recent review of prospective studies investigating the effects of trans fatty acids, a 2% increase in energy intake from trans fatty acids was associated with a 23% increase in the incidence of heart disease. The authors also reported that the adverse effects of trans fatty acids were observed even at very low intakes (3% of total daily energy intake, or about 2-7g per day) (Mozaffarian et al. 2006). However, in this recent review it is only trans fatty acids produced during the hardening of vegetable oils that are found to be harmful to health. The public health implications of consuming trans fatty acids from ruminant products are considered to be relatively limited. Over the last decade, population intakes of trans fatty acids in the UK fell and are now, on average, well below the recommended 2% of total energy set by the Department of Health in 1991, at 1.2% of energy (Henderson et al. 2003). This is not to say that intakes of trans fatty acids are not still a problem, and dietary advice states that those individuals who are in the top end of the distribution of intake should still make efforts to reduce their intakes. 
Currently, trans fatty acids in foods are labelled in the USA, but not in the UK and Europe. The UK Food Standards Agency (FSA) is in favour of the revision of the European directive that governs the content and format of food labels so that trans fatty acids are labelled. This should enable consumers to make better food choices with regard to heart health (Clarke & Lewington 2006). Recognising the adverse health effects of trans fatty acids, many food manufacturers and retailers have been systematically removing them from their products in recent years. For example, they have been absent for some time from major brands of margarine and other fat spreads, which are now manufactured using a different technique. Also, many companies now have guidelines in place that are resulting in reformulation and reduction or elimination of trans fatty acids in products where they have in the past been found, such as snack products, fried products and baked goods. Consequently, the vast majority of savoury biscuits and crisps produced in the UK do not contain partially hydrogenated oils. Similarly, changes are being made to the way bakery products are manufactured. For example, a leading European manufacturer of major brands of biscuits, cakes and snacks has recently announced that these are now made without partially hydrogenated vegetable oils, a transition that began in 2004. Alongside these changes, the manufacturer has also reported a cut in the amount of saturates. It is clear that a major technical challenge in achieving such changes is to avoid simply exchanging trans fatty acids for saturated fatty acids, which also have damaging health effects. Foods that are labelled as containing partially-hydrogenated oils or fats are a source of trans fatty acids (sometimes partially-hydrogenated fats are just labelled as hydrogenated fats). These foods include hard margarines, some fried products and some manufactured bakery products e. g. biscuits, pastries and cakes. It is important to note that intake may have changed in the light of reformulation of foods that has taken place over the past six years in the UK, as referred to earlier. Furthermore, the average intake of trans fatty acids is lower in the UK than in the USA (where legislation has now been introduced). However, this does not mean there is room for complacency, as the intake in some sectors of the population is known to be higher than recommended.", "hypothesis": "Experts consider that the trans fatty acids contained in animal products are unlikely to be a serious health risk.", "label": "e"} +{"uid": "id_126", "premise": "Trans Fatty Acids A recent editorial in the British Medical Journal (BMJ), written by researchers from the University of Oxford, has called for food labels to list trans fats as well as cholesterol and saturated fat. Trans fats (or trans fatty acids) are a type of unsaturated fatty acid. They occur naturally in small amounts in foods produced from ruminant animals e. g. milk, beef and lamb. However, most of the trans fatty acids in the diet are produced during the process of partial hydrogenation (hardening) of vegetable oils into semi-solid fats. They are therefore found in hard margarines, partially hydrogenated cooking oils, and in some bakery products, fried foods, and other processed foods that are made using these. Trans fatty acids have an adverse effect on certain chemicals, known as lipids, which are found in the blood and have been shown to increase the risk of heart disease. 
They also increase LDL-cholesterol (the bad cholesterol) and decrease HDL-cholesterol (the good cholesterol). They may also have adverse effects on cardiovascular disease risk that are independent of an effect on blood lipids (Mozaffarian et al. 2006). In a recent review of prospective studies investigating the effects of trans fatty acids, a 2% increase in energy intake from trans fatty acids was associated with a 23% increase in the incidence of heart disease. The authors also reported that the adverse effects of trans fatty acids were observed even at very low intakes (3% of total daily energy intake, or about 2-7g per day) (Mozaffarian et al. 2006). However, in this recent review it is only trans fatty acids produced during the hardening of vegetable oils that are found to be harmful to health. The public health implications of consuming trans fatty acids from ruminant products are considered to be relatively limited. Over the last decade, population intakes of trans fatty acids in the UK fell and are now, on average, well below the recommended 2% of total energy set by the Department of Health in 1991, at 1.2% of energy (Henderson et al. 2003). This is not to say that intakes of trans fatty acids are not still a problem, and dietary advice states that those individuals who are in the top end of the distribution of intake should still make efforts to reduce their intakes. Currently, trans fatty acids in foods are labelled in the USA, but not in the UK and Europe. The UK Food Standards Agency (FSA) is in favour of the revision of the European directive that governs the content and format of food labels so that trans fatty acids are labelled. This should enable consumers to make better food choices with regard to heart health (Clarke & Lewington 2006). Recognising the adverse health effects of trans fatty acids, many food manufacturers and retailers have been systematically removing them from their products in recent years. For example, they have been absent for some time from major brands of margarine and other fat spreads, which are now manufactured using a different technique. Also, many companies now have guidelines in place that are resulting in reformulation and reduction or elimination of trans fatty acids in products where they have in the past been found, such as snack products, fried products and baked goods. Consequently, the vast majority of savoury biscuits and crisps produced in the UK do not contain partially hydrogenated oils. Similarly, changes are being made to the way bakery products are manufactured. For example, a leading European manufacturer of major brands of biscuits, cakes and snacks has recently announced that these are now made without partially hydrogenated vegetable oils, a transition that began in 2004. Alongside these changes, the manufacturer has also reported a cut in the amount of saturates. It is clear that a major technical challenge in achieving such changes is to avoid simply exchanging trans fatty acids for saturated fatty acids, which also have damaging health effects. Foods that are labelled as containing partially-hydrogenated oils or fats are a source of trans fatty acids (sometimes partially-hydrogenated fats are just labelled as hydrogenated fats). These foods include hard margarines, some fried products and some manufactured bakery products e. g. biscuits, pastries and cakes. 
It is important to note that intake may have changed in the light of reformulation of foods that has taken place over the past six years in the UK, as referred to earlier. Furthermore, the average intake of trans fatty acids is lower in the UK than in the USA (where legislation has now been introduced). However, this does not mean there is room for complacency, as the intake in some sectors of the population is known to be higher than recommended.", "hypothesis": "Health problems can be caused by the consumption of small amounts of trans fatty acids.", "label": "e"} +{"uid": "id_127", "premise": "Transgenic Plants Genes from virtually any organism, from viruses to humans, can now be inserted into plants, creating what are known as transgenic plants. Now used in agriculture, there are approximately 109 million acres of transgenic crops grown worldwide, 68 percent of which are in the United States. The most common transgenic crops are soybeans, corn, cotton, and canola. Most often, these plants either contain a gene making them resistant to the herbicide glyphosate or they contain an insect-resistant gene that produces a protein called Bt toxin. On the positive side, proponents of transgenic crops argue that these crops are environmentally friendly because they allow farmers to use fewer and less noxious chemicals for crop production. For example, a 21 percent reduction in the use of insecticide has been reported on Bt cotton (transgenic cotton that produces Bt toxin). In addition, when glyphosate is used to control weeds, other, more persistent herbicides do not need to be applied. On the negative side, opponents of transgenic crops suggest that there are many questions that need to be answered before transgenic crops are grown on a large scale. One question deals with the effects that Bt plants have on nontarget organisms such as beneficial insects, worms, and birds that consume the genetically engineered crop. For example, monarch caterpillars feeding on milkweed plants near Bt cornfields will eat some corn pollen that has fallen on the milkweed leaves. Laboratory studies indicate that caterpillars can die from eating Bt pollen. However, field tests indicate that Bt corn is not likely to harm monarchs. Furthermore, the application of pesticides (the alternative to growing Bt plants) has been demonstrated to cause widespread harm to nontarget insects. Another unanswered question is whether herbicide-resistant genes will move into the populations of weeds. Crop plants are sometimes grown in areas where weedy relatives also live. If the crop plants hybridize and reproduce with weedy relatives, then this herbicide-resistant gene will be perpetuated in the offspring. In this way, the resistant gene can make its way into the weed population. If this happens, a farmer can no longer use glyphosate, for example, to kill those weeds. This scenario is not likely to occur in many instances because there are no weedy relatives growing near the crop plant. However, in some cases, it may become a serious problem. For example, canola readily hybridizes with mustard weed species and could transfer its herbicide-resistant genes to those weeds. We know that evolution will occur when transgenic plants are grown on a large scale over a period of time. Of special concern is the development of insect populations resistant to the Bt toxin. This pesticide has been applied to plants for decades without the development of insect-resistant populations. 
However, transgenic Bt plants express the toxin in all tissues throughout the growing season. Therefore, all insects carrying genes that make them susceptible to the toxin will die. That leaves only the genetically resistant insects alive to perpetuate the population. When these resistant insects mate, they will produce a high proportion of offspring capable of surviving in the presence of the Bt toxin. Farmers are attempting to slow the development of insect resistance in Bt crops by, for example, planting nontransgenic border rows to provide a refuge for susceptible insects. These insects may allow Bt susceptibility to remain in the population. Perhaps the most serious concern about the transgenic crop plants currently in use is that they encourage farmers to move farther away from sustainable agricultural farming practices, meaning ones that allow natural resources to continually regenerate over the long run. Transgenics, at least superficially, simplify farming by reducing the choices made by the manager. Planting a glyphosate-resistant crop commits a farmer to using that herbicide for the season, probably to the exclusion of all other herbicides and other weed-control practices. Farmers who use Bt transgenics may not feel that they need to follow through with integrated pest-management practices that use beneficial insects and timely applications of pesticides to control insect pests. A more sustainable approach would be to plant nontransgenic corn, monitor the fields throughout the growing season, and then apply a pesticide only if and when needed.", "hypothesis": "Planting nontransgenic plants alongside Bt plants may help Bt-susceptible insects to remain part of the population.", "label": "e"} +{"uid": "id_128", "premise": "Transgenic Plants Genes from virtually any organism, from viruses to humans, can now be inserted into plants, creating what are known as transgenic plants. Now used in agriculture, there are approximately 109 million acres of transgenic crops grown worldwide, 68 percent of which are in the United States. The most common transgenic crops are soybeans, corn, cotton, and canola. Most often, these plants either contain a gene making them resistant to the herbicide glyphosate or they contain an insect-resistant gene that produces a protein called Bt toxin. On the positive side, proponents of transgenic crops argue that these crops are environmentally friendly because they allow farmers to use fewer and less noxious chemicals for crop production. For example, a 21 percent reduction in the use of insecticide has been reported on Bt cotton (transgenic cotton that produces Bt toxin). In addition, when glyphosate is used to control weeds, other, more persistent herbicides do not need to be applied. On the negative side, opponents of transgenic crops suggest that there are many questions that need to be answered before transgenic crops are grown on a large scale. One question deals with the effects that Bt plants have on nontarget organisms such as beneficial insects, worms, and birds that consume the genetically engineered crop. For example, monarch caterpillars feeding on milkweed plants near Bt cornfields will eat some corn pollen that has fallen on the milkweed leaves. Laboratory studies indicate that caterpillars can die from eating Bt pollen. However, field tests indicate that Bt corn is not likely to harm monarchs. Furthermore, the application of pesticides (the alternative to growing Bt plants) has been demonstrated to cause widespread harm to nontarget insects. 
Another unanswered question is whether herbicide-resistant genes will move into the populations of weeds. Crop plants are sometimes grown in areas where weedy relatives also live. If the crop plants hybridize and reproduce with weedy relatives, then this herbicide-resistant gene will be perpetuated in the offspring. In this way, the resistant gene can make its way into the weed population. If this happens, a farmer can no longer use glyphosate, for example, to kill those weeds. This scenario is not likely to occur in many instances because there are no weedy relatives growing near the crop plant. However, in some cases, it may become a serious problem. For example, canola readily hybridizes with mustard weed species and could transfer its herbicide-resistant genes to those weeds. We know that evolution will occur when transgenic plants are grown on a large scale over a period of time. Of special concern is the development of insect populations resistant to the Bt toxin. This pesticide has been applied to plants for decades without the development of insect-resistant populations. However, transgenic Bt plants express the toxin in all tissues throughout the growing season. Therefore, all insects carrying genes that make them susceptible to the toxin will die. That leaves only the genetically resistant insects alive to perpetuate the population. When these resistant insects mate, they will produce a high proportion of offspring capable of surviving in the presence of the Bt toxin. Farmers are attempting to slow the development of insect resistance in Bt crops by, for example, planting nontransgenic border rows to provide a refuge for susceptible insects. These insects may allow Bt susceptibility to remain in the population. Perhaps the most serious concern about the transgenic crop plants currently in use is that they encourage farmers to move farther away from sustainable agricultural farming practices, meaning ones that allow natural resources to continually regenerate over the long run. Transgenics, at least superficially, simplify farming by reducing the choices made by the manager. Planting a glyphosate-resistant crop commits a farmer to using that herbicide for the season, probably to the exclusion of all other herbicides and other weed-control practices. Farmers who use Bt transgenics may not feel that they need to follow through with integrated pest-management practices that use beneficial insects and timely applications of pesticides to control insect pests. A more sustainable approach would be to plant nontransgenic corn, monitor the fields throughout the growing season, and then apply a pesticide only if and when needed.", "hypothesis": "Because Bt plants are toxic at all times and in all tissues, they allow only Bt-resistant insects to survive and reproduce.", "label": "e"} +{"uid": "id_129", "premise": "Transgenic Plants Genes from virtually any organism, from viruses to humans, can now be inserted into plants, creating what are known as transgenic plants. Now used in agriculture, there are approximately 109 million acres of transgenic crops grown worldwide, 68 percent of which are in the United States. The most common transgenic crops are soybeans, corn, cotton, and canola. Most often, these plants either contain a gene making them resistant to the herbicide glyphosate or they contain an insect-resistant gene that produces a protein called Bt toxin. 
On the positive side, proponents of transgenic crops argue that these crops are environmentally friendly because they allow farmers to use fewer and less noxious chemicals for crop production. For example, a 21 percent reduction in the use of insecticide has been reported on Bt cotton (transgenic cotton that produces Bt toxin). In addition, when glyphosate is used to control weeds, other, more persistent herbicides do not need to be applied. On the negative side, opponents of transgenic crops suggest that there are many questions that need to be answered before transgenic crops are grown on a large scale. One question deals with the effects that Bt plants have on nontarget organisms such as beneficial insects, worms, and birds that consume the genetically engineered crop. For example, monarch caterpillars feeding on milkweed plants near Bt cornfields will eat some corn pollen that has fallen on the milkweed leaves. Laboratory studies indicate that caterpillars can die from eating Bt pollen. However, field tests indicate that Bt corn is not likely to harm monarchs. Furthermore, the application of pesticides (the alternative to growing Bt plants) has been demonstrated to cause widespread harm to nontarget insects. Another unanswered question is whether herbicide-resistant genes will move into the populations of weeds. Crop plants are sometimes grown in areas where weedy relatives also live. If the crop plants hybridize and reproduce with weedy relatives, then this herbicide-resistant gene will be perpetuated in the offspring. In this way, the resistant gene can make its way into the weed population. If this happens, a farmer can no longer use glyphosate, for example, to kill those weeds. This scenario is not likely to occur in many instances because there are no weedy relatives growing near the crop plant. However, in some cases, it may become a serious problem. For example, canola readily hybridizes with mustard weed species and could transfer its herbicide-resistant genes to those weeds. We know that evolution will occur when transgenic plants are grown on a large scale over a period of time. Of special concern is the development of insect populations resistant to the Bt toxin. This pesticide has been applied to plants for decades without the development of insect-resistant populations. However, transgenic Bt plants express the toxin in all tissues throughout the growing season. Therefore, all insects carrying genes that make them susceptible to the toxin will die. That leaves only the genetically resistant insects alive to perpetuate the population. When these resistant insects mate, they will produce a high proportion of offspring capable of surviving in the presence of the Bt toxin. Farmers are attempting to slow the development of insect resistance in Bt crops by, for example, planting nontransgenic border rows to provide a refuge for susceptible insects. These insects may allow Bt susceptibility to remain in the population. Perhaps the most serious concern about the transgenic crop plants currently in use is that they encourage farmers to move farther away from sustainable agricultural farming practices, meaning ones that allow natural resources to continually regenerate over the long run. Transgenics, at least superficially, simplify farming by reducing the choices made by the manager. Planting a glyphosate-resistant crop commits a farmer to using that herbicide for the season, probably to the exclusion of all other herbicides and other weed-control practices. 
Farmers who use Bt transgenics may not feel that they need to follow through with integrated pest-management practices that use beneficial insects and timely applications of pesticides to control insect pests. A more sustainable approach would be to plant nontransgenic corn, monitor the fields throughout the growing season, and then apply a pesticide only if and when needed.", "hypothesis": "Regular use of Bt pesticides has not created resistant insect populations, so the use of Bt plants is probably safe as well.", "label": "c"} +{"uid": "id_130", "premise": "Transgenic Plants Genes from virtually any organism, from viruses to humans, can now be inserted into plants, creating what are known as transgenic plants. Now used in agriculture, there are approximately 109 million acres of transgenic crops grown worldwide, 68 percent of which are in the United States. The most common transgenic crops are soybeans, corn, cotton, and canola. Most often, these plants either contain a gene making them resistant to the herbicide glyphosate or they contain an insect-resistant gene that produces a protein called Bt toxin. On the positive side, proponents of transgenic crops argue that these crops are environmentally friendly because they allow farmers to use fewer and less noxious chemicals for crop production. For example, a 21 percent reduction in the use of insecticide has been reported on Bt cotton (transgenic cotton that produces Bt toxin). In addition, when glyphosate is used to control weeds, other, more persistent herbicides do not need to be applied. On the negative side, opponents of transgenic crops suggest that there are many questions that need to be answered before transgenic crops are grown on a large scale. One question deals with the effects that Bt plants have on nontarget organisms such as beneficial insects, worms, and birds that consume the genetically engineered crop. For example, monarch caterpillars feeding on milkweed plants near Bt cornfields will eat some corn pollen that has fallen on the milkweed leaves. Laboratory studies indicate that caterpillars can die from eating Bt pollen. However, field tests indicate that Bt corn is not likely to harm monarchs. Furthermore, the application of pesticides (the alternative to growing Bt plants) has been demonstrated to cause widespread harm to nontarget insects. Another unanswered question is whether herbicide-resistant genes will move into the populations of weeds. Crop plants are sometimes grown in areas where weedy relatives also live. If the crop plants hybridize and reproduce with weedy relatives, then this herbicide-resistant gene will be perpetuated in the offspring. In this way, the resistant gene can make its way into the weed population. If this happens, a farmer can no longer use glyphosate, for example, to kill those weeds. This scenario is not likely to occur in many instances because there are no weedy relatives growing near the crop plant. However, in some cases, it may become a serious problem. For example, canola readily hybridizes with mustard weed species and could transfer its herbicide-resistant genes to those weeds. We know that evolution will occur when transgenic plants are grown on a large scale over a period of time. Of special concern is the development of insect populations resistant to the Bt toxin. This pesticide has been applied to plants for decades without the development of insect-resistant populations. However, transgenic Bt plants express the toxin in all tissues throughout the growing season. 
Therefore, all insects carrying genes that make them susceptible to the toxin will die. That leaves only the genetically resistant insects alive to perpetuate the population. When these resistant insects mate, they will produce a high proportion of offspring capable of surviving in the presence of the Bt toxin. Farmers are attempting to slow the development of insect resistance in Bt crops by, for example, planting nontransgenic border rows to provide a refuge for susceptible insects. These insects may allow Bt susceptibility to remain in the population. Perhaps the most serious concern about the transgenic crop plants currently in use is that they encourage farmers to move farther away from sustainable agricultural farming practices, meaning ones that allow natural resources to continually regenerate over the long run. Transgenics, at least superficially, simplify farming by reducing the choices made by the manager. Planting a glyphosate-resistant crop commits a farmer to using that herbicide for the season, probably to the exclusion of all other herbicides and other weed-control practices. Farmers who use Bt transgenics may not feel that they need to follow through with integrated pest-management practices that use beneficial insects and timely applications of pesticides to control insect pests. A more sustainable approach would be to plant nontransgenic corn, monitor the fields throughout the growing season, and then apply a pesticide only if and when needed.", "hypothesis": "The evolution of Bt-resistant insect populations will happen eventually if use of transgenic plants becomes widespread.", "label": "e"} +{"uid": "id_131", "premise": "Translated novels written by female writers are a small subset. Translations make up a tiny fraction of the books published in the UK and US, and roughly a quarter of them are written by women. Various recent counts have found that about 26% of English translations are female-authored books (although the gender balance among the translators of this subgroup is roughly equal). That means that fewer than 100 foreign-language books authored by women make their way to the UK every year. But things may be changing. Two new publishing houses have been founded in the UK, whose mission is to publish only translations of books authored by women. There is still plenty of non-English writing waiting to be published.", "hypothesis": "Many more books written by women will be translated in the future.", "label": "n"} +{"uid": "id_132", "premise": "Translated novels written by female writers are a small subset. Translations make up a tiny fraction of the books published in the UK and US, and roughly a quarter of them are written by women. Various recent counts have found that about 26% of English translations are female-authored books (although the gender balance among the translators of this subgroup is roughly equal). That means that fewer than 100 foreign-language books authored by women make their way to the UK every year. But things may be changing. Two new publishing houses have been founded in the UK, whose mission is to publish only translations of books authored by women. There is still plenty of non-English writing waiting to be published.", "hypothesis": "Each year, at most 400 English-translated books are published in the UK", "label": "e"} +{"uid": "id_133", "premise": "Translated novels written by female writers are a small subset. 
Translations make up a tiny fraction of the books published in the UK and US, and roughly a quarter of them are written by women. Various recent counts have found that about 26% of English translations are female-authored books (although the gender balance among the translators of this subgroup is roughly equal). That means that fewer than 100 foreign-language books authored by women make their way to the UK every year. But things may be changing. Two new publishing houses have been founded in the UK, whose mission is to publish only translations of books authored by women. There is still plenty of non-English writing waiting to be published.", "hypothesis": "About half of translators in the UK are women.", "label": "n"} +{"uid": "id_134", "premise": "Trends in the Indian fashion and textile industries During the 1950s, the Indian fashion scene was exciting, stylish and very graceful. There were no celebrity designers or models, nor were there any labels that were widely recognised. The value of a garment was judged by its style and fabric rather than by who made it. It was regarded as perfectly acceptable, even for high-society women, to approach an unknown tailor who could make a garment for a few rupees, providing the perfect fit, finish and style. They were proud of getting a bargain, and of giving their own name to the end result. The 1960s was an era full of mischievousness and celebration in the arts, music and cinema. The period was characterised by freedom from restrictions and, in the fashion world, an acceptance of innovative types of material such as plastic and coated polyester. Tight-fitting kurtas and churidars and high coiffures were a trend among women. The following decade witnessed an increase in the export of traditional materials, and the arrival in India of international fashion. Synthetics became trendy, and the disco culture affected the fashion scene. It was in the early 80s when the first fashion store Ravissant opened in Mumbai. At that time garments were retailed for a four-figure price tag. American designers like Calvin Klein became popular. In India too, contours became more masculine, and even the salwar kameez was designed with shoulder pads. With the evolution of designer stores came the culture of designer fashion, along with its hefty price tags. Whatever a garment was like, consumers were convinced that a higher price tag signified elegant designer fashion, so garments were sold at unbelievable prices. Meanwhile, designers decided to get themselves noticed by making showy outfits and associating with the right celebrities. Soon, fashion shows became competitive, each designer attempting to out-do the other in theme, guest list and media coverage. In the last decade of the millennium, the market shrank and ethnic wear made a comeback. During the recession, there was a push to sell at any cost. With fierce competition the inevitable occurred: the once hefty price tags began their downward journey, and the fashion-show industry followed suit. However, the liveliness of the Indian fashion scene had not ended it had merely reached a stable level. At the beginning of the 21st century, with new designers and models, and more sensible designs, the fashion industry accelerated once again. As far as the global fashion industry is concerned, Indian ethnic designs and materials are currently in demand from fashion houses and garment manufacturers. 
India is the third largest producer of cotton, the second largest producer of silk, and the fifth largest producer of man-made fibres in the world. The Indian garment and fabric industries have many fundamental advantages, in terms of a cheaper, skilled work force, cost-effective production, raw materials, flexibility, and a wide range of designs with sequins, beadwork, and embroidery. In addition, that India provides garments to international fashion houses at competitive prices, with a shorter lead time, and an effective monopoly on certain designs, is accepted the whole world over. India has always been regarded as the default source in the embroidered garments segment, but changes in the rate of exchange between the rupee and the dollar has further depressed prices, thereby attracting more buyers. So the international fashion houses walk away with customised goods, and craftwork is sold at very low rates. As far as the fabric market is concerned, the range available in India can attract as well as confuse the buyer. Much of the production takes place in the small town of Chapa in the eastern state of Bihar, a name one might never have heard of. Here fabric-making is a family industry; the range and quality of raw silks churned out here belie the crude production methods and equipment. Surat in Gujarat, is the supplier of an amazing set of jacquards, moss crepes and georgette sheers all fabrics in high demand. Another Indian fabric design that has been adopted by the fashion industry is the Madras check, originally utilised for the universal lungi, a simple lower-body wrap worn in southern India. This design has now found its way on to bandannas, blouses, home furnishings and almost anything one can think of. Ethnic Indian designs with batik and hand-embroidered motifs have also become popular across the world. Decorative bead work is another product in demand in the international market. Beads are used to prepare accessory items like belts and bags, and beadwork is now available for haute couture evening wear too.", "hypothesis": "At the start of the 21st century, key elements in the Indian fashion industry changed.", "label": "e"} +{"uid": "id_135", "premise": "Trends in the Indian fashion and textile industries During the 1950s, the Indian fashion scene was exciting, stylish and very graceful. There were no celebrity designers or models, nor were there any labels that were widely recognised. The value of a garment was judged by its style and fabric rather than by who made it. It was regarded as perfectly acceptable, even for high-society women, to approach an unknown tailor who could make a garment for a few rupees, providing the perfect fit, finish and style. They were proud of getting a bargain, and of giving their own name to the end result. The 1960s was an era full of mischievousness and celebration in the arts, music and cinema. The period was characterised by freedom from restrictions and, in the fashion world, an acceptance of innovative types of material such as plastic and coated polyester. Tight-fitting kurtas and churidars and high coiffures were a trend among women. The following decade witnessed an increase in the export of traditional materials, and the arrival in India of international fashion. Synthetics became trendy, and the disco culture affected the fashion scene. It was in the early 80s when the first fashion store Ravissant opened in Mumbai. At that time garments were retailed for a four-figure price tag. American designers like Calvin Klein became popular. 
In India too, contours became more masculine, and even the salwar kameez was designed with shoulder pads. With the evolution of designer stores came the culture of designer fashion, along with its hefty price tags. Whatever a garment was like, consumers were convinced that a higher price tag signified elegant designer fashion, so garments were sold at unbelievable prices. Meanwhile, designers decided to get themselves noticed by making showy outfits and associating with the right celebrities. Soon, fashion shows became competitive, each designer attempting to out-do the other in theme, guest list and media coverage. In the last decade of the millennium, the market shrank and ethnic wear made a comeback. During the recession, there was a push to sell at any cost. With fierce competition the inevitable occurred: the once hefty price tags began their downward journey, and the fashion-show industry followed suit. However, the liveliness of the Indian fashion scene had not ended it had merely reached a stable level. At the beginning of the 21st century, with new designers and models, and more sensible designs, the fashion industry accelerated once again. As far as the global fashion industry is concerned, Indian ethnic designs and materials are currently in demand from fashion houses and garment manufacturers. India is the third largest producer of cotton, the second largest producer of silk, and the fifth largest producer of man-made fibres in the world. The Indian garment and fabric industries have many fundamental advantages, in terms of a cheaper, skilled work force, cost-effective production, raw materials, flexibility, and a wide range of designs with sequins, beadwork, and embroidery. In addition, that India provides garments to international fashion houses at competitive prices, with a shorter lead time, and an effective monopoly on certain designs, is accepted the whole world over. India has always been regarded as the default source in the embroidered garments segment, but changes in the rate of exchange between the rupee and the dollar has further depressed prices, thereby attracting more buyers. So the international fashion houses walk away with customised goods, and craftwork is sold at very low rates. As far as the fabric market is concerned, the range available in India can attract as well as confuse the buyer. Much of the production takes place in the small town of Chapa in the eastern state of Bihar, a name one might never have heard of. Here fabric-making is a family industry; the range and quality of raw silks churned out here belie the crude production methods and equipment. Surat in Gujarat, is the supplier of an amazing set of jacquards, moss crepes and georgette sheers all fabrics in high demand. Another Indian fabric design that has been adopted by the fashion industry is the Madras check, originally utilised for the universal lungi, a simple lower-body wrap worn in southern India. This design has now found its way on to bandannas, blouses, home furnishings and almost anything one can think of. Ethnic Indian designs with batik and hand-embroidered motifs have also become popular across the world. Decorative bead work is another product in demand in the international market. 
Beads are used to prepare accessory items like belts and bags, and beadwork is now available for haute couture evening wear too.", "hypothesis": "India now exports more than half of the cotton it produces.", "label": "n"} +{"uid": "id_136", "premise": "Trends in the Indian fashion and textile industries During the 1950s, the Indian fashion scene was exciting, stylish and very graceful. There were no celebrity designers or models, nor were there any labels that were widely recognised. The value of a garment was judged by its style and fabric rather than by who made it. It was regarded as perfectly acceptable, even for high-society women, to approach an unknown tailor who could make a garment for a few rupees, providing the perfect fit, finish and style. They were proud of getting a bargain, and of giving their own name to the end result. The 1960s was an era full of mischievousness and celebration in the arts, music and cinema. The period was characterised by freedom from restrictions and, in the fashion world, an acceptance of innovative types of material such as plastic and coated polyester. Tight-fitting kurtas and churidars and high coiffures were a trend among women. The following decade witnessed an increase in the export of traditional materials, and the arrival in India of international fashion. Synthetics became trendy, and the disco culture affected the fashion scene. It was in the early 80s when the first fashion store Ravissant opened in Mumbai. At that time garments were retailed for a four-figure price tag. American designers like Calvin Klein became popular. In India too, contours became more masculine, and even the salwar kameez was designed with shoulder pads. With the evolution of designer stores came the culture of designer fashion, along with its hefty price tags. Whatever a garment was like, consumers were convinced that a higher price tag signified elegant designer fashion, so garments were sold at unbelievable prices. Meanwhile, designers decided to get themselves noticed by making showy outfits and associating with the right celebrities. Soon, fashion shows became competitive, each designer attempting to out-do the other in theme, guest list and media coverage. In the last decade of the millennium, the market shrank and ethnic wear made a comeback. During the recession, there was a push to sell at any cost. With fierce competition the inevitable occurred: the once hefty price tags began their downward journey, and the fashion-show industry followed suit. However, the liveliness of the Indian fashion scene had not ended it had merely reached a stable level. At the beginning of the 21st century, with new designers and models, and more sensible designs, the fashion industry accelerated once again. As far as the global fashion industry is concerned, Indian ethnic designs and materials are currently in demand from fashion houses and garment manufacturers. India is the third largest producer of cotton, the second largest producer of silk, and the fifth largest producer of man-made fibres in the world. The Indian garment and fabric industries have many fundamental advantages, in terms of a cheaper, skilled work force, cost-effective production, raw materials, flexibility, and a wide range of designs with sequins, beadwork, and embroidery. In addition, that India provides garments to international fashion houses at competitive prices, with a shorter lead time, and an effective monopoly on certain designs, is accepted the whole world over. 
India has always been regarded as the default source in the embroidered garments segment, but changes in the rate of exchange between the rupee and the dollar has further depressed prices, thereby attracting more buyers. So the international fashion houses walk away with customised goods, and craftwork is sold at very low rates. As far as the fabric market is concerned, the range available in India can attract as well as confuse the buyer. Much of the production takes place in the small town of Chapa in the eastern state of Bihar, a name one might never have heard of. Here fabric-making is a family industry; the range and quality of raw silks churned out here belie the crude production methods and equipment. Surat in Gujarat, is the supplier of an amazing set of jacquards, moss crepes and georgette sheers all fabrics in high demand. Another Indian fabric design that has been adopted by the fashion industry is the Madras check, originally utilised for the universal lungi, a simple lower-body wrap worn in southern India. This design has now found its way on to bandannas, blouses, home furnishings and almost anything one can think of. Ethnic Indian designs with batik and hand-embroidered motifs have also become popular across the world. Decorative bead work is another product in demand in the international market. Beads are used to prepare accessory items like belts and bags, and beadwork is now available for haute couture evening wear too.", "hypothesis": "Modern machinery accounts for the high quality of Chapas silk.", "label": "c"} +{"uid": "id_137", "premise": "Trends in the Indian fashion and textile industries During the 1950s, the Indian fashion scene was exciting, stylish and very graceful. There were no celebrity designers or models, nor were there any labels that were widely recognised. The value of a garment was judged by its style and fabric rather than by who made it. It was regarded as perfectly acceptable, even for high-society women, to approach an unknown tailor who could make a garment for a few rupees, providing the perfect fit, finish and style. They were proud of getting a bargain, and of giving their own name to the end result. The 1960s was an era full of mischievousness and celebration in the arts, music and cinema. The period was characterised by freedom from restrictions and, in the fashion world, an acceptance of innovative types of material such as plastic and coated polyester. Tight-fitting kurtas and churidars and high coiffures were a trend among women. The following decade witnessed an increase in the export of traditional materials, and the arrival in India of international fashion. Synthetics became trendy, and the disco culture affected the fashion scene. It was in the early 80s when the first fashion store Ravissant opened in Mumbai. At that time garments were retailed for a four-figure price tag. American designers like Calvin Klein became popular. In India too, contours became more masculine, and even the salwar kameez was designed with shoulder pads. With the evolution of designer stores came the culture of designer fashion, along with its hefty price tags. Whatever a garment was like, consumers were convinced that a higher price tag signified elegant designer fashion, so garments were sold at unbelievable prices. Meanwhile, designers decided to get themselves noticed by making showy outfits and associating with the right celebrities. 
Soon, fashion shows became competitive, each designer attempting to out-do the other in theme, guest list and media coverage. In the last decade of the millennium, the market shrank and ethnic wear made a comeback. During the recession, there was a push to sell at any cost. With fierce competition the inevitable occurred: the once hefty price tags began their downward journey, and the fashion-show industry followed suit. However, the liveliness of the Indian fashion scene had not ended it had merely reached a stable level. At the beginning of the 21st century, with new designers and models, and more sensible designs, the fashion industry accelerated once again. As far as the global fashion industry is concerned, Indian ethnic designs and materials are currently in demand from fashion houses and garment manufacturers. India is the third largest producer of cotton, the second largest producer of silk, and the fifth largest producer of man-made fibres in the world. The Indian garment and fabric industries have many fundamental advantages, in terms of a cheaper, skilled work force, cost-effective production, raw materials, flexibility, and a wide range of designs with sequins, beadwork, and embroidery. In addition, that India provides garments to international fashion houses at competitive prices, with a shorter lead time, and an effective monopoly on certain designs, is accepted the whole world over. India has always been regarded as the default source in the embroidered garments segment, but changes in the rate of exchange between the rupee and the dollar has further depressed prices, thereby attracting more buyers. So the international fashion houses walk away with customised goods, and craftwork is sold at very low rates. As far as the fabric market is concerned, the range available in India can attract as well as confuse the buyer. Much of the production takes place in the small town of Chapa in the eastern state of Bihar, a name one might never have heard of. Here fabric-making is a family industry; the range and quality of raw silks churned out here belie the crude production methods and equipment. Surat in Gujarat, is the supplier of an amazing set of jacquards, moss crepes and georgette sheers all fabrics in high demand. Another Indian fabric design that has been adopted by the fashion industry is the Madras check, originally utilised for the universal lungi, a simple lower-body wrap worn in southern India. This design has now found its way on to bandannas, blouses, home furnishings and almost anything one can think of. Ethnic Indian designs with batik and hand-embroidered motifs have also become popular across the world. Decorative bead work is another product in demand in the international market. Beads are used to prepare accessory items like belts and bags, and beadwork is now available for haute couture evening wear too.", "hypothesis": "Some types of Indian craftwork which are internationally popular had humble origins.", "label": "e"} +{"uid": "id_138", "premise": "Trends in the Indian fashion and textile industries During the 1950s, the Indian fashion scene was exciting, stylish and very graceful. There were no celebrity designers or models, nor were there any labels that were widely recognised. The value of a garment was judged by its style and fabric rather than by who made it. It was regarded as perfectly acceptable, even for high-society women, to approach an unknown tailor who could make a garment for a few rupees, providing the perfect fit, finish and style. 
They were proud of getting a bargain, and of giving their own name to the end result. The 1960s was an era full of mischievousness and celebration in the arts, music and cinema. The period was characterised by freedom from restrictions and, in the fashion world, an acceptance of innovative types of material such as plastic and coated polyester. Tight-fitting kurtas and churidars and high coiffures were a trend among women. The following decade witnessed an increase in the export of traditional materials, and the arrival in India of international fashion. Synthetics became trendy, and the disco culture affected the fashion scene. It was in the early 80s when the first fashion store Ravissant opened in Mumbai. At that time garments were retailed for a four-figure price tag. American designers like Calvin Klein became popular. In India too, contours became more masculine, and even the salwar kameez was designed with shoulder pads. With the evolution of designer stores came the culture of designer fashion, along with its hefty price tags. Whatever a garment was like, consumers were convinced that a higher price tag signified elegant designer fashion, so garments were sold at unbelievable prices. Meanwhile, designers decided to get themselves noticed by making showy outfits and associating with the right celebrities. Soon, fashion shows became competitive, each designer attempting to out-do the other in theme, guest list and media coverage. In the last decade of the millennium, the market shrank and ethnic wear made a comeback. During the recession, there was a push to sell at any cost. With fierce competition the inevitable occurred: the once hefty price tags began their downward journey, and the fashion-show industry followed suit. However, the liveliness of the Indian fashion scene had not ended it had merely reached a stable level. At the beginning of the 21st century, with new designers and models, and more sensible designs, the fashion industry accelerated once again. As far as the global fashion industry is concerned, Indian ethnic designs and materials are currently in demand from fashion houses and garment manufacturers. India is the third largest producer of cotton, the second largest producer of silk, and the fifth largest producer of man-made fibres in the world. The Indian garment and fabric industries have many fundamental advantages, in terms of a cheaper, skilled work force, cost-effective production, raw materials, flexibility, and a wide range of designs with sequins, beadwork, and embroidery. In addition, that India provides garments to international fashion houses at competitive prices, with a shorter lead time, and an effective monopoly on certain designs, is accepted the whole world over. India has always been regarded as the default source in the embroidered garments segment, but changes in the rate of exchange between the rupee and the dollar has further depressed prices, thereby attracting more buyers. So the international fashion houses walk away with customised goods, and craftwork is sold at very low rates. As far as the fabric market is concerned, the range available in India can attract as well as confuse the buyer. Much of the production takes place in the small town of Chapa in the eastern state of Bihar, a name one might never have heard of. Here fabric-making is a family industry; the range and quality of raw silks churned out here belie the crude production methods and equipment. 
Surat in Gujarat, is the supplier of an amazing set of jacquards, moss crepes and georgette sheers all fabrics in high demand. Another Indian fabric design that has been adopted by the fashion industry is the Madras check, originally utilised for the universal lungi, a simple lower-body wrap worn in southern India. This design has now found its way on to bandannas, blouses, home furnishings and almost anything one can think of. Ethnic Indian designs with batik and hand-embroidered motifs have also become popular across the world. Decorative bead work is another product in demand in the international market. Beads are used to prepare accessory items like belts and bags, and beadwork is now available for haute couture evening wear too.", "hypothesis": "Conditions in India are generally well suited to the manufacture of clothing.", "label": "e"} +{"uid": "id_139", "premise": "Trends in the Indian fashion and textile industries During the 1950s, the Indian fashion scene was exciting, stylish and very graceful. There were no celebrity designers or models, nor were there any labels that were widely recognised. The value of a garment was judged by its style and fabric rather than by who made it. It was regarded as perfectly acceptable, even for high-society women, to approach an unknown tailor who could make a garment for a few rupees, providing the perfect fit, finish and style. They were proud of getting a bargain, and of giving their own name to the end result. The 1960s was an era full of mischievousness and celebration in the arts, music and cinema. The period was characterised by freedom from restrictions and, in the fashion world, an acceptance of innovative types of material such as plastic and coated polyester. Tight-fitting kurtas and churidars and high coiffures were a trend among women. The following decade witnessed an increase in the export of traditional materials, and the arrival in India of international fashion. Synthetics became trendy, and the disco culture affected the fashion scene. It was in the early 80s when the first fashion store Ravissant opened in Mumbai. At that time garments were retailed for a four-figure price tag. American designers like Calvin Klein became popular. In India too, contours became more masculine, and even the salwar kameez was designed with shoulder pads. With the evolution of designer stores came the culture of designer fashion, along with its hefty price tags. Whatever a garment was like, consumers were convinced that a higher price tag signified elegant designer fashion, so garments were sold at unbelievable prices. Meanwhile, designers decided to get themselves noticed by making showy outfits and associating with the right celebrities. Soon, fashion shows became competitive, each designer attempting to out-do the other in theme, guest list and media coverage. In the last decade of the millennium, the market shrank and ethnic wear made a comeback. During the recession, there was a push to sell at any cost. With fierce competition the inevitable occurred: the once hefty price tags began their downward journey, and the fashion-show industry followed suit. However, the liveliness of the Indian fashion scene had not ended it had merely reached a stable level. At the beginning of the 21st century, with new designers and models, and more sensible designs, the fashion industry accelerated once again. 
As far as the global fashion industry is concerned, Indian ethnic designs and materials are currently in demand from fashion houses and garment manufacturers. India is the third largest producer of cotton, the second largest producer of silk, and the fifth largest producer of man-made fibres in the world. The Indian garment and fabric industries have many fundamental advantages, in terms of a cheaper, skilled work force, cost-effective production, raw materials, flexibility, and a wide range of designs with sequins, beadwork, and embroidery. In addition, that India provides garments to international fashion houses at competitive prices, with a shorter lead time, and an effective monopoly on certain designs, is accepted the whole world over. India has always been regarded as the default source in the embroidered garments segment, but changes in the rate of exchange between the rupee and the dollar has further depressed prices, thereby attracting more buyers. So the international fashion houses walk away with customised goods, and craftwork is sold at very low rates. As far as the fabric market is concerned, the range available in India can attract as well as confuse the buyer. Much of the production takes place in the small town of Chapa in the eastern state of Bihar, a name one might never have heard of. Here fabric-making is a family industry; the range and quality of raw silks churned out here belie the crude production methods and equipment. Surat in Gujarat, is the supplier of an amazing set of jacquards, moss crepes and georgette sheers all fabrics in high demand. Another Indian fabric design that has been adopted by the fashion industry is the Madras check, originally utilised for the universal lungi, a simple lower-body wrap worn in southern India. This design has now found its way on to bandannas, blouses, home furnishings and almost anything one can think of. Ethnic Indian designs with batik and hand-embroidered motifs have also become popular across the world. Decorative bead work is another product in demand in the international market. Beads are used to prepare accessory items like belts and bags, and beadwork is now available for haute couture evening wear too.", "hypothesis": "Indian clothing exports have suffered from changes in the value of its currency.", "label": "c"} +{"uid": "id_140", "premise": "Trespassing occurs when a person enters a building without permission from the owner and (1) posted signs or verbal warnings prohibit the presence of unauthorized persons, or (2) the owner or other authorized person asks that person to leave.", "hypothesis": "Frank is looking at new cars at a dealership after hours when a security guard tells him to leave, which he does immediately. This is the best example of Trespassing.", "label": "c"} +{"uid": "id_141", "premise": "Trespassing occurs when a person enters a building without permission from the owner and (1) posted signs or verbal warnings prohibit the presence of unauthorized persons, or (2) the owner or other authorized person asks that person to leave.", "hypothesis": "A transient is sleeping in a vacant office building. Posted on the wall is a sign that reads No Trespassing. 
Private Property.", "label": "e"} +{"uid": "id_142", "premise": "Trespassing occurs when a person enters a building without permission from the owner and (1) posted signs or verbal warnings prohibit the presence of unauthorized persons, or (2) the owner or other authorized person asks that person to leave.", "hypothesis": "Alonzo is hosting a party with his roommate, Manny, who gets angry and tells Alonzo to leave or he'll have him arrested for trespassing. This is the best example of Trespassing.", "label": "c"} +{"uid": "id_143", "premise": "Trespassing occurs when a person enters a building without permission from the owner and (1) posted signs or verbal warnings prohibit the presence of unauthorized persons, or (2) the owner or other authorized person asks that person to leave.", "hypothesis": "Ben is walking home from school one afternoon, takes a shortcut through Mrs. Benson's front yard, and then hears her yelling at him that she is going to have him arrested for trespassing. This is the best example of Trespassing.", "label": "c"} +{"uid": "id_144", "premise": "Twenty-four billion is invested in premium bonds and in the past 10 years the number of bonds in the draw has increased sevenfold. The chances of winning have recently changed from 27,500 to one to 24,000 to one. Record sales have meant that a new machine to select winning numbers randomly was required. The predecessor took five and a half hours to complete the draw, while the new machine can complete the task in half that time. Each month there are 1 million winners.", "hypothesis": "The new machine takes 150 minutes to draw the 1 million winning numbers.", "label": "c"} +{"uid": "id_145", "premise": "Twenty-four billion is invested in premium bonds and in the past 10 years the number of bonds in the draw has increased sevenfold. The chances of winning have recently changed from 27,500 to one to 24,000 to one. Record sales have meant that a new machine to select winning numbers randomly was required. The predecessor took five and a half hours to complete the draw, while the new machine can complete the task in half that time. Each month there are 1 million winners.", "hypothesis": "The chances of winning a prize have increased and there are now more winners numbers.", "label": "e"} +{"uid": "id_146", "premise": "Twenty-four billion is invested in premium bonds and in the past 10 years the number of bonds in the draw has increased sevenfold. The chances of winning have recently changed from 27,500 to one to 24,000 to one. Record sales have meant that a new machine to select winning numbers randomly was required. The predecessor took five and a half hours to complete the draw, while the new machine can complete the task in half that time. Each month there are 1 million winners.", "hypothesis": "The new machine is a computer.", "label": "n"} +{"uid": "id_147", "premise": "Twenty-seven-year-old Tom Smith is a very successful long distance runner. Because he is classified as an elite athlete he receives financial support. When he was at school Tom won the European Junior Cross-country Championship. As an adult Tom has repre- sented Great Britain on many occasions. His form has improved dramatically over the last two years. On the basis of coming third in the 10,000 metres at the European Championships and second in the marathon at the World Championships, he was selected to run in the 10,000 metres and the marathon at the Olympic Games. However, following random drug testing Tom was found to have taken a banned stimulant. 
It is also known that: Tom maintains his innocence and has appealed against the finding. Tom has been tested and found to be clean on several previous occasions. Tom has been using a nasal decongestant spray. Tom is coached by a former East German coach who had links with athletes who have been banned for using performance- enhancing drugs. Tom claims his performance has improved because he can now afford to train at altitude in the USA.", "hypothesis": "Tom was third in the marathon at the World Championships.", "label": "c"} +{"uid": "id_148", "premise": "Twenty-seven-year-old Tom Smith is a very successful long distance runner. Because he is classified as an elite athlete he receives financial support. When he was at school Tom won the European Junior Cross-country Championship. As an adult Tom has repre- sented Great Britain on many occasions. His form has improved dramatically over the last two years. On the basis of coming third in the 10,000 metres at the European Championships and second in the marathon at the World Championships, he was selected to run in the 10,000 metres and the marathon at the Olympic Games. However, following random drug testing Tom was found to have taken a banned stimulant. It is also known that: Tom maintains his innocence and has appealed against the finding. Tom has been tested and found to be clean on several previous occasions. Tom has been using a nasal decongestant spray. Tom is coached by a former East German coach who had links with athletes who have been banned for using performance- enhancing drugs. Tom claims his performance has improved because he can now afford to train at altitude in the USA.", "hypothesis": "Tom was given an illegal stimulant by his coach.", "label": "n"} +{"uid": "id_149", "premise": "Twenty-seven-year-old Tom Smith is a very successful long distance runner. Because he is classified as an elite athlete he receives financial support. When he was at school Tom won the European Junior Cross-country Championship. As an adult Tom has repre- sented Great Britain on many occasions. His form has improved dramatically over the last two years. On the basis of coming third in the 10,000 metres at the European Championships and second in the marathon at the World Championships, he was selected to run in the 10,000 metres and the marathon at the Olympic Games. However, following random drug testing Tom was found to have taken a banned stimulant. It is also known that: Tom maintains his innocence and has appealed against the finding. Tom has been tested and found to be clean on several previous occasions. Tom has been using a nasal decongestant spray. Tom is coached by a former East German coach who had links with athletes who have been banned for using performance- enhancing drugs. Tom claims his performance has improved because he can now afford to train at altitude in the USA.", "hypothesis": "Tom had been the European Junior Cross-country champion.", "label": "e"} +{"uid": "id_150", "premise": "Twenty-seven-year-old Tom Smith is a very successful long distance runner. Because he is classified as an elite athlete he receives financial support. When he was at school Tom won the European Junior Cross-country Championship. As an adult Tom has repre- sented Great Britain on many occasions. His form has improved dramatically over the last two years. 
On the basis of coming third in the 10,000 metres at the European Championships and second in the marathon at the World Championships, he was selected to run in the 10,000 metres and the marathon at the Olympic Games. However, following random drug testing Tom was found to have taken a banned stimulant. It is also known that: Tom maintains his innocence and has appealed against the finding. Tom has been tested and found to be clean on several previous occasions. Tom has been using a nasal decongestant spray. Tom is coached by a former East German coach who had links with athletes who have been banned for using performance- enhancing drugs. Tom claims his performance has improved because he can now afford to train at altitude in the USA.", "hypothesis": "Tom had been taking medication.", "label": "e"} +{"uid": "id_151", "premise": "Twenty-seven-year-old Tom Smith is a very successful long distance runner. Because he is classified as an elite athlete he receives financial support. When he was at school Tom won the European Junior Cross-country Championship. As an adult Tom has repre- sented Great Britain on many occasions. His form has improved dramatically over the last two years. On the basis of coming third in the 10,000 metres at the European Championships and second in the marathon at the World Championships, he was selected to run in the 10,000 metres and the marathon at the Olympic Games. However, following random drug testing Tom was found to have taken a banned stimulant. It is also known that: Tom maintains his innocence and has appealed against the finding. Tom has been tested and found to be clean on several previous occasions. Tom has been using a nasal decongestant spray. Tom is coached by a former East German coach who had links with athletes who have been banned for using performance- enhancing drugs. Tom claims his performance has improved because he can now afford to train at altitude in the USA.", "hypothesis": "Tom depends entirely on his winnings for his income.", "label": "c"} +{"uid": "id_152", "premise": "Twice as many people live till they are 100 in France as in Britain. Yet the two coun- tries have similar sized populations and have diets with similar amounts of fat. In fact life expectancy is considerably better in France from the age of 65 onwards and it seems that lifestyle and diet may have a lot to do with it. Leaving aside the fact that the French probably have the best national health service in the world, statistics suggest that the French remain active longer and consume more units of fruit and vegetables. They also enjoy considerably more glasses of red wine and it seems these differences give rise to far lower levels of death caused by heart disease and this allows significant numbers of people to live until their centenary.", "hypothesis": "Four differences are attributed to the reason the French have a far lower level of death caused by heart disease: the best national health service, remaining active, consuming more fruit and vegetables and enjoying more red wine.", "label": "c"} +{"uid": "id_153", "premise": "Twice as many people live till they are 100 in France as in Britain. Yet the two coun- tries have similar sized populations and have diets with similar amounts of fat. In fact life expectancy is considerably better in France from the age of 65 onwards and it seems that lifestyle and diet may have a lot to do with it. 
Leaving aside the fact that the French probably have the best national health service in the world, statistics suggest that the French remain active longer and consume more units of fruit and vegetables. They also enjoy considerably more glasses of red wine and it seems these differences give rise to far lower levels of death caused by heart disease and this allows significant numbers of people to live until their centenary.", "hypothesis": "Even if twice as many people in France see their centenary it may be that very few people live to see their 100th birthday in either country.", "label": "c"} +{"uid": "id_154", "premise": "Two Wings and a Toolkit Betty and her mate Abel are captive crows in the care of Alex Kacelnik, an expert in animal behaviour at Oxford University. They belong to a forest-dwelling species of bird (Corvus rnoneduloides) confined to two islands in the South Pacific. New Caledonian crows are tenacious predators, and the only birds that habitually use a wide selection of self-made tools to find food. One of the wild crows cleverest tools is the crochet hook, made by detaching a side twig from a larger one, leaving enough of the larger twig to shape into a hook. Equally cunning is a tool crafted from the barbed vine-leaf, which consists of a central rib with paired leaflets each with a rose-like thorn at its base. They strip out a piece of this rib, removing the leaflets and all but one thorn at the top, which remains as a ready-made hook to prise out insects from awkward cracks. The crows also make an ingenious tool called a padanus probe from padanus tree leaves. The tool has a broad base, sharp tip, a row of tiny hooks along one edge, and a tapered shape created by the crow nipping and tearing to form a progression of three or four steps along the other edge of the leaf. What makes this tool special is that they manufacture it to a standard design, as if following a set of instructions. Although it is rare to catch a crow in the act of clipping out a padanus probe, we do have ample proof of their workmanship: the discarded leaves from which the tools are cut. The remarkable thing that these counterpart leaves tell us is that crows consistently produce the same design every time, with no in-between or trial versions. Its left the researchers wondering whether, like people, they envisage the tool before they start and perform the actions they know are needed to make it. Research has revealed that genetics plays a part in the less sophisticated toolmaking skills of finches in the Galapagos islands. No one knows if thats also the case for New Caledonian crows, but its highly unlikely that their toolmaking skills are hardwired into the brain. The picture so far points to a combination of cultural transmission from parent birds to their young and individual resourcefulness, says Kacelnik. In a test at Oxford, Kacelniks team offered Betty and Abel an original challenge food in a bucket at the bottom of a well. The only way to get the food was to hook the bucket out by its handle. Given a choice of tools a straight length of wire and one with a hooked end the birds immediately picked the hook, showing that they did indeed understand the functional properties of the tool. But do they also have the foresight and creativity to plan the construction of their tools? It appears they do. In one bucket-in-the-well test, Abel carried off the hook, leaving Betty with nothing but the straight wire. What happened next was absolutely amazing, says Kacelnik. 
She wedged the tip of the wire into a crack in a plastic dish and pulled the other end to fashion her own hook. Wild crows dont have access to pliable, bendable material that retains its shape, and Bettys only similar experience was a brief encounter with some pipe cleaners a year earlier. In nine out of ten further tests, she again made hooks and retrieved the bucket. The question of whats going on in a crows mind will take time and a lot more experiments to answer, but there could be a lesson in it for understanding our own evolution. Maybe our ancestors, who suddenly began to create symmetrical tools with carefully worked edges some 1.5 million years ago, didnt actually have the sophisticated mental abilities with which we credit them. Closer scrutiny of the brains of New Caledonian crows might provide a few pointers to the special attributes they would have needed. If were lucky we may find specific developments in the brain that set these animals apart, says Kacelnik. One of these might be a very strong degree of laterality the specialisation of one side of the brain to perform specific tasks. In people, the left side of the brain controls the processing of complex sequential tasks, and also language and speech. One of the consequences of this is thought to be right-handedness. Interestingly, biologists have noticed that most padanus probes are cut from the left side of the leaf, meaning that the birds clip them with the right side of their beaks the crow equivalent of right- handedness. The team thinks this reflects the fact that the left side of the crows brain is specialised to handle the sequential processing required to make complex tools. Under what conditions might this extraordinary talent have emerged in these two species? They are both social creatures, and wide-ranging in their feeding habits. These factors were probably important but, ironically, it may have been their shortcomings that triggered the evolution of toolmaking. Maybe the ancestors of crows and humans found themselves in a position where they couldnt make the physical adaptations required for survival so they had to change their behaviour instead. The stage was then set for the evolution of those rare cognitive skills that produce sophisticated tools. New Caledonian crows may tell us what those crucial skills are.", "hypothesis": "Research into how the padanus probe is made has helped to explain the toolmaking skills of many other bird species.", "label": "n"} +{"uid": "id_155", "premise": "Two Wings and a Toolkit Betty and her mate Abel are captive crows in the care of Alex Kacelnik, an expert in animal behaviour at Oxford University. They belong to a forest-dwelling species of bird (Corvus rnoneduloides) confined to two islands in the South Pacific. New Caledonian crows are tenacious predators, and the only birds that habitually use a wide selection of self-made tools to find food. One of the wild crows cleverest tools is the crochet hook, made by detaching a side twig from a larger one, leaving enough of the larger twig to shape into a hook. Equally cunning is a tool crafted from the barbed vine-leaf, which consists of a central rib with paired leaflets each with a rose-like thorn at its base. They strip out a piece of this rib, removing the leaflets and all but one thorn at the top, which remains as a ready-made hook to prise out insects from awkward cracks. The crows also make an ingenious tool called a padanus probe from padanus tree leaves. 
The tool has a broad base, sharp tip, a row of tiny hooks along one edge, and a tapered shape created by the crow nipping and tearing to form a progression of three or four steps along the other edge of the leaf. What makes this tool special is that they manufacture it to a standard design, as if following a set of instructions. Although it is rare to catch a crow in the act of clipping out a padanus probe, we do have ample proof of their workmanship: the discarded leaves from which the tools are cut. The remarkable thing that these counterpart leaves tell us is that crows consistently produce the same design every time, with no in-between or trial versions. Its left the researchers wondering whether, like people, they envisage the tool before they start and perform the actions they know are needed to make it. Research has revealed that genetics plays a part in the less sophisticated toolmaking skills of finches in the Galapagos islands. No one knows if thats also the case for New Caledonian crows, but its highly unlikely that their toolmaking skills are hardwired into the brain. The picture so far points to a combination of cultural transmission from parent birds to their young and individual resourcefulness, says Kacelnik. In a test at Oxford, Kacelniks team offered Betty and Abel an original challenge food in a bucket at the bottom of a well. The only way to get the food was to hook the bucket out by its handle. Given a choice of tools a straight length of wire and one with a hooked end the birds immediately picked the hook, showing that they did indeed understand the functional properties of the tool. But do they also have the foresight and creativity to plan the construction of their tools? It appears they do. In one bucket-in-the-well test, Abel carried off the hook, leaving Betty with nothing but the straight wire. What happened next was absolutely amazing, says Kacelnik. She wedged the tip of the wire into a crack in a plastic dish and pulled the other end to fashion her own hook. Wild crows dont have access to pliable, bendable material that retains its shape, and Bettys only similar experience was a brief encounter with some pipe cleaners a year earlier. In nine out of ten further tests, she again made hooks and retrieved the bucket. The question of whats going on in a crows mind will take time and a lot more experiments to answer, but there could be a lesson in it for understanding our own evolution. Maybe our ancestors, who suddenly began to create symmetrical tools with carefully worked edges some 1.5 million years ago, didnt actually have the sophisticated mental abilities with which we credit them. Closer scrutiny of the brains of New Caledonian crows might provide a few pointers to the special attributes they would have needed. If were lucky we may find specific developments in the brain that set these animals apart, says Kacelnik. One of these might be a very strong degree of laterality the specialisation of one side of the brain to perform specific tasks. In people, the left side of the brain controls the processing of complex sequential tasks, and also language and speech. One of the consequences of this is thought to be right-handedness. Interestingly, biologists have noticed that most padanus probes are cut from the left side of the leaf, meaning that the birds clip them with the right side of their beaks the crow equivalent of right- handedness. 
The team thinks this reflects the fact that the left side of the crows brain is specialised to handle the sequential processing required to make complex tools. Under what conditions might this extraordinary talent have emerged in these two species? They are both social creatures, and wide-ranging in their feeding habits. These factors were probably important but, ironically, it may have been their shortcomings that triggered the evolution of toolmaking. Maybe the ancestors of crows and humans found themselves in a position where they couldnt make the physical adaptations required for survival so they had to change their behaviour instead. The stage was then set for the evolution of those rare cognitive skills that produce sophisticated tools. New Caledonian crows may tell us what those crucial skills are.", "hypothesis": "There appears to be a fixed pattern for the padanus probes construction.", "label": "e"} +{"uid": "id_156", "premise": "Two Wings and a Toolkit Betty and her mate Abel are captive crows in the care of Alex Kacelnik, an expert in animal behaviour at Oxford University. They belong to a forest-dwelling species of bird (Corvus rnoneduloides) confined to two islands in the South Pacific. New Caledonian crows are tenacious predators, and the only birds that habitually use a wide selection of self-made tools to find food. One of the wild crows cleverest tools is the crochet hook, made by detaching a side twig from a larger one, leaving enough of the larger twig to shape into a hook. Equally cunning is a tool crafted from the barbed vine-leaf, which consists of a central rib with paired leaflets each with a rose-like thorn at its base. They strip out a piece of this rib, removing the leaflets and all but one thorn at the top, which remains as a ready-made hook to prise out insects from awkward cracks. The crows also make an ingenious tool called a padanus probe from padanus tree leaves. The tool has a broad base, sharp tip, a row of tiny hooks along one edge, and a tapered shape created by the crow nipping and tearing to form a progression of three or four steps along the other edge of the leaf. What makes this tool special is that they manufacture it to a standard design, as if following a set of instructions. Although it is rare to catch a crow in the act of clipping out a padanus probe, we do have ample proof of their workmanship: the discarded leaves from which the tools are cut. The remarkable thing that these counterpart leaves tell us is that crows consistently produce the same design every time, with no in-between or trial versions. Its left the researchers wondering whether, like people, they envisage the tool before they start and perform the actions they know are needed to make it. Research has revealed that genetics plays a part in the less sophisticated toolmaking skills of finches in the Galapagos islands. No one knows if thats also the case for New Caledonian crows, but its highly unlikely that their toolmaking skills are hardwired into the brain. The picture so far points to a combination of cultural transmission from parent birds to their young and individual resourcefulness, says Kacelnik. In a test at Oxford, Kacelniks team offered Betty and Abel an original challenge food in a bucket at the bottom of a well. The only way to get the food was to hook the bucket out by its handle. 
Given a choice of tools a straight length of wire and one with a hooked end the birds immediately picked the hook, showing that they did indeed understand the functional properties of the tool. But do they also have the foresight and creativity to plan the construction of their tools? It appears they do. In one bucket-in-the-well test, Abel carried off the hook, leaving Betty with nothing but the straight wire. What happened next was absolutely amazing, says Kacelnik. She wedged the tip of the wire into a crack in a plastic dish and pulled the other end to fashion her own hook. Wild crows dont have access to pliable, bendable material that retains its shape, and Bettys only similar experience was a brief encounter with some pipe cleaners a year earlier. In nine out of ten further tests, she again made hooks and retrieved the bucket. The question of whats going on in a crows mind will take time and a lot more experiments to answer, but there could be a lesson in it for understanding our own evolution. Maybe our ancestors, who suddenly began to create symmetrical tools with carefully worked edges some 1.5 million years ago, didnt actually have the sophisticated mental abilities with which we credit them. Closer scrutiny of the brains of New Caledonian crows might provide a few pointers to the special attributes they would have needed. If were lucky we may find specific developments in the brain that set these animals apart, says Kacelnik. One of these might be a very strong degree of laterality the specialisation of one side of the brain to perform specific tasks. In people, the left side of the brain controls the processing of complex sequential tasks, and also language and speech. One of the consequences of this is thought to be right-handedness. Interestingly, biologists have noticed that most padanus probes are cut from the left side of the leaf, meaning that the birds clip them with the right side of their beaks the crow equivalent of right- handedness. The team thinks this reflects the fact that the left side of the crows brain is specialised to handle the sequential processing required to make complex tools. Under what conditions might this extraordinary talent have emerged in these two species? They are both social creatures, and wide-ranging in their feeding habits. These factors were probably important but, ironically, it may have been their shortcomings that triggered the evolution of toolmaking. Maybe the ancestors of crows and humans found themselves in a position where they couldnt make the physical adaptations required for survival so they had to change their behaviour instead. The stage was then set for the evolution of those rare cognitive skills that produce sophisticated tools. New Caledonian crows may tell us what those crucial skills are.", "hypothesis": "Crows seem to practise a number of times before making a usable padanus probe.", "label": "c"} +{"uid": "id_157", "premise": "Two Wings and a Toolkit Betty and her mate Abel are captive crows in the care of Alex Kacelnik, an expert in animal behaviour at Oxford University. They belong to a forest-dwelling species of bird (Corvus rnoneduloides) confined to two islands in the South Pacific. New Caledonian crows are tenacious predators, and the only birds that habitually use a wide selection of self-made tools to find food. One of the wild crows cleverest tools is the crochet hook, made by detaching a side twig from a larger one, leaving enough of the larger twig to shape into a hook. 
Equally cunning is a tool crafted from the barbed vine-leaf, which consists of a central rib with paired leaflets each with a rose-like thorn at its base. They strip out a piece of this rib, removing the leaflets and all but one thorn at the top, which remains as a ready-made hook to prise out insects from awkward cracks. The crows also make an ingenious tool called a padanus probe from padanus tree leaves. The tool has a broad base, sharp tip, a row of tiny hooks along one edge, and a tapered shape created by the crow nipping and tearing to form a progression of three or four steps along the other edge of the leaf. What makes this tool special is that they manufacture it to a standard design, as if following a set of instructions. Although it is rare to catch a crow in the act of clipping out a padanus probe, we do have ample proof of their workmanship: the discarded leaves from which the tools are cut. The remarkable thing that these counterpart leaves tell us is that crows consistently produce the same design every time, with no in-between or trial versions. Its left the researchers wondering whether, like people, they envisage the tool before they start and perform the actions they know are needed to make it. Research has revealed that genetics plays a part in the less sophisticated toolmaking skills of finches in the Galapagos islands. No one knows if thats also the case for New Caledonian crows, but its highly unlikely that their toolmaking skills are hardwired into the brain. The picture so far points to a combination of cultural transmission from parent birds to their young and individual resourcefulness, says Kacelnik. In a test at Oxford, Kacelniks team offered Betty and Abel an original challenge food in a bucket at the bottom of a well. The only way to get the food was to hook the bucket out by its handle. Given a choice of tools a straight length of wire and one with a hooked end the birds immediately picked the hook, showing that they did indeed understand the functional properties of the tool. But do they also have the foresight and creativity to plan the construction of their tools? It appears they do. In one bucket-in-the-well test, Abel carried off the hook, leaving Betty with nothing but the straight wire. What happened next was absolutely amazing, says Kacelnik. She wedged the tip of the wire into a crack in a plastic dish and pulled the other end to fashion her own hook. Wild crows dont have access to pliable, bendable material that retains its shape, and Bettys only similar experience was a brief encounter with some pipe cleaners a year earlier. In nine out of ten further tests, she again made hooks and retrieved the bucket. The question of whats going on in a crows mind will take time and a lot more experiments to answer, but there could be a lesson in it for understanding our own evolution. Maybe our ancestors, who suddenly began to create symmetrical tools with carefully worked edges some 1.5 million years ago, didnt actually have the sophisticated mental abilities with which we credit them. Closer scrutiny of the brains of New Caledonian crows might provide a few pointers to the special attributes they would have needed. If were lucky we may find specific developments in the brain that set these animals apart, says Kacelnik. One of these might be a very strong degree of laterality the specialisation of one side of the brain to perform specific tasks. 
In people, the left side of the brain controls the processing of complex sequential tasks, and also language and speech. One of the consequences of this is thought to be right-handedness. Interestingly, biologists have noticed that most padanus probes are cut from the left side of the leaf, meaning that the birds clip them with the right side of their beaks the crow equivalent of right- handedness. The team thinks this reflects the fact that the left side of the crows brain is specialised to handle the sequential processing required to make complex tools. Under what conditions might this extraordinary talent have emerged in these two species? They are both social creatures, and wide-ranging in their feeding habits. These factors were probably important but, ironically, it may have been their shortcomings that triggered the evolution of toolmaking. Maybe the ancestors of crows and humans found themselves in a position where they couldnt make the physical adaptations required for survival so they had to change their behaviour instead. The stage was then set for the evolution of those rare cognitive skills that produce sophisticated tools. New Caledonian crows may tell us what those crucial skills are.", "hypothesis": "The researchers suspect the crows have a mental image of the padanus probe before they create it.", "label": "e"} +{"uid": "id_158", "premise": "Two Wings and a Toolkit Betty and her mate Abel are captive crows in the care of Alex Kacelnik, an expert in animal behaviour at Oxford University. They belong to a forest-dwelling species of bird (Corvus rnoneduloides) confined to two islands in the South Pacific. New Caledonian crows are tenacious predators, and the only birds that habitually use a wide selection of self-made tools to find food. One of the wild crows cleverest tools is the crochet hook, made by detaching a side twig from a larger one, leaving enough of the larger twig to shape into a hook. Equally cunning is a tool crafted from the barbed vine-leaf, which consists of a central rib with paired leaflets each with a rose-like thorn at its base. They strip out a piece of this rib, removing the leaflets and all but one thorn at the top, which remains as a ready-made hook to prise out insects from awkward cracks. The crows also make an ingenious tool called a padanus probe from padanus tree leaves. The tool has a broad base, sharp tip, a row of tiny hooks along one edge, and a tapered shape created by the crow nipping and tearing to form a progression of three or four steps along the other edge of the leaf. What makes this tool special is that they manufacture it to a standard design, as if following a set of instructions. Although it is rare to catch a crow in the act of clipping out a padanus probe, we do have ample proof of their workmanship: the discarded leaves from which the tools are cut. The remarkable thing that these counterpart leaves tell us is that crows consistently produce the same design every time, with no in-between or trial versions. Its left the researchers wondering whether, like people, they envisage the tool before they start and perform the actions they know are needed to make it. Research has revealed that genetics plays a part in the less sophisticated toolmaking skills of finches in the Galapagos islands. No one knows if thats also the case for New Caledonian crows, but its highly unlikely that their toolmaking skills are hardwired into the brain. 
The picture so far points to a combination of cultural transmission from parent birds to their young and individual resourcefulness, says Kacelnik. In a test at Oxford, Kacelniks team offered Betty and Abel an original challenge food in a bucket at the bottom of a well. The only way to get the food was to hook the bucket out by its handle. Given a choice of tools a straight length of wire and one with a hooked end the birds immediately picked the hook, showing that they did indeed understand the functional properties of the tool. But do they also have the foresight and creativity to plan the construction of their tools? It appears they do. In one bucket-in-the-well test, Abel carried off the hook, leaving Betty with nothing but the straight wire. What happened next was absolutely amazing, says Kacelnik. She wedged the tip of the wire into a crack in a plastic dish and pulled the other end to fashion her own hook. Wild crows dont have access to pliable, bendable material that retains its shape, and Bettys only similar experience was a brief encounter with some pipe cleaners a year earlier. In nine out of ten further tests, she again made hooks and retrieved the bucket. The question of whats going on in a crows mind will take time and a lot more experiments to answer, but there could be a lesson in it for understanding our own evolution. Maybe our ancestors, who suddenly began to create symmetrical tools with carefully worked edges some 1.5 million years ago, didnt actually have the sophisticated mental abilities with which we credit them. Closer scrutiny of the brains of New Caledonian crows might provide a few pointers to the special attributes they would have needed. If were lucky we may find specific developments in the brain that set these animals apart, says Kacelnik. One of these might be a very strong degree of laterality the specialisation of one side of the brain to perform specific tasks. In people, the left side of the brain controls the processing of complex sequential tasks, and also language and speech. One of the consequences of this is thought to be right-handedness. Interestingly, biologists have noticed that most padanus probes are cut from the left side of the leaf, meaning that the birds clip them with the right side of their beaks the crow equivalent of right- handedness. The team thinks this reflects the fact that the left side of the crows brain is specialised to handle the sequential processing required to make complex tools. Under what conditions might this extraordinary talent have emerged in these two species? They are both social creatures, and wide-ranging in their feeding habits. These factors were probably important but, ironically, it may have been their shortcomings that triggered the evolution of toolmaking. Maybe the ancestors of crows and humans found themselves in a position where they couldnt make the physical adaptations required for survival so they had to change their behaviour instead. The stage was then set for the evolution of those rare cognitive skills that produce sophisticated tools. New Caledonian crows may tell us what those crucial skills are.", "hypothesis": "There is plenty of evidence to indicate how the crows manufacture the padanus probe.", "label": "e"} +{"uid": "id_159", "premise": "Two Wings and a Toolkit Betty and her mate Abel are captive crows in the care of Alex Kacelnik, an expert in animal behaviour at Oxford University. 
They belong to a forest-dwelling species of bird (Corvus rnoneduloides) confined to two islands in the South Pacific. New Caledonian crows are tenacious predators, and the only birds that habitually use a wide selection of self-made tools to find food. One of the wild crows cleverest tools is the crochet hook, made by detaching a side twig from a larger one, leaving enough of the larger twig to shape into a hook. Equally cunning is a tool crafted from the barbed vine-leaf, which consists of a central rib with paired leaflets each with a rose-like thorn at its base. They strip out a piece of this rib, removing the leaflets and all but one thorn at the top, which remains as a ready-made hook to prise out insects from awkward cracks. The crows also make an ingenious tool called a padanus probe from padanus tree leaves. The tool has a broad base, sharp tip, a row of tiny hooks along one edge, and a tapered shape created by the crow nipping and tearing to form a progression of three or four steps along the other edge of the leaf. What makes this tool special is that they manufacture it to a standard design, as if following a set of instructions. Although it is rare to catch a crow in the act of clipping out a padanus probe, we do have ample proof of their workmanship: the discarded leaves from which the tools are cut. The remarkable thing that these counterpart leaves tell us is that crows consistently produce the same design every time, with no in-between or trial versions. Its left the researchers wondering whether, like people, they envisage the tool before they start and perform the actions they know are needed to make it. Research has revealed that genetics plays a part in the less sophisticated toolmaking skills of finches in the Galapagos islands. No one knows if thats also the case for New Caledonian crows, but its highly unlikely that their toolmaking skills are hardwired into the brain. The picture so far points to a combination of cultural transmission from parent birds to their young and individual resourcefulness, says Kacelnik. In a test at Oxford, Kacelniks team offered Betty and Abel an original challenge food in a bucket at the bottom of a well. The only way to get the food was to hook the bucket out by its handle. Given a choice of tools a straight length of wire and one with a hooked end the birds immediately picked the hook, showing that they did indeed understand the functional properties of the tool. But do they also have the foresight and creativity to plan the construction of their tools? It appears they do. In one bucket-in-the-well test, Abel carried off the hook, leaving Betty with nothing but the straight wire. What happened next was absolutely amazing, says Kacelnik. She wedged the tip of the wire into a crack in a plastic dish and pulled the other end to fashion her own hook. Wild crows dont have access to pliable, bendable material that retains its shape, and Bettys only similar experience was a brief encounter with some pipe cleaners a year earlier. In nine out of ten further tests, she again made hooks and retrieved the bucket. The question of whats going on in a crows mind will take time and a lot more experiments to answer, but there could be a lesson in it for understanding our own evolution. Maybe our ancestors, who suddenly began to create symmetrical tools with carefully worked edges some 1.5 million years ago, didnt actually have the sophisticated mental abilities with which we credit them. 
Closer scrutiny of the brains of New Caledonian crows might provide a few pointers to the special attributes they would have needed. If we're lucky we may find specific developments in the brain that set these animals apart, says Kacelnik. One of these might be a very strong degree of laterality - the specialisation of one side of the brain to perform specific tasks. In people, the left side of the brain controls the processing of complex sequential tasks, and also language and speech. One of the consequences of this is thought to be right-handedness. Interestingly, biologists have noticed that most padanus probes are cut from the left side of the leaf, meaning that the birds clip them with the right side of their beaks - the crow equivalent of right-handedness. The team thinks this reflects the fact that the left side of the crow's brain is specialised to handle the sequential processing required to make complex tools. Under what conditions might this extraordinary talent have emerged in these two species? They are both social creatures, and wide-ranging in their feeding habits. These factors were probably important but, ironically, it may have been their shortcomings that triggered the evolution of toolmaking. Maybe the ancestors of crows and humans found themselves in a position where they couldn't make the physical adaptations required for survival, so they had to change their behaviour instead. The stage was then set for the evolution of those rare cognitive skills that produce sophisticated tools. New Caledonian crows may tell us what those crucial skills are.", "hypothesis": "The researchers believe the ability to make the padanus probe is passed down to the crows in their genes.", "label": "c"}
+{"uid": "id_160", "premise": "Two charities have delivered a petition to the Prime Minister that has been signed by over 35,000 people. The petition, jointly organised by the 'Health Food Group' (HFG) and 'Happy Heart and Mind', is calling for a ban on junk food adverts before 9pm on any channel. The Government is also being urged to tighten advertising regulations and protect children in this regard more widely. The current regulations restrict junk food adverts from being shown during children's programming but there is nothing to stop them being shown during popular family slots, such as Saturday evenings when many children watch television with their families. Casey Stemp coordinated the petition and is a strong advocate of the proposed changes. 'By removing junk food adverts from television at any time before 9pm, we would be seeing a simple, popular and effective move that would help parents to tackle the increasing desire of young people to consume such foods.' The loopholes that junk food companies find mean that our younger generation are faced with a constant bombardment of junk food adverts. As future generations are becoming more and more obese, we have to look for opportunities to alleviate the temptations they are facing on a daily, if not hourly basis!", "hypothesis": "Saturday evenings are a time when many families would be tempted to indulge in junk food.", "label": "n"}
+{"uid": "id_161", "premise": "Two families of venomous snakes are native to the United States. The vast majority are pit vipers, of the family Crotalidae, which include rattlesnakes, copperheads and cottonmouths. Virtually all of the venomous bites in this country are from pit vipers. 
Some, Mojave rattlesnakes or canebrake rattlesnakes, for example, carry a neurotoxic venom that can affect the brain or spinal cord. Copperheads, on the other hand, have a milder and less dangerous venom that sometimes may not require antivenin treatment. The other family is Elapidae, which includes two species of coral snakes found chiefly in the Southern states. Related to the much more dangerous Asian cobras and kraits, coral snakes have small mouths and short teeth, which give them a less efficient venom delivery than pit vipers. People bitten by coral snakes lack the characteristic fang marks of pit vipers, sometimes making the bite hard to detect.", "hypothesis": "Coral snakes are found in Florida and Alabama.", "label": "n"} +{"uid": "id_162", "premise": "Two families of venomous snakes are native to the United States. The vast majority are pit vipers, of the family Crotalidae, which include rattlesnakes, copperheads and cottonmouths. Virtually all of the venomous bites in this country are from pit vipers. Some, Mojave rattlesnakes or canebrake rattlesnakes, for example, carry a neurotoxic venom that can affect the brain or spinal cord. Copperheads, on the other hand, have a milder and less dangerous venom that sometimes may not require antivenin treatment. The other family is Elapidae, which includes two species of coral snakes found chiefly in the Southern states. Related to the much more dangerous Asian cobras and kraits, coral snakes have small mouths and short teeth, which give them a less efficient venom delivery than pit vipers. People bitten by coral snakes lack the characteristic fang marks of pit vipers, sometimes making the bite hard to detect.", "hypothesis": "Bite marks from pit vipers can be hard to detect.", "label": "n"} +{"uid": "id_163", "premise": "Two families of venomous snakes are native to the United States. The vast majority are pit vipers, of the family Crotalidae, which include rattlesnakes, copperheads and cottonmouths. Virtually all of the venomous bites in this country are from pit vipers. Some, Mojave rattlesnakes or canebrake rattlesnakes, for example, carry a neurotoxic venom that can affect the brain or spinal cord. Copperheads, on the other hand, have a milder and less dangerous venom that sometimes may not require antivenin treatment. The other family is Elapidae, which includes two species of coral snakes found chiefly in the Southern states. Related to the much more dangerous Asian cobras and kraits, coral snakes have small mouths and short teeth, which give them a less efficient venom delivery than pit vipers. People bitten by coral snakes lack the characteristic fang marks of pit vipers, sometimes making the bite hard to detect.", "hypothesis": "Coral snakes are less dangerous than Asian cobras.", "label": "e"} +{"uid": "id_164", "premise": "Two families of venomous snakes are native to the United States. The vast majority are pit vipers, of the family Crotalidae, which include rattlesnakes, copperheads and cottonmouths. Virtually all of the venomous bites in this country are from pit vipers. Some, Mojave rattlesnakes or canebrake rattlesnakes, for example, carry a neurotoxic venom that can affect the brain or spinal cord. Copperheads, on the other hand, have a milder and less dangerous venom that sometimes may not require antivenin treatment. The other family is Elapidae, which includes two species of coral snakes found chiefly in the Southern states. 
Related to the much more dangerous Asian cobras and kraits, coral snakes have small mouths and short teeth, which give them a less efficient venom delivery than pit vipers. People bitten by coral snakes lack the characteristic fang marks of pit vipers, sometimes making the bite hard to detect.", "hypothesis": "Cottonmouths are also known as Water Moccasins.", "label": "c"} +{"uid": "id_165", "premise": "Two families of venomous snakes are native to the United States. The vast majority are pit vipers, of the family Crotalidae, which include rattlesnakes, copperheads and cottonmouths. Virtually all of the venomous bites in this country are from pit vipers. Some, Mojave rattlesnakes or canebrake rattlesnakes, for example, carry a neurotoxic venom that can affect the brain or spinal cord. Copperheads, on the other hand, have a milder and less dangerous venom that sometimes may not require antivenin treatment. The other family is Elapidae, which includes two species of coral snakes found chiefly in the Southern states. Related to the much more dangerous Asian cobras and kraits, coral snakes have small mouths and short teeth, which give them a less efficient venom delivery than pit vipers. People bitten by coral snakes lack the characteristic fang marks of pit vipers, sometimes making the bite hard to detect.", "hypothesis": "Crotalidae and Elapidae are native to the United States.", "label": "e"} +{"uid": "id_166", "premise": "Two inter-city railway carriages were found ablaze last night (10 March) on a siding near Glundal station. Three elderly men were seen at the station at 19.00 last night and reliable witnesses say that they were all over 6 ft tall and that one of the men had a bad limp. It is also known that: The carriages belonged to Southern Trains and were waiting to be repaired. Fred Wish is 6 ft 5 in tall and 58 years old. Bob Tuck is a retired train driver. The railway company made Rod Debbs redundant in January. The carriages had been taken out of service because of electrical faults. A violent thunderstorm occurred over Glundal on 10 March. John Plum is 64 years old and has just left hospital after a knee operation. Sixty-one-year-old Dennis White, a former railway worker, is 5 ft 6 in tall.", "hypothesis": "Dennis White was one of the three elderly men seen at the station at 19.00 on the night of the fire.", "label": "c"} +{"uid": "id_167", "premise": "Two inter-city railway carriages were found ablaze last night (10 March) on a siding near Glundal station. Three elderly men were seen at the station at 19.00 last night and reliable witnesses say that they were all over 6 ft tall and that one of the men had a bad limp. It is also known that: The carriages belonged to Southern Trains and were waiting to be repaired. Fred Wish is 6 ft 5 in tall and 58 years old. Bob Tuck is a retired train driver. The railway company made Rod Debbs redundant in January. The carriages had been taken out of service because of electrical faults. A violent thunderstorm occurred over Glundal on 10 March. John Plum is 64 years old and has just left hospital after a knee operation. Sixty-one-year-old Dennis White, a former railway worker, is 5 ft 6 in tall.", "hypothesis": "Lightning could have started the fire in the railway carriages.", "label": "e"} +{"uid": "id_168", "premise": "Two inter-city railway carriages were found ablaze last night (10 March) on a siding near Glundal station. 
Three elderly men were seen at the station at 19.00 last night and reliable witnesses say that they were all over 6 ft tall and that one of the men had a bad limp. It is also known that: The carriages belonged to Southern Trains and were waiting to be repaired. Fred Wish is 6 ft 5 in tall and 58 years old. Bob Tuck is a retired train driver. The railway company made Rod Debbs redundant in January. The carriages had been taken out of service because of electrical faults. A violent thunderstorm occurred over Glundal on 10 March. John Plum is 64 years old and has just left hospital after a knee operation. Sixty-one-year-old Dennis White, a former railway worker, is 5 ft 6 in tall.", "hypothesis": "Fred Wish could have been one of the three elderly men seen at the station.", "label": "e"} +{"uid": "id_169", "premise": "Two inter-city railway carriages were found ablaze last night (10 March) on a siding near Glundal station. Three elderly men were seen at the station at 19.00 last night and reliable witnesses say that they were all over 6 ft tall and that one of the men had a bad limp. It is also known that: The carriages belonged to Southern Trains and were waiting to be repaired. Fred Wish is 6 ft 5 in tall and 58 years old. Bob Tuck is a retired train driver. The railway company made Rod Debbs redundant in January. The carriages had been taken out of service because of electrical faults. A violent thunderstorm occurred over Glundal on 10 March. John Plum is 64 years old and has just left hospital after a knee operation. Sixty-one-year-old Dennis White, a former railway worker, is 5 ft 6 in tall.", "hypothesis": "Bob Tuck is over 65 years old.", "label": "n"} +{"uid": "id_170", "premise": "Two inter-city railway carriages were found ablaze last night (10 March) on a siding near Glundal station. Three elderly men were seen at the station at 19.00 last night and reliable witnesses say that they were all over 6 ft tall and that one of the men had a bad limp. It is also known that: The carriages belonged to Southern Trains and were waiting to be repaired. Fred Wish is 6 ft 5 in tall and 58 years old. Bob Tuck is a retired train driver. The railway company made Rod Debbs redundant in January. The carriages had been taken out of service because of electrical faults. A violent thunderstorm occurred over Glundal on 10 March. John Plum is 64 years old and has just left hospital after a knee operation. Sixty-one-year-old Dennis White, a former railway worker, is 5 ft 6 in tall.", "hypothesis": "Rod Debbs had a motive for the arson attack.", "label": "n"} +{"uid": "id_171", "premise": "Two masked gunmen held up the only bank in Tuisdale at 10.30 on Wednesday 23 May. They made a successful getaway with over 500,000. The police say that three men are helping them with their enquiries. It is also known that: Four people work at the bank. Six customers were in the bank at 10.30. No shots were fired. Ms Grainger left the bank at 10.28 on Wednesday 23 May. All the people in the bank were made to lie on the floor face down on their stomachs. The police chased the getaway car for 16 km, and then lost it. An alarm alerted the police to the hold-up. A red Ford Mondeo drove away from the bank at high speed at 10.30 on Wednesday 23 May.", "hypothesis": "The getaway car was a red Ford Mondeo.", "label": "n"} +{"uid": "id_172", "premise": "Two masked gunmen held up the only bank in Tuisdale at 10.30 on Wednesday 23 May. They made a successful getaway with over 500,000. 
The police say that three men are helping them with their enquiries. It is also known that: Four people work at the bank. Six customers were in the bank at 10.30. No shots were fired. Ms Grainger left the bank at 10.28 on Wednesday 23 May. All the people in the bank were made to lie on the floor face down on their stomachs. The police chased the getaway car for 16 km, and then lost it. An alarm alerted the police to the hold-up. A red Ford Mondeo drove away from the bank at high speed at 10.30 on Wednesday 23 May.", "hypothesis": "As a goodwill gesture, Tuisdales other bank provided emergency access to cash for customers after their ordeal.", "label": "c"} +{"uid": "id_173", "premise": "Two masked gunmen held up the only bank in Tuisdale at 10.30 on Wednesday 23 May. They made a successful getaway with over 500,000. The police say that three men are helping them with their enquiries. It is also known that: Four people work at the bank. Six customers were in the bank at 10.30. No shots were fired. Ms Grainger left the bank at 10.28 on Wednesday 23 May. All the people in the bank were made to lie on the floor face down on their stomachs. The police chased the getaway car for 16 km, and then lost it. An alarm alerted the police to the hold-up. A red Ford Mondeo drove away from the bank at high speed at 10.30 on Wednesday 23 May.", "hypothesis": "At least six people were lying on the floor in the bank.", "label": "e"} +{"uid": "id_174", "premise": "Two masked gunmen held up the only bank in Tuisdale at 10.30 on Wednesday 23 May. They made a successful getaway with over 500,000. The police say that three men are helping them with their enquiries. It is also known that: Four people work at the bank. Six customers were in the bank at 10.30. No shots were fired. Ms Grainger left the bank at 10.28 on Wednesday 23 May. All the people in the bank were made to lie on the floor face down on their stomachs. The police chased the getaway car for 16 km, and then lost it. An alarm alerted the police to the hold-up. A red Ford Mondeo drove away from the bank at high speed at 10.30 on Wednesday 23 May.", "hypothesis": "The cashier pressed an alarm in the bank, which is connected to the police station.", "label": "n"} +{"uid": "id_175", "premise": "Two masked gunmen held up the only bank in Tuisdale at 10.30 on Wednesday 23 May. They made a successful getaway with over 500,000. The police say that three men are helping them with their enquiries. It is also known that: Four people work at the bank. Six customers were in the bank at 10.30. No shots were fired. Ms Grainger left the bank at 10.28 on Wednesday 23 May. All the people in the bank were made to lie on the floor face down on their stomachs. The police chased the getaway car for 16 km, and then lost it. An alarm alerted the police to the hold-up. A red Ford Mondeo drove away from the bank at high speed at 10.30 on Wednesday 23 May.", "hypothesis": "One of the gunmen fired a shot to make everyone lie down on the floor.", "label": "c"} +{"uid": "id_176", "premise": "Two men posing as council workers have been accused of stealing 500 from an 84-year-old man after offering to clear up leaves from the driveway of his home. The two men called at a house in Aspenwood Drive, Sherlston, owned and lived in by Mr Stephen Pimblett during the morning of 12 October. After introducing them- selves, they offered to sweep up the leaves and remove them for the sum of 20, which he accepted. 
When the two men had completed the task he invited them into the house in order to pay them the agreed sum. At that point they informed him that the cost had risen to 75 because: they had cleared the garden of leaves in addition to the drive; and, he would have to bear the cost of transporting the large quantity of leaves to the council refuse collection point. He paid them the amount they had asked for, but he later claimed that they had stolen an additional sum of money from his house. It is also known that: Thieves posing as council workers and representatives of the utilities companies were known to have been operating in the area, and to have been targeting older people living on their own. Stephen Pimblett had a comprehensive home contents insurance policy which covered him against losses by accidental damage, fire and theft. Mr Pimblett had a large garden, which was surrounded by tall hedges and included many bushes and a number of fruit trees. The autumn weather had been very windy causing the trees to shed their leaves. Mr Pimblett preferred to pay for everything by cash and had a reputation for keeping substantial sums of money hidden in the house. Mr Pimblett was in good health but his family was concerned that he was becoming a bit forgetful and had a habit of falling asleep in an armchair while sitting watching television.", "hypothesis": "Stephen Pimblett was the owner-occupier of a house in Aspenwood Drive, Sherlston.", "label": "e"}
+{"uid": "id_177", "premise": "Two men posing as council workers have been accused of stealing 500 from an 84-year-old man after offering to clear up leaves from the driveway of his home. The two men called at a house in Aspenwood Drive, Sherlston, owned and lived in by Mr Stephen Pimblett during the morning of 12 October. After introducing themselves, they offered to sweep up the leaves and remove them for the sum of 20, which he accepted. When the two men had completed the task he invited them into the house in order to pay them the agreed sum. At that point they informed him that the cost had risen to 75 because: they had cleared the garden of leaves in addition to the drive; and, he would have to bear the cost of transporting the large quantity of leaves to the council refuse collection point. He paid them the amount they had asked for, but he later claimed that they had stolen an additional sum of money from his house. It is also known that: Thieves posing as council workers and representatives of the utilities companies were known to have been operating in the area, and to have been targeting older people living on their own. Stephen Pimblett had a comprehensive home contents insurance policy which covered him against losses by accidental damage, fire and theft. Mr Pimblett had a large garden, which was surrounded by tall hedges and included many bushes and a number of fruit trees. The autumn weather had been very windy causing the trees to shed their leaves. Mr Pimblett preferred to pay for everything by cash and had a reputation for keeping substantial sums of money hidden in the house. 
Mr Pimblett was in good health but his family was concerned that he was becoming a bit forgetful and had a habit of falling asleep in an armchair while sitting watching television.", "hypothesis": "If it had been stolen from his house by the two men as Mr Pimblett claimed, he would have had no means of recovering the lost money.", "label": "c"}
+{"uid": "id_178", "premise": "Two men posing as council workers have been accused of stealing 500 from an 84-year-old man after offering to clear up leaves from the driveway of his home. The two men called at a house in Aspenwood Drive, Sherlston, owned and lived in by Mr Stephen Pimblett during the morning of 12 October. After introducing themselves, they offered to sweep up the leaves and remove them for the sum of 20, which he accepted. When the two men had completed the task he invited them into the house in order to pay them the agreed sum. At that point they informed him that the cost had risen to 75 because: they had cleared the garden of leaves in addition to the drive; and, he would have to bear the cost of transporting the large quantity of leaves to the council refuse collection point. He paid them the amount they had asked for, but he later claimed that they had stolen an additional sum of money from his house. It is also known that: Thieves posing as council workers and representatives of the utilities companies were known to have been operating in the area, and to have been targeting older people living on their own. Stephen Pimblett had a comprehensive home contents insurance policy which covered him against losses by accidental damage, fire and theft. Mr Pimblett had a large garden, which was surrounded by tall hedges and included many bushes and a number of fruit trees. The autumn weather had been very windy causing the trees to shed their leaves. Mr Pimblett preferred to pay for everything by cash and had a reputation for keeping substantial sums of money hidden in the house. Mr Pimblett was in good health but his family was concerned that he was becoming a bit forgetful and had a habit of falling asleep in an armchair while sitting watching television.", "hypothesis": "One of the two men entered the house and stole the money while Mr Pimblett was asleep in an armchair in front of the television.", "label": "n"}
+{"uid": "id_179", "premise": "Two men posing as council workers have been accused of stealing 500 from an 84-year-old man after offering to clear up leaves from the driveway of his home. The two men called at a house in Aspenwood Drive, Sherlston, owned and lived in by Mr Stephen Pimblett during the morning of 12 October. After introducing themselves, they offered to sweep up the leaves and remove them for the sum of 20, which he accepted. When the two men had completed the task he invited them into the house in order to pay them the agreed sum. At that point they informed him that the cost had risen to 75 because: they had cleared the garden of leaves in addition to the drive; and, he would have to bear the cost of transporting the large quantity of leaves to the council refuse collection point. He paid them the amount they had asked for, but he later claimed that they had stolen an additional sum of money from his house. It is also known that: Thieves posing as council workers and representatives of the utilities companies were known to have been operating in the area, and to have been targeting older people living on their own. 
Stephen Pimblett had a comprehensive home contents insurance policy which covered him against losses by accidental damage, fire and theft. Mr Pimblett had a large garden, which was surrounded by tall hedges and included many bushes and a number of fruit trees. The autumn weather had been very windy causing the trees to shed their leaves. Mr Pimblett preferred to pay for everything by cash and had a reputation for keeping substantial sums of money hidden in the house. Mr Pimblett was in good health but his family was concerned that he was becoming a bit forgetful and had a habit of falling asleep in an armchair while sitting watching television.", "hypothesis": "The two men that called at Mr Pimblett's house and offered to clear up the fallen leaves from his driveway were part of a gang of confidence tricksters known to have been operating in the area.", "label": "n"}
+{"uid": "id_180", "premise": "Two men posing as council workers have been accused of stealing 500 from an 84-year-old man after offering to clear up leaves from the driveway of his home. The two men called at a house in Aspenwood Drive, Sherlston, owned and lived in by Mr Stephen Pimblett during the morning of 12 October. After introducing themselves, they offered to sweep up the leaves and remove them for the sum of 20, which he accepted. When the two men had completed the task he invited them into the house in order to pay them the agreed sum. At that point they informed him that the cost had risen to 75 because: they had cleared the garden of leaves in addition to the drive; and, he would have to bear the cost of transporting the large quantity of leaves to the council refuse collection point. He paid them the amount they had asked for, but he later claimed that they had stolen an additional sum of money from his house. It is also known that: Thieves posing as council workers and representatives of the utilities companies were known to have been operating in the area, and to have been targeting older people living on their own. Stephen Pimblett had a comprehensive home contents insurance policy which covered him against losses by accidental damage, fire and theft. Mr Pimblett had a large garden, which was surrounded by tall hedges and included many bushes and a number of fruit trees. The autumn weather had been very windy causing the trees to shed their leaves. Mr Pimblett preferred to pay for everything by cash and had a reputation for keeping substantial sums of money hidden in the house. 
Mr Pimblett was in good health but his family was concerned that he was becoming a bit forgetful and had a habit of falling asleep in an armchair while sitting watching television.", "hypothesis": "A thorough search of the house would probably have shown that the sum of money Mr Pimblett claimed to have been stolen was still in the place where he had hidden it.", "label": "n"}
+{"uid": "id_181", "premise": "Two months ago, it was announced that Central Government pensioners would get dearness relief with immediate effect but till date, banks have not credited the arrears.", "hypothesis": "Most of the banks normally take care of the pensioners", "label": "n"}
+{"uid": "id_182", "premise": "Two months ago, it was announced that Central Government pensioners would get dearness relief with immediate effect but till date, banks have not credited the arrears.", "hypothesis": "Two months' time is sufficient for the government machinery to move and give effect to pensioners.", "label": "e"}
+{"uid": "id_183", "premise": "Two studies published recently show that 13 of 16 children treated with gene therapy - treating diseases by correcting a patient's faulty genes - for severe combined immune deficiency, or SCID, have had their immune systems restored. The best treatment for the disease is a bone marrow transplant from an immunologically matched sibling. But when no matched donor is available, unmatched donors, such as parents, are recruited; these transplants are only around 70 percent successful. The success of gene therapy now rivals or betters that seen in these unmatched donor situations. In 2001, a child in the trial developed leukemia, thought to have been induced by a component in the modified virus, or vector, the researchers used to insert the correct gene into the boy's cells. Of the 30 children worldwide who have been treated with gene therapy for another form of SCID, marked by a deficiency in the enzyme adenosine deaminase (ADA), none has developed leukemia. Yet medical researchers maintain that gene therapy is still a better alternative than the conventional treatment for X-linked SCID in some children because 19 of the 20 children who have received gene therapy for X-linked SCID are still alive. When told these odds, all parents of children with X-linked SCID have opted for gene therapy", "hypothesis": "In most instances, gene therapy is preferable to bone marrow transplants.", "label": "n"}
+{"uid": "id_184", "premise": "Two studies published recently show that 13 of 16 children treated with gene therapy - treating diseases by correcting a patient's faulty genes - for severe combined immune deficiency, or SCID, have had their immune systems restored. The best treatment for the disease is a bone marrow transplant from an immunologically matched sibling. But when no matched donor is available, unmatched donors, such as parents, are recruited; these transplants are only around 70 percent successful. The success of gene therapy now rivals or betters that seen in these unmatched donor situations. In 2001, a child in the trial developed leukemia, thought to have been induced by a component in the modified virus, or vector, the researchers used to insert the correct gene into the boy's cells. Of the 30 children worldwide who have been treated with gene therapy for another form of SCID, marked by a deficiency in the enzyme adenosine deaminase (ADA), none has developed leukemia. 
Yet medical researchers maintain that gene therapy is still a better alternative than the conventional treatment for X-linked SCID in some children because 19 of the 20 children who have received gene therapy for X-linked SCID are still alive. When told these odds, all parents of children with X-linked SCID have opted for gene therapy", "hypothesis": "Siblings are always immunologically matched.", "label": "n"} +{"uid": "id_185", "premise": "Two studies published recently show that 13 of 16 children treated with gene therapy treating diseases by correcting a patient's faulty genes - for severe combined immune deficiency, or SCID, have had their immune systems restored. The best treatment for the disease is a bone marrow transplant from an immunologically matched sibling. But when no matched donor is available, unmatched donors, such as parents, are recruited; these transplants are only around 70 percent successful. The success of gene therapy now rivals or betters that seen in these unmatched donor situations. In 2001, a child in the trial developed leukemia, thought to have been induced by a component in the modified virus, or vector, the researchers used to insert the correct gene into the boy's cells. Of the 30 children worldwide who have been treated with gene therapy for another form of SCID, marked by a deficiency in the enzyme adenosine deaminase (ADA), none has developed leukemia. Yet medical researchers maintain that gene therapy is still a better alternative than the conventional treatment for X-linked SCID in some children because 19 of the 20 children who have received gene therapy for X-linked SCID are still alive. When told these odds, all parents of children with X-linked SCID have opted for gene therapy", "hypothesis": "Of the remedies mentioned, bone marrow transplant from an immunologically unmatched donor has the lowest rate of success.", "label": "e"} +{"uid": "id_186", "premise": "U3b Networks (U3b being short for the underprivileged three billion who lack internet access) is a company in Jersey set up by Greg Wyler, former owner of Rwandas national telephone company. His company intends to provide cheap, high-speed internet access to remote areas in developing countries, which up to now has been the reserve of developed countries. Mr Wyler plans to charge $500 per megabit per month, compared with the $4,000 charged by existing companies. Mr Wyler has so far raised 40m from investors, but this seems like a risky investment, especially as billions were lost on similar projects in the past. So why are people investing in the hope of finding customers in the worlds poorest regions? The reason is that previous projects were over-ambitious and set out to provide global coverage, whereas U3bs project is far more modest in its optimism and its services will be available only to a 100km wide corridor around the equator, which happens to cover most developing countries. It will initially use just five satellites circling 8,000km above the equator and further expansion will be determined by customer appetite.", "hypothesis": "The majority of developing countries lie within 100km of the equator.", "label": "e"} +{"uid": "id_187", "premise": "U3b Networks (U3b being short for the underprivileged three billion who lack internet access) is a company in Jersey set up by Greg Wyler, former owner of Rwandas national telephone company. 
His company intends to provide cheap, high-speed internet access to remote areas in developing countries, which up to now has been the reserve of developed countries. Mr Wyler plans to charge $500 per megabit per month, compared with the $4,000 charged by existing companies. Mr Wyler has so far raised 40m from investors, but this seems like a risky investment, especially as billions were lost on similar projects in the past. So why are people investing in the hope of finding customers in the worlds poorest regions? The reason is that previous projects were over-ambitious and set out to provide global coverage, whereas U3bs project is far more modest in its optimism and its services will be available only to a 100km wide corridor around the equator, which happens to cover most developing countries. It will initially use just five satellites circling 8,000km above the equator and further expansion will be determined by customer appetite.", "hypothesis": "Greg Wyler had a background in telecoms.", "label": "e"} +{"uid": "id_188", "premise": "U3b Networks (U3b being short for the underprivileged three billion who lack internet access) is a company in Jersey set up by Greg Wyler, former owner of Rwandas national telephone company. His company intends to provide cheap, high-speed internet access to remote areas in developing countries, which up to now has been the reserve of developed countries. Mr Wyler plans to charge $500 per megabit per month, compared with the $4,000 charged by existing companies. Mr Wyler has so far raised 40m from investors, but this seems like a risky investment, especially as billions were lost on similar projects in the past. So why are people investing in the hope of finding customers in the worlds poorest regions? The reason is that previous projects were over-ambitious and set out to provide global coverage, whereas U3bs project is far more modest in its optimism and its services will be available only to a 100km wide corridor around the equator, which happens to cover most developing countries. It will initially use just five satellites circling 8,000km above the equator and further expansion will be determined by customer appetite.", "hypothesis": "The satellites for the project will cost 8m each.", "label": "n"} +{"uid": "id_189", "premise": "UK companies need more effective boards of directors After a number of serious failures of governance (that is, how they are managed at the highest level), companies in Britain, as well as elsewhere, should consider radical changes to their directors' roles. It is clear that the role of a board director today is not an easy one. Following the 2008 financial meltdown, which resulted in a deeper and more prolonged period of economic downturn than anyone expected, the search for explanations in the many post-mortems of the crisis has meant blame has been spread far and wide. Governments, regulators, central banks and auditors have all been in the frame. The role of bank directors and management and their widely publicised failures have been extensively picked over and examined in reports, inquiries and commentaries. The knock-on effect of this scrutiny has been to make the governance of companies in general an issue of intense public debate and has significantly increased the pressures on, and the responsibilities of, directors. 
At the simplest and most practical level, the time involved in fulfilling the demands of a board directorship has increased significantly, calling into question the effectiveness of the classic model of corporate governance by part-time, independent non-executive directors. Where once a board schedule may have consisted of between eight and ten meetings a year, in many companies the number of events requiring board input and decisions has dramatically risen. Furthermore, the amount of reading and preparation required for each meeting is increasing. Agendas can become overloaded and this can mean the time for constructive debate must necessarily be restricted in favour of getting through the business. Often, board business is devolved to committees in order to cope with the workload, which may be more efficient but can mean that the board as a whole is less involved in fully addressing some of the most important issues. It is not uncommon for the audit committee meeting to last longer than the main board meeting itself. Process may take the place of discussion and be at the expense of real collaboration, so that boxes are ticked rather than issues tackled. A radical solution, which may work for some very large companies whose businesses are extensive and complex, is the professional board, whose members would work up to three or four days a week, supported by their own dedicated staff and advisers. There are obvious risks to this and it would be important to establish clear guidelines for such a board to ensure that it did not step on the toes of management by becoming too engaged in the day-to-day running of the company. Problems of recruitment, remuneration and independence could also arise and this structure would not be appropriate for all companies. However, more professional and better-informed boards would have been particularly appropriate for banks where the executives had access to information that part-time non-executive directors lacked, leaving the latter unable to comprehend or anticipate the 2008 crash. One of the main criticisms of boards and their directors is that they do not focus sufficiently on longer-term matters of strategy, sustainability and governance, but instead concentrate too much on short-term financial metrics. Regulatory requirements and the structure of the market encourage this behaviour. The tyranny of quarterly reporting can distort board decision-making, as directors have to 'make the numbers' every four months to meet the insatiable appetite of the market for more data. This serves to encourage the trading methodology of a certain kind of investor who moves in and out of a stock without engaging in constructive dialogue with the company about strategy or performance, and is simply seeking a short-term financial gain. This effect has been made worse by the changing profile of investors due to the globalisation of capital and the increasing use of automated trading systems. Corporate culture adapts and management teams are largely incentivised to meet financial goals. Compensation for chief executives has become a combat zone where pitched battles between investors, management and board members are fought, often behind closed doors but increasingly frequently in the full glare of press attention. Many would argue that this is in the interest of transparency and good governance as shareholders use their muscle in the area of pay to pressure boards to remove underperforming chief executives. 
Their powers to vote down executive remuneration policies increased when binding votes came into force. The chair of the remuneration committee can be an exposed and lonely role, as Alison Carnwath, chair of Barclays Bank's remuneration committee, found when she had to resign, having been roundly criticised for trying to defend the enormous bonus to be paid to the chief executive; the irony being that she was widely understood to have spoken out against it in the privacy of the committee. The financial crisis stimulated a debate about the role and purpose of the company and a heightened awareness of corporate ethics. Trust in the corporation has been eroded and academics such as Michael Sandel, in his thoughtful and bestselling book What Money Can't Buy, are questioning the morality of capitalism and the market economy. Boards of companies in all sectors will need to widen their perspective to encompass these issues and this may involve a realignment of corporate goals. We live in challenging times.", "hypothesis": "Using a committee structure would ensure that board members are fully informed about significant issues.", "label": "c"} +{"uid": "id_190", "premise": "UK companies need more effective boards of directors After a number of serious failures of governance (that is, how they are managed at the highest level), companies in Britain, as well as elsewhere, should consider radical changes to their directors' roles. It is clear that the role of a board director today is not an easy one. Following the 2008 financial meltdown, which resulted in a deeper and more prolonged period of economic downturn than anyone expected, the search for explanations in the many post-mortems of the crisis has meant blame has been spread far and wide. Governments, regulators, central banks and auditors have all been in the frame. The role of bank directors and management and their widely publicised failures have been extensively picked over and examined in reports, inquiries and commentaries. The knock-on effect of this scrutiny has been to make the governance of companies in general an issue of intense public debate and has significantly increased the pressures on, and the responsibilities of, directors. At the simplest and most practical level, the time involved in fulfilling the demands of a board directorship has increased significantly, calling into question the effectiveness of the classic model of corporate governance by part-time, independent non-executive directors. Where once a board schedule may have consisted of between eight and ten meetings a year, in many companies the number of events requiring board input and decisions has dramatically risen. Furthermore, the amount of reading and preparation required for each meeting is increasing. Agendas can become overloaded and this can mean the time for constructive debate must necessarily be restricted in favour of getting through the business. Often, board business is devolved to committees in order to cope with the workload, which may be more efficient but can mean that the board as a whole is less involved in fully addressing some of the most important issues. It is not uncommon for the audit committee meeting to last longer than the main board meeting itself. Process may take the place of discussion and be at the expense of real collaboration, so that boxes are ticked rather than issues tackled. 
A radical solution, which may work for some very large companies whose businesses are extensive and complex, is the professional board, whose members would work up to three or four days a week, supported by their own dedicated staff and advisers. There are obvious risks to this and it would be important to establish clear guidelines for such a board to ensure that it did not step on the toes of management by becoming too engaged in the day-to-day running of the company. Problems of recruitment, remuneration and independence could also arise and this structure would not be appropriate for all companies. However, more professional and better-informed boards would have been particularly appropriate for banks where the executives had access to information that part-time non-executive directors lacked, leaving the latter unable to comprehend or anticipate the 2008 crash. One of the main criticisms of boards and their directors is that they do not focus sufficiently on longer-term matters of strategy, sustainability and governance, but instead concentrate too much on short-term financial metrics. Regulatory requirements and the structure of the market encourage this behaviour. The tyranny of quarterly reporting can distort board decision-making, as directors have to 'make the numbers' every four months to meet the insatiable appetite of the market for more data. This serves to encourage the trading methodology of a certain kind of investor who moves in and out of a stock without engaging in constructive dialogue with the company about strategy or performance, and is simply seeking a short-term financial gain. This effect has been made worse by the changing profile of investors due to the globalisation of capital and the increasing use of automated trading systems. Corporate culture adapts and management teams are largely incentivised to meet financial goals. Compensation for chief executives has become a combat zone where pitched battles between investors, management and board members are fought, often behind closed doors but increasingly frequently in the full glare of press attention. Many would argue that this is in the interest of transparency and good governance as shareholders use their muscle in the area of pay to pressure boards to remove underperforming chief executives. Their powers to vote down executive remuneration policies increased when binding votes came into force. The chair of the remuneration committee can be an exposed and lonely role, as Alison Carnwath, chair of Barclays Bank's remuneration committee, found when she had to resign, having been roundly criticised for trying to defend the enormous bonus to be paid to the chief executive; the irony being that she was widely understood to have spoken out against it in the privacy of the committee. The financial crisis stimulated a debate about the role and purpose of the company and a heightened awareness of corporate ethics. Trust in the corporation has been eroded and academics such as Michael Sandel, in his thoughtful and bestselling book What Money Can't Buy, are questioning the morality of capitalism and the market economy. Boards of companies in all sectors will need to widen their perspective to encompass these issues and this may involve a realignment of corporate goals. 
We live in challenging times.", "hypothesis": "Board meetings normally continue for as long as necessary to debate matters in full.", "label": "c"} +{"uid": "id_191", "premise": "UK companies need more effective boards of directors After a number of serious failures of governance (that is, how they are managed at the highest level), companies in Britain, as well as elsewhere, should consider radical changes to their directors' roles. It is clear that the role of a board director today is not an easy one. Following the 2008 financial meltdown, which resulted in a deeper and more prolonged period of economic downturn than anyone expected, the search for explanations in the many post-mortems of the crisis has meant blame has been spread far and wide. Governments, regulators, central banks and auditors have all been in the frame. The role of bank directors and management and their widely publicised failures have been extensively picked over and examined in reports, inquiries and commentaries. The knock-on effect of this scrutiny has been to make the governance of companies in general an issue of intense public debate and has significantly increased the pressures on, and the responsibilities of, directors. At the simplest and most practical level, the time involved in fulfilling the demands of a board directorship has increased significantly, calling into question the effectiveness of the classic model of corporate governance by part-time, independent non-executive directors. Where once a board schedule may have consisted of between eight and ten meetings a year, in many companies the number of events requiring board input and decisions has dramatically risen. Furthermore, the amount of reading and preparation required for each meeting is increasing. Agendas can become overloaded and this can mean the time for constructive debate must necessarily be restricted in favour of getting through the business. Often, board business is devolved to committees in order to cope with the workload, which may be more efficient but can mean that the board as a whole is less involved in fully addressing some of the most important issues. It is not uncommon for the audit committee meeting to last longer than the main board meeting itself. Process may take the place of discussion and be at the expense of real collaboration, so that boxes are ticked rather than issues tackled. A radical solution, which may work for some very large companies whose businesses are extensive and complex, is the professional board, whose members would work up to three or four days a week, supported by their own dedicated staff and advisers. There are obvious risks to this and it would be important to establish clear guidelines for such a board to ensure that it did not step on the toes of management by becoming too engaged in the day-to-day running of the company. Problems of recruitment, remuneration and independence could also arise and this structure would not be appropriate for all companies. However, more professional and better-informed boards would have been particularly appropriate for banks where the executives had access to information that part-time non-executive directors lacked, leaving the latter unable to comprehend or anticipate the 2008 crash. One of the main criticisms of boards and their directors is that they do not focus sufficiently on longer-term matters of strategy, sustainability and governance, but instead concentrate too much on short-term financial metrics. 
Regulatory requirements and the structure of the market encourage this behaviour. The tyranny of quarterly reporting can distort board decision-making, as directors have to 'make the numbers' every four months to meet the insatiable appetite of the market for more data. This serves to encourage the trading methodology of a certain kind of investor who moves in and out of a stock without engaging in constructive dialogue with the company about strategy or performance, and is simply seeking a short-term financial gain. This effect has been made worse by the changing profile of investors due to the globalisation of capital and the increasing use of automated trading systems. Corporate culture adapts and management teams are largely incentivised to meet financial goals. Compensation for chief executives has become a combat zone where pitched battles between investors, management and board members are fought, often behind closed doors but increasingly frequently in the full glare of press attention. Many would argue that this is in the interest of transparency and good governance as shareholders use their muscle in the area of pay to pressure boards to remove underperforming chief executives. Their powers to vote down executive remuneration policies increased when binding votes came into force. The chair of the remuneration committee can be an exposed and lonely role, as Alison Carnwath, chair of Barclays Bank's remuneration committee, found when she had to resign, having been roundly criticised for trying to defend the enormous bonus to be paid to the chief executive; the irony being that she was widely understood to have spoken out against it in the privacy of the committee. The financial crisis stimulated a debate about the role and purpose of the company and a heightened awareness of corporate ethics. Trust in the corporation has been eroded and academics such as Michael Sandel, in his thoughtful and bestselling book What Money Can't Buy, are questioning the morality of capitalism and the market economy. Boards of companies in all sectors will need to widen their perspective to encompass these issues and this may involve a realignment of corporate goals. We live in challenging times.", "hypothesis": "Banks have been mismanaged to a greater extent than other businesses.", "label": "n"} +{"uid": "id_192", "premise": "UK companies need more effective boards of directors After a number of serious failures of governance (that is, how they are managed at the highest level), companies in Britain, as well as elsewhere, should consider radical changes to their directors' roles. It is clear that the role of a board director today is not an easy one. Following the 2008 financial meltdown, which resulted in a deeper and more prolonged period of economic downturn than anyone expected, the search for explanations in the many post-mortems of the crisis has meant blame has been spread far and wide. Governments, regulators, central banks and auditors have all been in the frame. The role of bank directors and management and their widely publicised failures have been extensively picked over and examined in reports, inquiries and commentaries. The knock-on effect of this scrutiny has been to make the governance of companies in general an issue of intense public debate and has significantly increased the pressures on, and the responsibilities of, directors. 
At the simplest and most practical level, the time involved in fulfilling the demands of a board directorship has increased significantly, calling into question the effectiveness of the classic model of corporate governance by part-time, independent non-executive directors. Where once a board schedule may have consisted of between eight and ten meetings a year, in many companies the number of events requiring board input and decisions has dramatically risen. Furthermore, the amount of reading and preparation required for each meeting is increasing. Agendas can become overloaded and this can mean the time for constructive debate must necessarily be restricted in favour of getting through the business. Often, board business is devolved to committees in order to cope with the workload, which may be more efficient but can mean that the board as a whole is less involved in fully addressing some of the most important issues. It is not uncommon for the audit committee meeting to last longer than the main board meeting itself. Process may take the place of discussion and be at the expense of real collaboration, so that boxes are ticked rather than issues tackled. A radical solution, which may work for some very large companies whose businesses are extensive and complex, is the professional board, whose members would work up to three or four days a week, supported by their own dedicated staff and advisers. There are obvious risks to this and it would be important to establish clear guidelines for such a board to ensure that it did not step on the toes of management by becoming too engaged in the day-to-day running of the company. Problems of recruitment, remuneration and independence could also arise and this structure would not be appropriate for all companies. However, more professional and better-informed boards would have been particularly appropriate for banks where the executives had access to information that part-time non-executive directors lacked, leaving the latter unable to comprehend or anticipate the 2008 crash. One of the main criticisms of boards and their directors is that they do not focus sufficiently on longer-term matters of strategy, sustainability and governance, but instead concentrate too much on short-term financial metrics. Regulatory requirements and the structure of the market encourage this behaviour. The tyranny of quarterly reporting can distort board decision-making, as directors have to 'make the numbers' every four months to meet the insatiable appetite of the market for more data. This serves to encourage the trading methodology of a certain kind of investor who moves in and out of a stock without engaging in constructive dialogue with the company about strategy or performance, and is simply seeking a short-term financial gain. This effect has been made worse by the changing profile of investors due to the globalisation of capital and the increasing use of automated trading systems. Corporate culture adapts and management teams are largely incentivised to meet financial goals. Compensation for chief executives has become a combat zone where pitched battles between investors, management and board members are fought, often behind closed doors but increasingly frequently in the full glare of press attention. Many would argue that this is in the interest of transparency and good governance as shareholders use their muscle in the area of pay to pressure boards to remove underperforming chief executives. 
Their powers to vote down executive remuneration policies increased when binding votes came into force. The chair of the remuneration committee can be an exposed and lonely role, as Alison Carnwath, chair of Barclays Bank's remuneration committee, found when she had to resign, having been roundly criticised for trying to defend the enormous bonus to be paid to the chief executive; the irony being that she was widely understood to have spoken out against it in the privacy of the committee. The financial crisis stimulated a debate about the role and purpose of the company and a heightened awareness of corporate ethics. Trust in the corporation has been eroded and academics such as Michael Sandel, in his thoughtful and bestselling book What Money Can't Buy, are questioning the morality of capitalism and the market economy. Boards of companies in all sectors will need to widen their perspective to encompass these issues and this may involve a realignment of corporate goals. We live in challenging times.", "hypothesis": "Close scrutiny of the behaviour of boards has increased since the economic downturn.", "label": "e"} +{"uid": "id_193", "premise": "UK rail services how do I claim for my delayed train? Generally, if you have been delayed on a train journey, you may be able to claim compensation, but train companies all have different rules, so it can be confusing to work out what you're entitled to. The type of delay you can claim for depends on whether the train company runs a Delay Repay scheme or a less generous, older-style scheme. Delay Repay is a train operator scheme to compensate passengers when trains are late, and the train company will pay out even if it was not responsible for the delay. The scheme varies between companies, but up to 2016 most paid 50 percent of the single ticket cost for 30 minutes delay and 100 percent for an hour. On the London Underground, you get a full refund for 15-minute delays. Companies that do not use Delay Repay and still use the older scheme will not usually pay compensation if the problem is considered to be out of their control. But it is still worth asking them for compensation, as some may pay out. You are unlikely to get compensation for a delay if any of the following occur: Accidents involving people getting onto the line illegally Gas leaks or fires in buildings next to the line which were not caused by a train company Line closures at the request of the emergency services Exceptionally severe weather conditions Strike action National Rail Conditions of Travel state that you are entitled to compensation in the same form that you paid for the ticket. Some train companies are still paying using rail vouchers, which they are allowed to do if you do not ask for a cash refund. Since 2016, rail passengers have acquired further rights for compensation through the Consumer Rights Act. This means that passengers could now be eligible for compensation due to: a severely overcrowded train with too few carriages available; a consistently late running service; and a service that is delayed for less than the time limit that applied under existing compensation schemes. 
However, in order to exercise their rights beyond the existing compensation schemes, for instance Delay Repay, and where the train operating company refuses to compensate despite letters threatening court action, passengers may need to bring their claims to a court of law.", "hypothesis": "Under Delay Repay, a train company will only provide compensation if it caused the delay.", "label": "c"} +{"uid": "id_194", "premise": "UK rail services how do I claim for my delayed train? Generally, if you have been delayed on a train journey, you may be able to claim compensation, but train companies all have different rules, so it can be confusing to work out what you're entitled to. The type of delay you can claim for depends on whether the train company runs a Delay Repay scheme or a less generous, older-style scheme. Delay Repay is a train operator scheme to compensate passengers when trains are late, and the train company will pay out even if it was not responsible for the delay. The scheme varies between companies, but up to 2016 most paid 50 percent of the single ticket cost for 30 minutes delay and 100 percent for an hour. On the London Underground, you get a full refund for 15-minute delays. Companies that do not use Delay Repay and still use the older scheme will not usually pay compensation if the problem is considered to be out of their control. But it is still worth asking them for compensation, as some may pay out. You are unlikely to get compensation for a delay if any of the following occur: Accidents involving people getting onto the line illegally Gas leaks or fires in buildings next to the line which were not caused by a train company Line closures at the request of the emergency services Exceptionally severe weather conditions Strike action National Rail Conditions of Travel state that you are entitled to compensation in the same form that you paid for the ticket. Some train companies are still paying using rail vouchers, which they are allowed to do if you do not ask for a cash refund. Since 2016, rail passengers have acquired further rights for compensation through the Consumer Rights Act. This means that passengers could now be eligible for compensation due to: a severely overcrowded train with too few carriages available; a consistently late running service; and a service that is delayed for less than the time limit that applied under existing compensation schemes. However, in order to exercise their rights beyond the existing compensation schemes, for instance Delay Repay, and where the train operating company refuses to compensate despite letters threatening court action, passengers may need to bring their claims to a court of law.", "hypothesis": "The system for claiming compensation varies from one company to another.", "label": "e"} +{"uid": "id_195", "premise": "UK rail services how do I claim for my delayed train? Generally, if you have been delayed on a train journey, you may be able to claim compensation, but train companies all have different rules, so it can be confusing to work out what you're entitled to. The type of delay you can claim for depends on whether the train company runs a Delay Repay scheme or a less generous, older-style scheme. Delay Repay is a train operator scheme to compensate passengers when trains are late, and the train company will pay out even if it was not responsible for the delay. The scheme varies between companies, but up to 2016 most paid 50 percent of the single ticket cost for 30 minutes delay and 100 percent for an hour. 
On the London Underground, you get a full refund for 15-minute delays. Companies that do not use Delay Repay and still use the older scheme will not usually pay compensation if the problem is considered to be out of their control. But it is still worth asking them for compensation, as some may pay out. You are unlikely to get compensation for a delay if any of the following occur: Accidents involving people getting onto the line illegally Gas leaks or fires in buildings next to the line which were not caused by a train company Line closures at the request of the emergency services Exceptionally severe weather conditions Strike action National Rail Conditions of Travel state that you are entitled to compensation in the same form that you paid for the ticket. Some train companies are still paying using rail vouchers, which they are allowed to do if you do not ask for a cash refund. Since 2016, rail passengers have acquired further rights for compensation through the Consumer Rights Act. This means that passengers could now be eligible for compensation due to: a severely overcrowded train with too few carriages available; a consistently late running service; and a service that is delayed for less than the time limit that applied under existing compensation schemes. However, in order to exercise their rights beyond the existing compensation schemes, for instance Delay Repay, and where the train operating company refuses to compensate despite letters threatening court action, passengers may need to bring their claims to a court of law.", "hypothesis": "It is doubtful whether companies using the older scheme will provide compensation if a delay is caused by a strike.", "label": "e"} +{"uid": "id_196", "premise": "UK rail services how do I claim for my delayed train? Generally, if you have been delayed on a train journey, you may be able to claim compensation, but train companies all have different rules, so it can be confusing to work out what you're entitled to. The type of delay you can claim for depends on whether the train company runs a Delay Repay scheme or a less generous, older-style scheme. Delay Repay is a train operator scheme to compensate passengers when trains are late, and the train company will pay out even if it was not responsible for the delay. The scheme varies between companies, but up to 2016 most paid 50 percent of the single ticket cost for 30 minutes delay and 100 percent for an hour. On the London Underground, you get a full refund for 15-minute delays. Companies that do not use Delay Repay and still use the older scheme will not usually pay compensation if the problem is considered to be out of their control. But it is still worth asking them for compensation, as some may pay out. You are unlikely to get compensation for a delay if any of the following occur: Accidents involving people getting onto the line illegally Gas leaks or fires in buildings next to the line which were not caused by a train company Line closures at the request of the emergency services Exceptionally severe weather conditions Strike action National Rail Conditions of Travel state that you are entitled to compensation in the same form that you paid for the ticket. Some train companies are still paying using rail vouchers, which they are allowed to do if you do not ask for a cash refund. Since 2016, rail passengers have acquired further rights for compensation through the Consumer Rights Act. 
This means that passengers could now be eligible for compensation due to: a severely overcrowded train with too few carriages available; a consistently late running service; and a service that is delayed for less than the time limit that applied under existing compensation schemes. However, in order to exercise their rights beyond the existing compensation schemes, for instance Delay Repay, and where the train operating company refuses to compensate despite letters threatening court action, passengers may need to bring their claims to a court of law.", "hypothesis": "An increasing number of train companies are willing to pay compensation for problems they are not responsible for.", "label": "n"} +{"uid": "id_197", "premise": "UK rail services how do I claim for my delayed train? Generally, if you have been delayed on a train journey, you may be able to claim compensation, but train companies all have different rules, so it can be confusing to work out what you're entitled to. The type of delay you can claim for depends on whether the train company runs a Delay Repay scheme or a less generous, older-style scheme. Delay Repay is a train operator scheme to compensate passengers when trains are late, and the train company will pay out even if it was not responsible for the delay. The scheme varies between companies, but up to 2016 most paid 50 percent of the single ticket cost for 30 minutes delay and 100 percent for an hour. On the London Underground, you get a full refund for 15-minute delays. Companies that do not use Delay Repay and still use the older scheme will not usually pay compensation if the problem is considered to be out of their control. But it is still worth asking them for compensation, as some may pay out. You are unlikely to get compensation for a delay if any of the following occur: Accidents involving people getting onto the line illegally Gas leaks or fires in buildings next to the line which were not caused by a train company Line closures at the request of the emergency services Exceptionally severe weather conditions Strike action National Rail Conditions of Travel state that you are entitled to compensation in the same form that you paid for the ticket. Some train companies are still paying using rail vouchers, which they are allowed to do if you do not ask for a cash refund. Since 2016, rail passengers have acquired further rights for compensation through the Consumer Rights Act. This means that passengers could now be eligible for compensation due to: a severely overcrowded train with too few carriages available; a consistently late running service; and a service that is delayed for less than the time limit that applied under existing compensation schemes. However, in order to exercise their rights beyond the existing compensation schemes, for instance Delay Repay, and where the train operating company refuses to compensate despite letters threatening court action, passengers may need to bring their claims to a court of law.", "hypothesis": "Under Delay Repay, underground and other train companies give exactly the same amounts of money in compensation.", "label": "c"} +{"uid": "id_198", "premise": "UK rail services how do I claim for my delayed train? Generally, if you have been delayed on a train journey, you may be able to claim compensation, but train companies all have different rules, so it can be confusing to work out what you're entitled to. 
The type of delay you can claim for depends on whether the train company runs a Delay Repay scheme or a less generous, older-style scheme. Delay Repay is a train operator scheme to compensate passengers when trains are late, and the train company will pay out even if it was not responsible for the delay. The scheme varies between companies, but up to 2016 most paid 50 percent of the single ticket cost for 30 minutes delay and 100 percent for an hour. On the London Underground, you get a full refund for 15-minute delays. Companies that do not use Delay Repay and still use the older scheme will not usually pay compensation if the problem is considered to be out of their control. But it is still worth asking them for compensation, as some may pay out. You are unlikely to get compensation for a delay if any of the following occur: Accidents involving people getting onto the line illegally Gas leaks or fires in buildings next to the line which were not caused by a train company Line closures at the request of the emergency services Exceptionally severe weather conditions Strike action National Rail Conditions of Travel state that you are entitled to compensation in the same form that you paid for the ticket. Some train companies are still paying using rail vouchers, which they are allowed to do if you do not ask for a cash refund. Since 2016, rail passengers have acquired further rights for compensation through the Consumer Rights Act. This means that passengers could now be eligible for compensation due to: a severely overcrowded train with too few carriages available; a consistently late running service; and a service that is delayed for less than the time limit that applied under existing compensation schemes. However, in order to exercise their rights beyond the existing compensation schemes, for instance Delay Repay, and where the train operating company refuses to compensate despite letters threatening court action, passengers may need to bring their claims to a court of law.", "hypothesis": "Passengers may receive compensation in the form of a train voucher if they forget to request cash.", "label": "e"} +{"uid": "id_199", "premise": "UK unemployment has reached a new high after the public sector made a new wave of cuts this week. Statistics suggest that those particularly hit by the cuts will be youths, as a record high of over 1 million youths were recorded as unemployed at the beginning of this month. This figure is just under half of the total national statistic for unemployment, a reported 2.5 million. Yet, the number of people claiming unemployment benefits has not risen as far as it was expected to. Economists predicted that the number of people claiming support would rise by an estimated 15,000, yet the actual figure demonstrates a rise of less than 4,000. Perhaps things are not as bad as they seem after all.", "hypothesis": "Unemployment is about to fall, improving the economic outlook.", "label": "n"} +{"uid": "id_200", "premise": "UK unemployment has reached a new high after the public sector made a new wave of cuts this week. Statistics suggest that those particularly hit by the cuts will be youths, as a record high of over 1 million youths were recorded as unemployed at the beginning of this month. This figure is just under half of the total national statistic for unemployment, a reported 2.5 million. Yet, the number of people claiming unemployment benefits has not risen as far as it was expected to. 
Economists predicted that the number of people claiming support would rise by an estimated 15,000, yet the actual figure demonstrates a rise of less than 4,000. Perhaps things are not as bad as they seem after all.", "hypothesis": "The government is likely to make new public sector cuts.", "label": "n"} +{"uid": "id_201", "premise": "UK unemployment has reached a new high after the public sector made a new wave of cuts this week. Statistics suggest that those particularly hit by the cuts will be youths, as a record high of over 1 million youths were recorded as unemployed at the beginning of this month. This figure is just under half of the total national statistic for unemployment, a reported 2.5 million. Yet, the number of people claiming unemployment benefits has not risen as far as it was expected to. Economists predicted that the number of people claiming support would rise by an estimated 15,000, yet the actual figure demonstrates a rise of less than 4,000. Perhaps things are not as bad as they seem after all.", "hypothesis": "Economists over-estimated the number rise in benefits claims", "label": "e"} +{"uid": "id_202", "premise": "UK unemployment has reached a new high after the public sector made a new wave of cuts this week. Statistics suggest that those particularly hit by the cuts will be youths, as a record high of over 1 million youths were recorded as unemployed at the beginning of this month. This figure is just under half of the total national statistic for unemployment, a reported 2.5 million. Yet, the number of people claiming unemployment benefits has not risen as far as it was expected to. Economists predicted that the number of people claiming support would rise by an estimated 15,000, yet the actual figure demonstrates a rise of less than 4,000. Perhaps things are not as bad as they seem after all.", "hypothesis": "Economists are mistaken and unemployment is lower.", "label": "c"} +{"uid": "id_203", "premise": "UNMASKING SKIN If you took off your skin and laid it flat, it would cover an area of about twenty-one square feet, making it by far the body's largest organ. Draped in place over our bodies, skin forms the barrier between what's inside us and what's outside. It protects us from a multitude of external forces. It serves as an avenue to our most intimate physical and psychological selves. This impervious yet permeable barrier, less than a millimetre thick in places, is composed of three layers. The outermost layer is the bloodless epidermis. The dermis includes collagen, elastin, and nerve endings. The innermost layer, subcutaneous fat, contains tissue that acts as an energy source, cushion and insulator for the body. From these familiar characteristics of skin emerge the profound mysteries of touch, arguably our most essential source of sensory stimulation. We can live without seeing or hearing in fact, without any of our other senses. But babies born without effective nerve connections between skin and brain can fail to thrive and may even die. Laboratory experiments decades ago, now considered unethical and inhumane, kept baby monkeys from being touched by their mothers. It made no difference that the babies could see, hear and smell their mothers; without touching, the babies became apathetic, and failed to progress. For humans, insufficient touching in early years can have lifelong results. 
In touching cultures, adult aggression is low, whereas, in cultures where touch is limited, adult aggression is high, writes Tiffany Field, director of the Touch Research Institutes at the University of Miami School of Medicine. Studies of a variety of cultures show a correspondence between high rates of physical affection in childhood and low rates of adult physical violence. While the effects of touching are easy to understand, the mechanics of it are less so. Your skin has millions of nerve cells of various shapes at different depths, explains Stanley Bolanowski, a neuroscientist and associate director of the Institute for Sensory Research at Syracuse University. When the nerve cells are stimulated, physical energy is transformed into energy used by the nervous system and passed from the skin to the spinal cord and brain. It's called transduction, and no one knows exactly how it takes place. Suffice it to say that the process involves the intricate, split-second operation of a complex system of signals between neurons in the skin and brain. This is starting to sound very confusing until Bolanowski says: In simple terms, people perceive three basic things via skin: pressure, temperature, and pain. And then I'm sure he's wrong. When I get wet, my skin feels wet, I protest. Close your eyes and lean back, says Bolanowski. Something cold and wet is on my forehead so wet, in fact, that I wait for water to start dripping down my cheeks. Open your eyes. Bolanowski says, showing me that the sensation comes from a chilled, but dry, metal cylinder. The combination of pressure and cold, he explains, is what makes my skin perceive wetness. He gives me a surgical glove to put on and has me put a finger in a glass of cold water. My finger feels wet, even though I have visual proof that it's not touching water. My skin, which seemed so reliable, has been deceiving me my entire life. When I shower or wash my hands, I now realize, my skin feels pressure and temperature. It's my brain that says I feel wet. Perceptions of pressure, temperature and pain manifest themselves in many different ways. Gentle stimulation of pressure receptors can result in ticklishness; gentle stimulation of pain receptors, in itching. Both sensations arise from a neurological transmission, not from something that physically exists. Skin, I'm realizing, is under constant assault, both from within the body and from forces outside. Repairs occur with varying success. Take the spot where I nicked myself with a knife while slicing fruit. I have a crusty scab surrounded by pink tissue about a quarter inch long on my right palm. Under the scab, epidermal cells are migrating into the wound to close it up. When the process is complete, the scab will fall off to reveal new epidermis. It's only been a few days, but my little self-repair is almost complete. Likewise, we recover quickly from slight burns. If you ever happen to touch a hot burner, just put your finger in cold water. The chances are you will have no blister, little pain and no scar. Severe burns, though, are a different matter.", "hypothesis": "The skin is more sensitive to pressure than to temperature or pain.", "label": "n"} +{"uid": "id_204", "premise": "UNMASKING SKIN If you took off your skin and laid it flat, it would cover an area of about twenty-one square feet, making it by far the body's largest organ. Draped in place over our bodies, skin forms the barrier between what's inside us and what's outside. It protects us from a multitude of external forces. 
It serves as an avenue to our most intimate physical and psychological selves. This impervious yet permeable barrier, less than a millimetre thick in places, is composed of three layers. The outermost layer is the bloodless epidermis. The dermis includes collagen, elastin, and nerve endings. The innermost layer, subcutaneous fat, contains tissue that acts as an energy source, cushion and insulator for the body. From these familiar characteristics of skin emerge the profound mysteries of touch, arguably our most essential source of sensory stimulation. We can live without seeing or hearing in fact, without any of our other senses. But babies born without effective nerve connections between skin and brain can fail to thrive and may even die. Laboratory experiments decades ago, now considered unethical and inhumane, kept baby monkeys from being touched by their mothers. It made no difference that the babies could see, hear and smell their mothers; without touching, the babies became apathetic, and failed to progress. For humans, insufficient touching in early years can have lifelong results. In touching cultures, adult aggression is low, whereas, in cultures where touch is limited, adult aggression is high, writes Tiffany Field, director of the Touch Research Institutes at the University of Miami School of Medicine. Studies of a variety of cultures show a correspondence between high rates of physical affection in childhood and low rates of adult physical violence. While the effects of touching are easy to understand, the mechanics of it are less so. Your skin has millions of nerve cells of various shapes at different depths, explains Stanley Bolanowski, a neuroscientist and associate director of the Institute for Sensory Research at Syracuse University. When the nerve cells are stimulated, physical energy is transformed into energy used by the nervous system and passed from the skin to the spinal cord and brain. It's called transduction, and no one knows exactly how it takes place. Suffice it to say that the process involves the intricate, split-second operation of a complex system of signals between neurons in the skin and brain. This is starting to sound very confusing until Bolanowski says: In simple terms, people perceive three basic things via skin: pressure, temperature, and pain. And then I'm sure he's wrong. When I get wet, my skin feels wet, I protest. Close your eyes and lean back, says Bolanowski. Something cold and wet is on my forehead so wet, in fact, that I wait for water to start dripping down my cheeks. Open your eyes. Bolanowski says, showing me that the sensation comes from a chilled, but dry, metal cylinder. The combination of pressure and cold, he explains, is what makes my skin perceive wetness. He gives me a surgical glove to put on and has me put a finger in a glass of cold water. My finger feels wet, even though I have visual proof that it's not touching water. My skin, which seemed so reliable, has been deceiving me my entire life. When I shower or wash my hands, I now realize, my skin feels pressure and temperature. It's my brain that says I feel wet. Perceptions of pressure, temperature and pain manifest themselves in many different ways. Gentle stimulation of pressure receptors can result in ticklishness; gentle stimulation of pain receptors, in itching. Both sensations arise from a neurological transmission, not from something that physically exists. Skin, I'm realizing, is under constant assault, both from within the body and from forces outside. 
Repairs occur with varying success. Take the spot where I nicked myself with a knife while slicing fruit. I have a crusty scab surrounded by pink tissue about a quarter inch long on my right palm. Under the scab, epidermal cells are migrating into the wound to close it up. When the process is complete, the scab will fall off to reveal new epidermis. It's only been a few days, but my little self-repair is almost complete. Likewise, we recover quickly from slight burns. If you ever happen to touch a hot burner, just put your finger in cold water. The chances are you will have no blister, little pain and no scar. Severe burns, though, are a different matter.", "hypothesis": "The human skin is always good at repairing itself.", "label": "c"} +{"uid": "id_205", "premise": "UNMASKING SKIN If you took off your skin and laid it flat, it would cover an area of about twenty-one square feet, making it by far the body's largest organ. Draped in place over our bodies, skin forms the barrier between what's inside us and what's outside. It protects us from a multitude of external forces. It serves as an avenue to our most intimate physical and psychological selves. This impervious yet permeable barrier, less than a millimetre thick in places, is composed of three layers. The outermost layer is the bloodless epidermis. The dermis includes collagen, elastin, and nerve endings. The innermost layer, subcutaneous fat, contains tissue that acts as an energy source, cushion and insulator for the body. From these familiar characteristics of skin emerge the profound mysteries of touch, arguably our most essential source of sensory stimulation. We can live without seeing or hearing in fact, without any of our other senses. But babies born without effective nerve connections between skin and brain can fail to thrive and may even die. Laboratory experiments decades ago, now considered unethical and inhumane, kept baby monkeys from being touched by their mothers. It made no difference that the babies could see, hear and smell their mothers; without touching, the babies became apathetic, and failed to progress. For humans, insufficient touching in early years can have lifelong results. In touching cultures, adult aggression is low, whereas, in cultures where touch is limited, adult aggression is high, writes Tiffany Field, director of the Touch Research Institutes at the University of Miami School of Medicine. Studies of a variety of cultures show a correspondence between high rates of physical affection in childhood and low rates of adult physical violence. While the effects of touching are easy to understand, the mechanics of it are less so. Your skin has millions of nerve cells of various shapes at different depths, explains Stanley Bolanowski, a neuroscientist and associate director of the Institute for Sensory Research at Syracuse University. When the nerve cells are stimulated, physical energy is transformed into energy used by the nervous system and passed from the skin to the spinal cord and brain. It's called transduction, and no one knows exactly how it takes place. Suffice it to say that the process involves the intricate, split-second operation of a complex system of signals between neurons in the skin and brain. This is starting to sound very confusing until Bolanowski says: In simple terms, people perceive three basic things via skin: pressure, temperature, and pain. And then I'm sure he's wrong. When I get wet, my skin feels wet, I protest. Close your eyes and lean back, says Bolanowski. 
Something cold and wet is on my forehead so wet, in fact, that I wait for water to start dripping down my cheeks. Open your eyes. Bolanowski says, showing me that the sensation comes from a chilled, but dry, metal cylinder. The combination of pressure and cold, he explains, is what makes my skin perceive wetness. He gives me a surgical glove to put on and has me put a finger in a glass of cold water. My finger feels wet, even though I have visual proof that it's not touching water. My skin, which seemed so reliable, has been deceiving me my entire life. When I shower or wash my hands, I now realize, my skin feels pressure and temperature. It's my brain that says I feel wet. Perceptions of pressure, temperature and pain manifest themselves in many different ways. Gentle stimulation of pressure receptors can result in ticklishness; gentle stimulation of pain receptors, in itching. Both sensations arise from a neurological transmission, not from something that physically exists. Skin, I'm realizing, is under constant assault, both from within the body and from forces outside. Repairs occur with varying success. Take the spot where I nicked myself with a knife while slicing fruit. I have a crusty scab surrounded by pink tissue about a quarter inch long on my right palm. Under the scab, epidermal cells are migrating into the wound to close it up. When the process is complete, the scab will fall off to reveal new epidermis. It's only been a few days, but my little self-repair is almost complete. Likewise, we recover quickly from slight burns. If you ever happen to touch a hot burner, just put your finger in cold water. The chances are you will have no blister, little pain and no scar. Severe burns, though, are a different matter.", "hypothesis": "Even scientists have difficulty understanding how our sense of touch works.", "label": "e"} +{"uid": "id_206", "premise": "USE OF UNIVERSITY GROUNDS BY VEHICULAR TRAFFIC The University grounds are private. The University authorities only allow authorised members of the University, visitors and drivers of vehicles servicing the University to enter the grounds. Members of staff who have paid the requisite fee and display the appropriate permit may bring a vehicle into the grounds. A University permit does not entitle them to park in Hall car parks however, unless authorised by the Warden of the Hall concerned. Students may not bring vehicles into the grounds during the working day unless they have been given special permission by the Security Officer and have paid for and are displaying an appropriate entry permit. Students living in Halls of Residence must obtain permission from the Warden to keep a motor vehicle at their residence. Students are reminded that if they park a motor vehicle on University premises without a valid permit, they will be fined 20", "hypothesis": "The campus roads are not open to general members of the public.", "label": "e"} +{"uid": "id_207", "premise": "USE OF UNIVERSITY GROUNDS BY VEHICULAR TRAFFIC The University grounds are private. The University authorities only allow authorised members of the University, visitors and drivers of vehicles servicing the University to enter the grounds. Members of staff who have paid the requisite fee and display the appropriate permit may bring a vehicle into the grounds. A University permit does not entitle them to park in Hall car parks however, unless authorised by the Warden of the Hall concerned. 
Students may not bring vehicles into the grounds during the working day unless they have been given special permission by the Security Officer and have paid for and are displaying an appropriate entry permit. Students living in Halls of Residence must obtain permission from the Warden to keep a motor vehicle at their residence. Students are reminded that if they park a motor vehicle on University premises without a valid permit, they will be fined 20", "hypothesis": "Students living in Hall do not need permission to park in Hall car parks.", "label": "c"} +{"uid": "id_208", "premise": "USE OF UNIVERSITY GROUNDS BY VEHICULAR TRAFFIC The University grounds are private. The University authorities only allow authorised members of the University, visitors and drivers of vehicles servicing the University to enter the grounds. Members of staff who have paid the requisite fee and display the appropriate permit may bring a vehicle into the grounds. A University permit does not entitle them to park in Hall car parks however, unless authorised by the Warden of the Hall concerned. Students may not bring vehicles into the grounds during the working day unless they have been given special permission by the Security Officer and have paid for and are displaying an appropriate entry permit. Students living in Halls of Residence must obtain permission from the Warden to keep a motor vehicle at their residence. Students are reminded that if they park a motor vehicle on University premises without a valid permit, they will be fined 20", "hypothesis": "Parking permits cost 20 a year.", "label": "n"} +{"uid": "id_209", "premise": "USE OF UNIVERSITY GROUNDS BY VEHICULAR TRAFFIC The University grounds are private. The University authorities only allow authorised members of the University, visitors and drivers of vehicles servicing the University to enter the grounds. Members of staff who have paid the requisite fee and display the appropriate permit may bring a vehicle into the grounds. A University permit does not entitle them to park in Hall car parks however, unless authorised by the Warden of the Hall concerned. Students may not bring vehicles into the grounds during the working day unless they have been given special permission by the Security Officer and have paid for and are displaying an appropriate entry permit. Students living in Halls of Residence must obtain permission from the Warden to keep a motor vehicle at their residence. Students are reminded that if they park a motor vehicle on University premises without a valid permit, they will be fined 20", "hypothesis": "Having a University permit does not allow staff to park at Halls.", "label": "e"} +{"uid": "id_210", "premise": "USE OF UNIVERSITY GROUNDS BY VEHICULAR TRAFFIC The University grounds are private. The University authorities only allow authorised members of the University, visitors and drivers of vehicles servicing the University to enter the grounds. Members of staff who have paid the requisite fee and display the appropriate permit may bring a vehicle into the grounds. A University permit does not entitle them to park in Hall car parks however, unless authorised by the Warden of the Hall concerned. Students may not bring vehicles into the grounds during the working day unless they have been given special permission by the Security Officer and have paid for and are displaying an appropriate entry permit. Students living in Halls of Residence must obtain permission from the Warden to keep a motor vehicle at their residence. 
Students are reminded that if they park a motor vehicle on University premises without a valid permit, they will be fined 20", "hypothesis": "Parking in Halls of Residence is handled by the Wardens of the Halls.", "label": "e"} +{"uid": "id_211", "premise": "USE OF UNIVERSITY GROUNDS BY VEHICULAR TRAFFIC The University grounds are private. The University authorities only allow authorised members of the University, visitors and drivers of vehicles servicing the University to enter the grounds. Members of staff who have paid the requisite fee and display the appropriate permit may bring a vehicle into the grounds. A University permit does not entitle them to park in Hall car parks however, unless authorised by the Warden of the Hall concerned. Students may not bring vehicles into the grounds during the working day unless they have been given special permission by the Security Officer and have paid for and are displaying an appropriate entry permit. Students living in Halls of Residence must obtain permission from the Warden to keep a motor vehicle at their residence. Students are reminded that if they park a motor vehicle on University premises without a valid permit, they will be fined 20", "hypothesis": "University employees do not need to pay for their parking permits.", "label": "c"} +{"uid": "id_212", "premise": "USE THE RIGHT TYPE OF FIRE EXTINGUISHER! Fire extinguishers come in different types depending on the material combusted. The five main types of fire extinguisher are described below. Pressurized water Used for Class A fires only. Carbon-dioxide Used for Class E fires because it does not damage electrical equipment such as computers. Limited use for Class B fires because there is a risk of re-ignition due to a lack of cooling. Foam-filled Used for Class B fires. Also used for Class A fires, though not in confined spaces. They are NOT for electrical equipment fires or cooking oil. Dry powder Used for Class A, B, C and E fires, with specialist powders for Class D fires. Smothers the fire but does not cool it or penetrate very well so there is a risk of re-ignition. Wet chemical Used for Class F fires, especially high temperature deep fat fryers. There are six classifications of combustible material as shown below. Class A: flammable organic solids (eg wood, paper, coal, plastics, textiles) Class B: flammable liquids (eg gasoline, spirits) but not cooking oil Class C: flammable gas (eg propane, butane) Class D: combustible metals (eg magnesium, lithium) Class E: electrical equipment (eg computers, photocopiers) Class F: cooking oil and fat The above classifications apply to Europe and Australia.", "hypothesis": "Only one type of fire extinguisher is suitable for a lithium battery fire.", "label": "n"} +{"uid": "id_213", "premise": "USE THE RIGHT TYPE OF FIRE EXTINGUISHER! Fire extinguishers come in different types depending on the material combusted. The five main types of fire extinguisher are described below. Pressurized water Used for Class A fires only. Carbon-dioxide Used for Class E fires because it does not damage electrical equipment such as computers. Limited use for Class B fires because there is a risk of re-ignition due to a lack of cooling. Foam-filled Used for Class B fires. Also used for Class A fires, though not in confined spaces. They are NOT for electrical equipment fires or cooking oil. Dry powder Used for Class A, B, C and E fires, with specialist powders for Class D fires. 
Smothers the fire but does not cool it or penetrate very well so there is a risk of re-ignition. Wet chemical Used for Class F fires, especially high temperature deep fat fryers. There are six classifications of combustible material as shown below. Class A: flammable organic solids (eg wood, paper, coal, plastics, textiles) Class B: flammable liquids (eg gasoline, spirits) but not cooking oil Class C: flammable gas (eg propane, butane) Class D: combustible metals (eg magnesium, lithium) Class E: electrical equipment (eg computers, photocopiers) Class F: cooking oil and fat The above classifications apply to Europe and Australia.", "hypothesis": "Foam-filled extinguishers should NOT be used outdoors.", "label": "c"} +{"uid": "id_214", "premise": "USE THE RIGHT TYPE OF FIRE EXTINGUISHER! Fire extinguishers come in different types depending on the material combusted. The five main types of fire extinguisher are described below. Pressurized water Used for Class A fires only. Carbon-dioxide Used for Class E fires because it does not damage electrical equipment such as computers. Limited use for Class B fires because there is a risk of re-ignition due to a lack of cooling. Foam-filled Used for Class B fires. Also used for Class A fires, though not in confined spaces. They are NOT for electrical equipment fires or cooking oil. Dry powder Used for Class A, B, C and E fires, with specialist powders for Class D fires. Smothers the fire but does not cool it or penetrate very well so there is a risk of re-ignition. Wet chemical Used for Class F fires, especially high temperature deep fat fryers. There are six classifications of combustible material as shown below. Class A: flammable organic solids (eg wood, paper, coal, plastics, textiles) Class B: flammable liquids (eg gasoline, spirits) but not cooking oil Class C: flammable gas (eg propane, butane) Class D: combustible metals (eg magnesium, lithium) Class E: electrical equipment (eg computers, photocopiers) Class F: cooking oil and fat The above classifications apply to Europe and Australia.", "hypothesis": "Cooking oil fires should only be tackled with Class F fire extinguishers.", "label": "e"} +{"uid": "id_215", "premise": "USE THE RIGHT TYPE OF FIRE EXTINGUISHER! Fire extinguishers come in different types depending on the material combusted. The five main types of fire extinguisher are described below. Pressurized water Used for Class A fires only. Carbon-dioxide Used for Class E fires because it does not damage electrical equipment such as computers. Limited use for Class B fires because there is a risk of re-ignition due to a lack of cooling. Foam-filled Used for Class B fires. Also used for Class A fires, though not in confined spaces. They are NOT for electrical equipment fires or cooking oil. Dry powder Used for Class A, B, C and E fires, with specialist powders for Class D fires. Smothers the fire but does not cool it or penetrate very well so there is a risk of re-ignition. Wet chemical Used for Class F fires, especially high temperature deep fat fryers. There are six classifications of combustible material as shown below. 
Class A: flammable organic solids (eg wood, paper, coal, plastics, textiles) Class B: flammable liquids (eg gasoline, spirits) but not cooking oil Class C: flammable gas (eg propane, butane) Class D: combustible metals (eg magnesium, lithium) Class E: electrical equipment (eg computers, photocopiers) Class F: cooking oil and fat The above classifications apply to Europe and Australia.", "hypothesis": "Foam-filled extinguishers can be used on fires involving plastics.", "label": "e"} +{"uid": "id_216", "premise": "USE THE RIGHT TYPE OF FIRE EXTINGUISHER! Fire extinguishers come in different types depending on the material combusted. The five main types of fire extinguisher are described below. Pressurized water Used for Class A fires only. Carbon-dioxide Used for Class E fires because it does not damage electrical equipment such as computers. Limited use for Class B fires because there is a risk of re-ignition due to a lack of cooling. Foam-filled Used for Class B fires. Also used for Class A fires, though not in confined spaces. They are NOT for electrical equipment fires or cooking oil. Dry powder Used for Class A, B, C and E fires, with specialist powders for Class D fires. Smothers the fire but does not cool it or penetrate very well so there is a risk of re-ignition. Wet chemical Used for Class F fires, especially high temperature deep fat fryers. There are six classifications of combustible material as shown below. Class A: flammable organic solids (eg wood, paper, coal, plastics, textiles) Class B: flammable liquids (eg gasoline, spirits) but not cooking oil Class C: flammable gas (eg propane, butane) Class D: combustible metals (eg magnesium, lithium) Class E: electrical equipment (eg computers, photocopiers) Class F: cooking oil and fat The above classifications apply to Europe and Australia.", "hypothesis": "A gasoline fire extinguished with carbon-dioxide might ignite again.", "label": "e"} +{"uid": "id_217", "premise": "USE THE RIGHT TYPE OF FIRE EXTINGUISHER! Fire extinguishers come in different types depending on the material combusted. The five main types of fire extinguisher are described below. Pressurized water Used for Class A fires only. Carbon-dioxide Used for Class E fires because it does not damage electrical equipment such as computers. Limited use for Class B fires because there is a risk of re-ignition due to a lack of cooling. Foam-filled Used for Class B fires. Also used for Class A fires, though not in confined spaces. They are NOT for electrical equipment fires or cooking oil. Dry powder Used for Class A, B, C and E fires, with specialist powders for Class D fires. Smothers the fire but does not cool it or penetrate very well so there is a risk of re-ignition. Wet chemical Used for Class F fires, especially high temperature deep fat fryers. There are six classifications of combustible material as shown below. Class A: flammable organic solids (eg wood, paper, coal, plastics, textiles) Class B: flammable liquids (eg gasoline, spirits) but not cooking oil Class C: flammable gas (eg propane, butane) Class D: combustible metals (eg magnesium, lithium) Class E: electrical equipment (eg computers, photocopiers) Class F: cooking oil and fat The above classifications apply to Europe and Australia.", "hypothesis": "Flammable liquids are more likely to reignite than flammable solids.", "label": "n"} +{"uid": "id_218", "premise": "USE THE RIGHT TYPE OF FIRE EXTINGUISHER! Fire extinguishers come in different types depending on the material combusted. 
The five main types of fire extinguisher are described below. Pressurized water Used for Class A fires only. Carbon-dioxide Used for Class E fires because it does not damage electrical equipment such as computers. Limited use for Class B fires because there is a risk of re-ignition due to a lack of cooling. Foam-filled Used for Class B fires. Also used for Class A fires, though not in confined spaces. They are NOT for electrical equipment fires or cooking oil. Dry powder Used for Class A, B, C and E fires, with specialist powders for Class D fires. Smothers the fire but does not cool it or penetrate very well so there is a risk of re-ignition. Wet chemical Used for Class F fires, especially high temperature deep fat fryers. There are six classifications of combustible material as shown below. Class A: flammable organic solids (eg wood, paper, coal, plastics, textiles) Class B: flammable liquids (eg gasoline, spirits) but not cooking oil Class C: flammable gas (eg propane, butane) Class D: combustible metals (eg magnesium, lithium) Class E: electrical equipment (eg computers, photocopiers) Class F: cooking oil and fat The above classifications apply to Europe and Australia.", "hypothesis": "Class A fires can be tackled with three types of extinguisher.", "label": "e"} +{"uid": "id_219", "premise": "USE THE RIGHT TYPE OF FIRE EXTINGUISHER! Fire extinguishers come in different types depending on the material combusted. The five main types of fire extinguisher are described below. Pressurized water Used for Class A fires only. Carbon-dioxide Used for Class E fires because it does not damage electrical equipment such as computers. Limited use for Class B fires because there is a risk of re-ignition due to a lack of cooling. Foam-filled Used for Class B fires. Also used for Class A fires, though not in confined spaces. They are NOT for electrical equipment fires or cooking oil. Dry powder Used for Class A, B, C and E fires, with specialist powders for Class D fires. Smothers the fire but does not cool it or penetrate very well so there is a risk of re-ignition. Wet chemical Used for Class F fires, especially high temperature deep fat fryers. There are six classifications of combustible material as shown below. Class A: flammable organic solids (eg wood, paper, coal, plastics, textiles) Class B: flammable liquids (eg gasoline, spirits) but not cooking oil Class C: flammable gas (eg propane, butane) Class D: combustible metals (eg magnesium, lithium) Class E: electrical equipment (eg computers, photocopiers) Class F: cooking oil and fat The above classifications apply to Europe and Australia.", "hypothesis": "Only one type of fire extinguisher is suitable for a lithium battery fire.", "label": "e"} +{"uid": "id_220", "premise": "Under 5% of employers test their staff for the use of recreational drugs and the vast majority do not consider substance abuse to be a significant issue in their workplace. Evidence of a link between drug abuse and accidents or low productivity is hard to find. More studies found a link between alcoholism and a detrimental impact on safety and performance than a link with drugs, either so-called soft drugs such as cannabis or class 1 drugs such as cocaine. Employers face problems if they decide that they should test staff for drugs. 
In addition to ethical considerations, employees have a right to privacy under the Human Rights Act; however, the employer also has a duty to provide a safe workplace and has a duty of care to take every reasonable step to ensure safety at work under the Health and Safety at Work Act. In some industries, for example the transport and nuclear power industries, employers do routinely test their staff for drug use.", "hypothesis": "Society cannot afford the risk of an accident caused by an employee on drugs in the transport or nuclear industry and that is why testing takes place in those industries.", "label": "n"} +{"uid": "id_221", "premise": "Under 5% of employers test their staff for the use of recreational drugs and the vast majority do not consider substance abuse to be a significant issue in their workplace. Evidence of a link between drug abuse and accidents or low productivity is hard to find. More studies found a link between alcoholism and a detrimental impact on safety and performance than a link with drugs, either so-called soft drugs such as cannabis or class 1 drugs such as cocaine. Employers face problems if they decide that they should test staff for drugs. In addition to ethical considerations, employees have a right to privacy under the Human Rights Act; however, the employer also has a duty to provide a safe workplace and has a duty of care to take every reasonable step to ensure safety at work under the Health and Safety at Work Act. In some industries, for example the transport and nuclear power industries, employers do routinely test their staff for drug use.", "hypothesis": "There is evidence that the use of recreational drugs is irrelevant to most employers.", "label": "e"} +{"uid": "id_222", "premise": "Under 5% of employers test their staff for the use of recreational drugs and the vast majority do not consider substance abuse to be a significant issue in their workplace. Evidence of a link between drug abuse and accidents or low productivity is hard to find. More studies found a link between alcoholism and a detrimental impact on safety and performance than a link with drugs, either so-called soft drugs such as cannabis or class 1 drugs such as cocaine. Employers face problems if they decide that they should test staff for drugs. In addition to ethical considerations, employees have a right to privacy under the Human Rights Act; however, the employer also has a duty to provide a safe workplace and has a duty of care to take every reasonable step to ensure safety at work under the Health and Safety at Work Act. In some industries, for example the transport and nuclear power industries, employers do routinely test their staff for drug use.", "hypothesis": "There is no conflict between the right to privacy and the right to a safe place of work.", "label": "c"} +{"uid": "id_223", "premise": "Under law, negligence is usually defined in the context of jury instructions wherein a judge instructs the jury that a party is to be considered negligent if they failed to exercise the standard of care that a reasonable person would have exercised under the same circumstances. In most jurisdictions, it is necessary to show first that a person had a duty to exercise care in a given situation, and that they breached that duty. 
In brief: Negligence, a tort, is a civil wrong consisting of five criteria: Duty or reasonable standard of care (as decided by judge as a matter of law), Breach (or negligence in laymen's terms, decided as a matter of fact), Injury (the fact that the plaintiff suffered an injury, and is determined as a matter of fact), Cause in Fact or conduct of defendant that causes plaintiff's injury(s) (decided as a matter of fact), Legal Cause (now perceived as the foreseeability of the type of injury caused but not the specific injury or extent of injury, determined as a matter of fact). Matters of law are decided by a judge, matters of fact are decided by a jury. In order to prove negligence, it is not necessary to prove harm, but in order for a cause of action to rest in tort, harm must be proven. Hence, it would be meaningless to sue someone for negligence if no harm resulted. Conversely, it is not enough that a harm was done. In order for the harm to be compensable in a negligence lawsuit, the defendant must be shown to have been negligent, and it must be demonstrated that his negligence was the proximate cause of the harm sustained by the plaintiff.", "hypothesis": "In some cases negligence can be proven but harm cannot be proven.", "label": "e"} +{"uid": "id_224", "premise": "Under law, negligence is usually defined in the context of jury instructions wherein a judge instructs the jury that a party is to be considered negligent if they failed to exercise the standard of care that a reasonable person would have exercised under the same circumstances. In most jurisdictions, it is necessary to show first that a person had a duty to exercise care in a given situation, and that they breached that duty. In brief: Negligence, a tort, is a civil wrong consisting of five criteria: Duty or reasonable standard of care (as decided by judge as a matter of law), Breach (or negligence in laymen's terms, decided as a matter of fact), Injury (the fact that the plaintiff suffered an injury, and is determined as a matter of fact), Cause in Fact or conduct of defendant that causes plaintiff's injury(s) (decided as a matter of fact), Legal Cause (now perceived as the foreseeability of the type of injury caused but not the specific injury or extent of injury, determined as a matter of fact). Matters of law are decided by a judge, matters of fact are decided by a jury. In order to prove negligence, it is not necessary to prove harm, but in order for a cause of action to rest in tort, harm must be proven. Hence, it would be meaningless to sue someone for negligence if no harm resulted. Conversely, it is not enough that a harm was done. In order for the harm to be compensable in a negligence lawsuit, the defendant must be shown to have been negligent, and it must be demonstrated that his negligence was the proximate cause of the harm sustained by the plaintiff.", "hypothesis": "The defendant must be shown to have been negligent before compensation can be awarded.", "label": "n"} +{"uid": "id_225", "premise": "Under law, negligence is usually defined in the context of jury instructions wherein a judge instructs the jury that a party is to be considered negligent if they failed to exercise the standard of care that a reasonable person would have exercised under the same circumstances. In most jurisdictions, it is necessary to show first that a person had a duty to exercise care in a given situation, and that they breached that duty. 
In brief: Negligence, a tort, is a civil wrong consisting of five criteria: Duty or reasonable standard of care (as decided by judge as a matter of law), Breach (or negligence in laymens terms, decided as a matter of fact), Injury (the fact that the plaintiff suffered an injury, and is determined at a matter of fact), Cause in Fact or conduct of defendant that causes plaintiffs injury(s)(decided as a matter of fact), Legal Cause (now perceived as the foreseeability of the type of injury caused but not the specific injury or extent of injury, determined as a matter of fact). Matters of law are decided by a judge, matters of fact are decided by a jury. In order to prove negligence, it is not necessary to prove harm, but in order for a cause of action to rest in tort, harm must be proven. Hence, it would be meaningless to sue someone for negligence if no harm resulted. Conversely, it is not enough that a harm was done. In order for the harm to be compensable in a negligence lawsuit, the defendant must be shown to have been negligent, and it must be demonstrated that his negligence was the proximate cause of the harm sustained by the plaintiff.", "hypothesis": "Proximate cause is an important concept in cases of negligence.", "label": "e"} +{"uid": "id_226", "premise": "Under law, negligence is usually defined in the context of jury instructions wherein a judge instructs the jury that a party is to be considered negligent if they failed to exercise the standard of care that a reasonable person would have exercised under the same circumstances. In most jurisdictions, it is necessary to show first that a person had a duty to exercise care in a given situation, and that they breached that duty. In brief: Negligence, a tort, is a civil wrong consisting of five criteria: Duty or reasonable standard of care (as decided by judge as a matter of law), Breach (or negligence in laymens terms, decided as a matter of fact), Injury (the fact that the plaintiff suffered an injury, and is determined at a matter of fact), Cause in Fact or conduct of defendant that causes plaintiffs injury(s)(decided as a matter of fact), Legal Cause (now perceived as the foreseeability of the type of injury caused but not the specific injury or extent of injury, determined as a matter of fact). Matters of law are decided by a judge, matters of fact are decided by a jury. In order to prove negligence, it is not necessary to prove harm, but in order for a cause of action to rest in tort, harm must be proven. Hence, it would be meaningless to sue someone for negligence if no harm resulted. Conversely, it is not enough that a harm was done. In order for the harm to be compensable in a negligence lawsuit, the defendant must be shown to have been negligent, and it must be demonstrated that his negligence was the proximate cause of the harm sustained by the plaintiff.", "hypothesis": "Matters of fact and matters of law are decided by a judge and jury respectively.", "label": "c"} +{"uid": "id_227", "premise": "Under law, negligence is usually defined in the context of jury instructions wherein a judge instructs the jury that a party is to be considered negligent if they failed to exercise the standard of care that a reasonable person would have exercised under the same circumstances. In most jurisdictions, it is necessary to show first that a person had a duty to exercise care in a given situation, and that they breached that duty. 
In brief: Negligence, a tort, is a civil wrong consisting of five criteria: Duty or reasonable standard of care (as decided by judge as a matter of law), Breach (or negligence in laymens terms, decided as a matter of fact), Injury (the fact that the plaintiff suffered an injury, and is determined at a matter of fact), Cause in Fact or conduct of defendant that causes plaintiffs injury(s)(decided as a matter of fact), Legal Cause (now perceived as the foreseeability of the type of injury caused but not the specific injury or extent of injury, determined as a matter of fact). Matters of law are decided by a judge, matters of fact are decided by a jury. In order to prove negligence, it is not necessary to prove harm, but in order for a cause of action to rest in tort, harm must be proven. Hence, it would be meaningless to sue someone for negligence if no harm resulted. Conversely, it is not enough that a harm was done. In order for the harm to be compensable in a negligence lawsuit, the defendant must be shown to have been negligent, and it must be demonstrated that his negligence was the proximate cause of the harm sustained by the plaintiff.", "hypothesis": "Legal cause is one of the criteria which is determined by a judge.", "label": "c"} +{"uid": "id_228", "premise": "Under section 36 of the Trade Descriptions Act 1968, goods are deemed to have been manufactured or produced in the country in which they last underwent a treatment or process resulting in a substantial change. Meat from animals coming into the UK and then cured here can be described as UK produce. Most well- known brands of ham or bacon are often advertised with packaging depicting a British countryside scene and described as farmhouse, which would lead shoppers to believe they are buying products made from British meat, but most are in fact made using imported meat.", "hypothesis": "The passage leads the reader to agree that the practice of importing foods and then processing them so that they are substantially changed should be stopped.", "label": "c"} +{"uid": "id_229", "premise": "Under section 36 of the Trade Descriptions Act 1968, goods are deemed to have been manufactured or produced in the country in which they last underwent a treatment or process resulting in a substantial change. Meat from animals coming into the UK and then cured here can be described as UK produce. Most well- known brands of ham or bacon are often advertised with packaging depicting a British countryside scene and described as farmhouse, which would lead shoppers to believe they are buying products made from British meat, but most are in fact made using imported meat.", "hypothesis": "Under section 36 of the Act, British lamb exported to France and slaughtered there is sold as French.", "label": "n"} +{"uid": "id_230", "premise": "Under section 36 of the Trade Descriptions Act 1968, goods are deemed to have been manufactured or produced in the country in which they last underwent a treatment or process resulting in a substantial change. Meat from animals coming into the UK and then cured here can be described as UK produce. 
Most well- known brands of ham or bacon are often advertised with packaging depicting a British countryside scene and described as farmhouse, which would lead shoppers to believe they are buying products made from British meat, but most are in fact made using imported meat.", "hypothesis": "The author of the passage believes that the practice risks some consumers being duped.", "label": "e"} +{"uid": "id_231", "premise": "Understanding Your Gas Bill How can I get a duplicate bill or information on my latest bill The easiest way to view or print a copy of your most recent or past bill is to register or log on to My Account. You can receive, view and pay your bill -- all online. When you log on to My Account, go to View My Bill, then Bill History. There you can view and print out your account history -- up to 25 months. Just click on the bill you'd like to see. Try it now. Or, if you'd prefer, you can call our automated service line 24 hours a day, at 1-800-772-5050*. Note, requests made through our phone line will take approximately 3-5 working days to complete. Billing information can only be sent to the mailing address on record. CCF: Hundred of Cubic Feet: Method used for gas measurement. The quantity of gas at a temperature of sixty degrees Fahrenheit and a pressure of 14.73 pounds per square inch makes up one cubic foot. Billing Terms BTU: British Thermal Unit: One BTU is the amount of heat required to raise the temperature of one pound of water one degree Fahrenheit. A more practical definition would be: how much gas an appliance will use to produce heat or cooling. As a result, gas appliances are sized by a BTU rating. 100,000 BTU's equal 1 therm. For example, a 400,000 BTU heater, when in use, would use 4 therms of gas per hour. A 30,000 BTU range would use .3 therms per hour of use. Billing Factor: An adjuster used to convert CCF into therms. It adjusts the amount of gas used to reflect the heat value of the gas at a given altitude. The heating value can vary from month to month; therefore, the billing factor is not always the same. Therm: A therm is approximately 100,000 BTUs. It is a standard unit of measurement. CCFs are converted to therms for purposes of billing. Natural Gas Conversions 1 cubic foot = 1050 Btu Therm = 100,000 Btu Ccf = 100 cubic foot, or 1 therm Mcf = 1000 cubic feet = 10.20 therms MMcf = 1 million cubic feet Bcf = 1 billion cubic feet Decatherm (Dth) = 10 therms = 1 million Btu Mmbtu = 1 million btu = 10 therms About gas rates and how bills are calculated Natural gas rates are made up of two primary charges: Gas delivery service, which The Gas Company provides - the \"delivery\" (or \"transmission\") charge; and, The cost of the natural gas itself -- which is reflected in the \"procurement\" charge. Many people believe that The Gas Company produces natural gas, but we don't. For our residential and smaller business customers, we buy natural gas from producers and marketers at the best possible prices on the open market. The wholesale gas prices we pay are based on market supply and demand. They're not marked up by The Gas Company, and are shown on your monthly bill as the \"commodity charge. \" The Gas Company's delivery service charge covers the costs of transporting natural gas through our pipeline system. It is approved annually by the Public Utilities Commission and is not impacted by the price of natural gas. 
Monthly Gas rates vary based on monthly gas prices Since 1997, the cost of natural gas that customers pay in their rates is based on a forecasted monthly price instead of a forecasted annual price. This allows rates to more closely follow current natural gas market prices. With monthly pricing, gas rates are based upon a 30-day forecast of natural gas market prices. This gives customers a better picture of the current price of natural gas, and means they no longer have to wait for annual adjustments to their bills to make up for differences between the 12-month forecast price and the actual price paid by The Gas Company on a monthly basis. Does The Gas Company benefit from higher gas prices? We do not produce natural gas; energy production companies produce natural gas. The Gas Company just delivers natural gas to its customers. Baseline therm allowance As determined by the Public Utilities Commission, under the direction of the State Legislature, \"baseline therm allowances\" are the amounts of natural gas needed to meet the minimum basic needs of the average home. The Gas Company is required to bill these \"baseline\" amounts at its lowest residential rates. The goal of these \"baseline\" amounts is to encourage efficient use of natural gas. Charges on Your Bill from a Third Party For bill questions and charges on your gas bill from third-party vendors -- Commerce Energy (formerly ACN) 1-877-226-3649 HomeServe 1-888-302-0137", "hypothesis": "Phone requests for a copy of your bill are processed within a working week.", "label": "e"} +{"uid": "id_232", "premise": "Understanding Your Gas Bill How can I get a duplicate bill or information on my latest bill The easiest way to view or print a copy of your most recent or past bill is to register or log on to My Account. You can receive, view and pay your bill -- all online. When you log on to My Account, go to View My Bill, then Bill History. There you can view and print out your account history -- up to 25 months. Just click on the bill you'd like to see. Try it now. Or, if you'd prefer, you can call our automated service line 24 hours a day, at 1-800-772-5050*. Note, requests made through our phone line will take approximately 3-5 working days to complete. Billing information can only be sent to the mailing address on record. CCF: Hundred of Cubic Feet: Method used for gas measurement. The quantity of gas at a temperature of sixty degrees Fahrenheit and a pressure of 14.73 pounds per square inch makes up one cubic foot. Billing Terms BTU: British Thermal Unit: One BTU is the amount of heat required to raise the temperature of one pound of water one degree Fahrenheit. A more practical definition would be: how much gas an appliance will use to produce heat or cooling. As a result, gas appliances are sized by a BTU rating. 100,000 BTU's equal 1 therm. For example, a 400,000 BTU heater, when in use, would use 4 therms of gas per hour. A 30,000 BTU range would use .3 therms per hour of use. Billing Factor: An adjuster used to convert CCF into therms. It adjusts the amount of gas used to reflect the heat value of the gas at a given altitude. The heating value can vary from month to month; therefore, the billing factor is not always the same. Therm: A therm is approximately 100,000 BTUs. It is a standard unit of measurement. CCFs are converted to therms for purposes of billing. 
Natural Gas Conversions 1 cubic foot = 1050 Btu Therm = 100,000 Btu Ccf = 100 cubic foot, or 1 therm Mcf = 1000 cubic feet = 10.20 therms MMcf = 1 million cubic feet Bcf = 1 billion cubic feet Decatherm (Dth) = 10 therms = 1 million Btu Mmbtu = 1 million btu = 10 therms About gas rates and how bills are calculated Natural gas rates are made up of two primary charges: Gas delivery service, which The Gas Company provides - the \"delivery\" (or \"transmission\") charge; and, The cost of the natural gas itself -- which is reflected in the \"procurement\" charge. Many people believe that The Gas Company produces natural gas, but we don't. For our residential and smaller business customers, we buy natural gas from producers and marketers at the best possible prices on the open market. The wholesale gas prices we pay are based on market supply and demand. They're not marked up by The Gas Company, and are shown on your monthly bill as the \"commodity charge. \" The Gas Company's delivery service charge covers the costs of transporting natural gas through our pipeline system. It is approved annually by the Public Utilities Commission and is not impacted by the price of natural gas. Monthly Gas rates vary based on monthly gas prices Since 1997, the cost of natural gas that customers pay in their rates is based on a forecasted monthly price instead of a forecasted annual price. This allows rates to more closely follow current natural gas market prices. With monthly pricing, gas rates are based upon a 30-day forecast of natural gas market prices. This gives customers a better picture of the current price of natural gas, and means they no longer have to wait for annual adjustments to their bills to make up for differences between the 12-month forecast price and the actual price paid by The Gas Company on a monthly basis. Does The Gas Company benefit from higher gas prices? We do not produce natural gas; energy production companies produce natural gas. The Gas Company just delivers natural gas to its customers. Baseline therm allowance As determined by the Public Utilities Commission, under the direction of the State Legislature, \"baseline therm allowances\" are the amounts of natural gas needed to meet the minimum basic needs of the average home. The Gas Company is required to bill these \"baseline\" amounts at its lowest residential rates. The goal of these \"baseline\" amounts is to encourage efficient use of natural gas. Charges on Your Bill from a Third Party For bill questions and charges on your gas bill from third-party vendors -- Commerce Energy (formerly ACN) 1-877-226-3649 HomeServe 1-888-302-0137", "hypothesis": "CCFs are calculated at a temperature of 60 degrees C.", "label": "c"} +{"uid": "id_233", "premise": "Understanding Your Gas Bill How can I get a duplicate bill or information on my latest bill The easiest way to view or print a copy of your most recent or past bill is to register or log on to My Account. You can receive, view and pay your bill -- all online. When you log on to My Account, go to View My Bill, then Bill History. There you can view and print out your account history -- up to 25 months. Just click on the bill you'd like to see. Try it now. Or, if you'd prefer, you can call our automated service line 24 hours a day, at 1-800-772-5050*. Note, requests made through our phone line will take approximately 3-5 working days to complete. Billing information can only be sent to the mailing address on record. CCF: Hundred of Cubic Feet: Method used for gas measurement. 
The quantity of gas at a temperature of sixty degrees Fahrenheit and a pressure of 14.73 pounds per square inch makes up one cubic foot. Billing Terms BTU: British Thermal Unit: One BTU is the amount of heat required to raise the temperature of one pound of water one degree Fahrenheit. A more practical definition would be: how much gas an appliance will use to produce heat or cooling. As a result, gas appliances are sized by a BTU rating. 100,000 BTU's equal 1 therm. For example, a 400,000 BTU heater, when in use, would use 4 therms of gas per hour. A 30,000 BTU range would use .3 therms per hour of use. Billing Factor: An adjuster used to convert CCF into therms. It adjusts the amount of gas used to reflect the heat value of the gas at a given altitude. The heating value can vary from month to month; therefore, the billing factor is not always the same. Therm: A therm is approximately 100,000 BTUs. It is a standard unit of measurement. CCFs are converted to therms for purposes of billing. Natural Gas Conversions 1 cubic foot = 1050 Btu Therm = 100,000 Btu Ccf = 100 cubic foot, or 1 therm Mcf = 1000 cubic feet = 10.20 therms MMcf = 1 million cubic feet Bcf = 1 billion cubic feet Decatherm (Dth) = 10 therms = 1 million Btu Mmbtu = 1 million btu = 10 therms About gas rates and how bills are calculated Natural gas rates are made up of two primary charges: Gas delivery service, which The Gas Company provides - the \"delivery\" (or \"transmission\") charge; and, The cost of the natural gas itself -- which is reflected in the \"procurement\" charge. Many people believe that The Gas Company produces natural gas, but we don't. For our residential and smaller business customers, we buy natural gas from producers and marketers at the best possible prices on the open market. The wholesale gas prices we pay are based on market supply and demand. They're not marked up by The Gas Company, and are shown on your monthly bill as the \"commodity charge. \" The Gas Company's delivery service charge covers the costs of transporting natural gas through our pipeline system. It is approved annually by the Public Utilities Commission and is not impacted by the price of natural gas. Monthly Gas rates vary based on monthly gas prices Since 1997, the cost of natural gas that customers pay in their rates is based on a forecasted monthly price instead of a forecasted annual price. This allows rates to more closely follow current natural gas market prices. With monthly pricing, gas rates are based upon a 30-day forecast of natural gas market prices. This gives customers a better picture of the current price of natural gas, and means they no longer have to wait for annual adjustments to their bills to make up for differences between the 12-month forecast price and the actual price paid by The Gas Company on a monthly basis. Does The Gas Company benefit from higher gas prices? We do not produce natural gas; energy production companies produce natural gas. The Gas Company just delivers natural gas to its customers. Baseline therm allowance As determined by the Public Utilities Commission, under the direction of the State Legislature, \"baseline therm allowances\" are the amounts of natural gas needed to meet the minimum basic needs of the average home. The Gas Company is required to bill these \"baseline\" amounts at its lowest residential rates. The goal of these \"baseline\" amounts is to encourage efficient use of natural gas. 
Charges on Your Bill from a Third Party For bill questions and charges on your gas bill from third-party vendors -- Commerce Energy (formerly ACN) 1-877-226-3649 HomeServe 1-888-302-0137", "hypothesis": "The Gas Company receives a discount on the market price of the day.", "label": "c"} +{"uid": "id_234", "premise": "Understanding Your Gas Bill How can I get a duplicate bill or information on my latest bill The easiest way to view or print a copy of your most recent or past bill is to register or log on to My Account. You can receive, view and pay your bill -- all online. When you log on to My Account, go to View My Bill, then Bill History. There you can view and print out your account history -- up to 25 months. Just click on the bill you'd like to see. Try it now. Or, if you'd prefer, you can call our automated service line 24 hours a day, at 1-800-772-5050*. Note, requests made through our phone line will take approximately 3-5 working days to complete. Billing information can only be sent to the mailing address on record. CCF: Hundred of Cubic Feet: Method used for gas measurement. The quantity of gas at a temperature of sixty degrees Fahrenheit and a pressure of 14.73 pounds per square inch makes up one cubic foot. Billing Terms BTU: British Thermal Unit: One BTU is the amount of heat required to raise the temperature of one pound of water one degree Fahrenheit. A more practical definition would be: how much gas an appliance will use to produce heat or cooling. As a result, gas appliances are sized by a BTU rating. 100,000 BTU's equal 1 therm. For example, a 400,000 BTU heater, when in use, would use 4 therms of gas per hour. A 30,000 BTU range would use .3 therms per hour of use. Billing Factor: An adjuster used to convert CCF into therms. It adjusts the amount of gas used to reflect the heat value of the gas at a given altitude. The heating value can vary from month to month; therefore, the billing factor is not always the same. Therm: A therm is approximately 100,000 BTUs. It is a standard unit of measurement. CCFs are converted to therms for purposes of billing. Natural Gas Conversions 1 cubic foot = 1050 Btu Therm = 100,000 Btu Ccf = 100 cubic foot, or 1 therm Mcf = 1000 cubic feet = 10.20 therms MMcf = 1 million cubic feet Bcf = 1 billion cubic feet Decatherm (Dth) = 10 therms = 1 million Btu Mmbtu = 1 million btu = 10 therms About gas rates and how bills are calculated Natural gas rates are made up of two primary charges: Gas delivery service, which The Gas Company provides - the \"delivery\" (or \"transmission\") charge; and, The cost of the natural gas itself -- which is reflected in the \"procurement\" charge. Many people believe that The Gas Company produces natural gas, but we don't. For our residential and smaller business customers, we buy natural gas from producers and marketers at the best possible prices on the open market. The wholesale gas prices we pay are based on market supply and demand. They're not marked up by The Gas Company, and are shown on your monthly bill as the \"commodity charge. \" The Gas Company's delivery service charge covers the costs of transporting natural gas through our pipeline system. It is approved annually by the Public Utilities Commission and is not impacted by the price of natural gas. Monthly Gas rates vary based on monthly gas prices Since 1997, the cost of natural gas that customers pay in their rates is based on a forecasted monthly price instead of a forecasted annual price. 
This allows rates to more closely follow current natural gas market prices. With monthly pricing, gas rates are based upon a 30-day forecast of natural gas market prices. This gives customers a better picture of the current price of natural gas, and means they no longer have to wait for annual adjustments to their bills to make up for differences between the 12-month forecast price and the actual price paid by The Gas Company on a monthly basis. Does The Gas Company benefit from higher gas prices? We do not produce natural gas; energy production companies produce natural gas. The Gas Company just delivers natural gas to its customers. Baseline therm allowance As determined by the Public Utilities Commission, under the direction of the State Legislature, \"baseline therm allowances\" are the amounts of natural gas needed to meet the minimum basic needs of the average home. The Gas Company is required to bill these \"baseline\" amounts at its lowest residential rates. The goal of these \"baseline\" amounts is to encourage efficient use of natural gas. Charges on Your Bill from a Third Party For bill questions and charges on your gas bill from third-party vendors -- Commerce Energy (formerly ACN) 1-877-226-3649 HomeServe 1-888-302-0137", "hypothesis": "Therms are converted to CCFs to calculate your bill.", "label": "c"} +{"uid": "id_235", "premise": "Understanding Your Gas Bill How can I get a duplicate bill or information on my latest bill The easiest way to view or print a copy of your most recent or past bill is to register or log on to My Account. You can receive, view and pay your bill -- all online. When you log on to My Account, go to View My Bill, then Bill History. There you can view and print out your account history -- up to 25 months. Just click on the bill you'd like to see. Try it now. Or, if you'd prefer, you can call our automated service line 24 hours a day, at 1-800-772-5050*. Note, requests made through our phone line will take approximately 3-5 working days to complete. Billing information can only be sent to the mailing address on record. CCF: Hundred of Cubic Feet: Method used for gas measurement. The quantity of gas at a temperature of sixty degrees Fahrenheit and a pressure of 14.73 pounds per square inch makes up one cubic foot. Billing Terms BTU: British Thermal Unit: One BTU is the amount of heat required to raise the temperature of one pound of water one degree Fahrenheit. A more practical definition would be: how much gas an appliance will use to produce heat or cooling. As a result, gas appliances are sized by a BTU rating. 100,000 BTU's equal 1 therm. For example, a 400,000 BTU heater, when in use, would use 4 therms of gas per hour. A 30,000 BTU range would use .3 therms per hour of use. Billing Factor: An adjuster used to convert CCF into therms. It adjusts the amount of gas used to reflect the heat value of the gas at a given altitude. The heating value can vary from month to month; therefore, the billing factor is not always the same. Therm: A therm is approximately 100,000 BTUs. It is a standard unit of measurement. CCFs are converted to therms for purposes of billing. 
Natural Gas Conversions 1 cubic foot = 1050 Btu Therm = 100,000 Btu Ccf = 100 cubic foot, or 1 therm Mcf = 1000 cubic feet = 10.20 therms MMcf = 1 million cubic feet Bcf = 1 billion cubic feet Decatherm (Dth) = 10 therms = 1 million Btu Mmbtu = 1 million btu = 10 therms About gas rates and how bills are calculated Natural gas rates are made up of two primary charges: Gas delivery service, which The Gas Company provides - the \"delivery\" (or \"transmission\") charge; and, The cost of the natural gas itself -- which is reflected in the \"procurement\" charge. Many people believe that The Gas Company produces natural gas, but we don't. For our residential and smaller business customers, we buy natural gas from producers and marketers at the best possible prices on the open market. The wholesale gas prices we pay are based on market supply and demand. They're not marked up by The Gas Company, and are shown on your monthly bill as the \"commodity charge. \" The Gas Company's delivery service charge covers the costs of transporting natural gas through our pipeline system. It is approved annually by the Public Utilities Commission and is not impacted by the price of natural gas. Monthly Gas rates vary based on monthly gas prices Since 1997, the cost of natural gas that customers pay in their rates is based on a forecasted monthly price instead of a forecasted annual price. This allows rates to more closely follow current natural gas market prices. With monthly pricing, gas rates are based upon a 30-day forecast of natural gas market prices. This gives customers a better picture of the current price of natural gas, and means they no longer have to wait for annual adjustments to their bills to make up for differences between the 12-month forecast price and the actual price paid by The Gas Company on a monthly basis. Does The Gas Company benefit from higher gas prices? We do not produce natural gas; energy production companies produce natural gas. The Gas Company just delivers natural gas to its customers. Baseline therm allowance As determined by the Public Utilities Commission, under the direction of the State Legislature, \"baseline therm allowances\" are the amounts of natural gas needed to meet the minimum basic needs of the average home. The Gas Company is required to bill these \"baseline\" amounts at its lowest residential rates. The goal of these \"baseline\" amounts is to encourage efficient use of natural gas. Charges on Your Bill from a Third Party For bill questions and charges on your gas bill from third-party vendors -- Commerce Energy (formerly ACN) 1-877-226-3649 HomeServe 1-888-302-0137", "hypothesis": "Since 1997, customers have not had to wait for annual adjustments.", "label": "e"} +{"uid": "id_236", "premise": "University Union The job of the University Union is to represent the interests of the studentsboth to the University and to the outside worldand provide students with cultural, sporting and welfare facilitie. When you arrive at the University, you will be given a Student Guide, explaining in detail what the Union has to offer. All full-time registered students are automatically members of the University Union, which is affiliated to the National Union of Students (although under Section 22(2)(c) of the Education Act 1994, a student has the right not to be a member of the Union if he or she so wishes). The Union is run by students (Sabbatical Officers) elected in cross-campus ballots, who work full-time, taking a year of from their university courses. 
International students are represented by an Overseas Students Officer, a part-time Union post. The Graduate Association All postgraduate students at the University of St James are automatically members of the Graduate Association. It plays an important role in representing the interests of all postgraduate students, and also acts as a social club. The Graduate Association elects annually international officers, representing the interests of students from Europe and from outside Europe. Societies and Groups National and Cultural Societies There are some 18 societies affiliated to the Union with memberships of nationals from those countries and other international and UK students interested in finding out more about their culture and language. The current list of National and Cultural societies as of January 2000 can be obtained at the Union office. The presidents of all these societies can be contacted through their pigeonholes in the Union. If there is no society for your nationality, why not start one? Wives International Group This group was formed to foster contact amongst the wives of overseas students, Coffee mornings are held every Wednesday morning in the Senior Common Room, Clifton Hill House, where children can play with the many toys provided, and their mothers can enjoy a cup of tea or coffee and chat. Language tuition can also be arranged by qualified teachers at a reduced rate for wives who do not have much knowledge of the English language.", "hypothesis": "The representative of the international students studies as well as works.", "label": "n"} +{"uid": "id_237", "premise": "University Union The job of the University Union is to represent the interests of the studentsboth to the University and to the outside worldand provide students with cultural, sporting and welfare facilitie. When you arrive at the University, you will be given a Student Guide, explaining in detail what the Union has to offer. All full-time registered students are automatically members of the University Union, which is affiliated to the National Union of Students (although under Section 22(2)(c) of the Education Act 1994, a student has the right not to be a member of the Union if he or she so wishes). The Union is run by students (Sabbatical Officers) elected in cross-campus ballots, who work full-time, taking a year of from their university courses. International students are represented by an Overseas Students Officer, a part-time Union post. The Graduate Association All postgraduate students at the University of St James are automatically members of the Graduate Association. It plays an important role in representing the interests of all postgraduate students, and also acts as a social club. The Graduate Association elects annually international officers, representing the interests of students from Europe and from outside Europe. Societies and Groups National and Cultural Societies There are some 18 societies affiliated to the Union with memberships of nationals from those countries and other international and UK students interested in finding out more about their culture and language. The current list of National and Cultural societies as of January 2000 can be obtained at the Union office. The presidents of all these societies can be contacted through their pigeonholes in the Union. If there is no society for your nationality, why not start one? 
Wives International Group This group was formed to foster contact amongst the wives of overseas students, Coffee mornings are held every Wednesday morning in the Senior Common Room, Clifton Hill House, where children can play with the many toys provided, and their mothers can enjoy a cup of tea or coffee and chat. Language tuition can also be arranged by qualified teachers at a reduced rate for wives who do not have much knowledge of the English language.", "hypothesis": "National and Cultural clubs may be started by student", "label": "e"} +{"uid": "id_238", "premise": "University Union The job of the University Union is to represent the interests of the studentsboth to the University and to the outside worldand provide students with cultural, sporting and welfare facilitie. When you arrive at the University, you will be given a Student Guide, explaining in detail what the Union has to offer. All full-time registered students are automatically members of the University Union, which is affiliated to the National Union of Students (although under Section 22(2)(c) of the Education Act 1994, a student has the right not to be a member of the Union if he or she so wishes). The Union is run by students (Sabbatical Officers) elected in cross-campus ballots, who work full-time, taking a year of from their university courses. International students are represented by an Overseas Students Officer, a part-time Union post. The Graduate Association All postgraduate students at the University of St James are automatically members of the Graduate Association. It plays an important role in representing the interests of all postgraduate students, and also acts as a social club. The Graduate Association elects annually international officers, representing the interests of students from Europe and from outside Europe. Societies and Groups National and Cultural Societies There are some 18 societies affiliated to the Union with memberships of nationals from those countries and other international and UK students interested in finding out more about their culture and language. The current list of National and Cultural societies as of January 2000 can be obtained at the Union office. The presidents of all these societies can be contacted through their pigeonholes in the Union. If there is no society for your nationality, why not start one? Wives International Group This group was formed to foster contact amongst the wives of overseas students, Coffee mornings are held every Wednesday morning in the Senior Common Room, Clifton Hill House, where children can play with the many toys provided, and their mothers can enjoy a cup of tea or coffee and chat. Language tuition can also be arranged by qualified teachers at a reduced rate for wives who do not have much knowledge of the English language.", "hypothesis": "Full-time students should register to be members of the University Union.", "label": "c"} +{"uid": "id_239", "premise": "University Union The job of the University Union is to represent the interests of the studentsboth to the University and to the outside worldand provide students with cultural, sporting and welfare facilitie. When you arrive at the University, you will be given a Student Guide, explaining in detail what the Union has to offer. 
All full-time registered students are automatically members of the University Union, which is affiliated to the National Union of Students (although under Section 22(2)(c) of the Education Act 1994, a student has the right not to be a member of the Union if he or she so wishes). The Union is run by students (Sabbatical Officers) elected in cross-campus ballots, who work full-time, taking a year of from their university courses. International students are represented by an Overseas Students Officer, a part-time Union post. The Graduate Association All postgraduate students at the University of St James are automatically members of the Graduate Association. It plays an important role in representing the interests of all postgraduate students, and also acts as a social club. The Graduate Association elects annually international officers, representing the interests of students from Europe and from outside Europe. Societies and Groups National and Cultural Societies There are some 18 societies affiliated to the Union with memberships of nationals from those countries and other international and UK students interested in finding out more about their culture and language. The current list of National and Cultural societies as of January 2000 can be obtained at the Union office. The presidents of all these societies can be contacted through their pigeonholes in the Union. If there is no society for your nationality, why not start one? Wives International Group This group was formed to foster contact amongst the wives of overseas students, Coffee mornings are held every Wednesday morning in the Senior Common Room, Clifton Hill House, where children can play with the many toys provided, and their mothers can enjoy a cup of tea or coffee and chat. Language tuition can also be arranged by qualified teachers at a reduced rate for wives who do not have much knowledge of the English language.", "hypothesis": "As with the University Union, all students are automatic members of the Graduate Society.", "label": "c"} +{"uid": "id_240", "premise": "University Union The job of the University Union is to represent the interests of the studentsboth to the University and to the outside worldand provide students with cultural, sporting and welfare facilitie. When you arrive at the University, you will be given a Student Guide, explaining in detail what the Union has to offer. All full-time registered students are automatically members of the University Union, which is affiliated to the National Union of Students (although under Section 22(2)(c) of the Education Act 1994, a student has the right not to be a member of the Union if he or she so wishes). The Union is run by students (Sabbatical Officers) elected in cross-campus ballots, who work full-time, taking a year of from their university courses. International students are represented by an Overseas Students Officer, a part-time Union post. The Graduate Association All postgraduate students at the University of St James are automatically members of the Graduate Association. It plays an important role in representing the interests of all postgraduate students, and also acts as a social club. The Graduate Association elects annually international officers, representing the interests of students from Europe and from outside Europe. 
Societies and Groups National and Cultural Societies There are some 18 societies affiliated to the Union with memberships of nationals from those countries and other international and UK students interested in finding out more about their culture and language. The current list of National and Cultural societies as of January 2000 can be obtained at the Union office. The presidents of all these societies can be contacted through their pigeonholes in the Union. If there is no society for your nationality, why not start one? Wives International Group This group was formed to foster contact amongst the wives of overseas students, Coffee mornings are held every Wednesday morning in the Senior Common Room, Clifton Hill House, where children can play with the many toys provided, and their mothers can enjoy a cup of tea or coffee and chat. Language tuition can also be arranged by qualified teachers at a reduced rate for wives who do not have much knowledge of the English language.", "hypothesis": "The people who run the University Union do not study at the same time as they work.", "label": "e"} +{"uid": "id_241", "premise": "University Union The job of the University Union is to represent the interests of the studentsboth to the University and to the outside worldand provide students with cultural, sporting and welfare facilitie. When you arrive at the University, you will be given a Student Guide, explaining in detail what the Union has to offer. All full-time registered students are automatically members of the University Union, which is affiliated to the National Union of Students (although under Section 22(2)(c) of the Education Act 1994, a student has the right not to be a member of the Union if he or she so wishes). The Union is run by students (Sabbatical Officers) elected in cross-campus ballots, who work full-time, taking a year of from their university courses. International students are represented by an Overseas Students Officer, a part-time Union post. The Graduate Association All postgraduate students at the University of St James are automatically members of the Graduate Association. It plays an important role in representing the interests of all postgraduate students, and also acts as a social club. The Graduate Association elects annually international officers, representing the interests of students from Europe and from outside Europe. Societies and Groups National and Cultural Societies There are some 18 societies affiliated to the Union with memberships of nationals from those countries and other international and UK students interested in finding out more about their culture and language. The current list of National and Cultural societies as of January 2000 can be obtained at the Union office. The presidents of all these societies can be contacted through their pigeonholes in the Union. If there is no society for your nationality, why not start one? Wives International Group This group was formed to foster contact amongst the wives of overseas students, Coffee mornings are held every Wednesday morning in the Senior Common Room, Clifton Hill House, where children can play with the many toys provided, and their mothers can enjoy a cup of tea or coffee and chat. 
Language tuition can also be arranged by qualified teachers at a reduced rate for wives who do not have much knowledge of the English language.", "hypothesis": "The wives of Wives International Group are able to receive free language instruction.", "label": "c"} +{"uid": "id_242", "premise": "University Union The job of the University Union is to represent the interests of the studentsboth to the University and to the outside worldand provide students with cultural, sporting and welfare facilitie. When you arrive at the University, you will be given a Student Guide, explaining in detail what the Union has to offer. All full-time registered students are automatically members of the University Union, which is affiliated to the National Union of Students (although under Section 22(2)(c) of the Education Act 1994, a student has the right not to be a member of the Union if he or she so wishes). The Union is run by students (Sabbatical Officers) elected in cross-campus ballots, who work full-time, taking a year of from their university courses. International students are represented by an Overseas Students Officer, a part-time Union post. The Graduate Association All postgraduate students at the University of St James are automatically members of the Graduate Association. It plays an important role in representing the interests of all postgraduate students, and also acts as a social club. The Graduate Association elects annually international officers, representing the interests of students from Europe and from outside Europe. Societies and Groups National and Cultural Societies There are some 18 societies affiliated to the Union with memberships of nationals from those countries and other international and UK students interested in finding out more about their culture and language. The current list of National and Cultural societies as of January 2000 can be obtained at the Union office. The presidents of all these societies can be contacted through their pigeonholes in the Union. If there is no society for your nationality, why not start one? Wives International Group This group was formed to foster contact amongst the wives of overseas students, Coffee mornings are held every Wednesday morning in the Senior Common Room, Clifton Hill House, where children can play with the many toys provided, and their mothers can enjoy a cup of tea or coffee and chat. Language tuition can also be arranged by qualified teachers at a reduced rate for wives who do not have much knowledge of the English language.", "hypothesis": "All students must be members of the Union.", "label": "c"} +{"uid": "id_243", "premise": "Unless companies have some knowledge of buyer behavior, they would be unaware of and unfamiliar with the complex range of behavioral factors that impinge upon purchasing behavior. The truth is that, like much of human behavior, purchase behavior is complex and multi-faceted. Even the simplest of purchasing decisions is an amalgam of behavioral forces and factors of which even the purchaser may not be aware. However, even though consumer behavior is a complex subject, marketing planners should at least have some understanding of it. 
Marketers are specifically interested in the behavior associated with groups or segments of consumers as it would be impossible to serve the exact needs and wants of specific individuals in a market and remain profitable.", "hypothesis": "Even if one could predict the behavior of an individual buyer, it would not be profitable for marketers to try to do so.", "label": "n"} +{"uid": "id_244", "premise": "Unless companies have some knowledge of buyer behavior, they would be unaware of and unfamiliar with the complex range of behavioral factors that impinge upon purchasing behavior. The truth is that, like much of human behavior, purchase behavior is complex and multi-faceted. Even the simplest of purchasing decisions is an amalgam of behavioral forces and factors of which even the purchaser may not be aware. However, even though consumer behavior is a complex subject, marketing planners should at least have some understanding of it. Marketers are specifically interested in the behavior associated with groups or segments of consumers as it would be impossible to serve the exact needs and wants of specific individuals in a market and remain profitable.", "hypothesis": "The purchasing behavior of consumers is unpredictable.", "label": "n"} +{"uid": "id_245", "premise": "Unless companies have some knowledge of buyer behavior, they would be unaware of and unfamiliar with the complex range of behavioral factors that impinge upon purchasing behavior. The truth is that, like much of human behavior, purchase behavior is complex and multi-faceted. Even the simplest of purchasing decisions is an amalgam of behavioral forces and factors of which even the purchaser may not be aware. However, even though consumer behavior is a complex subject, marketing planners should at least have some understanding of it. Marketers are specifically interested in the behavior associated with groups or segments of consumers as it would be impossible to serve the exact needs and wants of specific individuals in a market and remain profitable.", "hypothesis": "Some consumer groups exhibit more complex behavior than others do.", "label": "n"} +{"uid": "id_246", "premise": "Unless companies have some knowledge of buyer behavior, they would be unaware of and unfamiliar with the complex range of behavioral factors that impinge upon purchasing behavior. The truth is that, like much of human behavior, purchase behavior is complex and multi-faceted. Even the simplest of purchasing decisions is an amalgam of behavioral forces and factors of which even the purchaser may not be aware. However, even though consumer behavior is a complex subject, marketing planners should at least have some understanding of it. Marketers are specifically interested in the behavior associated with groups or segments of consumers as it would be impossible to serve the exact needs and wants of specific individuals in a market and remain profitable.", "hypothesis": "Purchase behavior is not subject to the same whims as other aspects of human behavior.", "label": "c"} +{"uid": "id_247", "premise": "Unless companies have some knowledge of buyer behaviour, they would be unaware of and unfamiliar with the complex range of behavioural factors that impinge upon purchasing behaviour. The truth is that, like much of human behaviour, purchase behaviour is complex and multi-faceted. Even the simplest of purchasing decisions is an amalgam of behavioural forces and factors of which even the purchaser may not be aware. 
However, even though consumer behaviour is a complex subject, marketing planners should at least have some understanding of it. Marketers are Specifically interested in the behaviour associated with groups or segments of consumers as it would be impossible to serve the exact needs and wants of specific individuals in a market and remain profitable.", "hypothesis": "Some consumer groups exhibit more complex behaviour than others do.", "label": "n"} +{"uid": "id_248", "premise": "Unless companies have some knowledge of buyer behaviour, they would be unaware of and unfamiliar with the complex range of behavioural factors that impinge upon purchasing behaviour. The truth is that, like much of human behaviour, purchase behaviour is complex and multi-faceted. Even the simplest of purchasing decisions is an amalgam of behavioural forces and factors of which even the purchaser may not be aware. However, even though consumer behaviour is a complex subject, marketing planners should at least have some understanding of it. Marketers are Specifically interested in the behaviour associated with groups or segments of consumers as it would be impossible to serve the exact needs and wants of specific individuals in a market and remain profitable.", "hypothesis": "Purchase behaviour is not subject to the same whims as other aspects of human behaviour.", "label": "c"} +{"uid": "id_249", "premise": "Unless companies have some knowledge of buyer behaviour, they would be unaware of and unfamiliar with the complex range of behavioural factors that impinge upon purchasing behaviour. The truth is that, like much of human behaviour, purchase behaviour is complex and multi-faceted. Even the simplest of purchasing decisions is an amalgam of behavioural forces and factors of which even the purchaser may not be aware. However, even though consumer behaviour is a complex subject, marketing planners should at least have some understanding of it. Marketers are Specifically interested in the behaviour associated with groups or segments of consumers as it would be impossible to serve the exact needs and wants of specific individuals in a market and remain profitable.", "hypothesis": "The purchasing behavior of consumers is unpredictable.", "label": "n"} +{"uid": "id_250", "premise": "Unless companies have some knowledge of buyer behaviour, they would be unaware of and unfamiliar with the complex range of behavioural factors that impinge upon purchasing behaviour. The truth is that, like much of human behaviour, purchase behaviour is complex and multi-faceted. Even the simplest of purchasing decisions is an amalgam of behavioural forces and factors of which even the purchaser may not be aware. However, even though consumer behaviour is a complex subject, marketing planners should at least have some understanding of it. Marketers are Specifically interested in the behaviour associated with groups or segments of consumers as it would be impossible to serve the exact needs and wants of specific individuals in a market and remain profitable.", "hypothesis": "Even if one could predict the behaviour of an individual buyer, it would not be profitable for marketers to try to do so.", "label": "e"} +{"uid": "id_251", "premise": "Until 1995, the use of bicycles had remained virtually static for many years. However, in recent years the number of people using bicycles has grown with increasing pressure from environmentalists, transport agencies and health officials. 
The trend has been to produce more fashionable bicycles in a variety of styles, lighter cycles, and more comfortable cycles. The diversity of models has increased enormously, though their general shape has not changed radically.", "hypothesis": "There is now a greater diversity of bicycles available than before 1995.", "label": "e"} +{"uid": "id_252", "premise": "Until 1995, the use of bicycles had remained virtually static for many years. However, in recent years the number of people using bicycles has grown with increasing pressure from environmentalists, transport agencies and health officials. The trend has been to produce more fashionable bicycles in a variety of styles, lighter cycles, and more comfortable cycles. The diversity of models has increased enormously, though their general shape has not changed radically.", "hypothesis": "There has been an increasing pressure from transport agencies to use bicycles.", "label": "e"} +{"uid": "id_253", "premise": "Until 1995, the use of bicycles has remained virtually static for many years. However, in recent years the number of people using bicycles has grown with increasing pressure from environmentalists, transport agencies and health officials. The trend has been to produce more fashionable bicycles in a variety of styles, lighter cycles, and more comfortable cycles. The diversity of models has increased enormously, though their general shape has not changed Radically.", "hypothesis": "There is now a greater diversity of bicycles available than before 1995.", "label": "e"} +{"uid": "id_254", "premise": "Until 1995, the use of bicycles has remained virtually static for many years. However, in recent years the number of people using bicycles has grown with increasing pressure from environmentalists, transport agencies and health officials. The trend has been to produce more fashionable bicycles in a variety of styles, lighter cycles, and more comfortable cycles. The diversity of models has increased enormously, though their general shape has not changed Radically.", "hypothesis": "There has been an increasing pressure from transport agencies to use bicycles.", "label": "e"} +{"uid": "id_255", "premise": "Urban planning in Singapore British merchants established a trading post in Singapore in the early nineteenth century, and for more than a century trading interests dominated. However, in 1965 the newly independent island state was cut off from its hinterland, and so it set about pursuing a survival strategy. The good international communications it already enjoyed provided a useful base, but it was decided that if Singapore was to secure its economic future, it must develop its industry. To this end, new institutional structures were needed to facilitate, develop, and control foreign investment. One of the most important of these was the Economic Development Board (EDB), an arm of government that developed strategies for attracting investment. Thus from the outset, the Singaporean government was involved in city promotion. Towards the end of the twentieth century, the government realised that, due to limits on both the size of the countrys workforce and its land area, its labour-intensive industries were becoming increasingly uncompetitive. So an economic committee was established which concluded that Singapore should focus on developing as a service centre, and seek to attract company headquarters to serve South East Asia, and develop tourism, banking, and offshore activities. 
The land required for this service-sector orientation had been acquired in the early 1970s, when the government realised that it lacked the banking infrastructure for a modern economy. So a new banking and corporate district, known as the Golden Shoe, was planned, incorporating the historic commercial area. This district now houses all the major companies and various government financial agencies. Singapores current economic strategy is closely linked to land use and development planning. Although it is already a major city, the current development plan seeks to ensure Singapores continued economic growth through restructuring, to ensure that the facilities needed by future business are planned now. These include transport and telecommunication infrastructure, land, and environmental quality. A major concern is to avoid congestion in the central area, and so the latest plan deviates from previous plans by having a strong decentralisation policy. The plan makes provision for four major regional centres, each serving 800,000 people, but this does not mean that the existing central business district will not also grow. A major extension planned around Marina Bay draws on examples of other world cities, especially those with waterside central areas such as Sydney and San Francisco. The project involves major land reclamation of 667 hectares in total. Part of this has already been developed as a conference and exhibition zone, and the rest will be used for other facilities. However the need for vitality has been recognised and a mixed zoning approach has been adopted, to include housing and entertainment. One of the new features of the current plan is a broader conception of what contributes to economic success. It encompasses high quality residential provision, a good environment, leisure facilities and exciting city life. Thus there is more provision for low-density housing, often in waterfront communities linked to beaches and recreational facilities. However, the lower housing densities will put considerable pressure on the very limited land available for development, and this creates problems for another of the plans aims, which is to stress environmental quality. More and more of the remaining open area will be developed, and the only natural landscape surviving will be a small zone in the centre of the island which serves as a water catchment area. Environmental policy is therefore very much concerned with making the built environment more green by introducing more plants what is referred to as the beautification of Singapore. The plan focuses on green zones defining the boundaries of settlements, and running along transport corridors. The incidental green provision within housing areas is also given considerable attention. Much of the environmental provision, for example golf courses, recreation areas, and beaches, is linked to the prime objective of attracting business. The plan places much emphasis on good leisure provision and the need to exploit Singapores island setting. One way of doing this is through further land reclamation, to create a whole new island devoted to leisure and luxury housing which will stretch from the central area to the airport. A current concern also appears to be how to use the planning system to create opportunities for greater spontaneity: planners have recently given much attention to the concept of the 24-hour city and the cafe society. For example, a promotion has taken place along the Singapore river to create a cafe zone. 
This has included the realisation, rather late in the day, of the value of retaining older buildings, and the creation of a continuous riverside promenade. Since the relaxation in 1996 of strict guidelines on outdoor eating areas, this has become an extremely popular area in the evenings. Also, in 1998 the Urban Redevelopment Authority created a new entertainment area in the centre of the city which they are promoting as the citys one-stop, dynamic entertainment scene. In conclusion, the economic development of Singapore has been very consciously centrally planned, and the latest strategy is very clearly oriented to establishing Singapore as a leading world city. It is well placed to succeed, for a variety of reasons. It can draw upon its historic roots as a world trading centre; it has invested heavily in telecommunications and air transport infrastructure; it is well located in relation to other Asian economies; it has developed a safe and clean environment; and it has utilised the international language of English.", "hypothesis": "Singapore will find it difficult to compete with leading cities in other parts of the world.", "label": "c"} +{"uid": "id_256", "premise": "Urban planning in Singapore British merchants established a trading post in Singapore in the early nineteenth century, and for more than a century trading interests dominated. However, in 1965 the newly independent island state was cut off from its hinterland, and so it set about pursuing a survival strategy. The good international communications it already enjoyed provided a useful base, but it was decided that if Singapore was to secure its economic future, it must develop its industry. To this end, new institutional structures were needed to facilitate, develop, and control foreign investment. One of the most important of these was the Economic Development Board (EDB), an arm of government that developed strategies for attracting investment. Thus from the outset, the Singaporean government was involved in city promotion. Towards the end of the twentieth century, the government realised that, due to limits on both the size of the countrys workforce and its land area, its labour-intensive industries were becoming increasingly uncompetitive. So an economic committee was established which concluded that Singapore should focus on developing as a service centre, and seek to attract company headquarters to serve South East Asia, and develop tourism, banking, and offshore activities. The land required for this service-sector orientation had been acquired in the early 1970s, when the government realised that it lacked the banking infrastructure for a modern economy. So a new banking and corporate district, known as the Golden Shoe, was planned, incorporating the historic commercial area. This district now houses all the major companies and various government financial agencies. Singapores current economic strategy is closely linked to land use and development planning. Although it is already a major city, the current development plan seeks to ensure Singapores continued economic growth through restructuring, to ensure that the facilities needed by future business are planned now. These include transport and telecommunication infrastructure, land, and environmental quality. A major concern is to avoid congestion in the central area, and so the latest plan deviates from previous plans by having a strong decentralisation policy. 
The plan makes provision for four major regional centres, each serving 800,000 people, but this does not mean that the existing central business district will not also grow. A major extension planned around Marina Bay draws on examples of other world cities, especially those with waterside central areas such as Sydney and San Francisco. The project involves major land reclamation of 667 hectares in total. Part of this has already been developed as a conference and exhibition zone, and the rest will be used for other facilities. However the need for vitality has been recognised and a mixed zoning approach has been adopted, to include housing and entertainment. One of the new features of the current plan is a broader conception of what contributes to economic success. It encompasses high quality residential provision, a good environment, leisure facilities and exciting city life. Thus there is more provision for low-density housing, often in waterfront communities linked to beaches and recreational facilities. However, the lower housing densities will put considerable pressure on the very limited land available for development, and this creates problems for another of the plans aims, which is to stress environmental quality. More and more of the remaining open area will be developed, and the only natural landscape surviving will be a small zone in the centre of the island which serves as a water catchment area. Environmental policy is therefore very much concerned with making the built environment more green by introducing more plants what is referred to as the beautification of Singapore. The plan focuses on green zones defining the boundaries of settlements, and running along transport corridors. The incidental green provision within housing areas is also given considerable attention. Much of the environmental provision, for example golf courses, recreation areas, and beaches, is linked to the prime objective of attracting business. The plan places much emphasis on good leisure provision and the need to exploit Singapores island setting. One way of doing this is through further land reclamation, to create a whole new island devoted to leisure and luxury housing which will stretch from the central area to the airport. A current concern also appears to be how to use the planning system to create opportunities for greater spontaneity: planners have recently given much attention to the concept of the 24-hour city and the cafe society. For example, a promotion has taken place along the Singapore river to create a cafe zone. This has included the realisation, rather late in the day, of the value of retaining older buildings, and the creation of a continuous riverside promenade. Since the relaxation in 1996 of strict guidelines on outdoor eating areas, this has become an extremely popular area in the evenings. Also, in 1998 the Urban Redevelopment Authority created a new entertainment area in the centre of the city which they are promoting as the citys one-stop, dynamic entertainment scene. In conclusion, the economic development of Singapore has been very consciously centrally planned, and the latest strategy is very clearly oriented to establishing Singapore as a leading world city. It is well placed to succeed, for a variety of reasons. 
It can draw upon its historic roots as a world trading centre; it has invested heavily in telecommunications and air transport infrastructure; it is well located in relation to other Asian economies; it has developed a safe and clean environment; and it has utilised the international language of English.", "hypothesis": "After 1965, the Singaporean government switched the focus of the islands economy.", "label": "e"} +{"uid": "id_257", "premise": "Urban planning in Singapore British merchants established a trading post in Singapore in the early nineteenth century, and for more than a century trading interests dominated. However, in 1965 the newly independent island state was cut off from its hinterland, and so it set about pursuing a survival strategy. The good international communications it already enjoyed provided a useful base, but it was decided that if Singapore was to secure its economic future, it must develop its industry. To this end, new institutional structures were needed to facilitate, develop, and control foreign investment. One of the most important of these was the Economic Development Board (EDB), an arm of government that developed strategies for attracting investment. Thus from the outset, the Singaporean government was involved in city promotion. Towards the end of the twentieth century, the government realised that, due to limits on both the size of the countrys workforce and its land area, its labour-intensive industries were becoming increasingly uncompetitive. So an economic committee was established which concluded that Singapore should focus on developing as a service centre, and seek to attract company headquarters to serve South East Asia, and develop tourism, banking, and offshore activities. The land required for this service-sector orientation had been acquired in the early 1970s, when the government realised that it lacked the banking infrastructure for a modern economy. So a new banking and corporate district, known as the Golden Shoe, was planned, incorporating the historic commercial area. This district now houses all the major companies and various government financial agencies. Singapores current economic strategy is closely linked to land use and development planning. Although it is already a major city, the current development plan seeks to ensure Singapores continued economic growth through restructuring, to ensure that the facilities needed by future business are planned now. These include transport and telecommunication infrastructure, land, and environmental quality. A major concern is to avoid congestion in the central area, and so the latest plan deviates from previous plans by having a strong decentralisation policy. The plan makes provision for four major regional centres, each serving 800,000 people, but this does not mean that the existing central business district will not also grow. A major extension planned around Marina Bay draws on examples of other world cities, especially those with waterside central areas such as Sydney and San Francisco. The project involves major land reclamation of 667 hectares in total. Part of this has already been developed as a conference and exhibition zone, and the rest will be used for other facilities. However the need for vitality has been recognised and a mixed zoning approach has been adopted, to include housing and entertainment. One of the new features of the current plan is a broader conception of what contributes to economic success. 
It encompasses high quality residential provision, a good environment, leisure facilities and exciting city life. Thus there is more provision for low-density housing, often in waterfront communities linked to beaches and recreational facilities. However, the lower housing densities will put considerable pressure on the very limited land available for development, and this creates problems for another of the plans aims, which is to stress environmental quality. More and more of the remaining open area will be developed, and the only natural landscape surviving will be a small zone in the centre of the island which serves as a water catchment area. Environmental policy is therefore very much concerned with making the built environment more green by introducing more plants what is referred to as the beautification of Singapore. The plan focuses on green zones defining the boundaries of settlements, and running along transport corridors. The incidental green provision within housing areas is also given considerable attention. Much of the environmental provision, for example golf courses, recreation areas, and beaches, is linked to the prime objective of attracting business. The plan places much emphasis on good leisure provision and the need to exploit Singapores island setting. One way of doing this is through further land reclamation, to create a whole new island devoted to leisure and luxury housing which will stretch from the central area to the airport. A current concern also appears to be how to use the planning system to create opportunities for greater spontaneity: planners have recently given much attention to the concept of the 24-hour city and the cafe society. For example, a promotion has taken place along the Singapore river to create a cafe zone. This has included the realisation, rather late in the day, of the value of retaining older buildings, and the creation of a continuous riverside promenade. Since the relaxation in 1996 of strict guidelines on outdoor eating areas, this has become an extremely popular area in the evenings. Also, in 1998 the Urban Redevelopment Authority created a new entertainment area in the centre of the city which they are promoting as the citys one-stop, dynamic entertainment scene. In conclusion, the economic development of Singapore has been very consciously centrally planned, and the latest strategy is very clearly oriented to establishing Singapore as a leading world city. It is well placed to succeed, for a variety of reasons. It can draw upon its historic roots as a world trading centre; it has invested heavily in telecommunications and air transport infrastructure; it is well located in relation to other Asian economies; it has developed a safe and clean environment; and it has utilised the international language of English.", "hypothesis": "The government has enacted new laws to protect Singapores old buildings.", "label": "n"} +{"uid": "id_258", "premise": "Urban planning in Singapore British merchants established a trading post in Singapore in the early nineteenth century, and for more than a century trading interests dominated. However, in 1965 the newly independent island state was cut off from its hinterland, and so it set about pursuing a survival strategy. The good international communications it already enjoyed provided a useful base, but it was decided that if Singapore was to secure its economic future, it must develop its industry. 
To this end, new institutional structures were needed to facilitate, develop, and control foreign investment. One of the most important of these was the Economic Development Board (EDB), an arm of government that developed strategies for attracting investment. Thus from the outset, the Singaporean government was involved in city promotion. Towards the end of the twentieth century, the government realised that, due to limits on both the size of the countrys workforce and its land area, its labour-intensive industries were becoming increasingly uncompetitive. So an economic committee was established which concluded that Singapore should focus on developing as a service centre, and seek to attract company headquarters to serve South East Asia, and develop tourism, banking, and offshore activities. The land required for this service-sector orientation had been acquired in the early 1970s, when the government realised that it lacked the banking infrastructure for a modern economy. So a new banking and corporate district, known as the Golden Shoe, was planned, incorporating the historic commercial area. This district now houses all the major companies and various government financial agencies. Singapores current economic strategy is closely linked to land use and development planning. Although it is already a major city, the current development plan seeks to ensure Singapores continued economic growth through restructuring, to ensure that the facilities needed by future business are planned now. These include transport and telecommunication infrastructure, land, and environmental quality. A major concern is to avoid congestion in the central area, and so the latest plan deviates from previous plans by having a strong decentralisation policy. The plan makes provision for four major regional centres, each serving 800,000 people, but this does not mean that the existing central business district will not also grow. A major extension planned around Marina Bay draws on examples of other world cities, especially those with waterside central areas such as Sydney and San Francisco. The project involves major land reclamation of 667 hectares in total. Part of this has already been developed as a conference and exhibition zone, and the rest will be used for other facilities. However the need for vitality has been recognised and a mixed zoning approach has been adopted, to include housing and entertainment. One of the new features of the current plan is a broader conception of what contributes to economic success. It encompasses high quality residential provision, a good environment, leisure facilities and exciting city life. Thus there is more provision for low-density housing, often in waterfront communities linked to beaches and recreational facilities. However, the lower housing densities will put considerable pressure on the very limited land available for development, and this creates problems for another of the plans aims, which is to stress environmental quality. More and more of the remaining open area will be developed, and the only natural landscape surviving will be a small zone in the centre of the island which serves as a water catchment area. Environmental policy is therefore very much concerned with making the built environment more green by introducing more plants what is referred to as the beautification of Singapore. The plan focuses on green zones defining the boundaries of settlements, and running along transport corridors. 
The incidental green provision within housing areas is also given considerable attention. Much of the environmental provision, for example golf courses, recreation areas, and beaches, is linked to the prime objective of attracting business. The plan places much emphasis on good leisure provision and the need to exploit Singapores island setting. One way of doing this is through further land reclamation, to create a whole new island devoted to leisure and luxury housing which will stretch from the central area to the airport. A current concern also appears to be how to use the planning system to create opportunities for greater spontaneity: planners have recently given much attention to the concept of the 24-hour city and the cafe society. For example, a promotion has taken place along the Singapore river to create a cafe zone. This has included the realisation, rather late in the day, of the value of retaining older buildings, and the creation of a continuous riverside promenade. Since the relaxation in 1996 of strict guidelines on outdoor eating areas, this has become an extremely popular area in the evenings. Also, in 1998 the Urban Redevelopment Authority created a new entertainment area in the centre of the city which they are promoting as the citys one-stop, dynamic entertainment scene. In conclusion, the economic development of Singapore has been very consciously centrally planned, and the latest strategy is very clearly oriented to establishing Singapore as a leading world city. It is well placed to succeed, for a variety of reasons. It can draw upon its historic roots as a world trading centre; it has invested heavily in telecommunications and air transport infrastructure; it is well located in relation to other Asian economies; it has developed a safe and clean environment; and it has utilised the international language of English.", "hypothesis": "The creation of Singapores financial centre was delayed while a suitable site was found.", "label": "c"} +{"uid": "id_259", "premise": "Urban planning in Singapore British merchants established a trading post in Singapore in the early nineteenth century, and for more than a century trading interests dominated. However, in 1965 the newly independent island state was cut off from its hinterland, and so it set about pursuing a survival strategy. The good international communications it already enjoyed provided a useful base, but it was decided that if Singapore was to secure its economic future, it must develop its industry. To this end, new institutional structures were needed to facilitate, develop, and control foreign investment. One of the most important of these was the Economic Development Board (EDB), an arm of government that developed strategies for attracting investment. Thus from the outset, the Singaporean government was involved in city promotion. Towards the end of the twentieth century, the government realised that, due to limits on both the size of the countrys workforce and its land area, its labour-intensive industries were becoming increasingly uncompetitive. So an economic committee was established which concluded that Singapore should focus on developing as a service centre, and seek to attract company headquarters to serve South East Asia, and develop tourism, banking, and offshore activities. The land required for this service-sector orientation had been acquired in the early 1970s, when the government realised that it lacked the banking infrastructure for a modern economy. 
So a new banking and corporate district, known as the Golden Shoe, was planned, incorporating the historic commercial area. This district now houses all the major companies and various government financial agencies. Singapores current economic strategy is closely linked to land use and development planning. Although it is already a major city, the current development plan seeks to ensure Singapores continued economic growth through restructuring, to ensure that the facilities needed by future business are planned now. These include transport and telecommunication infrastructure, land, and environmental quality. A major concern is to avoid congestion in the central area, and so the latest plan deviates from previous plans by having a strong decentralisation policy. The plan makes provision for four major regional centres, each serving 800,000 people, but this does not mean that the existing central business district will not also grow. A major extension planned around Marina Bay draws on examples of other world cities, especially those with waterside central areas such as Sydney and San Francisco. The project involves major land reclamation of 667 hectares in total. Part of this has already been developed as a conference and exhibition zone, and the rest will be used for other facilities. However the need for vitality has been recognised and a mixed zoning approach has been adopted, to include housing and entertainment. One of the new features of the current plan is a broader conception of what contributes to economic success. It encompasses high quality residential provision, a good environment, leisure facilities and exciting city life. Thus there is more provision for low-density housing, often in waterfront communities linked to beaches and recreational facilities. However, the lower housing densities will put considerable pressure on the very limited land available for development, and this creates problems for another of the plans aims, which is to stress environmental quality. More and more of the remaining open area will be developed, and the only natural landscape surviving will be a small zone in the centre of the island which serves as a water catchment area. Environmental policy is therefore very much concerned with making the built environment more green by introducing more plants what is referred to as the beautification of Singapore. The plan focuses on green zones defining the boundaries of settlements, and running along transport corridors. The incidental green provision within housing areas is also given considerable attention. Much of the environmental provision, for example golf courses, recreation areas, and beaches, is linked to the prime objective of attracting business. The plan places much emphasis on good leisure provision and the need to exploit Singapores island setting. One way of doing this is through further land reclamation, to create a whole new island devoted to leisure and luxury housing which will stretch from the central area to the airport. A current concern also appears to be how to use the planning system to create opportunities for greater spontaneity: planners have recently given much attention to the concept of the 24-hour city and the cafe society. For example, a promotion has taken place along the Singapore river to create a cafe zone. This has included the realisation, rather late in the day, of the value of retaining older buildings, and the creation of a continuous riverside promenade. 
Since the relaxation in 1996 of strict guidelines on outdoor eating areas, this has become an extremely popular area in the evenings. Also, in 1998 the Urban Redevelopment Authority created a new entertainment area in the centre of the city which they are promoting as the citys one-stop, dynamic entertainment scene. In conclusion, the economic development of Singapore has been very consciously centrally planned, and the latest strategy is very clearly oriented to establishing Singapore as a leading world city. It is well placed to succeed, for a variety of reasons. It can draw upon its historic roots as a world trading centre; it has invested heavily in telecommunications and air transport infrastructure; it is well located in relation to other Asian economies; it has developed a safe and clean environment; and it has utilised the international language of English.", "hypothesis": "Singapores four regional centres will eventually be the same size as its central business district.", "label": "n"} +{"uid": "id_260", "premise": "Urban planning in Singapore British merchants established a trading post in Singapore in the early nineteenth century, and for more than a century trading interests dominated. However, in 1965 the newly independent island state was cut off from its hinterland, and so it set about pursuing a survival strategy. The good international communications it already enjoyed provided a useful base, but it was decided that if Singapore was to secure its economic future, it must develop its industry. To this end, new institutional structures were needed to facilitate, develop, and control foreign investment. One of the most important of these was the Economic Development Board (EDB), an arm of government that developed strategies for attracting investment. Thus from the outset, the Singaporean government was involved in city promotion. Towards the end of the twentieth century, the government realised that, due to limits on both the size of the countrys workforce and its land area, its labour-intensive industries were becoming increasingly uncompetitive. So an economic committee was established which concluded that Singapore should focus on developing as a service centre, and seek to attract company headquarters to serve South East Asia, and develop tourism, banking, and offshore activities. The land required for this service-sector orientation had been acquired in the early 1970s, when the government realised that it lacked the banking infrastructure for a modern economy. So a new banking and corporate district, known as the Golden Shoe, was planned, incorporating the historic commercial area. This district now houses all the major companies and various government financial agencies. Singapores current economic strategy is closely linked to land use and development planning. Although it is already a major city, the current development plan seeks to ensure Singapores continued economic growth through restructuring, to ensure that the facilities needed by future business are planned now. These include transport and telecommunication infrastructure, land, and environmental quality. A major concern is to avoid congestion in the central area, and so the latest plan deviates from previous plans by having a strong decentralisation policy. The plan makes provision for four major regional centres, each serving 800,000 people, but this does not mean that the existing central business district will not also grow. 
A major extension planned around Marina Bay draws on examples of other world cities, especially those with waterside central areas such as Sydney and San Francisco. The project involves major land reclamation of 667 hectares in total. Part of this has already been developed as a conference and exhibition zone, and the rest will be used for other facilities. However the need for vitality has been recognised and a mixed zoning approach has been adopted, to include housing and entertainment. One of the new features of the current plan is a broader conception of what contributes to economic success. It encompasses high quality residential provision, a good environment, leisure facilities and exciting city life. Thus there is more provision for low-density housing, often in waterfront communities linked to beaches and recreational facilities. However, the lower housing densities will put considerable pressure on the very limited land available for development, and this creates problems for another of the plans aims, which is to stress environmental quality. More and more of the remaining open area will be developed, and the only natural landscape surviving will be a small zone in the centre of the island which serves as a water catchment area. Environmental policy is therefore very much concerned with making the built environment more green by introducing more plants what is referred to as the beautification of Singapore. The plan focuses on green zones defining the boundaries of settlements, and running along transport corridors. The incidental green provision within housing areas is also given considerable attention. Much of the environmental provision, for example golf courses, recreation areas, and beaches, is linked to the prime objective of attracting business. The plan places much emphasis on good leisure provision and the need to exploit Singapores island setting. One way of doing this is through further land reclamation, to create a whole new island devoted to leisure and luxury housing which will stretch from the central area to the airport. A current concern also appears to be how to use the planning system to create opportunities for greater spontaneity: planners have recently given much attention to the concept of the 24-hour city and the cafe society. For example, a promotion has taken place along the Singapore river to create a cafe zone. This has included the realisation, rather late in the day, of the value of retaining older buildings, and the creation of a continuous riverside promenade. Since the relaxation in 1996 of strict guidelines on outdoor eating areas, this has become an extremely popular area in the evenings. Also, in 1998 the Urban Redevelopment Authority created a new entertainment area in the centre of the city which they are promoting as the citys one-stop, dynamic entertainment scene. In conclusion, the economic development of Singapore has been very consciously centrally planned, and the latest strategy is very clearly oriented to establishing Singapore as a leading world city. It is well placed to succeed, for a variety of reasons. 
It can draw upon its historic roots as a world trading centre; it has invested heavily in telecommunications and air transport infrastructure; it is well located in relation to other Asian economies; it has developed a safe and clean environment; and it has utilised the international language of English.", "hypothesis": "Planners have modelled new urban developments on other coastal cities.", "label": "e"} +{"uid": "id_261", "premise": "Urban planning in Singapore British merchants established a trading post in Singapore in the early nineteenth century, and for more than a century trading interests dominated. However, in 1965 the newly independent island state was cut off from its hinterland, and so it set about pursuing a survival strategy. The good international communications it already enjoyed provided a useful base, but it was decided that if Singapore was to secure its economic future, it must develop its industry. To this end, new institutional structures were needed to facilitate, develop, and control foreign investment. One of the most important of these was the Economic Development Board (EDB), an arm of government that developed strategies for attracting investment. Thus from the outset, the Singaporean government was involved in city promotion. Towards the end of the twentieth century, the government realised that, due to limits on both the size of the countrys workforce and its land area, its labour-intensive industries were becoming increasingly uncompetitive. So an economic committee was established which concluded that Singapore should focus on developing as a service centre, and seek to attract company headquarters to serve South East Asia, and develop tourism, banking, and offshore activities. The land required for this service-sector orientation had been acquired in the early 1970s, when the government realised that it lacked the banking infrastructure for a modern economy. So a new banking and corporate district, known as the Golden Shoe, was planned, incorporating the historic commercial area. This district now houses all the major companies and various government financial agencies. Singapores current economic strategy is closely linked to land use and development planning. Although it is already a major city, the current development plan seeks to ensure Singapores continued economic growth through restructuring, to ensure that the facilities needed by future business are planned now. These include transport and telecommunication infrastructure, land, and environmental quality. A major concern is to avoid congestion in the central area, and so the latest plan deviates from previous plans by having a strong decentralisation policy. The plan makes provision for four major regional centres, each serving 800,000 people, but this does not mean that the existing central business district will not also grow. A major extension planned around Marina Bay draws on examples of other world cities, especially those with waterside central areas such as Sydney and San Francisco. The project involves major land reclamation of 667 hectares in total. Part of this has already been developed as a conference and exhibition zone, and the rest will be used for other facilities. However the need for vitality has been recognised and a mixed zoning approach has been adopted, to include housing and entertainment. One of the new features of the current plan is a broader conception of what contributes to economic success. 
It encompasses high quality residential provision, a good environment, leisure facilities and exciting city life. Thus there is more provision for low-density housing, often in waterfront communities linked to beaches and recreational facilities. However, the lower housing densities will put considerable pressure on the very limited land available for development, and this creates problems for another of the plans aims, which is to stress environmental quality. More and more of the remaining open area will be developed, and the only natural landscape surviving will be a small zone in the centre of the island which serves as a water catchment area. Environmental policy is therefore very much concerned with making the built environment more green by introducing more plants what is referred to as the beautification of Singapore. The plan focuses on green zones defining the boundaries of settlements, and running along transport corridors. The incidental green provision within housing areas is also given considerable attention. Much of the environmental provision, for example golf courses, recreation areas, and beaches, is linked to the prime objective of attracting business. The plan places much emphasis on good leisure provision and the need to exploit Singapores island setting. One way of doing this is through further land reclamation, to create a whole new island devoted to leisure and luxury housing which will stretch from the central area to the airport. A current concern also appears to be how to use the planning system to create opportunities for greater spontaneity: planners have recently given much attention to the concept of the 24-hour city and the cafe society. For example, a promotion has taken place along the Singapore river to create a cafe zone. This has included the realisation, rather late in the day, of the value of retaining older buildings, and the creation of a continuous riverside promenade. Since the relaxation in 1996 of strict guidelines on outdoor eating areas, this has become an extremely popular area in the evenings. Also, in 1998 the Urban Redevelopment Authority created a new entertainment area in the centre of the city which they are promoting as the citys one-stop, dynamic entertainment scene. In conclusion, the economic development of Singapore has been very consciously centrally planned, and the latest strategy is very clearly oriented to establishing Singapore as a leading world city. It is well placed to succeed, for a variety of reasons. It can draw upon its historic roots as a world trading centre; it has invested heavily in telecommunications and air transport infrastructure; it is well located in relation to other Asian economies; it has developed a safe and clean environment; and it has utilised the international language of English.", "hypothesis": "Plants and trees are amongst the current priorities for Singapores city planners.", "label": "e"} +{"uid": "id_262", "premise": "Use of cell phones and pagers is not allowed inside the auditorium. Please switch off such devices while you are inside the auditorium. ------ A notice.", "hypothesis": "All those who have such devices will switch them off before they take their seatin the auditorium.", "label": "e"} +{"uid": "id_263", "premise": "Use of cell phones and pagers is not allowed inside the auditorium. Please switch off such devices while you are inside the auditorium. 
------ A notice.", "hypothesis": "Generally people do not bring such devices when they come to attend functions in the auditorium.", "label": "c"} +{"uid": "id_264", "premise": "Use our product to improve memory of our child. It is based on natural herbs and has no harmful side effects. ---- An advertisement of a pharmaceutical company.", "hypothesis": "People generally opt for a medical product which is useful and has no harmful side effects.", "label": "e"} +{"uid": "id_265", "premise": "Use our product to improve memory of our child. It is based on natural herbs and has no harmful side effects. ---- An advertisement of a pharmaceutical company.", "hypothesis": "Improving memory of child is considered as important by many parents.", "label": "e"} +{"uid": "id_266", "premise": "Using Wind-up Cell Phone Chargers So what do you do when your battery on your cell phone runs out and you're forced to use some muscle with your wind-up charger? Fortunately, most chargers are very small and lightweight, even smaller than most cell phones, so they're easy to carry with you and could easily store in a car's glove compartment, a purse or backpack. They typically weigh no more than a couple of ounces. When your phone needs some extra juice, simply connect the wind-up charger to your cell phone's input. To give the phone's battery its power, you'll need to turn the crank vigorously. Most wind-up charger instructions say to crank at a rate of two revolutions per second, although turning the crank slower or faster is fine and will still provide power to the battery. Depending on the model, you can get 25-30 minutes of extra standby power to a cell phone after just a few minutes of solid cranking. You should only be able to get about 6 minutes of call time from the same amount of exercise, however, since it requires more power to send out signals. If you have a hands free set like a Bluetooth earpiece, you can even hold the charger and talk at the same time, since charging is a two-handed operation. As long as you keep turning the handle, the power you provide to charge the phone should be greater than the power needed to keep the phone on. This allows you to talk and provide a charge continuously. What about the different types of inputs on cell phones? Often one of the more frustrating things about losing battery power on your cell phone is when someone else actually has a charger available, but the parts don't fit. Fortunately, many wind-up cell phone chargers come with adapters that fit most phones so you should be able to find the right charge input.", "hypothesis": "Charging your phone with the wind-up charger should give you 25-30 minutes more call time.", "label": "c"} +{"uid": "id_267", "premise": "Using Wind-up Cell Phone Chargers So what do you do when your battery on your cell phone runs out and you're forced to use some muscle with your wind-up charger? Fortunately, most chargers are very small and lightweight, even smaller than most cell phones, so they're easy to carry with you and could easily store in a car's glove compartment, a purse or backpack. They typically weigh no more than a couple of ounces. When your phone needs some extra juice, simply connect the wind-up charger to your cell phone's input. To give the phone's battery its power, you'll need to turn the crank vigorously. Most wind-up charger instructions say to crank at a rate of two revolutions per second, although turning the crank slower or faster is fine and will still provide power to the battery. 
Depending on the model, you can get 25-30 minutes of extra standby power to a cell phone after just a few minutes of solid cranking. You should only be able to get about 6 minutes of call time from the same amount of exercise, however, since it requires more power to send out signals. If you have a hands free set like a Bluetooth earpiece, you can even hold the charger and talk at the same time, since charging is a two-handed operation. As long as you keep turning the handle, the power you provide to charge the phone should be greater than the power needed to keep the phone on. This allows you to talk and provide a charge continuously. What about the different types of inputs on cell phones? Often one of the more frustrating things about losing battery power on your cell phone is when someone else actually has a charger available, but the parts don't fit. Fortunately, many wind-up cell phone chargers come with adapters that fit most phones so you should be able to find the right charge input.", "hypothesis": "The light on the Sidewinder can be difficult to illuminate.", "label": "c"} +{"uid": "id_268", "premise": "Using Wind-up Cell Phone Chargers So what do you do when your battery on your cell phone runs out and you're forced to use some muscle with your wind-up charger? Fortunately, most chargers are very small and lightweight, even smaller than most cell phones, so they're easy to carry with you and could easily store in a car's glove compartment, a purse or backpack. They typically weigh no more than a couple of ounces. When your phone needs some extra juice, simply connect the wind-up charger to your cell phone's input. To give the phone's battery its power, you'll need to turn the crank vigorously. Most wind-up charger instructions say to crank at a rate of two revolutions per second, although turning the crank slower or faster is fine and will still provide power to the battery. Depending on the model, you can get 25-30 minutes of extra standby power to a cell phone after just a few minutes of solid cranking. You should only be able to get about 6 minutes of call time from the same amount of exercise, however, since it requires more power to send out signals. If you have a hands free set like a Bluetooth earpiece, you can even hold the charger and talk at the same time, since charging is a two-handed operation. As long as you keep turning the handle, the power you provide to charge the phone should be greater than the power needed to keep the phone on. This allows you to talk and provide a charge continuously. What about the different types of inputs on cell phones? Often one of the more frustrating things about losing battery power on your cell phone is when someone else actually has a charger available, but the parts don't fit. Fortunately, many wind-up cell phone chargers come with adapters that fit most phones so you should be able to find the right charge input.", "hypothesis": "You can charge your phone with the wind-up charger while having a conversation on your phone.", "label": "e"} +{"uid": "id_269", "premise": "Using Wind-up Cell Phone Chargers So what do you do when your battery on your cell phone runs out and you're forced to use some muscle with your wind-up charger? Fortunately, most chargers are very small and lightweight, even smaller than most cell phones, so they're easy to carry with you and could easily store in a car's glove compartment, a purse or backpack. They typically weigh no more than a couple of ounces. 
When your phone needs some extra juice, simply connect the wind-up charger to your cell phone's input. To give the phone's battery its power, you'll need to turn the crank vigorously. Most wind-up charger instructions say to crank at a rate of two revolutions per second, although turning the crank slower or faster is fine and will still provide power to the battery. Depending on the model, you can get 25-30 minutes of extra standby power to a cell phone after just a few minutes of solid cranking. You should only be able to get about 6 minutes of call time from the same amount of exercise, however, since it requires more power to send out signals. If you have a hands free set like a Bluetooth earpiece, you can even hold the charger and talk at the same time, since charging is a two-handed operation. As long as you keep turning the handle, the power you provide to charge the phone should be greater than the power needed to keep the phone on. This allows you to talk and provide a charge continuously. What about the different types of inputs on cell phones? Often one of the more frustrating things about losing battery power on your cell phone is when someone else actually has a charger available, but the parts don't fit. Fortunately, many wind-up cell phone chargers come with adapters that fit most phones so you should be able to find the right charge input.", "hypothesis": "Adapters for most cell phones can be purchased for the wind-up charger.", "label": "n"} +{"uid": "id_270", "premise": "Using Wind-up Cell Phone Chargers So what do you do when your battery on your cell phone runs out and you're forced to use some muscle with your wind-up charger? Fortunately, most chargers are very small and lightweight, even smaller than most cell phones, so they're easy to carry with you and could easily store in a car's glove compartment, a purse or backpack. They typically weigh no more than a couple of ounces. When your phone needs some extra juice, simply connect the wind-up charger to your cell phone's input. To give the phone's battery its power, you'll need to turn the crank vigorously. Most wind-up charger instructions say to crank at a rate of two revolutions per second, although turning the crank slower or faster is fine and will still provide power to the battery. Depending on the model, you can get 25-30 minutes of extra standby power to a cell phone after just a few minutes of solid cranking. You should only be able to get about 6 minutes of call time from the same amount of exercise, however, since it requires more power to send out signals. If you have a hands free set like a Bluetooth earpiece, you can even hold the charger and talk at the same time, since charging is a two-handed operation. As long as you keep turning the handle, the power you provide to charge the phone should be greater than the power needed to keep the phone on. This allows you to talk and provide a charge continuously. What about the different types of inputs on cell phones? Often one of the more frustrating things about losing battery power on your cell phone is when someone else actually has a charger available, but the parts don't fit. 
Fortunately, many wind-up cell phone chargers come with adapters that fit most phones so you should be able to find the right charge input.", "hypothesis": "The Sidewinder could help you in the even of you losing your phone.", "label": "e"} +{"uid": "id_271", "premise": "Using Wind-up Cell Phone Chargers So what do you do when your battery on your cell phone runs out and you're forced to use some muscle with your wind-up charger? Fortunately, most chargers are very small and lightweight, even smaller than most cell phones, so they're easy to carry with you and could easily store in a car's glove compartment, a purse or backpack. They typically weigh no more than a couple of ounces. When your phone needs some extra juice, simply connect the wind-up charger to your cell phone's input. To give the phone's battery its power, you'll need to turn the crank vigorously. Most wind-up charger instructions say to crank at a rate of two revolutions per second, although turning the crank slower or faster is fine and will still provide power to the battery. Depending on the model, you can get 25-30 minutes of extra standby power to a cell phone after just a few minutes of solid cranking. You should only be able to get about 6 minutes of call time from the same amount of exercise, however, since it requires more power to send out signals. If you have a hands free set like a Bluetooth earpiece, you can even hold the charger and talk at the same time, since charging is a two-handed operation. As long as you keep turning the handle, the power you provide to charge the phone should be greater than the power needed to keep the phone on. This allows you to talk and provide a charge continuously. What about the different types of inputs on cell phones? Often one of the more frustrating things about losing battery power on your cell phone is when someone else actually has a charger available, but the parts don't fit. Fortunately, many wind-up cell phone chargers come with adapters that fit most phones so you should be able to find the right charge input.", "hypothesis": "To charge the phone's battery the wind-up charger needs to be rotated gently.", "label": "c"} +{"uid": "id_272", "premise": "Using Wind-up Cell Phone Chargers So what do you do when your battery on your cell phone runs out and you're forced to use some muscle with your wind-up charger? Fortunately, most chargers are very small and lightweight, even smaller than most cell phones, so they're easy to carry with you and could easily store in a car's glove compartment, a purse or backpack. They typically weigh no more than a couple of ounces. When your phone needs some extra juice, simply connect the wind-up charger to your cell phone's input. To give the phone's battery its power, you'll need to turn the crank vigorously. Most wind-up charger instructions say to crank at a rate of two revolutions per second, although turning the crank slower or faster is fine and will still provide power to the battery. Depending on the model, you can get 25-30 minutes of extra standby power to a cell phone after just a few minutes of solid cranking. You should only be able to get about 6 minutes of call time from the same amount of exercise, however, since it requires more power to send out signals. If you have a hands free set like a Bluetooth earpiece, you can even hold the charger and talk at the same time, since charging is a two-handed operation. 
As long as you keep turning the handle, the power you provide to charge the phone should be greater than the power needed to keep the phone on. This allows you to talk and provide a charge continuously. What about the different types of inputs on cell phones? Often one of the more frustrating things about losing battery power on your cell phone is when someone else actually has a charger available, but the parts don't fit. Fortunately, many wind-up cell phone chargers come with adapters that fit most phones so you should be able to find the right charge input.", "hypothesis": "The wind-up cell phone chargers are smaller enough to fit inside a glove.", "label": "n"} +{"uid": "id_273", "premise": "Venus Flytrap From indigenous myths to John Wyndhams Day of the Triffids and the off-Broadway musical Little Shop of Horrors, the idea of cerebral, carnivorous flora has spooked audiences and readers for centuries. While shrubs and shoots have yet to uproot themselves or show any interest in human beings, however, for some of earths smaller inhabitants arachnids and insects the risk of being trapped and ingested by a plant can be a threat to their daily existence. Easily, the most famous of these predators is the Venus Flytrap, one of only two types of snap traps in the world. Though rarely found growing wild, the Flytrap has captured popular imagination and can be purchased in florists and plant retailers around the world. Part of the Venus Flytraps mysterious aura begins with the tide itself. While it is fairly clear that the second half of the epithet has been given for its insect-trapping ability, the origin of Venus is somewhat more ambiguous. According to the International Carnivorous Plant Society, the plant was first studied in the 17th and 18th centuries, when puritanical mores ruled Western societies and obsession was rife with forbidden human impulses and urges, women were often portrayed in these times as seductresses and temptresses, and botanists are believed to have seen a parallel between the behaviour of the plant in luring and devouring insects and the imagined behaviour of women in luring and trapping witless men. The plant was thus named after the pagan goddess of love and money Venus. The Venus Flytrap is a small plant with six to seven leaves growing out of a bulb-like stem. At the end of each leaf is a trap, which is an opened pod with cilia around the edges like stiff eyelashes. The pod is lined with anthocyanin pigments and sweet-smelling sap to attract flies and other insects. When they fly in, trigger hairs inside the pod sense the intruders movement, and the pod snaps shut. The trigger mechanism is so sophisticated that the plant can differentiate between living creatures and non-edible debris by requiring two trigger hairs to be touched within twenty seconds of each other, or one hair to be touched in quick succession. The plant has no nervous system, and researchers can only hypothesise as to how the rapid shutting movement works. This uncertainty adds to the Venus Flytraps allure. The pod shuts quickly but does not seal entirely at first; scientists have found that tins mechanism allows miniscule insects to escape, as they will not be a source of useful nourishment for the plant. If the creature is large enough, however, the plants flaps will eventually meet to form an airtight compress, and at this point, the digestive process begins. A Venus Flytraps digestive system is remarkably similar to how a human stomach works. 
For somewhere between five and twelve days, the trap secretes acidic digestive juices that dissolve the soft tissue and cell membranes of the insect. These juices also kill any bacteria that have entered with the food, ensuring the plant maintains its hygiene so that it does not begin to rot. Enzymes in the acid help with the digestion of DNA, amino acids, and cell molecules so that every fleshy part of the animal can be consumed. Once the plant has reabsorbed the digestive fluid this time with the added nourishment, the trap reopens and the exoskeleton blows away in the wind. Although transplanted to other locations around the world, the Venus Flytrap is only found natively in an area around Wilmington, North Carolina in the United States. It thrives in bogs, marshes, and wetlands and grows in wet sand and peaty soils. Because these environments are so depleted in nitrogen, they asphyxiate other flora, but the Flytrap overcomes this nutritional poverty by sourcing protein from its insect prey. One of the plants curious features is resilience to flame. It is speculated that the Flytrap evolved this to endure through periodic blazes and to act as a means of survival that its competition lacks. While the Venus Flytrap will not become extinct any time soon (an estimated 3-6 million plants are presently in cultivation), its natural existence is uncertain. In the last survey, only 35,800 Flytraps were found remaining in the wild, and some prominent conservationists have suggested the plant be given the status of vulnerable. Since this research is considerably dated, having taken place in 1992, the present number is considerably lower. The draining and destruction of natural wetlands where the Flytrap lives is considered to be the biggest threat to its existence, as well as people removing the plants from their natural habitat. Punitive measures have been introduced to prevent people from doing this. Ironically, while cultural depictions of perennial killers may persist, the bigger threat is not what meat-eating plants might do to us but what we may do to them.", "hypothesis": "Many botanists would like the Venus Flytrap to be officially recognised as an endangered plant species.", "label": "n"} +{"uid": "id_274", "premise": "Venus Flytrap From indigenous myths to John Wyndhams Day of the Triffids and the off-Broadway musical Little Shop of Horrors, the idea of cerebral, carnivorous flora has spooked audiences and readers for centuries. While shrubs and shoots have yet to uproot themselves or show any interest in human beings, however, for some of earths smaller inhabitants arachnids and insects the risk of being trapped and ingested by a plant can be a threat to their daily existence. Easily, the most famous of these predators is the Venus Flytrap, one of only two types of snap traps in the world. Though rarely found growing wild, the Flytrap has captured popular imagination and can be purchased in florists and plant retailers around the world. Part of the Venus Flytraps mysterious aura begins with the tide itself. While it is fairly clear that the second half of the epithet has been given for its insect-trapping ability, the origin of Venus is somewhat more ambiguous. 
According to the International Carnivorous Plant Society, the plant was first studied in the 17th and 18th centuries, when puritanical mores ruled Western societies and obsession was rife with forbidden human impulses and urges, women were often portrayed in these times as seductresses and temptresses, and botanists are believed to have seen a parallel between the behaviour of the plant in luring and devouring insects and the imagined behaviour of women in luring and trapping witless men. The plant was thus named after the pagan goddess of love and money Venus. The Venus Flytrap is a small plant with six to seven leaves growing out of a bulb-like stem. At the end of each leaf is a trap, which is an opened pod with cilia around the edges like stiff eyelashes. The pod is lined with anthocyanin pigments and sweet-smelling sap to attract flies and other insects. When they fly in, trigger hairs inside the pod sense the intruders movement, and the pod snaps shut. The trigger mechanism is so sophisticated that the plant can differentiate between living creatures and non-edible debris by requiring two trigger hairs to be touched within twenty seconds of each other, or one hair to be touched in quick succession. The plant has no nervous system, and researchers can only hypothesise as to how the rapid shutting movement works. This uncertainty adds to the Venus Flytraps allure. The pod shuts quickly but does not seal entirely at first; scientists have found that tins mechanism allows miniscule insects to escape, as they will not be a source of useful nourishment for the plant. If the creature is large enough, however, the plants flaps will eventually meet to form an airtight compress, and at this point, the digestive process begins. A Venus Flytraps digestive system is remarkably similar to how a human stomach works. For somewhere between five and twelve days, the trap secretes acidic digestive juices that dissolve the soft tissue and cell membranes of the insect. These juices also kill any bacteria that have entered with the food, ensuring the plant maintains its hygiene so that it does not begin to rot. Enzymes in the acid help with the digestion of DNA, amino acids, and cell molecules so that every fleshy part of the animal can be consumed. Once the plant has reabsorbed the digestive fluid this time with the added nourishment, the trap reopens and the exoskeleton blows away in the wind. Although transplanted to other locations around the world, the Venus Flytrap is only found natively in an area around Wilmington, North Carolina in the United States. It thrives in bogs, marshes, and wetlands and grows in wet sand and peaty soils. Because these environments are so depleted in nitrogen, they asphyxiate other flora, but the Flytrap overcomes this nutritional poverty by sourcing protein from its insect prey. One of the plants curious features is resilience to flame. It is speculated that the Flytrap evolved this to endure through periodic blazes and to act as a means of survival that its competition lacks. While the Venus Flytrap will not become extinct any time soon (an estimated 3-6 million plants are presently in cultivation), its natural existence is uncertain. In the last survey, only 35,800 Flytraps were found remaining in the wild, and some prominent conservationists have suggested the plant be given the status of vulnerable. Since this research is considerably dated, having taken place in 1992, the present number is considerably lower. 
The draining and destruction of natural wetlands where the Flytrap lives is considered to be the biggest threat to its existence, as well as people removing the plants from their natural habitat. Punitive measures have been introduced to prevent people from doing this. Ironically, while cultural depictions of perennial killers may persist, the bigger threat is not what meat-eating plants might do to us but what we may do to them.", "hypothesis": "The Venus Flytrap can withstand some exposure to fire.", "label": "e"} +{"uid": "id_275", "premise": "Venus Flytrap From indigenous myths to John Wyndhams Day of the Triffids and the off-Broadway musical Little Shop of Horrors, the idea of cerebral, carnivorous flora has spooked audiences and readers for centuries. While shrubs and shoots have yet to uproot themselves or show any interest in human beings, however, for some of earths smaller inhabitants arachnids and insects the risk of being trapped and ingested by a plant can be a threat to their daily existence. Easily, the most famous of these predators is the Venus Flytrap, one of only two types of snap traps in the world. Though rarely found growing wild, the Flytrap has captured popular imagination and can be purchased in florists and plant retailers around the world. Part of the Venus Flytraps mysterious aura begins with the tide itself. While it is fairly clear that the second half of the epithet has been given for its insect-trapping ability, the origin of Venus is somewhat more ambiguous. According to the International Carnivorous Plant Society, the plant was first studied in the 17th and 18th centuries, when puritanical mores ruled Western societies and obsession was rife with forbidden human impulses and urges, women were often portrayed in these times as seductresses and temptresses, and botanists are believed to have seen a parallel between the behaviour of the plant in luring and devouring insects and the imagined behaviour of women in luring and trapping witless men. The plant was thus named after the pagan goddess of love and money Venus. The Venus Flytrap is a small plant with six to seven leaves growing out of a bulb-like stem. At the end of each leaf is a trap, which is an opened pod with cilia around the edges like stiff eyelashes. The pod is lined with anthocyanin pigments and sweet-smelling sap to attract flies and other insects. When they fly in, trigger hairs inside the pod sense the intruders movement, and the pod snaps shut. The trigger mechanism is so sophisticated that the plant can differentiate between living creatures and non-edible debris by requiring two trigger hairs to be touched within twenty seconds of each other, or one hair to be touched in quick succession. The plant has no nervous system, and researchers can only hypothesise as to how the rapid shutting movement works. This uncertainty adds to the Venus Flytraps allure. The pod shuts quickly but does not seal entirely at first; scientists have found that tins mechanism allows miniscule insects to escape, as they will not be a source of useful nourishment for the plant. If the creature is large enough, however, the plants flaps will eventually meet to form an airtight compress, and at this point, the digestive process begins. A Venus Flytraps digestive system is remarkably similar to how a human stomach works. For somewhere between five and twelve days, the trap secretes acidic digestive juices that dissolve the soft tissue and cell membranes of the insect. 
These juices also kill any bacteria that have entered with the food, ensuring the plant maintains its hygiene so that it does not begin to rot. Enzymes in the acid help with the digestion of DNA, amino acids, and cell molecules so that every fleshy part of the animal can be consumed. Once the plant has reabsorbed the digestive fluid this time with the added nourishment, the trap reopens and the exoskeleton blows away in the wind. Although transplanted to other locations around the world, the Venus Flytrap is only found natively in an area around Wilmington, North Carolina in the United States. It thrives in bogs, marshes, and wetlands and grows in wet sand and peaty soils. Because these environments are so depleted in nitrogen, they asphyxiate other flora, but the Flytrap overcomes this nutritional poverty by sourcing protein from its insect prey. One of the plants curious features is resilience to flame. It is speculated that the Flytrap evolved this to endure through periodic blazes and to act as a means of survival that its competition lacks. While the Venus Flytrap will not become extinct any time soon (an estimated 3-6 million plants are presently in cultivation), its natural existence is uncertain. In the last survey, only 35,800 Flytraps were found remaining in the wild, and some prominent conservationists have suggested the plant be given the status of vulnerable. Since this research is considerably dated, having taken place in 1992, the present number is considerably lower. The draining and destruction of natural wetlands where the Flytrap lives is considered to be the biggest threat to its existence, as well as people removing the plants from their natural habitat. Punitive measures have been introduced to prevent people from doing this. Ironically, while cultural depictions of perennial killers may persist, the bigger threat is not what meat-eating plants might do to us but what we may do to them.", "hypothesis": "Only 35,800 Venus Flytraps now survive in their natural habitats.", "label": "c"} +{"uid": "id_276", "premise": "Venus Flytrap From indigenous myths to John Wyndhams Day of the Triffids and the off-Broadway musical Little Shop of Horrors, the idea of cerebral, carnivorous flora has spooked audiences and readers for centuries. While shrubs and shoots have yet to uproot themselves or show any interest in human beings, however, for some of earths smaller inhabitants arachnids and insects the risk of being trapped and ingested by a plant can be a threat to their daily existence. Easily, the most famous of these predators is the Venus Flytrap, one of only two types of snap traps in the world. Though rarely found growing wild, the Flytrap has captured popular imagination and can be purchased in florists and plant retailers around the world. Part of the Venus Flytraps mysterious aura begins with the tide itself. While it is fairly clear that the second half of the epithet has been given for its insect-trapping ability, the origin of Venus is somewhat more ambiguous. According to the International Carnivorous Plant Society, the plant was first studied in the 17th and 18th centuries, when puritanical mores ruled Western societies and obsession was rife with forbidden human impulses and urges, women were often portrayed in these times as seductresses and temptresses, and botanists are believed to have seen a parallel between the behaviour of the plant in luring and devouring insects and the imagined behaviour of women in luring and trapping witless men. 
The plant was thus named after the pagan goddess of love and money Venus. The Venus Flytrap is a small plant with six to seven leaves growing out of a bulb-like stem. At the end of each leaf is a trap, which is an opened pod with cilia around the edges like stiff eyelashes. The pod is lined with anthocyanin pigments and sweet-smelling sap to attract flies and other insects. When they fly in, trigger hairs inside the pod sense the intruders movement, and the pod snaps shut. The trigger mechanism is so sophisticated that the plant can differentiate between living creatures and non-edible debris by requiring two trigger hairs to be touched within twenty seconds of each other, or one hair to be touched in quick succession. The plant has no nervous system, and researchers can only hypothesise as to how the rapid shutting movement works. This uncertainty adds to the Venus Flytraps allure. The pod shuts quickly but does not seal entirely at first; scientists have found that tins mechanism allows miniscule insects to escape, as they will not be a source of useful nourishment for the plant. If the creature is large enough, however, the plants flaps will eventually meet to form an airtight compress, and at this point, the digestive process begins. A Venus Flytraps digestive system is remarkably similar to how a human stomach works. For somewhere between five and twelve days, the trap secretes acidic digestive juices that dissolve the soft tissue and cell membranes of the insect. These juices also kill any bacteria that have entered with the food, ensuring the plant maintains its hygiene so that it does not begin to rot. Enzymes in the acid help with the digestion of DNA, amino acids, and cell molecules so that every fleshy part of the animal can be consumed. Once the plant has reabsorbed the digestive fluid this time with the added nourishment, the trap reopens and the exoskeleton blows away in the wind. Although transplanted to other locations around the world, the Venus Flytrap is only found natively in an area around Wilmington, North Carolina in the United States. It thrives in bogs, marshes, and wetlands and grows in wet sand and peaty soils. Because these environments are so depleted in nitrogen, they asphyxiate other flora, but the Flytrap overcomes this nutritional poverty by sourcing protein from its insect prey. One of the plants curious features is resilience to flame. It is speculated that the Flytrap evolved this to endure through periodic blazes and to act as a means of survival that its competition lacks. While the Venus Flytrap will not become extinct any time soon (an estimated 3-6 million plants are presently in cultivation), its natural existence is uncertain. In the last survey, only 35,800 Flytraps were found remaining in the wild, and some prominent conservationists have suggested the plant be given the status of vulnerable. Since this research is considerably dated, having taken place in 1992, the present number is considerably lower. The draining and destruction of natural wetlands where the Flytrap lives is considered to be the biggest threat to its existence, as well as people removing the plants from their natural habitat. Punitive measures have been introduced to prevent people from doing this. 
Ironically, while cultural depictions of perennial killers may persist, the bigger threat is not what meat-eating plants might do to us but what we may do to them.", "hypothesis": "Human interference is a major factor in the decline of wild Venus Flytraps.", "label": "e"} +{"uid": "id_277", "premise": "Venus in Transit On 8 June 2004, more than half the population of the world were treated to a rare astronomical event. For over six hours, the planet Venus steadily inched its way over the surface of the Sun. This transit of Venus was the first since 6 December 1882. On that occasion, the American astronomer Professor Simon Newcomb led a party to South Africa to observe the event. They were based at a girls school, where it is alleged the combined forces of three schoolmistresses outperformed the professionals with the accuracy of their observations. For centuries, transits of Venus have drawn explorers and astronomers alike to the four corners of the globe. And you can put it all down to the extraordinary polymath Edmond Halley. In November 1677, Halley observed a transit of the innermost planet, Mercury, from the desolate island of St Helena in the South Pacific. He realised that, from different latitudes, the passage of the planet across the Suns disc would appear to differ. By timing the transit from two widely-separated locations, teams of astronomers could calculate the parallax angle the apparent difference in position of an astronomical body due to a difference in the observers position. Calculating this angle would allow astronomers to measure what was then the ultimate goal: the distance of the Earth from the Sun. This distance is known as the astronomical unit or AU. Halley was aware that the AU was one of the most fundamental of all astronomical measurements. Johannes Kepler, in the early 17th century, had shown that the distances of the planets from the Sun governed their orbital speeds, which were easily measurable. But no-one had found a way to calculate accurate distances to the planets from the Earth. The goal was to measure the AU; then, knowing the orbital speeds of all the other planets round the Sun, the scale of the Solar System would fall into place. However, Halley realised that Mercury was so far away that its parallax angle would be very difficult to determine. As Venus was closer to the Earth, its parallax angle would be larger, and Halley worked out that by using Venus it would be possible to measure the Suns distance to 1 part in 500. But there was a problem: transits of Venus, unlike those of Mercury, are rare, occurring in pairs roughly eight years apart every hundred or so years. Nevertheless, he accurately predicted that Venus would cross the face of the Sun in both 1761 and 1769 though he didnt survive to see either. Inspired by Halleys suggestion of a way to pin down the scale of the Solar System, teams of British and French astronomers set out on expeditions to places as diverse as India and Siberia. But things werent helped by Britain and France being at war. The person who deserves most sympathy is the French astronomer Guillaume Le Gentil. He was thwarted by the fact that the British were besieging his observation site at Pondicherry in India. Fleeing on a French warship crossing the Indian Ocean, Le Gentil saw a wonderful transit but the ships pitching and rolling ruled out any attempt at making accurate observations. 
Undaunted, he remained south of the equator, keeping himself busy by studying the islands of Mauritius and Madagascar before setting off to observe the next transit in the Philippines. Ironically after travelling nearly 50,000 kilometres, his view was clouded out at the last moment, a very dispiriting experience. While the early transit timings were as precise as instruments would allow, the measurements were dogged by the black drop effect. When Venus begins to cross the Suns disc, it looks smeared not circular which makes it difficult to establish timings. This is due to diffraction of light. The second problem is that Venus exhibits a halo of light when it is seen just outside the Suns disc. While this showed astronomers that Venus was surrounded by a thick layer of gases refracting sunlight around it, both effects made it impossible to obtain accurate timings. But astronomers laboured hard to analyse the results of these expeditions to observe Venus transits. Johann Franz Encke, Director of the Berlin Observatory, finally determined a value for the AU based on all these parallax measurements: 153,340,000 km. Reasonably accurate for the time, that is quite close to todays value of 149,597,870 km, determined by radar, which has now superseded transits and all other methods in accuracy. The AU is a cosmic measuring rod, and the basis of how we scale the Universe today. The parallax principle can be extended to measure the distances to the stars. If we look at a star in January when Earth is at one point in its orbit it will seem to be in a different position from where it appears six months later. Knowing the width of Earths orbit, the parallax shift lets astronomers calculate the distance. June 2004s transit of Venus was thus more of an astronomical spectacle than a scientifically important event. But such transits have paved the way for what might prove to be one of the most vital breakthroughs in the cosmos detecting Earth-sized planets orbiting other stars.", "hypothesis": "Le Gentil managed to observe a second Venus transit.", "label": "c"} +{"uid": "id_278", "premise": "Venus in Transit On 8 June 2004, more than half the population of the world were treated to a rare astronomical event. For over six hours, the planet Venus steadily inched its way over the surface of the Sun. This transit of Venus was the first since 6 December 1882. On that occasion, the American astronomer Professor Simon Newcomb led a party to South Africa to observe the event. They were based at a girls school, where it is alleged the combined forces of three schoolmistresses outperformed the professionals with the accuracy of their observations. For centuries, transits of Venus have drawn explorers and astronomers alike to the four corners of the globe. And you can put it all down to the extraordinary polymath Edmond Halley. In November 1677, Halley observed a transit of the innermost planet, Mercury, from the desolate island of St Helena in the South Pacific. He realised that, from different latitudes, the passage of the planet across the Suns disc would appear to differ. By timing the transit from two widely-separated locations, teams of astronomers could calculate the parallax angle the apparent difference in position of an astronomical body due to a difference in the observers position. Calculating this angle would allow astronomers to measure what was then the ultimate goal: the distance of the Earth from the Sun. This distance is known as the astronomical unit or AU. 
Halley was aware that the AU was one of the most fundamental of all astronomical measurements. Johannes Kepler, in the early 17th century, had shown that the distances of the planets from the Sun governed their orbital speeds, which were easily measurable. But no-one had found a way to calculate accurate distances to the planets from the Earth. The goal was to measure the AU; then, knowing the orbital speeds of all the other planets round the Sun, the scale of the Solar System would fall into place. However, Halley realised that Mercury was so far away that its parallax angle would be very difficult to determine. As Venus was closer to the Earth, its parallax angle would be larger, and Halley worked out that by using Venus it would be possible to measure the Suns distance to 1 part in 500. But there was a problem: transits of Venus, unlike those of Mercury, are rare, occurring in pairs roughly eight years apart every hundred or so years. Nevertheless, he accurately predicted that Venus would cross the face of the Sun in both 1761 and 1769 though he didnt survive to see either. Inspired by Halleys suggestion of a way to pin down the scale of the Solar System, teams of British and French astronomers set out on expeditions to places as diverse as India and Siberia. But things werent helped by Britain and France being at war. The person who deserves most sympathy is the French astronomer Guillaume Le Gentil. He was thwarted by the fact that the British were besieging his observation site at Pondicherry in India. Fleeing on a French warship crossing the Indian Ocean, Le Gentil saw a wonderful transit but the ships pitching and rolling ruled out any attempt at making accurate observations. Undaunted, he remained south of the equator, keeping himself busy by studying the islands of Mauritius and Madagascar before setting off to observe the next transit in the Philippines. Ironically after travelling nearly 50,000 kilometres, his view was clouded out at the last moment, a very dispiriting experience. While the early transit timings were as precise as instruments would allow, the measurements were dogged by the black drop effect. When Venus begins to cross the Suns disc, it looks smeared not circular which makes it difficult to establish timings. This is due to diffraction of light. The second problem is that Venus exhibits a halo of light when it is seen just outside the Suns disc. While this showed astronomers that Venus was surrounded by a thick layer of gases refracting sunlight around it, both effects made it impossible to obtain accurate timings. But astronomers laboured hard to analyse the results of these expeditions to observe Venus transits. Johann Franz Encke, Director of the Berlin Observatory, finally determined a value for the AU based on all these parallax measurements: 153,340,000 km. Reasonably accurate for the time, that is quite close to todays value of 149,597,870 km, determined by radar, which has now superseded transits and all other methods in accuracy. The AU is a cosmic measuring rod, and the basis of how we scale the Universe today. The parallax principle can be extended to measure the distances to the stars. If we look at a star in January when Earth is at one point in its orbit it will seem to be in a different position from where it appears six months later. Knowing the width of Earths orbit, the parallax shift lets astronomers calculate the distance. June 2004s transit of Venus was thus more of an astronomical spectacle than a scientifically important event. 
But such transits have paved the way for what might prove to be one of the most vital breakthroughs in the cosmos detecting Earth-sized planets orbiting other stars.", "hypothesis": "Halley observed one transit of the planet Venus.", "label": "c"} +{"uid": "id_279", "premise": "Venus in Transit On 8 June 2004, more than half the population of the world were treated to a rare astronomical event. For over six hours, the planet Venus steadily inched its way over the surface of the Sun. This transit of Venus was the first since 6 December 1882. On that occasion, the American astronomer Professor Simon Newcomb led a party to South Africa to observe the event. They were based at a girls school, where it is alleged the combined forces of three schoolmistresses outperformed the professionals with the accuracy of their observations. For centuries, transits of Venus have drawn explorers and astronomers alike to the four corners of the globe. And you can put it all down to the extraordinary polymath Edmond Halley. In November 1677, Halley observed a transit of the innermost planet, Mercury, from the desolate island of St Helena in the South Pacific. He realised that, from different latitudes, the passage of the planet across the Suns disc would appear to differ. By timing the transit from two widely-separated locations, teams of astronomers could calculate the parallax angle the apparent difference in position of an astronomical body due to a difference in the observers position. Calculating this angle would allow astronomers to measure what was then the ultimate goal: the distance of the Earth from the Sun. This distance is known as the astronomical unit or AU. Halley was aware that the AU was one of the most fundamental of all astronomical measurements. Johannes Kepler, in the early 17th century, had shown that the distances of the planets from the Sun governed their orbital speeds, which were easily measurable. But no-one had found a way to calculate accurate distances to the planets from the Earth. The goal was to measure the AU; then, knowing the orbital speeds of all the other planets round the Sun, the scale of the Solar System would fall into place. However, Halley realised that Mercury was so far away that its parallax angle would be very difficult to determine. As Venus was closer to the Earth, its parallax angle would be larger, and Halley worked out that by using Venus it would be possible to measure the Suns distance to 1 part in 500. But there was a problem: transits of Venus, unlike those of Mercury, are rare, occurring in pairs roughly eight years apart every hundred or so years. Nevertheless, he accurately predicted that Venus would cross the face of the Sun in both 1761 and 1769 though he didnt survive to see either. Inspired by Halleys suggestion of a way to pin down the scale of the Solar System, teams of British and French astronomers set out on expeditions to places as diverse as India and Siberia. But things werent helped by Britain and France being at war. The person who deserves most sympathy is the French astronomer Guillaume Le Gentil. He was thwarted by the fact that the British were besieging his observation site at Pondicherry in India. Fleeing on a French warship crossing the Indian Ocean, Le Gentil saw a wonderful transit but the ships pitching and rolling ruled out any attempt at making accurate observations. 
Undaunted, he remained south of the equator, keeping himself busy by studying the islands of Mauritius and Madagascar before setting off to observe the next transit in the Philippines. Ironically after travelling nearly 50,000 kilometres, his view was clouded out at the last moment, a very dispiriting experience. While the early transit timings were as precise as instruments would allow, the measurements were dogged by the black drop effect. When Venus begins to cross the Suns disc, it looks smeared not circular which makes it difficult to establish timings. This is due to diffraction of light. The second problem is that Venus exhibits a halo of light when it is seen just outside the Suns disc. While this showed astronomers that Venus was surrounded by a thick layer of gases refracting sunlight around it, both effects made it impossible to obtain accurate timings. But astronomers laboured hard to analyse the results of these expeditions to observe Venus transits. Johann Franz Encke, Director of the Berlin Observatory, finally determined a value for the AU based on all these parallax measurements: 153,340,000 km. Reasonably accurate for the time, that is quite close to todays value of 149,597,870 km, determined by radar, which has now superseded transits and all other methods in accuracy. The AU is a cosmic measuring rod, and the basis of how we scale the Universe today. The parallax principle can be extended to measure the distances to the stars. If we look at a star in January when Earth is at one point in its orbit it will seem to be in a different position from where it appears six months later. Knowing the width of Earths orbit, the parallax shift lets astronomers calculate the distance. June 2004s transit of Venus was thus more of an astronomical spectacle than a scientifically important event. But such transits have paved the way for what might prove to be one of the most vital breakthroughs in the cosmos detecting Earth-sized planets orbiting other stars.", "hypothesis": "Early astronomers suspected that the atmosphere on Venus was toxic.", "label": "n"} +{"uid": "id_280", "premise": "Venus in Transit On 8 June 2004, more than half the population of the world were treated to a rare astronomical event. For over six hours, the planet Venus steadily inched its way over the surface of the Sun. This transit of Venus was the first since 6 December 1882. On that occasion, the American astronomer Professor Simon Newcomb led a party to South Africa to observe the event. They were based at a girls school, where it is alleged the combined forces of three schoolmistresses outperformed the professionals with the accuracy of their observations. For centuries, transits of Venus have drawn explorers and astronomers alike to the four corners of the globe. And you can put it all down to the extraordinary polymath Edmond Halley. In November 1677, Halley observed a transit of the innermost planet, Mercury, from the desolate island of St Helena in the South Pacific. He realised that, from different latitudes, the passage of the planet across the Suns disc would appear to differ. By timing the transit from two widely-separated locations, teams of astronomers could calculate the parallax angle the apparent difference in position of an astronomical body due to a difference in the observers position. Calculating this angle would allow astronomers to measure what was then the ultimate goal: the distance of the Earth from the Sun. This distance is known as the astronomical unit or AU. 
Halley was aware that the AU was one of the most fundamental of all astronomical measurements. Johannes Kepler, in the early 17th century, had shown that the distances of the planets from the Sun governed their orbital speeds, which were easily measurable. But no-one had found a way to calculate accurate distances to the planets from the Earth. The goal was to measure the AU; then, knowing the orbital speeds of all the other planets round the Sun, the scale of the Solar System would fall into place. However, Halley realised that Mercury was so far away that its parallax angle would be very difficult to determine. As Venus was closer to the Earth, its parallax angle would be larger, and Halley worked out that by using Venus it would be possible to measure the Suns distance to 1 part in 500. But there was a problem: transits of Venus, unlike those of Mercury, are rare, occurring in pairs roughly eight years apart every hundred or so years. Nevertheless, he accurately predicted that Venus would cross the face of the Sun in both 1761 and 1769 though he didnt survive to see either. Inspired by Halleys suggestion of a way to pin down the scale of the Solar System, teams of British and French astronomers set out on expeditions to places as diverse as India and Siberia. But things werent helped by Britain and France being at war. The person who deserves most sympathy is the French astronomer Guillaume Le Gentil. He was thwarted by the fact that the British were besieging his observation site at Pondicherry in India. Fleeing on a French warship crossing the Indian Ocean, Le Gentil saw a wonderful transit but the ships pitching and rolling ruled out any attempt at making accurate observations. Undaunted, he remained south of the equator, keeping himself busy by studying the islands of Mauritius and Madagascar before setting off to observe the next transit in the Philippines. Ironically after travelling nearly 50,000 kilometres, his view was clouded out at the last moment, a very dispiriting experience. While the early transit timings were as precise as instruments would allow, the measurements were dogged by the black drop effect. When Venus begins to cross the Suns disc, it looks smeared not circular which makes it difficult to establish timings. This is due to diffraction of light. The second problem is that Venus exhibits a halo of light when it is seen just outside the Suns disc. While this showed astronomers that Venus was surrounded by a thick layer of gases refracting sunlight around it, both effects made it impossible to obtain accurate timings. But astronomers laboured hard to analyse the results of these expeditions to observe Venus transits. Johann Franz Encke, Director of the Berlin Observatory, finally determined a value for the AU based on all these parallax measurements: 153,340,000 km. Reasonably accurate for the time, that is quite close to todays value of 149,597,870 km, determined by radar, which has now superseded transits and all other methods in accuracy. The AU is a cosmic measuring rod, and the basis of how we scale the Universe today. The parallax principle can be extended to measure the distances to the stars. If we look at a star in January when Earth is at one point in its orbit it will seem to be in a different position from where it appears six months later. Knowing the width of Earths orbit, the parallax shift lets astronomers calculate the distance. June 2004s transit of Venus was thus more of an astronomical spectacle than a scientifically important event. 
But such transits have paved the way for what might prove to be one of the most vital breakthroughs in the cosmos detecting Earth-sized planets orbiting other stars.", "hypothesis": "The shape of Venus appears distorted when it starts to pass in front of the Sun.", "label": "e"} +{"uid": "id_281", "premise": "Venus in Transit On 8 June 2004, more than half the population of the world were treated to a rare astronomical event. For over six hours, the planet Venus steadily inched its way over the surface of the Sun. This transit of Venus was the first since 6 December 1882. On that occasion, the American astronomer Professor Simon Newcomb led a party to South Africa to observe the event. They were based at a girls school, where it is alleged the combined forces of three schoolmistresses outperformed the professionals with the accuracy of their observations. For centuries, transits of Venus have drawn explorers and astronomers alike to the four corners of the globe. And you can put it all down to the extraordinary polymath Edmond Halley. In November 1677, Halley observed a transit of the innermost planet, Mercury, from the desolate island of St Helena in the South Pacific. He realised that, from different latitudes, the passage of the planet across the Suns disc would appear to differ. By timing the transit from two widely-separated locations, teams of astronomers could calculate the parallax angle the apparent difference in position of an astronomical body due to a difference in the observers position. Calculating this angle would allow astronomers to measure what was then the ultimate goal: the distance of the Earth from the Sun. This distance is known as the astronomical unit or AU. Halley was aware that the AU was one of the most fundamental of all astronomical measurements. Johannes Kepler, in the early 17th century, had shown that the distances of the planets from the Sun governed their orbital speeds, which were easily measurable. But no-one had found a way to calculate accurate distances to the planets from the Earth. The goal was to measure the AU; then, knowing the orbital speeds of all the other planets round the Sun, the scale of the Solar System would fall into place. However, Halley realised that Mercury was so far away that its parallax angle would be very difficult to determine. As Venus was closer to the Earth, its parallax angle would be larger, and Halley worked out that by using Venus it would be possible to measure the Suns distance to 1 part in 500. But there was a problem: transits of Venus, unlike those of Mercury, are rare, occurring in pairs roughly eight years apart every hundred or so years. Nevertheless, he accurately predicted that Venus would cross the face of the Sun in both 1761 and 1769 though he didnt survive to see either. Inspired by Halleys suggestion of a way to pin down the scale of the Solar System, teams of British and French astronomers set out on expeditions to places as diverse as India and Siberia. But things werent helped by Britain and France being at war. The person who deserves most sympathy is the French astronomer Guillaume Le Gentil. He was thwarted by the fact that the British were besieging his observation site at Pondicherry in India. Fleeing on a French warship crossing the Indian Ocean, Le Gentil saw a wonderful transit but the ships pitching and rolling ruled out any attempt at making accurate observations. 
Undaunted, he remained south of the equator, keeping himself busy by studying the islands of Mauritius and Madagascar before setting off to observe the next transit in the Philippines. Ironically after travelling nearly 50,000 kilometres, his view was clouded out at the last moment, a very dispiriting experience. While the early transit timings were as precise as instruments would allow, the measurements were dogged by the black drop effect. When Venus begins to cross the Suns disc, it looks smeared not circular which makes it difficult to establish timings. This is due to diffraction of light. The second problem is that Venus exhibits a halo of light when it is seen just outside the Suns disc. While this showed astronomers that Venus was surrounded by a thick layer of gases refracting sunlight around it, both effects made it impossible to obtain accurate timings. But astronomers laboured hard to analyse the results of these expeditions to observe Venus transits. Johann Franz Encke, Director of the Berlin Observatory, finally determined a value for the AU based on all these parallax measurements: 153,340,000 km. Reasonably accurate for the time, that is quite close to todays value of 149,597,870 km, determined by radar, which has now superseded transits and all other methods in accuracy. The AU is a cosmic measuring rod, and the basis of how we scale the Universe today. The parallax principle can be extended to measure the distances to the stars. If we look at a star in January when Earth is at one point in its orbit it will seem to be in a different position from where it appears six months later. Knowing the width of Earths orbit, the parallax shift lets astronomers calculate the distance. June 2004s transit of Venus was thus more of an astronomical spectacle than a scientifically important event. But such transits have paved the way for what might prove to be one of the most vital breakthroughs in the cosmos detecting Earth-sized planets orbiting other stars.", "hypothesis": "The parallax principle allows astronomers to work out how far away distant stars are from the Earth.", "label": "e"} +{"uid": "id_282", "premise": "Venus in transit June 2004 saw the first passage, known as a transit, of the planet Venus across the face of the Sun in 122 years. Transits have helped shape our view of the whole Universe, as Heather Cooper and Nigel Henbest explain On 8 June 2004, more than half the population of the world were treated to a rare astronomical event. For over six hours, the planet Venus steadily inched its way over the surface of the Sun. This transit of Venus was the first since 6 December 1882. On that occasion, the American astronomer Professor Simon Newcomb led a party to South Africa to observe the event. They were based at a girls school, where - it is alleged - the combined forces of three schoolmistresses outperformed the professionals with the accuracy of their observations. For centuries, transits of Venus have drawn explorers and astronomers alike to the four corners of the globe. And you can put it all down to the extraordinary polymath Edmond Halley. In November 1677, Halley observed a transit of the innermost planet, Mercury, from the desolate island of St Helena in the South Pacific. He realised that, from different latitudes, the passage of the planet across the Suns disc would appear to differ. 
By timing the transit from two widely-separated locations, teams of astronomers could calculate the parallax angle - the apparent difference in position of an astronomical body due to a difference in the observers position. Calculating this angle would allow astronomers to measure what was then the ultimate goal: the distance of the Earth from the Sun. This distance is known as the astronomical unit or AU. Halley was aware that the AU was one of the most fundamental of all astronomical measurements. Johannes Kepler, in the early 17 th century, had shown that the distances of the planets from the Sun governed their orbital speeds, which were easily measurable. But no-one had found a way to calculate accurate distances to the planets from the Earth. The goal was to measure the AU; then, knowing the orbital speeds of all the other planets round the Sun, the scale of the Solar System would fall into place. However, Halley realised that Mercury was so far away that its parallax angle would be very difficult to determine. As Venus was closer to the Earth, its parallax angle would be larger, and Halley worked out that by using Venus it would be possible to measure the Suns distance to 1 part in 500. But there was a problem: transits of Venus, unlike those of Mercury, are rare, occurring in pairs roughly eight years apart every hundred or so years. Nevertheless, he accurately predicted that Venus would cross the face of the Sun in both 1761 and 1769 - though he didnt survive to see either. Inspired by Halleys suggestion of a way to pin down the scale of the Solar System, teams of British and French astronomers set out on expeditions to places as diverse as India and Siberia. But things werent helped by Britain and France being at war. The person who deserves most sympathy is the French astronomer Guillaume Le Gentil. He was thwarted by the fact that the British were besieging his observation site at Pondicherry in India. Fleeing on a French warship crossing the Indian Ocean, Le Gentil saw a wonderful transit - but the ships pitching and rolling ruled out any attempt at making accurate observations. Undaunted, he remained south of the equator, keeping himself busy by studying the islands of Mauritius and Madagascar before setting off to observe the next transit in the Philippines. Ironically after travelling nearly 50,000 kilometres, his view was clouded out at the last moment, a very dispiriting experience. While the early transit timings were as precise as instruments would allow, the measurements were dogged by the black drop effect. When Venus begins to cross the Suns disc, it looks smeared not circular - which 32makes it difficult to establish timings. This is due to diffraction of light. The second problem is that Venus exhibits a halo of light when it is seen just outside the Suns disc. While this showed astronomers that Venus was surrounded by a thick layer of gases refracting sunlight around it, both effects made it impossible to obtain accurate timings. But astronomers laboured hard to analyse the results of these expeditions to observe Venus transits. Johann Franz Encke, Director of the Berlin Observatory, finally determined a value for the AU based on all these parallax measurements: 153,340,000 km. Reasonably accurate for the time, that is quite close to todays value of 149,597,870 km, determined by radar, which has now superseded transits and all other methods in accuracy. The AU is a cosmic measuring rod, and the basis of how we scale the Universe today. 
The parallax principle can be extended to measure the distances to the stars. If we look at a star in January - when Earth is at one point in its orbit - it will seem to be in a different position from where it appears six months later. Knowing the width of Earths orbit, the parallax shift lets astronomers calculate the distance. June 2004s transit of Venus was thus more of an astronomical spectacle than a scientifically important event. But such transits have paved the way for what might prove to be one of the most vital breakthroughs in the cosmos - detecting Earth-sized planets orbiting other stars .", "hypothesis": "Halley observed one transit of the planet Venus.", "label": "c"} +{"uid": "id_283", "premise": "Venus in transit June 2004 saw the first passage, known as a transit, of the planet Venus across the face of the Sun in 122 years. Transits have helped shape our view of the whole Universe, as Heather Cooper and Nigel Henbest explain On 8 June 2004, more than half the population of the world were treated to a rare astronomical event. For over six hours, the planet Venus steadily inched its way over the surface of the Sun. This transit of Venus was the first since 6 December 1882. On that occasion, the American astronomer Professor Simon Newcomb led a party to South Africa to observe the event. They were based at a girls school, where - it is alleged - the combined forces of three schoolmistresses outperformed the professionals with the accuracy of their observations. For centuries, transits of Venus have drawn explorers and astronomers alike to the four corners of the globe. And you can put it all down to the extraordinary polymath Edmond Halley. In November 1677, Halley observed a transit of the innermost planet, Mercury, from the desolate island of St Helena in the South Pacific. He realised that, from different latitudes, the passage of the planet across the Suns disc would appear to differ. By timing the transit from two widely-separated locations, teams of astronomers could calculate the parallax angle - the apparent difference in position of an astronomical body due to a difference in the observers position. Calculating this angle would allow astronomers to measure what was then the ultimate goal: the distance of the Earth from the Sun. This distance is known as the astronomical unit or AU. Halley was aware that the AU was one of the most fundamental of all astronomical measurements. Johannes Kepler, in the early 17 th century, had shown that the distances of the planets from the Sun governed their orbital speeds, which were easily measurable. But no-one had found a way to calculate accurate distances to the planets from the Earth. The goal was to measure the AU; then, knowing the orbital speeds of all the other planets round the Sun, the scale of the Solar System would fall into place. However, Halley realised that Mercury was so far away that its parallax angle would be very difficult to determine. As Venus was closer to the Earth, its parallax angle would be larger, and Halley worked out that by using Venus it would be possible to measure the Suns distance to 1 part in 500. But there was a problem: transits of Venus, unlike those of Mercury, are rare, occurring in pairs roughly eight years apart every hundred or so years. Nevertheless, he accurately predicted that Venus would cross the face of the Sun in both 1761 and 1769 - though he didnt survive to see either. 
Inspired by Halleys suggestion of a way to pin down the scale of the Solar System, teams of British and French astronomers set out on expeditions to places as diverse as India and Siberia. But things werent helped by Britain and France being at war. The person who deserves most sympathy is the French astronomer Guillaume Le Gentil. He was thwarted by the fact that the British were besieging his observation site at Pondicherry in India. Fleeing on a French warship crossing the Indian Ocean, Le Gentil saw a wonderful transit - but the ships pitching and rolling ruled out any attempt at making accurate observations. Undaunted, he remained south of the equator, keeping himself busy by studying the islands of Mauritius and Madagascar before setting off to observe the next transit in the Philippines. Ironically after travelling nearly 50,000 kilometres, his view was clouded out at the last moment, a very dispiriting experience. While the early transit timings were as precise as instruments would allow, the measurements were dogged by the black drop effect. When Venus begins to cross the Suns disc, it looks smeared not circular - which 32makes it difficult to establish timings. This is due to diffraction of light. The second problem is that Venus exhibits a halo of light when it is seen just outside the Suns disc. While this showed astronomers that Venus was surrounded by a thick layer of gases refracting sunlight around it, both effects made it impossible to obtain accurate timings. But astronomers laboured hard to analyse the results of these expeditions to observe Venus transits. Johann Franz Encke, Director of the Berlin Observatory, finally determined a value for the AU based on all these parallax measurements: 153,340,000 km. Reasonably accurate for the time, that is quite close to todays value of 149,597,870 km, determined by radar, which has now superseded transits and all other methods in accuracy. The AU is a cosmic measuring rod, and the basis of how we scale the Universe today. The parallax principle can be extended to measure the distances to the stars. If we look at a star in January - when Earth is at one point in its orbit - it will seem to be in a different position from where it appears six months later. Knowing the width of Earths orbit, the parallax shift lets astronomers calculate the distance. June 2004s transit of Venus was thus more of an astronomical spectacle than a scientifically important event. But such transits have paved the way for what might prove to be one of the most vital breakthroughs in the cosmos - detecting Earth-sized planets orbiting other stars .", "hypothesis": "Le Gentil managed to observe a second Venus transit.", "label": "c"} +{"uid": "id_284", "premise": "Venus in transit June 2004 saw the first passage, known as a transit, of the planet Venus across the face of the Sun in 122 years. Transits have helped shape our view of the whole Universe, as Heather Cooper and Nigel Henbest explain On 8 June 2004, more than half the population of the world were treated to a rare astronomical event. For over six hours, the planet Venus steadily inched its way over the surface of the Sun. This transit of Venus was the first since 6 December 1882. On that occasion, the American astronomer Professor Simon Newcomb led a party to South Africa to observe the event. They were based at a girls school, where - it is alleged - the combined forces of three schoolmistresses outperformed the professionals with the accuracy of their observations. 
For centuries, transits of Venus have drawn explorers and astronomers alike to the four corners of the globe. And you can put it all down to the extraordinary polymath Edmond Halley. In November 1677, Halley observed a transit of the innermost planet, Mercury, from the desolate island of St Helena in the South Pacific. He realised that, from different latitudes, the passage of the planet across the Suns disc would appear to differ. By timing the transit from two widely-separated locations, teams of astronomers could calculate the parallax angle - the apparent difference in position of an astronomical body due to a difference in the observers position. Calculating this angle would allow astronomers to measure what was then the ultimate goal: the distance of the Earth from the Sun. This distance is known as the astronomical unit or AU. Halley was aware that the AU was one of the most fundamental of all astronomical measurements. Johannes Kepler, in the early 17 th century, had shown that the distances of the planets from the Sun governed their orbital speeds, which were easily measurable. But no-one had found a way to calculate accurate distances to the planets from the Earth. The goal was to measure the AU; then, knowing the orbital speeds of all the other planets round the Sun, the scale of the Solar System would fall into place. However, Halley realised that Mercury was so far away that its parallax angle would be very difficult to determine. As Venus was closer to the Earth, its parallax angle would be larger, and Halley worked out that by using Venus it would be possible to measure the Suns distance to 1 part in 500. But there was a problem: transits of Venus, unlike those of Mercury, are rare, occurring in pairs roughly eight years apart every hundred or so years. Nevertheless, he accurately predicted that Venus would cross the face of the Sun in both 1761 and 1769 - though he didnt survive to see either. Inspired by Halleys suggestion of a way to pin down the scale of the Solar System, teams of British and French astronomers set out on expeditions to places as diverse as India and Siberia. But things werent helped by Britain and France being at war. The person who deserves most sympathy is the French astronomer Guillaume Le Gentil. He was thwarted by the fact that the British were besieging his observation site at Pondicherry in India. Fleeing on a French warship crossing the Indian Ocean, Le Gentil saw a wonderful transit - but the ships pitching and rolling ruled out any attempt at making accurate observations. Undaunted, he remained south of the equator, keeping himself busy by studying the islands of Mauritius and Madagascar before setting off to observe the next transit in the Philippines. Ironically after travelling nearly 50,000 kilometres, his view was clouded out at the last moment, a very dispiriting experience. While the early transit timings were as precise as instruments would allow, the measurements were dogged by the black drop effect. When Venus begins to cross the Suns disc, it looks smeared not circular - which 32makes it difficult to establish timings. This is due to diffraction of light. The second problem is that Venus exhibits a halo of light when it is seen just outside the Suns disc. While this showed astronomers that Venus was surrounded by a thick layer of gases refracting sunlight around it, both effects made it impossible to obtain accurate timings. But astronomers laboured hard to analyse the results of these expeditions to observe Venus transits. 
Johann Franz Encke, Director of the Berlin Observatory, finally determined a value for the AU based on all these parallax measurements: 153,340,000 km. Reasonably accurate for the time, that is quite close to todays value of 149,597,870 km, determined by radar, which has now superseded transits and all other methods in accuracy. The AU is a cosmic measuring rod, and the basis of how we scale the Universe today. The parallax principle can be extended to measure the distances to the stars. If we look at a star in January - when Earth is at one point in its orbit - it will seem to be in a different position from where it appears six months later. Knowing the width of Earths orbit, the parallax shift lets astronomers calculate the distance. June 2004s transit of Venus was thus more of an astronomical spectacle than a scientifically important event. But such transits have paved the way for what might prove to be one of the most vital breakthroughs in the cosmos - detecting Earth-sized planets orbiting other stars .", "hypothesis": "Early astronomers suspected that the atmosphere on Venus was toxic.", "label": "n"} +{"uid": "id_285", "premise": "Venus in transit June 2004 saw the first passage, known as a transit, of the planet Venus across the face of the Sun in 122 years. Transits have helped shape our view of the whole Universe, as Heather Cooper and Nigel Henbest explain On 8 June 2004, more than half the population of the world were treated to a rare astronomical event. For over six hours, the planet Venus steadily inched its way over the surface of the Sun. This transit of Venus was the first since 6 December 1882. On that occasion, the American astronomer Professor Simon Newcomb led a party to South Africa to observe the event. They were based at a girls school, where - it is alleged - the combined forces of three schoolmistresses outperformed the professionals with the accuracy of their observations. For centuries, transits of Venus have drawn explorers and astronomers alike to the four corners of the globe. And you can put it all down to the extraordinary polymath Edmond Halley. In November 1677, Halley observed a transit of the innermost planet, Mercury, from the desolate island of St Helena in the South Pacific. He realised that, from different latitudes, the passage of the planet across the Suns disc would appear to differ. By timing the transit from two widely-separated locations, teams of astronomers could calculate the parallax angle - the apparent difference in position of an astronomical body due to a difference in the observers position. Calculating this angle would allow astronomers to measure what was then the ultimate goal: the distance of the Earth from the Sun. This distance is known as the astronomical unit or AU. Halley was aware that the AU was one of the most fundamental of all astronomical measurements. Johannes Kepler, in the early 17 th century, had shown that the distances of the planets from the Sun governed their orbital speeds, which were easily measurable. But no-one had found a way to calculate accurate distances to the planets from the Earth. The goal was to measure the AU; then, knowing the orbital speeds of all the other planets round the Sun, the scale of the Solar System would fall into place. However, Halley realised that Mercury was so far away that its parallax angle would be very difficult to determine. 
As Venus was closer to the Earth, its parallax angle would be larger, and Halley worked out that by using Venus it would be possible to measure the Suns distance to 1 part in 500. But there was a problem: transits of Venus, unlike those of Mercury, are rare, occurring in pairs roughly eight years apart every hundred or so years. Nevertheless, he accurately predicted that Venus would cross the face of the Sun in both 1761 and 1769 - though he didnt survive to see either. Inspired by Halleys suggestion of a way to pin down the scale of the Solar System, teams of British and French astronomers set out on expeditions to places as diverse as India and Siberia. But things werent helped by Britain and France being at war. The person who deserves most sympathy is the French astronomer Guillaume Le Gentil. He was thwarted by the fact that the British were besieging his observation site at Pondicherry in India. Fleeing on a French warship crossing the Indian Ocean, Le Gentil saw a wonderful transit - but the ships pitching and rolling ruled out any attempt at making accurate observations. Undaunted, he remained south of the equator, keeping himself busy by studying the islands of Mauritius and Madagascar before setting off to observe the next transit in the Philippines. Ironically after travelling nearly 50,000 kilometres, his view was clouded out at the last moment, a very dispiriting experience. While the early transit timings were as precise as instruments would allow, the measurements were dogged by the black drop effect. When Venus begins to cross the Suns disc, it looks smeared not circular - which makes it difficult to establish timings. This is due to diffraction of light. The second problem is that Venus exhibits a halo of light when it is seen just outside the Suns disc. While this showed astronomers that Venus was surrounded by a thick layer of gases refracting sunlight around it, both effects made it impossible to obtain accurate timings. But astronomers laboured hard to analyse the results of these expeditions to observe Venus transits. Johann Franz Encke, Director of the Berlin Observatory, finally determined a value for the AU based on all these parallax measurements: 153,340,000 km. Reasonably accurate for the time, that is quite close to todays value of 149,597,870 km, determined by radar, which has now superseded transits and all other methods in accuracy. The AU is a cosmic measuring rod, and the basis of how we scale the Universe today. The parallax principle can be extended to measure the distances to the stars. If we look at a star in January - when Earth is at one point in its orbit - it will seem to be in a different position from where it appears six months later. Knowing the width of Earths orbit, the parallax shift lets astronomers calculate the distance. June 2004s transit of Venus was thus more of an astronomical spectacle than a scientifically important event. But such transits have paved the way for what might prove to be one of the most vital breakthroughs in the cosmos - detecting Earth-sized planets orbiting other stars .", "hypothesis": "The parallax principle allows astronomers to work out how far away distant stars are from the Earth.", "label": "e"} +{"uid": "id_286", "premise": "Venus in transit June 2004 saw the first passage, known as a transit, of the planet Venus across the face of the Sun in 122 years.
Transits have helped shape our view of the whole Universe, as Heather Cooper and Nigel Henbest explain On 8 June 2004, more than half the population of the world were treated to a rare astronomical event. For over six hours, the planet Venus steadily inched its way over the surface of the Sun. This transit of Venus was the first since 6 December 1882. On that occasion, the American astronomer Professor Simon Newcomb led a party to South Africa to observe the event. They were based at a girls school, where - it is alleged - the combined forces of three schoolmistresses outperformed the professionals with the accuracy of their observations. For centuries, transits of Venus have drawn explorers and astronomers alike to the four corners of the globe. And you can put it all down to the extraordinary polymath Edmond Halley. In November 1677, Halley observed a transit of the innermost planet, Mercury, from the desolate island of St Helena in the South Pacific. He realised that, from different latitudes, the passage of the planet across the Suns disc would appear to differ. By timing the transit from two widely-separated locations, teams of astronomers could calculate the parallax angle - the apparent difference in position of an astronomical body due to a difference in the observers position. Calculating this angle would allow astronomers to measure what was then the ultimate goal: the distance of the Earth from the Sun. This distance is known as the astronomical unit or AU. Halley was aware that the AU was one of the most fundamental of all astronomical measurements. Johannes Kepler, in the early 17 th century, had shown that the distances of the planets from the Sun governed their orbital speeds, which were easily measurable. But no-one had found a way to calculate accurate distances to the planets from the Earth. The goal was to measure the AU; then, knowing the orbital speeds of all the other planets round the Sun, the scale of the Solar System would fall into place. However, Halley realised that Mercury was so far away that its parallax angle would be very difficult to determine. As Venus was closer to the Earth, its parallax angle would be larger, and Halley worked out that by using Venus it would be possible to measure the Suns distance to 1 part in 500. But there was a problem: transits of Venus, unlike those of Mercury, are rare, occurring in pairs roughly eight years apart every hundred or so years. Nevertheless, he accurately predicted that Venus would cross the face of the Sun in both 1761 and 1769 - though he didnt survive to see either. Inspired by Halleys suggestion of a way to pin down the scale of the Solar System, teams of British and French astronomers set out on expeditions to places as diverse as India and Siberia. But things werent helped by Britain and France being at war. The person who deserves most sympathy is the French astronomer Guillaume Le Gentil. He was thwarted by the fact that the British were besieging his observation site at Pondicherry in India. Fleeing on a French warship crossing the Indian Ocean, Le Gentil saw a wonderful transit - but the ships pitching and rolling ruled out any attempt at making accurate observations. Undaunted, he remained south of the equator, keeping himself busy by studying the islands of Mauritius and Madagascar before setting off to observe the next transit in the Philippines. Ironically after travelling nearly 50,000 kilometres, his view was clouded out at the last moment, a very dispiriting experience. 
While the early transit timings were as precise as instruments would allow, the measurements were dogged by the black drop effect. When Venus begins to cross the Suns disc, it looks smeared not circular - which makes it difficult to establish timings. This is due to diffraction of light. The second problem is that Venus exhibits a halo of light when it is seen just outside the Suns disc. While this showed astronomers that Venus was surrounded by a thick layer of gases refracting sunlight around it, both effects made it impossible to obtain accurate timings. But astronomers laboured hard to analyse the results of these expeditions to observe Venus transits. Johann Franz Encke, Director of the Berlin Observatory, finally determined a value for the AU based on all these parallax measurements: 153,340,000 km. Reasonably accurate for the time, that is quite close to todays value of 149,597,870 km, determined by radar, which has now superseded transits and all other methods in accuracy. The AU is a cosmic measuring rod, and the basis of how we scale the Universe today. The parallax principle can be extended to measure the distances to the stars. If we look at a star in January - when Earth is at one point in its orbit - it will seem to be in a different position from where it appears six months later. Knowing the width of Earths orbit, the parallax shift lets astronomers calculate the distance. June 2004s transit of Venus was thus more of an astronomical spectacle than a scientifically important event. But such transits have paved the way for what might prove to be one of the most vital breakthroughs in the cosmos - detecting Earth-sized planets orbiting other stars .", "hypothesis": "The shape of Venus appears distorted when it starts to pass in front of the Sun.", "label": "e"} +{"uid": "id_287", "premise": "Venus in transit. June 2004 saw the first passage, known as a transit, of the planet Venus across the face of the Sun in 122 years. Transits have helped shape our view of the whole Universe, as Heather Cooper and Nigel Henbest explain On 8 June 2004, more than half the population of the world were treated to a rare astronomical event. For over six hours, the planet Venus steadily inched its way over the surface of the Sun. This transit of Venus was the first since 6 December 1882. On that occasion, the American astronomer Professor Simon Newcomb led a party to South Africa to observe the event. They were based at girls school, where it is alleged the combined forces of three schoolmistresses outperformed the professionals with the accuracy of their observations. For centuries, transits of Venus have drawn explorers and astronomers alike to the four corners of the globe. And you can put it all down to the extraordinary polymath Edmond Halley. In November 1677, Halley observed a transit of the innermost planet, Mercury, from the desolate island of St Helena in the South Pacific. He realized that, from different latitudes, the passage of the planet across the Suns disc would appear to differ. By timing the transit from two widely-separated locations, teams of astronomers could calculate the parallax angle the apparent difference in position of an astronomical body due to a difference in the observers position. Calculating this angle would allow astronomers to measure what was then the ultimate goal: the distance of the Earth from the Sun. This distance is known as the astronomical unit or AU. Halley was aware that the AU was one of the most fundamental of all astronomical measurements.
Johannes Kepler, in the early 17th century, had shown that the distances of the planets from the Sun governed their orbital speeds, which were easily measurable. But no-one had found a way to calculate accurate distances to the planets from the Earth. The goal was to measure the AU; then, knowing the orbital speeds of all the other planets round the Sun, the scale of the Solar System would fall into place. However, Halley realized that Mercury was so far away that its parallax angle would be very difficult to determine. As Venus was closer to the Earth, its parallax angle would be larger, and Halley worked out that by using Venus it would be possible to measure the Suns distance to 1 part in 500. But there was a problem: transits of Venus, unlike those of Mercury, are rare, occurring in pairs roughly eight years apart every hundred or so years. Nevertheless, he accurately predicted that Venus would cross the face of the Sun in both 1761 and 1769 though he didnt survive to see either. Inspired by Halleys suggestion of a way to pin down the scale of the Solar System, teams of British and French astronomers set out on expeditions to places as diverse as India and Siberia. But things werent helped by Britain and France being at war. The person who deserves most sympathy is the French astronomer Guillaume Le Gentil. He was thwarted by the fact that the British were besieging his observation site at Pondicherry in India. Fleeing on a French warship crossing the Indian Ocean, Le Gentil saw a wonderful transit but the ships pitching and rolling ruled out any attempt at making accurate observations. Undaunted, he remained south of the equator, keeping himself busy by studying the islands of Maurtius and Madagascar before setting off to observe the next transit in the Philippines. Ironically after travelling nearly 50,000 kilometres, his view was clouded out at the last moment, a very dispiriting experience. While the early transit timings were as precise as instruments would allow, the measurements were dogged by the black drop effect. When Venus begins to cross the Suns disc, it looks smeared not circular which makes it difficult to establish timings. This is due to diffraction of light. The second problem is that Venus exhibits a halo of light when it is seen just outside the suns disc. While this showed astronomers that Venus was surrounded by a thick layer of gases refracting sunlight around it, both effects made it impossible to obtain accurate timings. But astronomers laboured hard to analyse the results of these expeditions to observe Venus transits. Johann Franz Encke, Director of the Berlin Observatory, finally determined a value for the AU based on all these parallax measurements: 153,340,000 km. Reasonably accurate for the time, that is quite close to todays value of 149,597,870 km, determined by radar, which has now superseded transits and all other methods in accuracy. The AU is a cosmic measuring rod, and the basis of how we scale the Universe today. The parallax principle can be extended to measure the distances to the stars. If we look at a star in January when Earth is at one point in its orbit it will seem to be in a different position from where it appears six months late. Knowing the width of Earths orbit, the parallax shift lets astronomers calculate the distance. June 2004s transit of Venus was thus more of an astronomical spectacle than a scientifically important event. 
But such transits have paved the way for what might prove to be one of the most vital breakthroughs in the cosmos detecting Earth-sized planets orbiting other stars.", "hypothesis": "Le Gentil managed to observe a second Venus transit.", "label": "c"} +{"uid": "id_288", "premise": "Venus in transit. June 2004 saw the first passage, known as a transit, of the planet Venus across the face of the Sun in 122 years. Transits have helped shape our view of the whole Universe, as Heather Cooper and Nigel Henbest explain On 8 June 2004, more than half the population of the world were treated to a rare astronomical event. For over six hours, the planet Venus steadily inched its way over the surface of the Sun. This transit of Venus was the first since 6 December 1882. On that occasion, the American astronomer Professor Simon Newcomb led a party to South Africa to observe the event. They were based at girls school, where it is alleged the combined forces of three schoolmistresses outperformed the professionals with the accuracy of their observations. For centuries, transits of Venus have drawn explorers and astronomers alike to the four corners of the globe. And you can put it all down to the extraordinary polymath Edmond Halley. In November 1677, Halley observed a transit of the innermost planet, Mercury, from the desolate island of St Helena in the South Pacific. He realized that, from different latitudes, the passage of the planet across the Suns disc would appear to differ. By timing the transit from two widely-separated locations, teams of astronomers could calculate the parallax angle the apparent difference in position of an astronomical body due to a difference in the observers position. Calculating this angle would allow astronomers to measure what was then the ultimate goal: the distance of the Earth from the Sun. This distance is known as the astronomical unit or AU. Halley was aware that the AU was one of the most fundamental of all astronomical measurements. Johannes Kepler, in the early 17th century, had shown that the distances of the planets from the Sun governed their orbital speeds, which were easily measurable. But no-one had found a way to calculate accurate distances to the planets from the Earth. The goal was to measure the AU; then, knowing the orbital speeds of all the other planets round the Sun, the scale of the Solar System would fall into place. However, Halley realized that Mercury was so far away that its parallax angle would be very difficult to determine. As Venus was closer to the Earth, its parallax angle would be larger, and Halley worked out that by using Venus it would be possible to measure the Suns distance to 1 part in 500. But there was a problem: transits of Venus, unlike those of Mercury, are rare, occurring in pairs roughly eight years apart every hundred or so years. Nevertheless, he accurately predicted that Venus would cross the face of the Sun in both 1761 and 1769 though he didnt survive to see either. Inspired by Halleys suggestion of a way to pin down the scale of the Solar System, teams of British and French astronomers set out on expeditions to places as diverse as India and Siberia. But things werent helped by Britain and France being at war. The person who deserves most sympathy is the French astronomer Guillaume Le Gentil. He was thwarted by the fact that the British were besieging his observation site at Pondicherry in India. 
Fleeing on a French warship crossing the Indian Ocean, Le Gentil saw a wonderful transit but the ships pitching and rolling ruled out any attempt at making accurate observations. Undaunted, he remained south of the equator, keeping himself busy by studying the islands of Maurtius and Madagascar before setting off to observe the next transit in the Philippines. Ironically after travelling nearly 50,000 kilometres, his view was clouded out at the last moment, a very dispiriting experience. While the early transit timings were as precise as instruments would allow, the measurements were dogged by the black drop effect. When Venus begins to cross the Suns disc, it looks smeared not circular which makes it difficult to establish timings. This is due to diffraction of light. The second problem is that Venus exhibits a halo of light when it is seen just outside the suns disc. While this showed astronomers that Venus was surrounded by a thick layer of gases refracting sunlight around it, both effects made it impossible to obtain accurate timings. But astronomers laboured hard to analyse the results of these expeditions to observe Venus transits. Johann Franz Encke, Director of the Berlin Observatory, finally determined a value for the AU based on all these parallax measurements: 153,340,000 km. Reasonably accurate for the time, that is quite close to todays value of 149,597,870 km, determined by radar, which has now superseded transits and all other methods in accuracy. The AU is a cosmic measuring rod, and the basis of how we scale the Universe today. The parallax principle can be extended to measure the distances to the stars. If we look at a star in January when Earth is at one point in its orbit it will seem to be in a different position from where it appears six months late. Knowing the width of Earths orbit, the parallax shift lets astronomers calculate the distance. June 2004s transit of Venus was thus more of an astronomical spectacle than a scientifically important event. But such transits have paved the way for what might prove to be one of the most vital breakthroughs in the cosmos detecting Earth-sized planets orbiting other stars.", "hypothesis": "The parallax principle allows astronomers to work out how far away distant stars are from the Earth.", "label": "e"} +{"uid": "id_289", "premise": "Venus in transit. June 2004 saw the first passage, known as a transit, of the planet Venus across the face of the Sun in 122 years. Transits have helped shape our view of the whole Universe, as Heather Cooper and Nigel Henbest explain On 8 June 2004, more than half the population of the world were treated to a rare astronomical event. For over six hours, the planet Venus steadily inched its way over the surface of the Sun. This transit of Venus was the first since 6 December 1882. On that occasion, the American astronomer Professor Simon Newcomb led a party to South Africa to observe the event. They were based at girls school, where it is alleged the combined forces of three schoolmistresses outperformed the professionals with the accuracy of their observations. For centuries, transits of Venus have drawn explorers and astronomers alike to the four corners of the globe. And you can put it all down to the extraordinary polymath Edmond Halley. In November 1677, Halley observed a transit of the innermost planet, Mercury, from the desolate island of St Helena in the South Pacific. He realized that, from different latitudes, the passage of the planet across the Suns disc would appear to differ. 
By timing the transit from two widely-separated locations, teams of astronomers could calculate the parallax angle the apparent difference in position of an astronomical body due to a difference in the observers position. Calculating this angle would allow astronomers to measure what was then the ultimate goal: the distance of the Earth from the Sun. This distance is known as the astronomical unit or AU. Halley was aware that the AU was one of the most fundamental of all astronomical measurements. Johannes Kepler, in the early 17th century, had shown that the distances of the planets from the Sun governed their orbital speeds, which were easily measurable. But no-one had found a way to calculate accurate distances to the planets from the Earth. The goal was to measure the AU; then, knowing the orbital speeds of all the other planets round the Sun, the scale of the Solar System would fall into place. However, Halley realized that Mercury was so far away that its parallax angle would be very difficult to determine. As Venus was closer to the Earth, its parallax angle would be larger, and Halley worked out that by using Venus it would be possible to measure the Suns distance to 1 part in 500. But there was a problem: transits of Venus, unlike those of Mercury, are rare, occurring in pairs roughly eight years apart every hundred or so years. Nevertheless, he accurately predicted that Venus would cross the face of the Sun in both 1761 and 1769 though he didnt survive to see either. Inspired by Halleys suggestion of a way to pin down the scale of the Solar System, teams of British and French astronomers set out on expeditions to places as diverse as India and Siberia. But things werent helped by Britain and France being at war. The person who deserves most sympathy is the French astronomer Guillaume Le Gentil. He was thwarted by the fact that the British were besieging his observation site at Pondicherry in India. Fleeing on a French warship crossing the Indian Ocean, Le Gentil saw a wonderful transit but the ships pitching and rolling ruled out any attempt at making accurate observations. Undaunted, he remained south of the equator, keeping himself busy by studying the islands of Maurtius and Madagascar before setting off to observe the next transit in the Philippines. Ironically after travelling nearly 50,000 kilometres, his view was clouded out at the last moment, a very dispiriting experience. While the early transit timings were as precise as instruments would allow, the measurements were dogged by the black drop effect. When Venus begins to cross the Suns disc, it looks smeared not circular which makes it difficult to establish timings. This is due to diffraction of light. The second problem is that Venus exhibits a halo of light when it is seen just outside the suns disc. While this showed astronomers that Venus was surrounded by a thick layer of gases refracting sunlight around it, both effects made it impossible to obtain accurate timings. But astronomers laboured hard to analyse the results of these expeditions to observe Venus transits. Johann Franz Encke, Director of the Berlin Observatory, finally determined a value for the AU based on all these parallax measurements: 153,340,000 km. Reasonably accurate for the time, that is quite close to todays value of 149,597,870 km, determined by radar, which has now superseded transits and all other methods in accuracy. The AU is a cosmic measuring rod, and the basis of how we scale the Universe today. 
The parallax principle can be extended to measure the distances to the stars. If we look at a star in January when Earth is at one point in its orbit it will seem to be in a different position from where it appears six months later. Knowing the width of Earths orbit, the parallax shift lets astronomers calculate the distance. June 2004s transit of Venus was thus more of an astronomical spectacle than a scientifically important event. But such transits have paved the way for what might prove to be one of the most vital breakthroughs in the cosmos detecting Earth-sized planets orbiting other stars.", "hypothesis": "Early astronomers suspected that the atmosphere on Venus was toxic.", "label": "n"} +{"uid": "id_290", "premise": "Venus in transit. June 2004 saw the first passage, known as a transit, of the planet Venus across the face of the Sun in 122 years. Transits have helped shape our view of the whole Universe, as Heather Cooper and Nigel Henbest explain On 8 June 2004, more than half the population of the world were treated to a rare astronomical event. For over six hours, the planet Venus steadily inched its way over the surface of the Sun. This transit of Venus was the first since 6 December 1882. On that occasion, the American astronomer Professor Simon Newcomb led a party to South Africa to observe the event. They were based at girls school, where it is alleged the combined forces of three schoolmistresses outperformed the professionals with the accuracy of their observations. For centuries, transits of Venus have drawn explorers and astronomers alike to the four corners of the globe. And you can put it all down to the extraordinary polymath Edmond Halley. In November 1677, Halley observed a transit of the innermost planet, Mercury, from the desolate island of St Helena in the South Pacific. He realized that, from different latitudes, the passage of the planet across the Suns disc would appear to differ. By timing the transit from two widely-separated locations, teams of astronomers could calculate the parallax angle the apparent difference in position of an astronomical body due to a difference in the observers position. Calculating this angle would allow astronomers to measure what was then the ultimate goal: the distance of the Earth from the Sun. This distance is known as the astronomical unit or AU. Halley was aware that the AU was one of the most fundamental of all astronomical measurements. Johannes Kepler, in the early 17th century, had shown that the distances of the planets from the Sun governed their orbital speeds, which were easily measurable. But no-one had found a way to calculate accurate distances to the planets from the Earth. The goal was to measure the AU; then, knowing the orbital speeds of all the other planets round the Sun, the scale of the Solar System would fall into place. However, Halley realized that Mercury was so far away that its parallax angle would be very difficult to determine. As Venus was closer to the Earth, its parallax angle would be larger, and Halley worked out that by using Venus it would be possible to measure the Suns distance to 1 part in 500. But there was a problem: transits of Venus, unlike those of Mercury, are rare, occurring in pairs roughly eight years apart every hundred or so years. Nevertheless, he accurately predicted that Venus would cross the face of the Sun in both 1761 and 1769 though he didnt survive to see either.
Inspired by Halleys suggestion of a way to pin down the scale of the Solar System, teams of British and French astronomers set out on expeditions to places as diverse as India and Siberia. But things werent helped by Britain and France being at war. The person who deserves most sympathy is the French astronomer Guillaume Le Gentil. He was thwarted by the fact that the British were besieging his observation site at Pondicherry in India. Fleeing on a French warship crossing the Indian Ocean, Le Gentil saw a wonderful transit but the ships pitching and rolling ruled out any attempt at making accurate observations. Undaunted, he remained south of the equator, keeping himself busy by studying the islands of Maurtius and Madagascar before setting off to observe the next transit in the Philippines. Ironically after travelling nearly 50,000 kilometres, his view was clouded out at the last moment, a very dispiriting experience. While the early transit timings were as precise as instruments would allow, the measurements were dogged by the black drop effect. When Venus begins to cross the Suns disc, it looks smeared not circular which makes it difficult to establish timings. This is due to diffraction of light. The second problem is that Venus exhibits a halo of light when it is seen just outside the suns disc. While this showed astronomers that Venus was surrounded by a thick layer of gases refracting sunlight around it, both effects made it impossible to obtain accurate timings. But astronomers laboured hard to analyse the results of these expeditions to observe Venus transits. Johann Franz Encke, Director of the Berlin Observatory, finally determined a value for the AU based on all these parallax measurements: 153,340,000 km. Reasonably accurate for the time, that is quite close to todays value of 149,597,870 km, determined by radar, which has now superseded transits and all other methods in accuracy. The AU is a cosmic measuring rod, and the basis of how we scale the Universe today. The parallax principle can be extended to measure the distances to the stars. If we look at a star in January when Earth is at one point in its orbit it will seem to be in a different position from where it appears six months late. Knowing the width of Earths orbit, the parallax shift lets astronomers calculate the distance. June 2004s transit of Venus was thus more of an astronomical spectacle than a scientifically important event. But such transits have paved the way for what might prove to be one of the most vital breakthroughs in the cosmos detecting Earth-sized planets orbiting other stars.", "hypothesis": "Halley observed one transit of the planet Venus.", "label": "c"} +{"uid": "id_291", "premise": "Venus in transit. June 2004 saw the first passage, known as a transit, of the planet Venus across the face of the Sun in 122 years. Transits have helped shape our view of the whole Universe, as Heather Cooper and Nigel Henbest explain On 8 June 2004, more than half the population of the world were treated to a rare astronomical event. For over six hours, the planet Venus steadily inched its way over the surface of the Sun. This transit of Venus was the first since 6 December 1882. On that occasion, the American astronomer Professor Simon Newcomb led a party to South Africa to observe the event. They were based at girls school, where it is alleged the combined forces of three schoolmistresses outperformed the professionals with the accuracy of their observations. 
For centuries, transits of Venus have drawn explorers and astronomers alike to the four corners of the globe. And you can put it all down to the extraordinary polymath Edmond Halley. In November 1677, Halley observed a transit of the innermost planet, Mercury, from the desolate island of St Helena in the South Pacific. He realized that, from different latitudes, the passage of the planet across the Suns disc would appear to differ. By timing the transit from two widely-separated locations, teams of astronomers could calculate the parallax angle the apparent difference in position of an astronomical body due to a difference in the observers position. Calculating this angle would allow astronomers to measure what was then the ultimate goal: the distance of the Earth from the Sun. This distance is known as the astronomical unit or AU. Halley was aware that the AU was one of the most fundamental of all astronomical measurements. Johannes Kepler, in the early 17th century, had shown that the distances of the planets from the Sun governed their orbital speeds, which were easily measurable. But no-one had found a way to calculate accurate distances to the planets from the Earth. The goal was to measure the AU; then, knowing the orbital speeds of all the other planets round the Sun, the scale of the Solar System would fall into place. However, Halley realized that Mercury was so far away that its parallax angle would be very difficult to determine. As Venus was closer to the Earth, its parallax angle would be larger, and Halley worked out that by using Venus it would be possible to measure the Suns distance to 1 part in 500. But there was a problem: transits of Venus, unlike those of Mercury, are rare, occurring in pairs roughly eight years apart every hundred or so years. Nevertheless, he accurately predicted that Venus would cross the face of the Sun in both 1761 and 1769 though he didnt survive to see either. Inspired by Halleys suggestion of a way to pin down the scale of the Solar System, teams of British and French astronomers set out on expeditions to places as diverse as India and Siberia. But things werent helped by Britain and France being at war. The person who deserves most sympathy is the French astronomer Guillaume Le Gentil. He was thwarted by the fact that the British were besieging his observation site at Pondicherry in India. Fleeing on a French warship crossing the Indian Ocean, Le Gentil saw a wonderful transit but the ships pitching and rolling ruled out any attempt at making accurate observations. Undaunted, he remained south of the equator, keeping himself busy by studying the islands of Maurtius and Madagascar before setting off to observe the next transit in the Philippines. Ironically after travelling nearly 50,000 kilometres, his view was clouded out at the last moment, a very dispiriting experience. While the early transit timings were as precise as instruments would allow, the measurements were dogged by the black drop effect. When Venus begins to cross the Suns disc, it looks smeared not circular which makes it difficult to establish timings. This is due to diffraction of light. The second problem is that Venus exhibits a halo of light when it is seen just outside the suns disc. While this showed astronomers that Venus was surrounded by a thick layer of gases refracting sunlight around it, both effects made it impossible to obtain accurate timings. But astronomers laboured hard to analyse the results of these expeditions to observe Venus transits. 
Johann Franz Encke, Director of the Berlin Observatory, finally determined a value for the AU based on all these parallax measurements: 153,340,000 km. Reasonably accurate for the time, that is quite close to todays value of 149,597,870 km, determined by radar, which has now superseded transits and all other methods in accuracy. The AU is a cosmic measuring rod, and the basis of how we scale the Universe today. The parallax principle can be extended to measure the distances to the stars. If we look at a star in January when Earth is at one point in its orbit it will seem to be in a different position from where it appears six months late. Knowing the width of Earths orbit, the parallax shift lets astronomers calculate the distance. June 2004s transit of Venus was thus more of an astronomical spectacle than a scientifically important event. But such transits have paved the way for what might prove to be one of the most vital breakthroughs in the cosmos detecting Earth-sized planets orbiting other stars.", "hypothesis": "The shape of Venus appears distorted when it starts to pass in front of the Sun.", "label": "e"} +{"uid": "id_292", "premise": "Vertical transport A DEATH DEFYING STUNT THAT SHAPED THE SKYLINE OF THE WORLD A The raising of water from a well using a bucket suspended from a rope can be traced back to ancient times. If the rope was passed over a pulley wheel it made the lifting less strenuous. The method could be improved upon by attaching an empty bucket to the opposite end of the rope, then lowering it down the well as the full bucket came up, to counterbalance the weight. B Some medieval monasteries were perched on the tops of cliffs that could not be readily scaled. To overcome the problem, a basket was lowered to the base of the cliff on the end of a rope coiled round a wooden rod, known as a windlass. It was possible to lift heavy weights with a windlass, especially if a small cog wheel on the cranking handle drove a larger cog wheel on a second rod. Materials and people were hoisted in this fashion, but it was a slow process and if the rope were to break the basket plummeted to the ground. C In the middle of the nineteenth century the general public considered elevators supported by a rope to be too dangerous for personal use. Without an elevator, the height of a commercial building was limited by the number of steps people could be expected to climb within an economic time period. It was the American inventor and manufacturer Elisha Graves Otis (181161) who finally solved the problem of passenger elevators. D In 1852, Otis pioneered the idea of a safety brake, and two years later he demon strated it in spectacular fashion at the New York Crystal Palace Exhibition of Industry. Otis stood on the lifting platform, four storeys above an expectant crowd. The rope was cut, and after a small jolt, the platform came to a halt. Otis stunt increased peoples confidence in elevators and sales increased. E The operating principle of the safety elevator was described and illustrated in its pattern documentation of 1861. The lifting platform was suspended between two vertical posts each lined with a toothed guide rail. A hook was set into the sides of the platform to engage with the teeth, allowing movement vertically upwards but not downwards. Descent of the elevator was possible only if the hooks were pulled in, which could only happen when the rope was in tension. 
If the rope were to break, the tension would be lost and the hooks would spring outwards to engage the teeth and stop the fall. Modern elevators incorporate similar safety mechanisms. F Otis installed the first passenger elevator in a store in New York City in 1957. Following the success of the elevator, taller buildings were constructed, and sales increased once more as the business expanded into Europe. Englands first Otis passenger elevator (or lift as the British say) appeared four years later with the open ing of Londons Grosvenor Hotel. Today, the Otis Elevator Company continues to be the worlds leading manufacturer of elevators, employing over 60,000 people with markets in 200 countries. More significantly perhaps, the advent of passenger lifts marked the birth of the modern skyscraper. G Passenger elevators were powered by steam prior to 1902. A rope carrying the cab was wound round a revolving drum driven by a steam engine. The method was too slow for a tall building, which needed a large drum to hold a long coil of rope. By the following year, Otis had developed a compact electric traction elevator that used a cable but did away with the winding gear, allowing the passenger cab to be raised over 100 storeys both quickly and efficiently. H In the electric elevator, the cable was routed from the top of the passenger cab to a pulley wheel at the head of the lift shaft and then back down to a weight acting as a counterbalance. A geared-down electric motor rotated the pulley wheel, which contained a groove to grip the cable and provide the traction. Following the success of the electric elevator, skyscraper buildings began to spring up in the major cities. The Woolworths building in New York, constructed in 1913, was a significant land mark, being the worlds tallest building for the next 27 years. It had 57 floors and the Otis high-speed electric elevators could reach the top floor in a little over one minute. I Each elevator used several cables and pulley wheels, though one cable was enough to support the weight of the car. As a further safety feature, an oil-filled shock piston was mounted at the base of the lift shaft to act as a buffer, slowing the car down at a safe rate in the unlikely event of every cable failing as well as the safety brake.", "hypothesis": "Only people could be hoisted with a windlass.", "label": "c"} +{"uid": "id_293", "premise": "Vertical transport A DEATH DEFYING STUNT THAT SHAPED THE SKYLINE OF THE WORLD A The raising of water from a well using a bucket suspended from a rope can be traced back to ancient times. If the rope was passed over a pulley wheel it made the lifting less strenuous. The method could be improved upon by attaching an empty bucket to the opposite end of the rope, then lowering it down the well as the full bucket came up, to counterbalance the weight. B Some medieval monasteries were perched on the tops of cliffs that could not be readily scaled. To overcome the problem, a basket was lowered to the base of the cliff on the end of a rope coiled round a wooden rod, known as a windlass. It was possible to lift heavy weights with a windlass, especially if a small cog wheel on the cranking handle drove a larger cog wheel on a second rod. Materials and people were hoisted in this fashion, but it was a slow process and if the rope were to break the basket plummeted to the ground. C In the middle of the nineteenth century the general public considered elevators supported by a rope to be too dangerous for personal use. 
Without an elevator, the height of a commercial building was limited by the number of steps people could be expected to climb within an economic time period. It was the American inventor and manufacturer Elisha Graves Otis (181161) who finally solved the problem of passenger elevators. D In 1852, Otis pioneered the idea of a safety brake, and two years later he demon strated it in spectacular fashion at the New York Crystal Palace Exhibition of Industry. Otis stood on the lifting platform, four storeys above an expectant crowd. The rope was cut, and after a small jolt, the platform came to a halt. Otis stunt increased peoples confidence in elevators and sales increased. E The operating principle of the safety elevator was described and illustrated in its pattern documentation of 1861. The lifting platform was suspended between two vertical posts each lined with a toothed guide rail. A hook was set into the sides of the platform to engage with the teeth, allowing movement vertically upwards but not downwards. Descent of the elevator was possible only if the hooks were pulled in, which could only happen when the rope was in tension. If the rope were to break, the tension would be lost and the hooks would spring outwards to engage the teeth and stop the fall. Modern elevators incorporate similar safety mechanisms. F Otis installed the first passenger elevator in a store in New York City in 1957. Following the success of the elevator, taller buildings were constructed, and sales increased once more as the business expanded into Europe. Englands first Otis passenger elevator (or lift as the British say) appeared four years later with the open ing of Londons Grosvenor Hotel. Today, the Otis Elevator Company continues to be the worlds leading manufacturer of elevators, employing over 60,000 people with markets in 200 countries. More significantly perhaps, the advent of passenger lifts marked the birth of the modern skyscraper. G Passenger elevators were powered by steam prior to 1902. A rope carrying the cab was wound round a revolving drum driven by a steam engine. The method was too slow for a tall building, which needed a large drum to hold a long coil of rope. By the following year, Otis had developed a compact electric traction elevator that used a cable but did away with the winding gear, allowing the passenger cab to be raised over 100 storeys both quickly and efficiently. H In the electric elevator, the cable was routed from the top of the passenger cab to a pulley wheel at the head of the lift shaft and then back down to a weight acting as a counterbalance. A geared-down electric motor rotated the pulley wheel, which contained a groove to grip the cable and provide the traction. Following the success of the electric elevator, skyscraper buildings began to spring up in the major cities. The Woolworths building in New York, constructed in 1913, was a significant land mark, being the worlds tallest building for the next 27 years. It had 57 floors and the Otis high-speed electric elevators could reach the top floor in a little over one minute. I Each elevator used several cables and pulley wheels, though one cable was enough to support the weight of the car. 
As a further safety feature, an oil-filled shock piston was mounted at the base of the lift shaft to act as a buffer, slowing the car down at a safe rate in the unlikely event of every cable failing as well as the safety brake.", "hypothesis": "Tall commercial buildings were not economic without an elevator.", "label": "e"} +{"uid": "id_294", "premise": "Vertical transport A DEATH DEFYING STUNT THAT SHAPED THE SKYLINE OF THE WORLD A The raising of water from a well using a bucket suspended from a rope can be traced back to ancient times. If the rope was passed over a pulley wheel it made the lifting less strenuous. The method could be improved upon by attaching an empty bucket to the opposite end of the rope, then lowering it down the well as the full bucket came up, to counterbalance the weight. B Some medieval monasteries were perched on the tops of cliffs that could not be readily scaled. To overcome the problem, a basket was lowered to the base of the cliff on the end of a rope coiled round a wooden rod, known as a windlass. It was possible to lift heavy weights with a windlass, especially if a small cog wheel on the cranking handle drove a larger cog wheel on a second rod. Materials and people were hoisted in this fashion, but it was a slow process and if the rope were to break the basket plummeted to the ground. C In the middle of the nineteenth century the general public considered elevators supported by a rope to be too dangerous for personal use. Without an elevator, the height of a commercial building was limited by the number of steps people could be expected to climb within an economic time period. It was the American inventor and manufacturer Elisha Graves Otis (181161) who finally solved the problem of passenger elevators. D In 1852, Otis pioneered the idea of a safety brake, and two years later he demon strated it in spectacular fashion at the New York Crystal Palace Exhibition of Industry. Otis stood on the lifting platform, four storeys above an expectant crowd. The rope was cut, and after a small jolt, the platform came to a halt. Otis stunt increased peoples confidence in elevators and sales increased. E The operating principle of the safety elevator was described and illustrated in its pattern documentation of 1861. The lifting platform was suspended between two vertical posts each lined with a toothed guide rail. A hook was set into the sides of the platform to engage with the teeth, allowing movement vertically upwards but not downwards. Descent of the elevator was possible only if the hooks were pulled in, which could only happen when the rope was in tension. If the rope were to break, the tension would be lost and the hooks would spring outwards to engage the teeth and stop the fall. Modern elevators incorporate similar safety mechanisms. F Otis installed the first passenger elevator in a store in New York City in 1957. Following the success of the elevator, taller buildings were constructed, and sales increased once more as the business expanded into Europe. Englands first Otis passenger elevator (or lift as the British say) appeared four years later with the open ing of Londons Grosvenor Hotel. Today, the Otis Elevator Company continues to be the worlds leading manufacturer of elevators, employing over 60,000 people with markets in 200 countries. More significantly perhaps, the advent of passenger lifts marked the birth of the modern skyscraper. G Passenger elevators were powered by steam prior to 1902. 
A rope carrying the cab was wound round a revolving drum driven by a steam engine. The method was too slow for a tall building, which needed a large drum to hold a long coil of rope. By the following year, Otis had developed a compact electric traction elevator that used a cable but did away with the winding gear, allowing the passenger cab to be raised over 100 storeys both quickly and efficiently. H In the electric elevator, the cable was routed from the top of the passenger cab to a pulley wheel at the head of the lift shaft and then back down to a weight acting as a counterbalance. A geared-down electric motor rotated the pulley wheel, which contained a groove to grip the cable and provide the traction. Following the success of the electric elevator, skyscraper buildings began to spring up in the major cities. The Woolworths building in New York, constructed in 1913, was a significant land mark, being the worlds tallest building for the next 27 years. It had 57 floors and the Otis high-speed electric elevators could reach the top floor in a little over one minute. I Each elevator used several cables and pulley wheels, though one cable was enough to support the weight of the car. As a further safety feature, an oil-filled shock piston was mounted at the base of the lift shaft to act as a buffer, slowing the car down at a safe rate in the unlikely event of every cable failing as well as the safety brake.", "hypothesis": "Otis pattern documents contained a diagram.", "label": "e"} +{"uid": "id_295", "premise": "Vertical transport A DEATH DEFYING STUNT THAT SHAPED THE SKYLINE OF THE WORLD A The raising of water from a well using a bucket suspended from a rope can be traced back to ancient times. If the rope was passed over a pulley wheel it made the lifting less strenuous. The method could be improved upon by attaching an empty bucket to the opposite end of the rope, then lowering it down the well as the full bucket came up, to counterbalance the weight. B Some medieval monasteries were perched on the tops of cliffs that could not be readily scaled. To overcome the problem, a basket was lowered to the base of the cliff on the end of a rope coiled round a wooden rod, known as a windlass. It was possible to lift heavy weights with a windlass, especially if a small cog wheel on the cranking handle drove a larger cog wheel on a second rod. Materials and people were hoisted in this fashion, but it was a slow process and if the rope were to break the basket plummeted to the ground. C In the middle of the nineteenth century the general public considered elevators supported by a rope to be too dangerous for personal use. Without an elevator, the height of a commercial building was limited by the number of steps people could be expected to climb within an economic time period. It was the American inventor and manufacturer Elisha Graves Otis (181161) who finally solved the problem of passenger elevators. D In 1852, Otis pioneered the idea of a safety brake, and two years later he demon strated it in spectacular fashion at the New York Crystal Palace Exhibition of Industry. Otis stood on the lifting platform, four storeys above an expectant crowd. The rope was cut, and after a small jolt, the platform came to a halt. Otis stunt increased peoples confidence in elevators and sales increased. E The operating principle of the safety elevator was described and illustrated in its pattern documentation of 1861. 
The lifting platform was suspended between two vertical posts each lined with a toothed guide rail. A hook was set into the sides of the platform to engage with the teeth, allowing movement vertically upwards but not downwards. Descent of the elevator was possible only if the hooks were pulled in, which could only happen when the rope was in tension. If the rope were to break, the tension would be lost and the hooks would spring outwards to engage the teeth and stop the fall. Modern elevators incorporate similar safety mechanisms. F Otis installed the first passenger elevator in a store in New York City in 1957. Following the success of the elevator, taller buildings were constructed, and sales increased once more as the business expanded into Europe. Englands first Otis passenger elevator (or lift as the British say) appeared four years later with the open ing of Londons Grosvenor Hotel. Today, the Otis Elevator Company continues to be the worlds leading manufacturer of elevators, employing over 60,000 people with markets in 200 countries. More significantly perhaps, the advent of passenger lifts marked the birth of the modern skyscraper. G Passenger elevators were powered by steam prior to 1902. A rope carrying the cab was wound round a revolving drum driven by a steam engine. The method was too slow for a tall building, which needed a large drum to hold a long coil of rope. By the following year, Otis had developed a compact electric traction elevator that used a cable but did away with the winding gear, allowing the passenger cab to be raised over 100 storeys both quickly and efficiently. H In the electric elevator, the cable was routed from the top of the passenger cab to a pulley wheel at the head of the lift shaft and then back down to a weight acting as a counterbalance. A geared-down electric motor rotated the pulley wheel, which contained a groove to grip the cable and provide the traction. Following the success of the electric elevator, skyscraper buildings began to spring up in the major cities. The Woolworths building in New York, constructed in 1913, was a significant land mark, being the worlds tallest building for the next 27 years. It had 57 floors and the Otis high-speed electric elevators could reach the top floor in a little over one minute. I Each elevator used several cables and pulley wheels, though one cable was enough to support the weight of the car. As a further safety feature, an oil-filled shock piston was mounted at the base of the lift shaft to act as a buffer, slowing the car down at a safe rate in the unlikely event of every cable failing as well as the safety brake.", "hypothesis": "The first passenger elevator was installed in a hotel.", "label": "c"} +{"uid": "id_296", "premise": "Vertical transport A DEATH DEFYING STUNT THAT SHAPED THE SKYLINE OF THE WORLD A The raising of water from a well using a bucket suspended from a rope can be traced back to ancient times. If the rope was passed over a pulley wheel it made the lifting less strenuous. The method could be improved upon by attaching an empty bucket to the opposite end of the rope, then lowering it down the well as the full bucket came up, to counterbalance the weight. B Some medieval monasteries were perched on the tops of cliffs that could not be readily scaled. To overcome the problem, a basket was lowered to the base of the cliff on the end of a rope coiled round a wooden rod, known as a windlass. 
It was possible to lift heavy weights with a windlass, especially if a small cog wheel on the cranking handle drove a larger cog wheel on a second rod. Materials and people were hoisted in this fashion, but it was a slow process and if the rope were to break the basket plummeted to the ground. C In the middle of the nineteenth century the general public considered elevators supported by a rope to be too dangerous for personal use. Without an elevator, the height of a commercial building was limited by the number of steps people could be expected to climb within an economic time period. It was the American inventor and manufacturer Elisha Graves Otis (1811-61) who finally solved the problem of passenger elevators. D In 1852, Otis pioneered the idea of a safety brake, and two years later he demonstrated it in spectacular fashion at the New York Crystal Palace Exhibition of Industry. Otis stood on the lifting platform, four storeys above an expectant crowd. The rope was cut, and after a small jolt, the platform came to a halt. Otis' stunt increased people's confidence in elevators and sales increased. E The operating principle of the safety elevator was described and illustrated in its pattern documentation of 1861. The lifting platform was suspended between two vertical posts each lined with a toothed guide rail. A hook was set into the sides of the platform to engage with the teeth, allowing movement vertically upwards but not downwards. Descent of the elevator was possible only if the hooks were pulled in, which could only happen when the rope was in tension. If the rope were to break, the tension would be lost and the hooks would spring outwards to engage the teeth and stop the fall. Modern elevators incorporate similar safety mechanisms. F Otis installed the first passenger elevator in a store in New York City in 1957. Following the success of the elevator, taller buildings were constructed, and sales increased once more as the business expanded into Europe. England's first Otis passenger elevator (or lift as the British say) appeared four years later with the opening of London's Grosvenor Hotel. Today, the Otis Elevator Company continues to be the world's leading manufacturer of elevators, employing over 60,000 people with markets in 200 countries. More significantly perhaps, the advent of passenger lifts marked the birth of the modern skyscraper. G Passenger elevators were powered by steam prior to 1902. A rope carrying the cab was wound round a revolving drum driven by a steam engine. The method was too slow for a tall building, which needed a large drum to hold a long coil of rope. By the following year, Otis had developed a compact electric traction elevator that used a cable but did away with the winding gear, allowing the passenger cab to be raised over 100 storeys both quickly and efficiently. H In the electric elevator, the cable was routed from the top of the passenger cab to a pulley wheel at the head of the lift shaft and then back down to a weight acting as a counterbalance. A geared-down electric motor rotated the pulley wheel, which contained a groove to grip the cable and provide the traction. Following the success of the electric elevator, skyscraper buildings began to spring up in the major cities. The Woolworths building in New York, constructed in 1913, was a significant landmark, being the world's tallest building for the next 27 years. It had 57 floors and the Otis high-speed electric elevators could reach the top floor in a little over one minute.
I Each elevator used several cables and pulley wheels, though one cable was enough to support the weight of the car. As a further safety feature, an oil-filled shock piston was mounted at the base of the lift shaft to act as a buffer, slowing the car down at a safe rate in the unlikely event of every cable failing as well as the safety brake.", "hypothesis": "Electric elevators use similar principles to ancient water-wells.", "label": "e"} +{"uid": "id_297", "premise": "Vertical transport A DEATH DEFYING STUNT THAT SHAPED THE SKYLINE OF THE WORLD The raising of water from a well using a bucket suspended from a rope can be traced back to ancient times. If the rope was passed over a pulley wheel it made the lifting less strenuous. The method could be improved upon by attaching an empty bucket to the opposite end of the rope, then lowering it down the well as the full bucket came up, to counterbalance the weight. Some medieval monasteries were perched on the tops of cliffs that could not be readily scaled. To overcome the problem, a basket was lowered to the base of the cliff on the end of a rope coiled round a wooden rod, known as a windlass. It was possible to lift heavy weights with a windlass, especially if a small cog wheel on the cranking handle drove a larger cog wheel on a second rod. Materials and people were hoisted in this fashion, but it was a slow process and if the rope were to break the basket plummeted to the ground. In the middle of the nineteenth century the general public considered elevators supported by a rope to be too dangerous for personal use. Without an elevator, the height of a commercial building was limited by the number of steps people could be expected to climb within an economic time period. It was the American inventor and manufacturer Elisha Graves Otis (181161) who finally solved the problem of passenger elevators. In 1852, Otis pioneered the idea of a safety brake, and two years later he demonstrated it in spectacular fashion at the New York Crystal Palace Exhibition of Industry. Otis stood on the lifting platform, four storeys above an expectant crowd. The rope was cut, and after a small jolt, the platform came to a halt. Otis stunt increased peoples confidence in elevators and sales increased. The operating principle of the safety elevator was described and illustrated in its pattern documentation of 1861. The lifting platform was suspended between two vertical posts each lined with a toothed guide rail. A hook was set into the sides of the platform to engage with the teeth, allowing movement vertically upwards but not downwards. Descent of the elevator was possible only if the hooks were pulled in, which could only happen when the rope was in tension. If the rope were to break, the tension would be lost and the hooks would spring outwards to engage the teeth and stop the fall. Modern elevators incorporate similar safety mechanisms. Otis installed the first passenger elevator in a store in New York City in 1957. Following the success of the elevator, taller buildings were constructed, and sales increased once more as the business expanded into Europe. Englands first Otis passenger elevator (or lift as the British say) appeared four years later with the opening of Londons Grosvenor Hotel. Today, the Otis Elevator Company continues to be the worlds leading manufacturer of elevators, employing over 60,000 people with markets in 200 countries. More significantly perhaps, the advent of passenger lifts marked the birth of the modern skyscraper. 
Passenger elevators were powered by steam prior to 1902. A rope carrying the cab was wound round a revolving drum driven by a steam engine. The method was too slow for a tall building, which needed a large drum to hold a long coil of rope. By the following year, Otis had developed a compact electric traction elevator that used a cable but did away with the winding gear, allowing the passenger cab to be raised over 100 storeys both quickly and efficiently. In the electric elevator, the cable was routed from the top of the passenger cab to a pulley wheel at the head of the lift shaft and then back down to a weight acting as a counterbalance. A geared-down electric motor rotated the pulley wheel, which contained a groove to grip the cable and provide the traction. Following the success of the electric elevator, skyscraper buildings began to spring up in the major cities. The Woolworths building in New York, constructed in 1913, was a significant landmark, being the worlds tallest building for the next 27 years. It had 57 floors and the Otis high-speed electric elevators could reach the top floor in a little over one minute. Each elevator used several cables and pulley wheels, though one cable was enough to support the weight of the car. As a further safety feature, an oil-filled shock piston was mounted at the base of the lift shaft to act as a buffer, slowing the car down at a safe rate in the unlikely event of every cable failing as well as the safety brake.", "hypothesis": "Only people could be hoisted with a windlass.", "label": "c"} +{"uid": "id_298", "premise": "Vertical transport A DEATH DEFYING STUNT THAT SHAPED THE SKYLINE OF THE WORLD The raising of water from a well using a bucket suspended from a rope can be traced back to ancient times. If the rope was passed over a pulley wheel it made the lifting less strenuous. The method could be improved upon by attaching an empty bucket to the opposite end of the rope, then lowering it down the well as the full bucket came up, to counterbalance the weight. Some medieval monasteries were perched on the tops of cliffs that could not be readily scaled. To overcome the problem, a basket was lowered to the base of the cliff on the end of a rope coiled round a wooden rod, known as a windlass. It was possible to lift heavy weights with a windlass, especially if a small cog wheel on the cranking handle drove a larger cog wheel on a second rod. Materials and people were hoisted in this fashion, but it was a slow process and if the rope were to break the basket plummeted to the ground. In the middle of the nineteenth century the general public considered elevators supported by a rope to be too dangerous for personal use. Without an elevator, the height of a commercial building was limited by the number of steps people could be expected to climb within an economic time period. It was the American inventor and manufacturer Elisha Graves Otis (181161) who finally solved the problem of passenger elevators. In 1852, Otis pioneered the idea of a safety brake, and two years later he demonstrated it in spectacular fashion at the New York Crystal Palace Exhibition of Industry. Otis stood on the lifting platform, four storeys above an expectant crowd. The rope was cut, and after a small jolt, the platform came to a halt. Otis stunt increased peoples confidence in elevators and sales increased. The operating principle of the safety elevator was described and illustrated in its pattern documentation of 1861. 
The lifting platform was suspended between two vertical posts each lined with a toothed guide rail. A hook was set into the sides of the platform to engage with the teeth, allowing movement vertically upwards but not downwards. Descent of the elevator was possible only if the hooks were pulled in, which could only happen when the rope was in tension. If the rope were to break, the tension would be lost and the hooks would spring outwards to engage the teeth and stop the fall. Modern elevators incorporate similar safety mechanisms. Otis installed the first passenger elevator in a store in New York City in 1957. Following the success of the elevator, taller buildings were constructed, and sales increased once more as the business expanded into Europe. Englands first Otis passenger elevator (or lift as the British say) appeared four years later with the opening of Londons Grosvenor Hotel. Today, the Otis Elevator Company continues to be the worlds leading manufacturer of elevators, employing over 60,000 people with markets in 200 countries. More significantly perhaps, the advent of passenger lifts marked the birth of the modern skyscraper. Passenger elevators were powered by steam prior to 1902. A rope carrying the cab was wound round a revolving drum driven by a steam engine. The method was too slow for a tall building, which needed a large drum to hold a long coil of rope. By the following year, Otis had developed a compact electric traction elevator that used a cable but did away with the winding gear, allowing the passenger cab to be raised over 100 storeys both quickly and efficiently. In the electric elevator, the cable was routed from the top of the passenger cab to a pulley wheel at the head of the lift shaft and then back down to a weight acting as a counterbalance. A geared-down electric motor rotated the pulley wheel, which contained a groove to grip the cable and provide the traction. Following the success of the electric elevator, skyscraper buildings began to spring up in the major cities. The Woolworths building in New York, constructed in 1913, was a significant landmark, being the worlds tallest building for the next 27 years. It had 57 floors and the Otis high-speed electric elevators could reach the top floor in a little over one minute. Each elevator used several cables and pulley wheels, though one cable was enough to support the weight of the car. As a further safety feature, an oil-filled shock piston was mounted at the base of the lift shaft to act as a buffer, slowing the car down at a safe rate in the unlikely event of every cable failing as well as the safety brake.", "hypothesis": "Electric elevators use similar principles to ancient water-wells.", "label": "e"} +{"uid": "id_299", "premise": "Vertical transport A DEATH DEFYING STUNT THAT SHAPED THE SKYLINE OF THE WORLD The raising of water from a well using a bucket suspended from a rope can be traced back to ancient times. If the rope was passed over a pulley wheel it made the lifting less strenuous. The method could be improved upon by attaching an empty bucket to the opposite end of the rope, then lowering it down the well as the full bucket came up, to counterbalance the weight. Some medieval monasteries were perched on the tops of cliffs that could not be readily scaled. To overcome the problem, a basket was lowered to the base of the cliff on the end of a rope coiled round a wooden rod, known as a windlass. 
It was possible to lift heavy weights with a windlass, especially if a small cog wheel on the cranking handle drove a larger cog wheel on a second rod. Materials and people were hoisted in this fashion, but it was a slow process and if the rope were to break the basket plummeted to the ground. In the middle of the nineteenth century the general public considered elevators supported by a rope to be too dangerous for personal use. Without an elevator, the height of a commercial building was limited by the number of steps people could be expected to climb within an economic time period. It was the American inventor and manufacturer Elisha Graves Otis (181161) who finally solved the problem of passenger elevators. In 1852, Otis pioneered the idea of a safety brake, and two years later he demonstrated it in spectacular fashion at the New York Crystal Palace Exhibition of Industry. Otis stood on the lifting platform, four storeys above an expectant crowd. The rope was cut, and after a small jolt, the platform came to a halt. Otis stunt increased peoples confidence in elevators and sales increased. The operating principle of the safety elevator was described and illustrated in its pattern documentation of 1861. The lifting platform was suspended between two vertical posts each lined with a toothed guide rail. A hook was set into the sides of the platform to engage with the teeth, allowing movement vertically upwards but not downwards. Descent of the elevator was possible only if the hooks were pulled in, which could only happen when the rope was in tension. If the rope were to break, the tension would be lost and the hooks would spring outwards to engage the teeth and stop the fall. Modern elevators incorporate similar safety mechanisms. Otis installed the first passenger elevator in a store in New York City in 1957. Following the success of the elevator, taller buildings were constructed, and sales increased once more as the business expanded into Europe. Englands first Otis passenger elevator (or lift as the British say) appeared four years later with the opening of Londons Grosvenor Hotel. Today, the Otis Elevator Company continues to be the worlds leading manufacturer of elevators, employing over 60,000 people with markets in 200 countries. More significantly perhaps, the advent of passenger lifts marked the birth of the modern skyscraper. Passenger elevators were powered by steam prior to 1902. A rope carrying the cab was wound round a revolving drum driven by a steam engine. The method was too slow for a tall building, which needed a large drum to hold a long coil of rope. By the following year, Otis had developed a compact electric traction elevator that used a cable but did away with the winding gear, allowing the passenger cab to be raised over 100 storeys both quickly and efficiently. In the electric elevator, the cable was routed from the top of the passenger cab to a pulley wheel at the head of the lift shaft and then back down to a weight acting as a counterbalance. A geared-down electric motor rotated the pulley wheel, which contained a groove to grip the cable and provide the traction. Following the success of the electric elevator, skyscraper buildings began to spring up in the major cities. The Woolworths building in New York, constructed in 1913, was a significant landmark, being the worlds tallest building for the next 27 years. It had 57 floors and the Otis high-speed electric elevators could reach the top floor in a little over one minute. 
Each elevator used several cables and pulley wheels, though one cable was enough to support the weight of the car. As a further safety feature, an oil-filled shock piston was mounted at the base of the lift shaft to act as a buffer, slowing the car down at a safe rate in the unlikely event of every cable failing as well as the safety brake.", "hypothesis": "The first passenger elevator was installed in a hotel.", "label": "c"} +{"uid": "id_300", "premise": "Vertical transport A DEATH DEFYING STUNT THAT SHAPED THE SKYLINE OF THE WORLD The raising of water from a well using a bucket suspended from a rope can be traced back to ancient times. If the rope was passed over a pulley wheel it made the lifting less strenuous. The method could be improved upon by attaching an empty bucket to the opposite end of the rope, then lowering it down the well as the full bucket came up, to counterbalance the weight. Some medieval monasteries were perched on the tops of cliffs that could not be readily scaled. To overcome the problem, a basket was lowered to the base of the cliff on the end of a rope coiled round a wooden rod, known as a windlass. It was possible to lift heavy weights with a windlass, especially if a small cog wheel on the cranking handle drove a larger cog wheel on a second rod. Materials and people were hoisted in this fashion, but it was a slow process and if the rope were to break the basket plummeted to the ground. In the middle of the nineteenth century the general public considered elevators supported by a rope to be too dangerous for personal use. Without an elevator, the height of a commercial building was limited by the number of steps people could be expected to climb within an economic time period. It was the American inventor and manufacturer Elisha Graves Otis (181161) who finally solved the problem of passenger elevators. In 1852, Otis pioneered the idea of a safety brake, and two years later he demonstrated it in spectacular fashion at the New York Crystal Palace Exhibition of Industry. Otis stood on the lifting platform, four storeys above an expectant crowd. The rope was cut, and after a small jolt, the platform came to a halt. Otis stunt increased peoples confidence in elevators and sales increased. The operating principle of the safety elevator was described and illustrated in its pattern documentation of 1861. The lifting platform was suspended between two vertical posts each lined with a toothed guide rail. A hook was set into the sides of the platform to engage with the teeth, allowing movement vertically upwards but not downwards. Descent of the elevator was possible only if the hooks were pulled in, which could only happen when the rope was in tension. If the rope were to break, the tension would be lost and the hooks would spring outwards to engage the teeth and stop the fall. Modern elevators incorporate similar safety mechanisms. Otis installed the first passenger elevator in a store in New York City in 1957. Following the success of the elevator, taller buildings were constructed, and sales increased once more as the business expanded into Europe. Englands first Otis passenger elevator (or lift as the British say) appeared four years later with the opening of Londons Grosvenor Hotel. Today, the Otis Elevator Company continues to be the worlds leading manufacturer of elevators, employing over 60,000 people with markets in 200 countries. More significantly perhaps, the advent of passenger lifts marked the birth of the modern skyscraper. 
Passenger elevators were powered by steam prior to 1902. A rope carrying the cab was wound round a revolving drum driven by a steam engine. The method was too slow for a tall building, which needed a large drum to hold a long coil of rope. By the following year, Otis had developed a compact electric traction elevator that used a cable but did away with the winding gear, allowing the passenger cab to be raised over 100 storeys both quickly and efficiently. In the electric elevator, the cable was routed from the top of the passenger cab to a pulley wheel at the head of the lift shaft and then back down to a weight acting as a counterbalance. A geared-down electric motor rotated the pulley wheel, which contained a groove to grip the cable and provide the traction. Following the success of the electric elevator, skyscraper buildings began to spring up in the major cities. The Woolworths building in New York, constructed in 1913, was a significant landmark, being the worlds tallest building for the next 27 years. It had 57 floors and the Otis high-speed electric elevators could reach the top floor in a little over one minute. Each elevator used several cables and pulley wheels, though one cable was enough to support the weight of the car. As a further safety feature, an oil-filled shock piston was mounted at the base of the lift shaft to act as a buffer, slowing the car down at a safe rate in the unlikely event of every cable failing as well as the safety brake.", "hypothesis": "Otis pattern documents contained a diagram.", "label": "e"} +{"uid": "id_301", "premise": "Vertical transport A DEATH DEFYING STUNT THAT SHAPED THE SKYLINE OF THE WORLD The raising of water from a well using a bucket suspended from a rope can be traced back to ancient times. If the rope was passed over a pulley wheel it made the lifting less strenuous. The method could be improved upon by attaching an empty bucket to the opposite end of the rope, then lowering it down the well as the full bucket came up, to counterbalance the weight. Some medieval monasteries were perched on the tops of cliffs that could not be readily scaled. To overcome the problem, a basket was lowered to the base of the cliff on the end of a rope coiled round a wooden rod, known as a windlass. It was possible to lift heavy weights with a windlass, especially if a small cog wheel on the cranking handle drove a larger cog wheel on a second rod. Materials and people were hoisted in this fashion, but it was a slow process and if the rope were to break the basket plummeted to the ground. In the middle of the nineteenth century the general public considered elevators supported by a rope to be too dangerous for personal use. Without an elevator, the height of a commercial building was limited by the number of steps people could be expected to climb within an economic time period. It was the American inventor and manufacturer Elisha Graves Otis (181161) who finally solved the problem of passenger elevators. In 1852, Otis pioneered the idea of a safety brake, and two years later he demonstrated it in spectacular fashion at the New York Crystal Palace Exhibition of Industry. Otis stood on the lifting platform, four storeys above an expectant crowd. The rope was cut, and after a small jolt, the platform came to a halt. Otis stunt increased peoples confidence in elevators and sales increased. The operating principle of the safety elevator was described and illustrated in its pattern documentation of 1861. 
The lifting platform was suspended between two vertical posts each lined with a toothed guide rail. A hook was set into the sides of the platform to engage with the teeth, allowing movement vertically upwards but not downwards. Descent of the elevator was possible only if the hooks were pulled in, which could only happen when the rope was in tension. If the rope were to break, the tension would be lost and the hooks would spring outwards to engage the teeth and stop the fall. Modern elevators incorporate similar safety mechanisms. Otis installed the first passenger elevator in a store in New York City in 1957. Following the success of the elevator, taller buildings were constructed, and sales increased once more as the business expanded into Europe. Englands first Otis passenger elevator (or lift as the British say) appeared four years later with the opening of Londons Grosvenor Hotel. Today, the Otis Elevator Company continues to be the worlds leading manufacturer of elevators, employing over 60,000 people with markets in 200 countries. More significantly perhaps, the advent of passenger lifts marked the birth of the modern skyscraper. Passenger elevators were powered by steam prior to 1902. A rope carrying the cab was wound round a revolving drum driven by a steam engine. The method was too slow for a tall building, which needed a large drum to hold a long coil of rope. By the following year, Otis had developed a compact electric traction elevator that used a cable but did away with the winding gear, allowing the passenger cab to be raised over 100 storeys both quickly and efficiently. In the electric elevator, the cable was routed from the top of the passenger cab to a pulley wheel at the head of the lift shaft and then back down to a weight acting as a counterbalance. A geared-down electric motor rotated the pulley wheel, which contained a groove to grip the cable and provide the traction. Following the success of the electric elevator, skyscraper buildings began to spring up in the major cities. The Woolworths building in New York, constructed in 1913, was a significant landmark, being the worlds tallest building for the next 27 years. It had 57 floors and the Otis high-speed electric elevators could reach the top floor in a little over one minute. Each elevator used several cables and pulley wheels, though one cable was enough to support the weight of the car. As a further safety feature, an oil-filled shock piston was mounted at the base of the lift shaft to act as a buffer, slowing the car down at a safe rate in the unlikely event of every cable failing as well as the safety brake.", "hypothesis": "Tall commercial buildings were not economic without an elevator.", "label": "e"} +{"uid": "id_302", "premise": "Video Games Unexpected Benefits to Human Brain A. James Paul Gee, professor of education at the University of Wisconsin- Madison, played his first video game years ago when his six-year-old son Sam was playing Pajama Sam: No Need to Hide When Its Dark Outside. He wanted to play the game so he could support Sams problem solving. Though Pajama Sam is not an educational game, it is replete with the types of problems psychologists study when they study thinking and learning. When he saw how well the game held Sams attention, he wondered what sort of beast a more mature video game might be. 
Video and computer games, like many other popular, entertaining and addicting kids' activities, are looked down upon by many parents as time-wasters, and worse, parents think that these games rot the brain. Violent video games are readily blamed by the media and some experts as the reason why some youth become violent or commit extreme anti-social behavior. Recent content analyses of video games show that as many as 89% of games contain some violent content, but there is no form of aggressive content for 70% of popular games. Many scientists and psychologists, like James Paul Gee, find that video games actually have many benefits - the main one being making kids smart. Video games may actually teach kids high-level thinking skills that they will need in the future. \"Video games change your brain,\" according to University of Wisconsin psychologist Shawn Green. Video games change the brain's physical structure the same way as do learning to read, playing the piano, or navigating using a map. Much like exercise can build muscle, the powerful combination of concentration and rewarding surges of neurotransmitters like dopamine, which strengthens neural circuits, can build the player's brain. Video games give your child's brain a real workout. In many video games, the skills required to win involve abstract and high level thinking. These skills are not even taught at school. Some of the mental skills trained by video games include: following instructions, problem solving, logic, hand-eye coordination, fine motor and spatial skills. Research also suggests that people can learn iconic, spatial, and visual attention skills from video games. There have even been studies with adults showing that experience with video games is related to better surgical skills. Jacob Benjamin, doctor from Beth Israel Medical Center NY, found a direct link between skill at video gaming and skill at keyhole or laparoscopic surgery. Also, a reason given by experts as to why fighter pilots of today are more skillful is that this generation's pilots are being weaned on video games. The players learn to manage resources that are limited, and decide the best use of resources, the same way as in real life. In strategy games, for instance, while developing a city, an unexpected surprise like an enemy might emerge. This forces the player to be flexible and quickly change tactics. Sometimes the player does this almost every second of the game, giving the brain a real workout. According to researchers at the University of Rochester, led by Daphne Bavelier, a cognitive scientist, games simulating stressful events such as those found in battle or action games could be a training tool for real-world situations. The study suggests that playing action video games primes the brain to make quick decisions. Video games can be used to train soldiers and surgeons, according to the study. Steven Johnson, author of Everything Bad is Good For You: How Today's Popular Culture, says gamers must deal with immediate problems while keeping their long-term goals on their horizon. Young gamers force themselves to read to get instructions, follow storylines of games, and get information from the game texts. James Paul Gee, professor of education at the University of Wisconsin-Madison, says that playing a video game is similar to working through a science problem. Like students in a laboratory, gamers must come up with a hypothesis. For example, players in some games constantly try out combinations of weapons and powers to use to defeat an enemy.
If one does not work, they change hypothesis and try the next one. Video games are goal-driven experiences, says Gee, which are fundamental to learning. Also, using math skills is important to win in many games that involve quantitative analysis like managing resources. In higher levels of a game, players usually fail the first time around, but they keep on trying until they succeed and move on to the next level. Many games are played online and involve cooperation with other online players in order to win. Video and computer games also help children gain self-confidence and many games are based on history, city building, and governance and so on. Such games indirectly teach children about aspects of life on earth. H. In an upcoming study in the journal Current Biology, authors Daphne Bavelier, Alexandre Pouget, and C. Shawn Green report that video games could provide a potent training regimen for speeding up reactions in many types of real-life situations. The researchers tested dozens of 18-to 25-year-olds who were not ordinarily video game players. They split the subjects into two groups. One group played 50 hours of the fast-paced action video games \"Call of Duty 2\" and \"Unreal Tournament, \" and the other group played 50 hours of the slow- moving strategy game \"The Sims 2. \" After this training period, all of the subjects were asked to make quick decisions in several tasks designed by the researchers. The action game players were up to 25 percent faster at coming to a conclusion and answered just as many questions correctly as their strategy game playing peers.", "hypothesis": "The action game players minimized the percentage of making mistakes in the experiment.", "label": "c"} +{"uid": "id_303", "premise": "Video Games Unexpected Benefits to Human Brain A. James Paul Gee, professor of education at the University of Wisconsin- Madison, played his first video game years ago when his six-year-old son Sam was playing Pajama Sam: No Need to Hide When Its Dark Outside. He wanted to play the game so he could support Sams problem solving. Though Pajama Sam is not an educational game, it is replete with the types of problems psychologists study when they study thinking and learning. When he saw how well the game held Sams attention, he wondered what sort of beast a more mature video game might be. Video and computer games, like many other popular, entertaining and addicting kids activities, are looked down upon by many parents as time- wasters, and worse, parents think that these games rot the brain. Violent video games are readily blamed by the media and some experts as the reason why some youth become violent or commit extreme anti-social behavior. Recent content analyses of video games show that as many as 89% of games contain some violent content, but there is no form of aggressive content for 70% of popular games. Many scientists and psychologists, like James Paul Gee, find thatvideo games actually have many benefits - the main one being making kids smart. Video games may actually teach kids high-level thinking skills that they will need in the future. \"Video games change your brain, \" according to University of Wisconsin psychologist Shawn Green. Video games change the brains physical structure the same way as do learning to read, playing the piano, or navigating using a map. Much like exercise can build muscle, the powerful combination of concentration and rewarding surges of neurotransmitters like dopamine, which strengthens neural circuits, can build the players brain. 
Video games give your child's brain a real workout. In many video games, the skills required to win involve abstract and high level thinking. These skills are not even taught at school. Some of the mental skills trained by video games include: following instructions, problem solving, logic, hand-eye coordination, fine motor and spatial skills. Research also suggests that people can learn iconic, spatial, and visual attention skills from video games. There have even been studies with adults showing that experience with video games is related to better surgical skills. Jacob Benjamin, doctor from Beth Israel Medical Center NY, found a direct link between skill at video gaming and skill at keyhole or laparoscopic surgery. Also, a reason given by experts as to why fighter pilots of today are more skillful is that this generation's pilots are being weaned on video games. The players learn to manage resources that are limited, and decide the best use of resources, the same way as in real life. In strategy games, for instance, while developing a city, an unexpected surprise like an enemy might emerge. This forces the player to be flexible and quickly change tactics. Sometimes the player does this almost every second of the game, giving the brain a real workout. According to researchers at the University of Rochester, led by Daphne Bavelier, a cognitive scientist, games simulating stressful events such as those found in battle or action games could be a training tool for real-world situations. The study suggests that playing action video games primes the brain to make quick decisions. Video games can be used to train soldiers and surgeons, according to the study. Steven Johnson, author of Everything Bad is Good For You: How Today's Popular Culture, says gamers must deal with immediate problems while keeping their long-term goals on their horizon. Young gamers force themselves to read to get instructions, follow storylines of games, and get information from the game texts. James Paul Gee, professor of education at the University of Wisconsin-Madison, says that playing a video game is similar to working through a science problem. Like students in a laboratory, gamers must come up with a hypothesis. For example, players in some games constantly try out combinations of weapons and powers to use to defeat an enemy. If one does not work, they change hypothesis and try the next one. Video games are goal-driven experiences, says Gee, which are fundamental to learning. Also, using math skills is important to win in many games that involve quantitative analysis like managing resources. In higher levels of a game, players usually fail the first time around, but they keep on trying until they succeed and move on to the next level. Many games are played online and involve cooperation with other online players in order to win. Video and computer games also help children gain self-confidence and many games are based on history, city building, and governance and so on. Such games indirectly teach children about aspects of life on earth. H. In an upcoming study in the journal Current Biology, authors Daphne Bavelier, Alexandre Pouget, and C. Shawn Green report that video games could provide a potent training regimen for speeding up reactions in many types of real-life situations. The researchers tested dozens of 18- to 25-year-olds who were not ordinarily video game players. They split the subjects into two groups.
One group played 50 hours of the fast-paced action video games \"Call of Duty 2\" and \"Unreal Tournament,\" and the other group played 50 hours of the slow-moving strategy game \"The Sims 2.\" After this training period, all of the subjects were asked to make quick decisions in several tasks designed by the researchers. The action game players were up to 25 percent faster at coming to a conclusion and answered just as many questions correctly as their strategy game playing peers.", "hypothesis": "It would be a good idea for schools to apply video games in their classrooms.", "label": "n"} +{"uid": "id_304", "premise": "Video Games Unexpected Benefits to Human Brain A. James Paul Gee, professor of education at the University of Wisconsin-Madison, played his first video game years ago when his six-year-old son Sam was playing Pajama Sam: No Need to Hide When It's Dark Outside. He wanted to play the game so he could support Sam's problem solving. Though Pajama Sam is not an educational game, it is replete with the types of problems psychologists study when they study thinking and learning. When he saw how well the game held Sam's attention, he wondered what sort of beast a more mature video game might be. Video and computer games, like many other popular, entertaining and addicting kids' activities, are looked down upon by many parents as time-wasters, and worse, parents think that these games rot the brain. Violent video games are readily blamed by the media and some experts as the reason why some youth become violent or commit extreme anti-social behavior. Recent content analyses of video games show that as many as 89% of games contain some violent content, but there is no form of aggressive content for 70% of popular games. Many scientists and psychologists, like James Paul Gee, find that video games actually have many benefits - the main one being making kids smart. Video games may actually teach kids high-level thinking skills that they will need in the future. \"Video games change your brain,\" according to University of Wisconsin psychologist Shawn Green. Video games change the brain's physical structure the same way as do learning to read, playing the piano, or navigating using a map. Much like exercise can build muscle, the powerful combination of concentration and rewarding surges of neurotransmitters like dopamine, which strengthens neural circuits, can build the player's brain. Video games give your child's brain a real workout. In many video games, the skills required to win involve abstract and high level thinking. These skills are not even taught at school. Some of the mental skills trained by video games include: following instructions, problem solving, logic, hand-eye coordination, fine motor and spatial skills. Research also suggests that people can learn iconic, spatial, and visual attention skills from video games. There have even been studies with adults showing that experience with video games is related to better surgical skills. Jacob Benjamin, doctor from Beth Israel Medical Center NY, found a direct link between skill at video gaming and skill at keyhole or laparoscopic surgery. Also, a reason given by experts as to why fighter pilots of today are more skillful is that this generation's pilots are being weaned on video games. The players learn to manage resources that are limited, and decide the best use of resources, the same way as in real life. In strategy games, for instance, while developing a city, an unexpected surprise like an enemy might emerge.
This forces the player to be flexible and quickly change tactics. Sometimes the player does this almost every second of the game, giving the brain a real workout. According to researchers at the University of Rochester, led by Daphne Bavelier, a cognitive scientist, games simulating stressful events such as those found in battle or action games could be a training tool for real-world situations. The study suggests that playing action video games primes the brain to make quick decisions. Video games can be used to train soldiers and surgeons, according to the study. Steven Johnson, author of Everything Bad is Good For You: How Today's Popular Culture, says gamers must deal with immediate problems while keeping their long-term goals on their horizon. Young gamers force themselves to read to get instructions, follow storylines of games, and get information from the game texts. James Paul Gee, professor of education at the University of Wisconsin-Madison, says that playing a video game is similar to working through a science problem. Like students in a laboratory, gamers must come up with a hypothesis. For example, players in some games constantly try out combinations of weapons and powers to use to defeat an enemy. If one does not work, they change hypothesis and try the next one. Video games are goal-driven experiences, says Gee, which are fundamental to learning. Also, using math skills is important to win in many games that involve quantitative analysis like managing resources. In higher levels of a game, players usually fail the first time around, but they keep on trying until they succeed and move on to the next level. Many games are played online and involve cooperation with other online players in order to win. Video and computer games also help children gain self-confidence and many games are based on history, city building, and governance and so on. Such games indirectly teach children about aspects of life on earth. H. In an upcoming study in the journal Current Biology, authors Daphne Bavelier, Alexandre Pouget, and C. Shawn Green report that video games could provide a potent training regimen for speeding up reactions in many types of real-life situations. The researchers tested dozens of 18- to 25-year-olds who were not ordinarily video game players. They split the subjects into two groups. One group played 50 hours of the fast-paced action video games \"Call of Duty 2\" and \"Unreal Tournament,\" and the other group played 50 hours of the slow-moving strategy game \"The Sims 2.\" After this training period, all of the subjects were asked to make quick decisions in several tasks designed by the researchers. The action game players were up to 25 percent faster at coming to a conclusion and answered just as many questions correctly as their strategy game playing peers.", "hypothesis": "Most video games are popular because of their violent content.", "label": "n"} +{"uid": "id_305", "premise": "Video Games Unexpected Benefits to Human Brain A. James Paul Gee, professor of education at the University of Wisconsin-Madison, played his first video game years ago when his six-year-old son Sam was playing Pajama Sam: No Need to Hide When It's Dark Outside. He wanted to play the game so he could support Sam's problem solving. Though Pajama Sam is not an educational game, it is replete with the types of problems psychologists study when they study thinking and learning. When he saw how well the game held Sam's attention, he wondered what sort of beast a more mature video game might be.
Video and computer games, like many other popular, entertaining and addicting kids' activities, are looked down upon by many parents as time-wasters, and worse, parents think that these games rot the brain. Violent video games are readily blamed by the media and some experts as the reason why some youth become violent or commit extreme anti-social behavior. Recent content analyses of video games show that as many as 89% of games contain some violent content, but there is no form of aggressive content for 70% of popular games. Many scientists and psychologists, like James Paul Gee, find that video games actually have many benefits - the main one being making kids smart. Video games may actually teach kids high-level thinking skills that they will need in the future. \"Video games change your brain,\" according to University of Wisconsin psychologist Shawn Green. Video games change the brain's physical structure the same way as do learning to read, playing the piano, or navigating using a map. Much like exercise can build muscle, the powerful combination of concentration and rewarding surges of neurotransmitters like dopamine, which strengthens neural circuits, can build the player's brain. Video games give your child's brain a real workout. In many video games, the skills required to win involve abstract and high level thinking. These skills are not even taught at school. Some of the mental skills trained by video games include: following instructions, problem solving, logic, hand-eye coordination, fine motor and spatial skills. Research also suggests that people can learn iconic, spatial, and visual attention skills from video games. There have even been studies with adults showing that experience with video games is related to better surgical skills. Jacob Benjamin, doctor from Beth Israel Medical Center NY, found a direct link between skill at video gaming and skill at keyhole or laparoscopic surgery. Also, a reason given by experts as to why fighter pilots of today are more skillful is that this generation's pilots are being weaned on video games. The players learn to manage resources that are limited, and decide the best use of resources, the same way as in real life. In strategy games, for instance, while developing a city, an unexpected surprise like an enemy might emerge. This forces the player to be flexible and quickly change tactics. Sometimes the player does this almost every second of the game, giving the brain a real workout. According to researchers at the University of Rochester, led by Daphne Bavelier, a cognitive scientist, games simulating stressful events such as those found in battle or action games could be a training tool for real-world situations. The study suggests that playing action video games primes the brain to make quick decisions. Video games can be used to train soldiers and surgeons, according to the study. Steven Johnson, author of Everything Bad is Good For You: How Today's Popular Culture, says gamers must deal with immediate problems while keeping their long-term goals on their horizon. Young gamers force themselves to read to get instructions, follow storylines of games, and get information from the game texts. James Paul Gee, professor of education at the University of Wisconsin-Madison, says that playing a video game is similar to working through a science problem. Like students in a laboratory, gamers must come up with a hypothesis. For example, players in some games constantly try out combinations of weapons and powers to use to defeat an enemy.
If one does not work, they change hypothesis and try the next one. Video games are goal-driven experiences, says Gee, which are fundamental to learning. Also, using math skills is important to win in many games that involve quantitative analysis like managing resources. In higher levels of a game, players usually fail the first time around, but they keep on trying until they succeed and move on to the next level. Many games are played online and involve cooperation with other online players in order to win. Video and computer games also help children gain self-confidence and many games are based on history, city building, and governance and so on. Such games indirectly teach children about aspects of life on earth. H. In an upcoming study in the journal Current Biology, authors Daphne Bavelier, Alexandre Pouget, and C. Shawn Green report that video games could provide a potent training regimen for speeding up reactions in many types of real-life situations. The researchers tested dozens of 18-to 25-year-olds who were not ordinarily video game players. They split the subjects into two groups. One group played 50 hours of the fast-paced action video games \"Call of Duty 2\" and \"Unreal Tournament, \" and the other group played 50 hours of the slow- moving strategy game \"The Sims 2. \" After this training period, all of the subjects were asked to make quick decisions in several tasks designed by the researchers. The action game players were up to 25 percent faster at coming to a conclusion and answered just as many questions correctly as their strategy game playing peers.", "hypothesis": "Those people who are addicted to video games have lots of dopamine in their brains.", "label": "e"} +{"uid": "id_306", "premise": "Video game research Although video games were first developed for adults, they are no longer exclusively reserved for the grown ups in the home. In 2006, Rideout and Hamel reported that as many as 29 percent of preschool children (children between two and six years old) in the United States had played console video games, and 18 percent had played hand-held ones. Given young childrens insatiable eagerness to learn, coupled with the fact that they are clearly surrounded by these media, we predict that preschoolers will both continue and increasingly begin to adopt video games for personal enjoyment. Although the majority of gaming equipment is still designed for a much older target audience, once a game system enters the household it is potentially available for all family members, including the youngest. Portable systems have done a particularly good job of penetrating the younger market. Research in the video game market is typically done at two stages: some time close to the end of the product cycle, in order to get feedback from consumers, so that a marketing strategy can be developed; and at the very end of the product cycle to fix bugs in the game. While both of those types of research are important, and may be appropriate for dealing with adult consumers, neither of them aids in designing better games, especially when it comes to designing for an audience that may have particular needs, such as preschoolers or senior citizens. Instead, exploratory and formative research has to be undertaken in order to truly understand those audiences, their abilities, their perspective, and their needs. 
In the spring of 2007, our preschool-game production team at Nickelodeon had a hunch that the Nintendo DS with its new features, such as the microphone, small size and portability, and its relatively low price point was a ripe gaming platform for preschoolers. There were a few games on the market at the time which had characters that appealed to the younger set, but our game producers did not think that the game mechanics or design were appropriate for preschoolers. What exactly preschoolers could do with the system, however, was a bit of a mystery. So we set about doing a study to answer the query: What could we expect preschoolers to be capable of in the context of hand-held game play, and how might the child development literature inform us as we proceeded with the creation of a new outlet for this age group? Our context in this case was the United States, although the games that resulted were also released in other regions, due to the broad international reach of the characters. In order to design the best possible DS product for a preschool audience we were fully committed to the ideals of a user-centered approach, which assumes that users will be at least considered, but ideally consulted during the development process. After all, when it comes to introducing a new interactive product to the child market, and particularly such a young age group within it, we believe it is crucial to assess the range of physical and cognitive abilities associated with their specific developmental stage. Revelle and Medoff (2002) review some of the basic reasons why home entertainment systems, computers, and other electronic gaming devices, are often difficult for preschoolers to use. In addition to their still developing motor skills (which make manipulating a controller with small buttons difficult), many of the major stumbling blocks are cognitive. Though preschoolers are learning to think symbolically, and understand that pictures can stand for real-life objects, the vast majority are still unable to read and write. Thus, using text-based menu selections is not viable. Mapping is yet another obstacle since preschoolers may be unable to understand that there is a direct link between how the controller is used and the activities that appear before them on screen. Though this aspect is changing, in traditional mapping systems real life movements do not usually translate into game-based activity. Over the course of our study, we gained many insights into how preschoolers interact with various platforms, including the DS. For instance, all instructions for preschoolers need to be in voice-over, and include visual representations, and this has been one of the most difficult areas for us to negotiate with respect to game design on the DS. Because the game cartridges have very limited memory capacity, particularly in comparison to console or computer games, the ability to capture large amounts of voice-over data via sound files or visual representations of instructions becomes limited. Text instructions take up minimal memory, so they are preferable from a technological perspective. Figuring out ways to maximise sound and graphics files, while retaining the clear visual and verbal cues that we know are critical for our youngest players, is a constant give and take. Another of our findings indicated that preschoolers may use either a stylus, or their fingers, or both although they are not very accurate with either. 
One of the very interesting aspects of the DS is that the interface, which is designed to respond to stylus interactions, can also effectively be used with the tip of the finger. This is particularly noteworthy in the context of preschoolers for two reasons. Firstly, as they have trouble with fine motor skills and their hand-eye coordination is still in development, they are less exact with their stylus movements; and secondly, their fingers are so small that they mimic the stylus very effectively, and therefore by using their fingers they can often be more accurate in their game interactions.", "hypothesis": "Video game use amongst preschool children is higher in the US than in other countries.", "label": "n"} +{"uid": "id_307", "premise": "Video game research Although video games were first developed for adults, they are no longer exclusively reserved for the grown ups in the home. In 2006, Rideout and Hamel reported that as many as 29 percent of preschool children (children between two and six years old) in the United States had played console video games, and 18 percent had played hand-held ones. Given young childrens insatiable eagerness to learn, coupled with the fact that they are clearly surrounded by these media, we predict that preschoolers will both continue and increasingly begin to adopt video games for personal enjoyment. Although the majority of gaming equipment is still designed for a much older target audience, once a game system enters the household it is potentially available for all family members, including the youngest. Portable systems have done a particularly good job of penetrating the younger market. Research in the video game market is typically done at two stages: some time close to the end of the product cycle, in order to get feedback from consumers, so that a marketing strategy can be developed; and at the very end of the product cycle to fix bugs in the game. While both of those types of research are important, and may be appropriate for dealing with adult consumers, neither of them aids in designing better games, especially when it comes to designing for an audience that may have particular needs, such as preschoolers or senior citizens. Instead, exploratory and formative research has to be undertaken in order to truly understand those audiences, their abilities, their perspective, and their needs. In the spring of 2007, our preschool-game production team at Nickelodeon had a hunch that the Nintendo DS with its new features, such as the microphone, small size and portability, and its relatively low price point was a ripe gaming platform for preschoolers. There were a few games on the market at the time which had characters that appealed to the younger set, but our game producers did not think that the game mechanics or design were appropriate for preschoolers. What exactly preschoolers could do with the system, however, was a bit of a mystery. So we set about doing a study to answer the query: What could we expect preschoolers to be capable of in the context of hand-held game play, and how might the child development literature inform us as we proceeded with the creation of a new outlet for this age group? Our context in this case was the United States, although the games that resulted were also released in other regions, due to the broad international reach of the characters. 
In order to design the best possible DS product for a preschool audience we were fully committed to the ideals of a user-centered approach, which assumes that users will be at least considered, but ideally consulted during the development process. After all, when it comes to introducing a new interactive product to the child market, and particularly such a young age group within it, we believe it is crucial to assess the range of physical and cognitive abilities associated with their specific developmental stage. Revelle and Medoff (2002) review some of the basic reasons why home entertainment systems, computers, and other electronic gaming devices, are often difficult for preschoolers to use. In addition to their still developing motor skills (which make manipulating a controller with small buttons difficult), many of the major stumbling blocks are cognitive. Though preschoolers are learning to think symbolically, and understand that pictures can stand for real-life objects, the vast majority are still unable to read and write. Thus, using text-based menu selections is not viable. Mapping is yet another obstacle since preschoolers may be unable to understand that there is a direct link between how the controller is used and the activities that appear before them on screen. Though this aspect is changing, in traditional mapping systems real life movements do not usually translate into game-based activity. Over the course of our study, we gained many insights into how preschoolers interact with various platforms, including the DS. For instance, all instructions for preschoolers need to be in voice-over, and include visual representations, and this has been one of the most difficult areas for us to negotiate with respect to game design on the DS. Because the game cartridges have very limited memory capacity, particularly in comparison to console or computer games, the ability to capture large amounts of voice-over data via sound files or visual representations of instructions becomes limited. Text instructions take up minimal memory, so they are preferable from a technological perspective. Figuring out ways to maximise sound and graphics files, while retaining the clear visual and verbal cues that we know are critical for our youngest players, is a constant give and take. Another of our findings indicated that preschoolers may use either a stylus, or their fingers, or both although they are not very accurate with either. One of the very interesting aspects of the DS is that the interface, which is designed to respond to stylus interactions, can also effectively be used with the tip of the finger. This is particularly noteworthy in the context of preschoolers for two reasons. Firstly, as they have trouble with fine motor skills and their hand-eye coordination is still in development, they are less exact with their stylus movements; and secondly, their fingers are so small that they mimic the stylus very effectively, and therefore by using their fingers they can often be more accurate in their game interactions.", "hypothesis": "The proportion of preschool children using video games is likely to rise.", "label": "e"} +{"uid": "id_308", "premise": "Video game research Although video games were first developed for adults, they are no longer exclusively reserved for the grown ups in the home. 
In 2006, Rideout and Hamel reported that as many as 29 percent of preschool children (children between two and six years old) in the United States had played console video games, and 18 percent had played hand-held ones. Given young childrens insatiable eagerness to learn, coupled with the fact that they are clearly surrounded by these media, we predict that preschoolers will both continue and increasingly begin to adopt video games for personal enjoyment. Although the majority of gaming equipment is still designed for a much older target audience, once a game system enters the household it is potentially available for all family members, including the youngest. Portable systems have done a particularly good job of penetrating the younger market. Research in the video game market is typically done at two stages: some time close to the end of the product cycle, in order to get feedback from consumers, so that a marketing strategy can be developed; and at the very end of the product cycle to fix bugs in the game. While both of those types of research are important, and may be appropriate for dealing with adult consumers, neither of them aids in designing better games, especially when it comes to designing for an audience that may have particular needs, such as preschoolers or senior citizens. Instead, exploratory and formative research has to be undertaken in order to truly understand those audiences, their abilities, their perspective, and their needs. In the spring of 2007, our preschool-game production team at Nickelodeon had a hunch that the Nintendo DS with its new features, such as the microphone, small size and portability, and its relatively low price point was a ripe gaming platform for preschoolers. There were a few games on the market at the time which had characters that appealed to the younger set, but our game producers did not think that the game mechanics or design were appropriate for preschoolers. What exactly preschoolers could do with the system, however, was a bit of a mystery. So we set about doing a study to answer the query: What could we expect preschoolers to be capable of in the context of hand-held game play, and how might the child development literature inform us as we proceeded with the creation of a new outlet for this age group? Our context in this case was the United States, although the games that resulted were also released in other regions, due to the broad international reach of the characters. In order to design the best possible DS product for a preschool audience we were fully committed to the ideals of a user-centered approach, which assumes that users will be at least considered, but ideally consulted during the development process. After all, when it comes to introducing a new interactive product to the child market, and particularly such a young age group within it, we believe it is crucial to assess the range of physical and cognitive abilities associated with their specific developmental stage. Revelle and Medoff (2002) review some of the basic reasons why home entertainment systems, computers, and other electronic gaming devices, are often difficult for preschoolers to use. In addition to their still developing motor skills (which make manipulating a controller with small buttons difficult), many of the major stumbling blocks are cognitive. Though preschoolers are learning to think symbolically, and understand that pictures can stand for real-life objects, the vast majority are still unable to read and write. 
Thus, using text-based menu selections is not viable. Mapping is yet another obstacle since preschoolers may be unable to understand that there is a direct link between how the controller is used and the activities that appear before them on screen. Though this aspect is changing, in traditional mapping systems real life movements do not usually translate into game-based activity. Over the course of our study, we gained many insights into how preschoolers interact with various platforms, including the DS. For instance, all instructions for preschoolers need to be in voice-over, and include visual representations, and this has been one of the most difficult areas for us to negotiate with respect to game design on the DS. Because the game cartridges have very limited memory capacity, particularly in comparison to console or computer games, the ability to capture large amounts of voice-over data via sound files or visual representations of instructions becomes limited. Text instructions take up minimal memory, so they are preferable from a technological perspective. Figuring out ways to maximise sound and graphics files, while retaining the clear visual and verbal cues that we know are critical for our youngest players, is a constant give and take. Another of our findings indicated that preschoolers may use either a stylus, or their fingers, or both although they are not very accurate with either. One of the very interesting aspects of the DS is that the interface, which is designed to respond to stylus interactions, can also effectively be used with the tip of the finger. This is particularly noteworthy in the context of preschoolers for two reasons. Firstly, as they have trouble with fine motor skills and their hand-eye coordination is still in development, they are less exact with their stylus movements; and secondly, their fingers are so small that they mimic the stylus very effectively, and therefore by using their fingers they can often be more accurate in their game interactions.", "hypothesis": "Parents in the US who own gaming equipment generally allow their children to play with it.", "label": "n"} +{"uid": "id_309", "premise": "Video game research Although video games were first developed for adults, they are no longer exclusively reserved for the grown ups in the home. In 2006, Rideout and Hamel reported that as many as 29 percent of preschool children (children between two and six years old) in the United States had played console video games, and 18 percent had played hand-held ones. Given young childrens insatiable eagerness to learn, coupled with the fact that they are clearly surrounded by these media, we predict that preschoolers will both continue and increasingly begin to adopt video games for personal enjoyment. Although the majority of gaming equipment is still designed for a much older target audience, once a game system enters the household it is potentially available for all family members, including the youngest. Portable systems have done a particularly good job of penetrating the younger market. Research in the video game market is typically done at two stages: some time close to the end of the product cycle, in order to get feedback from consumers, so that a marketing strategy can be developed; and at the very end of the product cycle to fix bugs in the game. 
While both of those types of research are important, and may be appropriate for dealing with adult consumers, neither of them aids in designing better games, especially when it comes to designing for an audience that may have particular needs, such as preschoolers or senior citizens. Instead, exploratory and formative research has to be undertaken in order to truly understand those audiences, their abilities, their perspective, and their needs. In the spring of 2007, our preschool-game production team at Nickelodeon had a hunch that the Nintendo DS with its new features, such as the microphone, small size and portability, and its relatively low price point was a ripe gaming platform for preschoolers. There were a few games on the market at the time which had characters that appealed to the younger set, but our game producers did not think that the game mechanics or design were appropriate for preschoolers. What exactly preschoolers could do with the system, however, was a bit of a mystery. So we set about doing a study to answer the query: What could we expect preschoolers to be capable of in the context of hand-held game play, and how might the child development literature inform us as we proceeded with the creation of a new outlet for this age group? Our context in this case was the United States, although the games that resulted were also released in other regions, due to the broad international reach of the characters. In order to design the best possible DS product for a preschool audience we were fully committed to the ideals of a user-centered approach, which assumes that users will be at least considered, but ideally consulted during the development process. After all, when it comes to introducing a new interactive product to the child market, and particularly such a young age group within it, we believe it is crucial to assess the range of physical and cognitive abilities associated with their specific developmental stage. Revelle and Medoff (2002) review some of the basic reasons why home entertainment systems, computers, and other electronic gaming devices, are often difficult for preschoolers to use. In addition to their still developing motor skills (which make manipulating a controller with small buttons difficult), many of the major stumbling blocks are cognitive. Though preschoolers are learning to think symbolically, and understand that pictures can stand for real-life objects, the vast majority are still unable to read and write. Thus, using text-based menu selections is not viable. Mapping is yet another obstacle since preschoolers may be unable to understand that there is a direct link between how the controller is used and the activities that appear before them on screen. Though this aspect is changing, in traditional mapping systems real life movements do not usually translate into game-based activity. Over the course of our study, we gained many insights into how preschoolers interact with various platforms, including the DS. For instance, all instructions for preschoolers need to be in voice-over, and include visual representations, and this has been one of the most difficult areas for us to negotiate with respect to game design on the DS. Because the game cartridges have very limited memory capacity, particularly in comparison to console or computer games, the ability to capture large amounts of voice-over data via sound files or visual representations of instructions becomes limited. 
Text instructions take up minimal memory, so they are preferable from a technological perspective. Figuring out ways to maximise sound and graphics files, while retaining the clear visual and verbal cues that we know are critical for our youngest players, is a constant give and take. Another of our findings indicated that preschoolers may use either a stylus, or their fingers, or both although they are not very accurate with either. One of the very interesting aspects of the DS is that the interface, which is designed to respond to stylus interactions, can also effectively be used with the tip of the finger. This is particularly noteworthy in the context of preschoolers for two reasons. Firstly, as they have trouble with fine motor skills and their hand-eye coordination is still in development, they are less exact with their stylus movements; and secondly, their fingers are so small that they mimic the stylus very effectively, and therefore by using their fingers they can often be more accurate in their game interactions.", "hypothesis": "The type of research which manufacturers usually do is aimed at improving game design.", "label": "c"} +{"uid": "id_310", "premise": "Video game research Although video games were first developed for adults, they are no longer exclusively reserved for the grown ups in the home. In 2006, Rideout and Hamel reported that as many as 29 percent of preschool children (children between two and six years old) in the United States had played console video games, and 18 percent had played hand-held ones. Given young childrens insatiable eagerness to learn, coupled with the fact that they are clearly surrounded by these media, we predict that preschoolers will both continue and increasingly begin to adopt video games for personal enjoyment. Although the majority of gaming equipment is still designed for a much older target audience, once a game system enters the household it is potentially available for all family members, including the youngest. Portable systems have done a particularly good job of penetrating the younger market. Research in the video game market is typically done at two stages: some time close to the end of the product cycle, in order to get feedback from consumers, so that a marketing strategy can be developed; and at the very end of the product cycle to fix bugs in the game. While both of those types of research are important, and may be appropriate for dealing with adult consumers, neither of them aids in designing better games, especially when it comes to designing for an audience that may have particular needs, such as preschoolers or senior citizens. Instead, exploratory and formative research has to be undertaken in order to truly understand those audiences, their abilities, their perspective, and their needs. In the spring of 2007, our preschool-game production team at Nickelodeon had a hunch that the Nintendo DS with its new features, such as the microphone, small size and portability, and its relatively low price point was a ripe gaming platform for preschoolers. There were a few games on the market at the time which had characters that appealed to the younger set, but our game producers did not think that the game mechanics or design were appropriate for preschoolers. What exactly preschoolers could do with the system, however, was a bit of a mystery. 
So we set about doing a study to answer the query: What could we expect preschoolers to be capable of in the context of hand-held game play, and how might the child development literature inform us as we proceeded with the creation of a new outlet for this age group? Our context in this case was the United States, although the games that resulted were also released in other regions, due to the broad international reach of the characters. In order to design the best possible DS product for a preschool audience we were fully committed to the ideals of a user-centered approach, which assumes that users will be at least considered, but ideally consulted during the development process. After all, when it comes to introducing a new interactive product to the child market, and particularly such a young age group within it, we believe it is crucial to assess the range of physical and cognitive abilities associated with their specific developmental stage. Revelle and Medoff (2002) review some of the basic reasons why home entertainment systems, computers, and other electronic gaming devices, are often difficult for preschoolers to use. In addition to their still developing motor skills (which make manipulating a controller with small buttons difficult), many of the major stumbling blocks are cognitive. Though preschoolers are learning to think symbolically, and understand that pictures can stand for real-life objects, the vast majority are still unable to read and write. Thus, using text-based menu selections is not viable. Mapping is yet another obstacle since preschoolers may be unable to understand that there is a direct link between how the controller is used and the activities that appear before them on screen. Though this aspect is changing, in traditional mapping systems real life movements do not usually translate into game-based activity. Over the course of our study, we gained many insights into how preschoolers interact with various platforms, including the DS. For instance, all instructions for preschoolers need to be in voice-over, and include visual representations, and this has been one of the most difficult areas for us to negotiate with respect to game design on the DS. Because the game cartridges have very limited memory capacity, particularly in comparison to console or computer games, the ability to capture large amounts of voice-over data via sound files or visual representations of instructions becomes limited. Text instructions take up minimal memory, so they are preferable from a technological perspective. Figuring out ways to maximise sound and graphics files, while retaining the clear visual and verbal cues that we know are critical for our youngest players, is a constant give and take. Another of our findings indicated that preschoolers may use either a stylus, or their fingers, or both although they are not very accurate with either. One of the very interesting aspects of the DS is that the interface, which is designed to respond to stylus interactions, can also effectively be used with the tip of the finger. This is particularly noteworthy in the context of preschoolers for two reasons. 
Firstly, as they have trouble with fine motor skills and their hand-eye coordination is still in development, they are less exact with their stylus movements; and secondly, their fingers are so small that they mimic the stylus very effectively, and therefore by using their fingers they can often be more accurate in their game interactions.", "hypothesis": "Both old and young games consumers require research which is specifically targeted", "label": "e"} +{"uid": "id_311", "premise": "Vincent has a paper route Each morning he delivers 37 newspapers to customers in his neighborhood. It takes Vincent 50 minutes to deliver all the papers. If Vincent is sick or has other plans, his friend Thomas, who lives on the same street, will sometimes deliver the papers for him.", "hypothesis": "It is dark outside when Vincent begins his deliveries.", "label": "n"} +{"uid": "id_312", "premise": "Vincent has a paper route Each morning he delivers 37 newspapers to customers in his neighborhood. It takes Vincent 50 minutes to deliver all the papers. If Vincent is sick or has other plans, his friend Thomas, who lives on the same street, will sometimes deliver the papers for him.", "hypothesis": "Vincent and Thomas live in the same neighborhood.", "label": "e"} +{"uid": "id_313", "premise": "Vincent has a paper route Each morning he delivers 37 newspapers to customers in his neighborhood. It takes Vincent 50 minutes to deliver all the papers. If Vincent is sick or has other plans, his friend Thomas, who lives on the same street, will sometimes deliver the papers for him.", "hypothesis": "It takes Thomas more than 50 minutes to deliver the papers.", "label": "n"} +{"uid": "id_314", "premise": "Vincent has a paper route Each morning he delivers 37 newspapers to customers in his neighborhood. It takes Vincent 50 minutes to deliver all the papers. If Vincent is sick or has other plans, his friend Thomas, who lives on the same street, will sometimes deliver the papers for him.", "hypothesis": "Thomas would like to have his own paper route.", "label": "n"} +{"uid": "id_315", "premise": "Vitamins To supplement or not? Mineral, vitamin, and antioxidant health supplements make up a multi-billion-dollar industry in the United States alone, but do they really work? Evidence suggests supplementation is clearly indicated in special circumstances, but can actually be harmful in others. For the general population, however, supplements have negligible or no impact on the prevention of common cancers, cardiovascular diseases, cognitive decline, mortality, or any other major indicators of health. In pursuit of a longer, happier and healthier life, there are certainly better investments for most people than a tube of vitamin supplements. Particular sub-groups of the population can gain a proven benefit from supplementation. Folic acid has long been indicated as a prenatal supplement due to its assistance in foetal cell division and corresponding ability to prevent neural tube birth defects. Since Canada and the United States decided to require white flour to be fortified with folic acid, spinal birth defects have plummeted by 75%, and rates of neuroblastoma (a ravaging form of infant cancer) are now 50% lower. In countries without such fortification, or for women on low-carbohydrate diets, a prenatal multivitamin could make the crucial difference. 
The United States Department of Health and Human Services has concluded that the elderly may also benefit from extra vitamin D; calcium can help prevent bone fractures; and zinc and antioxidants can maintain vision while deflecting macular degeneration in people who would otherwise be likely to develop this affliction. There is mounting evidence, however, for many people to steer clear of multivitamins. The National Institutes of Health has noted a disturbing evidence of risk in tobacco users: beta-carotene, a common ingredient in multivitamins, was found over a six-year study to significantly contribute to higher lung cancer and mortality rates in smokers. Meanwhile, excessive vitamin A (a supplement often taken to boost the immune system) has been proven to increase womens risk of a hip fracture, and vitamin E, thought to improve cardiovascular health, was contraindicated in a study that demonstrated higher rates of congestive heart failure among such vitamin users. Antioxidant supplementation has no purpose nor does it achieve anything, according to the Food and Nutrition Board of the National Academy of Sciences, and the Medical Letter Group has gone further in suggesting they may interfere with treatment and promote some cancers. Antioxidants are generally regarded as counteracting the destructive effect of free radicals in the body, but according to the Medical Letters theory, free radicals may also serve the purpose of sending a powerful signal to the bodys immune system to fix the damage. By taking supplements, we risk undermining that message and upsetting the balance of antioxidants and free radicals in the body. The supplements counteract the free radicals, the immune system is not placed on alert, and the disease could sneak through the gates. One problem with supplementation by tablet is the poor record on digestibility. These tablets are often stocked with metal-based minerals that are essentially miniature rocks, and our bodies are unable to digest them. Even the vitamin elements of these pills that are theoretically digestible are often unable to be effectively extracted by our bodies when they arrive in such a condensed form. In Salt Lake City, for example, over 150 gallons of vitamin and mineral pills are retrieved from the sewer filters each month. According to the physicians desk reference, only about 10% 20% of multivitamins are absorbed by the body. The National Advisory Board is even more damning, suggesting that every 100mg of tablet corresponds to about 8.3mg of blood concentration, although noting that this can still potentially perform a helpful role in some cases. In effect, for every $100 you spend on vitamin supplements, over $90 of that is quite literally flushed down the toilet. A final argument against multivitamins is the notion that they can lead people consciously or not to the conclusion that supplementation fills in the gaps of an unhealthy diet and mops up afterwards, leaving their bodies none the wiser that instead of preparing a breakfast of fresh fruit and muesli, they popped a tiny capsule with coffee and a chocolate bar. In a seven-year study, however, the Heart Protection study did not find any positive outcome whatsoever from multivitamins and concluded that while vitamins in the diet are important, multivitamin tablets are safe but completely useless. There is evidently no shortcut around the task of buying, preparing, and consuming fresh fruit and vegetables every day. 
Boosting, supplementing, and fortifying products alter peoples very perception of what healthy food is; instead of heading for the fresh produce aisle in the supermarket, they are likely to seek out sugary, processed foods with a handful of extra B vitamins as a healthy choice. We cannot supplement our way out of a bad diet.", "hypothesis": "Some multivitamin tablets have indigestible ingredients.", "label": "e"} +{"uid": "id_316", "premise": "Vitamins To supplement or not? Mineral, vitamin, and antioxidant health supplements make up a multi-billion-dollar industry in the United States alone, but do they really work? Evidence suggests supplementation is clearly indicated in special circumstances, but can actually be harmful in others. For the general population, however, supplements have negligible or no impact on the prevention of common cancers, cardiovascular diseases, cognitive decline, mortality, or any other major indicators of health. In pursuit of a longer, happier and healthier life, there are certainly better investments for most people than a tube of vitamin supplements. Particular sub-groups of the population can gain a proven benefit from supplementation. Folic acid has long been indicated as a prenatal supplement due to its assistance in foetal cell division and corresponding ability to prevent neural tube birth defects. Since Canada and the United States decided to require white flour to be fortified with folic acid, spinal birth defects have plummeted by 75%, and rates of neuroblastoma (a ravaging form of infant cancer) are now 50% lower. In countries without such fortification, or for women on low-carbohydrate diets, a prenatal multivitamin could make the crucial difference. The United States Department of Health and Human Services has concluded that the elderly may also benefit from extra vitamin D; calcium can help prevent bone fractures; and zinc and antioxidants can maintain vision while deflecting macular degeneration in people who would otherwise be likely to develop this affliction. There is mounting evidence, however, for many people to steer clear of multivitamins. The National Institutes of Health has noted a disturbing evidence of risk in tobacco users: beta-carotene, a common ingredient in multivitamins, was found over a six-year study to significantly contribute to higher lung cancer and mortality rates in smokers. Meanwhile, excessive vitamin A (a supplement often taken to boost the immune system) has been proven to increase womens risk of a hip fracture, and vitamin E, thought to improve cardiovascular health, was contraindicated in a study that demonstrated higher rates of congestive heart failure among such vitamin users. Antioxidant supplementation has no purpose nor does it achieve anything, according to the Food and Nutrition Board of the National Academy of Sciences, and the Medical Letter Group has gone further in suggesting they may interfere with treatment and promote some cancers. Antioxidants are generally regarded as counteracting the destructive effect of free radicals in the body, but according to the Medical Letters theory, free radicals may also serve the purpose of sending a powerful signal to the bodys immune system to fix the damage. By taking supplements, we risk undermining that message and upsetting the balance of antioxidants and free radicals in the body. The supplements counteract the free radicals, the immune system is not placed on alert, and the disease could sneak through the gates. 
One problem with supplementation by tablet is the poor record on digestibility. These tablets are often stocked with metal-based minerals that are essentially miniature rocks, and our bodies are unable to digest them. Even the vitamin elements of these pills that are theoretically digestible are often unable to be effectively extracted by our bodies when they arrive in such a condensed form. In Salt Lake City, for example, over 150 gallons of vitamin and mineral pills are retrieved from the sewer filters each month. According to the physicians desk reference, only about 10% 20% of multivitamins are absorbed by the body. The National Advisory Board is even more damning, suggesting that every 100mg of tablet corresponds to about 8.3mg of blood concentration, although noting that this can still potentially perform a helpful role in some cases. In effect, for every $100 you spend on vitamin supplements, over $90 of that is quite literally flushed down the toilet. A final argument against multivitamins is the notion that they can lead people consciously or not to the conclusion that supplementation fills in the gaps of an unhealthy diet and mops up afterwards, leaving their bodies none the wiser that instead of preparing a breakfast of fresh fruit and muesli, they popped a tiny capsule with coffee and a chocolate bar. In a seven-year study, however, the Heart Protection study did not find any positive outcome whatsoever from multivitamins and concluded that while vitamins in the diet are important, multivitamin tablets are safe but completely useless. There is evidently no shortcut around the task of buying, preparing, and consuming fresh fruit and vegetables every day. Boosting, supplementing, and fortifying products alter peoples very perception of what healthy food is; instead of heading for the fresh produce aisle in the supermarket, they are likely to seek out sugary, processed foods with a handful of extra B vitamins as a healthy choice. We cannot supplement our way out of a bad diet.", "hypothesis": "Some individual vitamins are better absorbed than others in a tablet form.", "label": "n"} +{"uid": "id_317", "premise": "Vitamins To supplement or not? Mineral, vitamin, and antioxidant health supplements make up a multi-billion-dollar industry in the United States alone, but do they really work? Evidence suggests supplementation is clearly indicated in special circumstances, but can actually be harmful in others. For the general population, however, supplements have negligible or no impact on the prevention of common cancers, cardiovascular diseases, cognitive decline, mortality, or any other major indicators of health. In pursuit of a longer, happier and healthier life, there are certainly better investments for most people than a tube of vitamin supplements. Particular sub-groups of the population can gain a proven benefit from supplementation. Folic acid has long been indicated as a prenatal supplement due to its assistance in foetal cell division and corresponding ability to prevent neural tube birth defects. Since Canada and the United States decided to require white flour to be fortified with folic acid, spinal birth defects have plummeted by 75%, and rates of neuroblastoma (a ravaging form of infant cancer) are now 50% lower. In countries without such fortification, or for women on low-carbohydrate diets, a prenatal multivitamin could make the crucial difference. 
The United States Department of Health and Human Services has concluded that the elderly may also benefit from extra vitamin D; calcium can help prevent bone fractures; and zinc and antioxidants can maintain vision while deflecting macular degeneration in people who would otherwise be likely to develop this affliction. There is mounting evidence, however, for many people to steer clear of multivitamins. The National Institutes of Health has noted a disturbing evidence of risk in tobacco users: beta-carotene, a common ingredient in multivitamins, was found over a six-year study to significantly contribute to higher lung cancer and mortality rates in smokers. Meanwhile, excessive vitamin A (a supplement often taken to boost the immune system) has been proven to increase womens risk of a hip fracture, and vitamin E, thought to improve cardiovascular health, was contraindicated in a study that demonstrated higher rates of congestive heart failure among such vitamin users. Antioxidant supplementation has no purpose nor does it achieve anything, according to the Food and Nutrition Board of the National Academy of Sciences, and the Medical Letter Group has gone further in suggesting they may interfere with treatment and promote some cancers. Antioxidants are generally regarded as counteracting the destructive effect of free radicals in the body, but according to the Medical Letters theory, free radicals may also serve the purpose of sending a powerful signal to the bodys immune system to fix the damage. By taking supplements, we risk undermining that message and upsetting the balance of antioxidants and free radicals in the body. The supplements counteract the free radicals, the immune system is not placed on alert, and the disease could sneak through the gates. One problem with supplementation by tablet is the poor record on digestibility. These tablets are often stocked with metal-based minerals that are essentially miniature rocks, and our bodies are unable to digest them. Even the vitamin elements of these pills that are theoretically digestible are often unable to be effectively extracted by our bodies when they arrive in such a condensed form. In Salt Lake City, for example, over 150 gallons of vitamin and mineral pills are retrieved from the sewer filters each month. According to the physicians desk reference, only about 10% 20% of multivitamins are absorbed by the body. The National Advisory Board is even more damning, suggesting that every 100mg of tablet corresponds to about 8.3mg of blood concentration, although noting that this can still potentially perform a helpful role in some cases. In effect, for every $100 you spend on vitamin supplements, over $90 of that is quite literally flushed down the toilet. A final argument against multivitamins is the notion that they can lead people consciously or not to the conclusion that supplementation fills in the gaps of an unhealthy diet and mops up afterwards, leaving their bodies none the wiser that instead of preparing a breakfast of fresh fruit and muesli, they popped a tiny capsule with coffee and a chocolate bar. In a seven-year study, however, the Heart Protection study did not find any positive outcome whatsoever from multivitamins and concluded that while vitamins in the diet are important, multivitamin tablets are safe but completely useless. There is evidently no shortcut around the task of buying, preparing, and consuming fresh fruit and vegetables every day. 
Boosting, supplementing, and fortifying products alter peoples very perception of what healthy food is; instead of heading for the fresh produce aisle in the supermarket, they are likely to seek out sugary, processed foods with a handful of extra B vitamins as a healthy choice. We cannot supplement our way out of a bad diet.", "hypothesis": "Our bodies cannot distinguish food-based from supplement-based vitamins.", "label": "n"} +{"uid": "id_318", "premise": "Vitamins To supplement or not? Mineral, vitamin, and antioxidant health supplements make up a multi-billion-dollar industry in the United States alone, but do they really work? Evidence suggests supplementation is clearly indicated in special circumstances, but can actually be harmful in others. For the general population, however, supplements have negligible or no impact on the prevention of common cancers, cardiovascular diseases, cognitive decline, mortality, or any other major indicators of health. In pursuit of a longer, happier and healthier life, there are certainly better investments for most people than a tube of vitamin supplements. Particular sub-groups of the population can gain a proven benefit from supplementation. Folic acid has long been indicated as a prenatal supplement due to its assistance in foetal cell division and corresponding ability to prevent neural tube birth defects. Since Canada and the United States decided to require white flour to be fortified with folic acid, spinal birth defects have plummeted by 75%, and rates of neuroblastoma (a ravaging form of infant cancer) are now 50% lower. In countries without such fortification, or for women on low-carbohydrate diets, a prenatal multivitamin could make the crucial difference. The United States Department of Health and Human Services has concluded that the elderly may also benefit from extra vitamin D; calcium can help prevent bone fractures; and zinc and antioxidants can maintain vision while deflecting macular degeneration in people who would otherwise be likely to develop this affliction. There is mounting evidence, however, for many people to steer clear of multivitamins. The National Institutes of Health has noted a disturbing evidence of risk in tobacco users: beta-carotene, a common ingredient in multivitamins, was found over a six-year study to significantly contribute to higher lung cancer and mortality rates in smokers. Meanwhile, excessive vitamin A (a supplement often taken to boost the immune system) has been proven to increase womens risk of a hip fracture, and vitamin E, thought to improve cardiovascular health, was contraindicated in a study that demonstrated higher rates of congestive heart failure among such vitamin users. Antioxidant supplementation has no purpose nor does it achieve anything, according to the Food and Nutrition Board of the National Academy of Sciences, and the Medical Letter Group has gone further in suggesting they may interfere with treatment and promote some cancers. Antioxidants are generally regarded as counteracting the destructive effect of free radicals in the body, but according to the Medical Letters theory, free radicals may also serve the purpose of sending a powerful signal to the bodys immune system to fix the damage. By taking supplements, we risk undermining that message and upsetting the balance of antioxidants and free radicals in the body. The supplements counteract the free radicals, the immune system is not placed on alert, and the disease could sneak through the gates. 
One problem with supplementation by tablet is the poor record on digestibility. These tablets are often stocked with metal-based minerals that are essentially miniature rocks, and our bodies are unable to digest them. Even the vitamin elements of these pills that are theoretically digestible are often unable to be effectively extracted by our bodies when they arrive in such a condensed form. In Salt Lake City, for example, over 150 gallons of vitamin and mineral pills are retrieved from the sewer filters each month. According to the physicians desk reference, only about 10% 20% of multivitamins are absorbed by the body. The National Advisory Board is even more damning, suggesting that every 100mg of tablet corresponds to about 8.3mg of blood concentration, although noting that this can still potentially perform a helpful role in some cases. In effect, for every $100 you spend on vitamin supplements, over $90 of that is quite literally flushed down the toilet. A final argument against multivitamins is the notion that they can lead people consciously or not to the conclusion that supplementation fills in the gaps of an unhealthy diet and mops up afterwards, leaving their bodies none the wiser that instead of preparing a breakfast of fresh fruit and muesli, they popped a tiny capsule with coffee and a chocolate bar. In a seven-year study, however, the Heart Protection study did not find any positive outcome whatsoever from multivitamins and concluded that while vitamins in the diet are important, multivitamin tablets are safe but completely useless. There is evidently no shortcut around the task of buying, preparing, and consuming fresh fruit and vegetables every day. Boosting, supplementing, and fortifying products alter peoples very perception of what healthy food is; instead of heading for the fresh produce aisle in the supermarket, they are likely to seek out sugary, processed foods with a handful of extra B vitamins as a healthy choice. We cannot supplement our way out of a bad diet.", "hypothesis": "Multivitamins can lead to poorer overall eating habits in a persons life.", "label": "e"} +{"uid": "id_319", "premise": "Vitamins To supplement or not? Mineral, vitamin, and antioxidant health supplements make up a multi-billion-dollar industry in the United States alone, but do they really work? Evidence suggests supplementation is clearly indicated in special circumstances, but can actually be harmful in others. For the general population, however, supplements have negligible or no impact on the prevention of common cancers, cardiovascular diseases, cognitive decline, mortality, or any other major indicators of health. In pursuit of a longer, happier and healthier life, there are certainly better investments for most people than a tube of vitamin supplements. Particular sub-groups of the population can gain a proven benefit from supplementation. Folic acid has long been indicated as a prenatal supplement due to its assistance in foetal cell division and corresponding ability to prevent neural tube birth defects. Since Canada and the United States decided to require white flour to be fortified with folic acid, spinal birth defects have plummeted by 75%, and rates of neuroblastoma (a ravaging form of infant cancer) are now 50% lower. In countries without such fortification, or for women on low-carbohydrate diets, a prenatal multivitamin could make the crucial difference. 
The United States Department of Health and Human Services has concluded that the elderly may also benefit from extra vitamin D; calcium can help prevent bone fractures; and zinc and antioxidants can maintain vision while deflecting macular degeneration in people who would otherwise be likely to develop this affliction. There is mounting evidence, however, for many people to steer clear of multivitamins. The National Institutes of Health has noted a disturbing evidence of risk in tobacco users: beta-carotene, a common ingredient in multivitamins, was found over a six-year study to significantly contribute to higher lung cancer and mortality rates in smokers. Meanwhile, excessive vitamin A (a supplement often taken to boost the immune system) has been proven to increase womens risk of a hip fracture, and vitamin E, thought to improve cardiovascular health, was contraindicated in a study that demonstrated higher rates of congestive heart failure among such vitamin users. Antioxidant supplementation has no purpose nor does it achieve anything, according to the Food and Nutrition Board of the National Academy of Sciences, and the Medical Letter Group has gone further in suggesting they may interfere with treatment and promote some cancers. Antioxidants are generally regarded as counteracting the destructive effect of free radicals in the body, but according to the Medical Letters theory, free radicals may also serve the purpose of sending a powerful signal to the bodys immune system to fix the damage. By taking supplements, we risk undermining that message and upsetting the balance of antioxidants and free radicals in the body. The supplements counteract the free radicals, the immune system is not placed on alert, and the disease could sneak through the gates. One problem with supplementation by tablet is the poor record on digestibility. These tablets are often stocked with metal-based minerals that are essentially miniature rocks, and our bodies are unable to digest them. Even the vitamin elements of these pills that are theoretically digestible are often unable to be effectively extracted by our bodies when they arrive in such a condensed form. In Salt Lake City, for example, over 150 gallons of vitamin and mineral pills are retrieved from the sewer filters each month. According to the physicians desk reference, only about 10% 20% of multivitamins are absorbed by the body. The National Advisory Board is even more damning, suggesting that every 100mg of tablet corresponds to about 8.3mg of blood concentration, although noting that this can still potentially perform a helpful role in some cases. In effect, for every $100 you spend on vitamin supplements, over $90 of that is quite literally flushed down the toilet. A final argument against multivitamins is the notion that they can lead people consciously or not to the conclusion that supplementation fills in the gaps of an unhealthy diet and mops up afterwards, leaving their bodies none the wiser that instead of preparing a breakfast of fresh fruit and muesli, they popped a tiny capsule with coffee and a chocolate bar. In a seven-year study, however, the Heart Protection study did not find any positive outcome whatsoever from multivitamins and concluded that while vitamins in the diet are important, multivitamin tablets are safe but completely useless. There is evidently no shortcut around the task of buying, preparing, and consuming fresh fruit and vegetables every day. 
Boosting, supplementing, and fortifying products alter peoples very perception of what healthy food is; instead of heading for the fresh produce aisle in the supermarket, they are likely to seek out sugary, processed foods with a handful of extra B vitamins as a healthy choice. We cannot supplement our way out of a bad diet.", "hypothesis": "People typically know that fortified processed foods are not good for them.", "label": "c"} +{"uid": "id_320", "premise": "Volunteers Thank you for volunteering to work one-on-one with some of the students at our school who need extra help. Smoking policy Smoking is prohibited by law in the classrooms and anywhere on the school grounds. Safety and Health Volunteers are responsible for their own personal safety and should notify the school of any pre-existing medical conditions. Prescription and any other medications that you normally carry with you must be handed in to the school nurse on arrival and collected on departure. If you require them, the nurse will dispense them to you in her office. Sign-in A sign-in book is located at office reception. Please sign this register every time you come to the school. This is important for insurance purposes and emergency situations. After signing the book, collect a Visitors badge from the office. This must be worn at all times when you are on school premises. Remember to return the badge afterwards. Messages Teachers will communicate with volunteers via telephone, email or messages left at the office. Always ask for messages. You may communicate with teachers in the same way the preferred method is to leave a memo in the relevant teachers pigeonhole. These can be found at the end of the corridor in the staffroom block. Work hours We understand that your time commitment is entirely voluntary and therefore flexible. If your personal schedule should change and this affects your availability, please contact the Co-ordinator for Volunteers at the school on extension 402: alternatively, you could drop in to her office situated in F block. Role of the Co-ordinator The Co-ordinator is responsible for matching volunteer tutors with students, organising tutorial rooms, ensuring student attendance and overseeing volunteer tutor training. If you encounter any problems, contact her as above.", "hypothesis": "If you forget to sign the register, you wont be insured for accidents.", "label": "n"} +{"uid": "id_321", "premise": "Volunteers Thank you for volunteering to work one-on-one with some of the students at our school who need extra help. Smoking policy Smoking is prohibited by law in the classrooms and anywhere on the school grounds. Safety and Health Volunteers are responsible for their own personal safety and should notify the school of any pre-existing medical conditions. Prescription and any other medications that you normally carry with you must be handed in to the school nurse on arrival and collected on departure. If you require them, the nurse will dispense them to you in her office. Sign-in A sign-in book is located at office reception. Please sign this register every time you come to the school. This is important for insurance purposes and emergency situations. After signing the book, collect a Visitors badge from the office. This must be worn at all times when you are on school premises. Remember to return the badge afterwards. Messages Teachers will communicate with volunteers via telephone, email or messages left at the office. Always ask for messages. 
You may communicate with teachers in the same way the preferred method is to leave a memo in the relevant teachers pigeonhole. These can be found at the end of the corridor in the staffroom block. Work hours We understand that your time commitment is entirely voluntary and therefore flexible. If your personal schedule should change and this affects your availability, please contact the Co-ordinator for Volunteers at the school on extension 402: alternatively, you could drop in to her office situated in F block. Role of the Co-ordinator The Co-ordinator is responsible for matching volunteer tutors with students, organising tutorial rooms, ensuring student attendance and overseeing volunteer tutor training. If you encounter any problems, contact her as above.", "hypothesis": "As a volunteer, you will be helping students individually.", "label": "e"} +{"uid": "id_322", "premise": "Volunteers Thank you for volunteering to work one-on-one with some of the students at our school who need extra help. Smoking policy Smoking is prohibited by law in the classrooms and anywhere on the school grounds. Safety and Health Volunteers are responsible for their own personal safety and should notify the school of any pre-existing medical conditions. Prescription and any other medications that you normally carry with you must be handed in to the school nurse on arrival and collected on departure. If you require them, the nurse will dispense them to you in her office. Sign-in A sign-in book is located at office reception. Please sign this register every time you come to the school. This is important for insurance purposes and emergency situations. After signing the book, collect a Visitors badge from the office. This must be worn at all times when you are on school premises. Remember to return the badge afterwards. Messages Teachers will communicate with volunteers via telephone, email or messages left at the office. Always ask for messages. You may communicate with teachers in the same way the preferred method is to leave a memo in the relevant teachers pigeonhole. These can be found at the end of the corridor in the staffroom block. Work hours We understand that your time commitment is entirely voluntary and therefore flexible. If your personal schedule should change and this affects your availability, please contact the Co-ordinator for Volunteers at the school on extension 402: alternatively, you could drop in to her office situated in F block. Role of the Co-ordinator The Co-ordinator is responsible for matching volunteer tutors with students, organising tutorial rooms, ensuring student attendance and overseeing volunteer tutor training. If you encounter any problems, contact her as above.", "hypothesis": "You may smoke in the playground.", "label": "c"} +{"uid": "id_323", "premise": "Volunteers Thank you for volunteering to work one-on-one with some of the students at our school who need extra help. Smoking policy Smoking is prohibited by law in the classrooms and anywhere on the school grounds. Safety and Health Volunteers are responsible for their own personal safety and should notify the school of any pre-existing medical conditions. Prescription and any other medications that you normally carry with you must be handed in to the school nurse on arrival and collected on departure. If you require them, the nurse will dispense them to you in her office. Sign-in A sign-in book is located at office reception. Please sign this register every time you come to the school. 
This is important for insurance purposes and emergency situations. After signing the book, collect a Visitors badge from the office. This must be worn at all times when you are on school premises. Remember to return the badge afterwards. Messages Teachers will communicate with volunteers via telephone, email or messages left at the office. Always ask for messages. You may communicate with teachers in the same way the preferred method is to leave a memo in the relevant teachers pigeonhole. These can be found at the end of the corridor in the staffroom block. Work hours We understand that your time commitment is entirely voluntary and therefore flexible. If your personal schedule should change and this affects your availability, please contact the Co-ordinator for Volunteers at the school on extension 402: alternatively, you could drop in to her office situated in F block. Role of the Co-ordinator The Co-ordinator is responsible for matching volunteer tutors with students, organising tutorial rooms, ensuring student attendance and overseeing volunteer tutor training. If you encounter any problems, contact her as above.", "hypothesis": "The co-ordinator keeps student attendance rolls.", "label": "n"} +{"uid": "id_324", "premise": "Volunteers Thank you for volunteering to work one-on-one with some of the students at our school who need extra help. Smoking policy Smoking is prohibited by law in the classrooms and anywhere on the school grounds. Safety and Health Volunteers are responsible for their own personal safety and should notify the school of any pre-existing medical conditions. Prescription and any other medications that you normally carry with you must be handed in to the school nurse on arrival and collected on departure. If you require them, the nurse will dispense them to you in her office. Sign-in A sign-in book is located at office reception. Please sign this register every time you come to the school. This is important for insurance purposes and emergency situations. After signing the book, collect a Visitors badge from the office. This must be worn at all times when you are on school premises. Remember to return the badge afterwards. Messages Teachers will communicate with volunteers via telephone, email or messages left at the office. Always ask for messages. You may communicate with teachers in the same way the preferred method is to leave a memo in the relevant teachers pigeonhole. These can be found at the end of the corridor in the staffroom block. Work hours We understand that your time commitment is entirely voluntary and therefore flexible. If your personal schedule should change and this affects your availability, please contact the Co-ordinator for Volunteers at the school on extension 402: alternatively, you could drop in to her office situated in F block. Role of the Co-ordinator The Co-ordinator is responsible for matching volunteer tutors with students, organising tutorial rooms, ensuring student attendance and overseeing volunteer tutor training. If you encounter any problems, contact her as above.", "hypothesis": "You cannot take any medicine while at the school.", "label": "c"} +{"uid": "id_325", "premise": "Volunteers Thank you for volunteering to work one-on-one with some of the students at our school who need extra help. Smoking policy Smoking is prohibited by law in the classrooms and anywhere on the school grounds. Safety and Health Volunteers are responsible for their own personal safety and should notify the school of any pre-existing medical conditions. 
Prescription and any other medications that you normally carry with you must be handed in to the school nurse on arrival and collected on departure. If you require them, the nurse will dispense them to you in her office. Sign-in A sign-in book is located at office reception. Please sign this register every time you come to the school. This is important for insurance purposes and emergency situations. After signing the book, collect a Visitor's badge from the office. This must be worn at all times when you are on school premises. Remember to return the badge afterwards. Messages Teachers will communicate with volunteers via telephone, email or messages left at the office. Always ask for messages. You may communicate with teachers in the same way; the preferred method is to leave a memo in the relevant teacher's pigeonhole. These can be found at the end of the corridor in the staffroom block. Work hours We understand that your time commitment is entirely voluntary and therefore flexible. If your personal schedule should change and this affects your availability, please contact the Co-ordinator for Volunteers at the school on extension 402; alternatively, you could drop in to her office situated in F block. Role of the Co-ordinator The Co-ordinator is responsible for matching volunteer tutors with students, organising tutorial rooms, ensuring student attendance and overseeing volunteer tutor training. If you encounter any problems, contact her as above.", "hypothesis": "The best way of communicating with teachers is in writing.", "label": "e"} +{"uid": "id_326", "premise": "Volunteers Thank you for volunteering to work one-on-one with some of the students at our school who need extra help. Smoking policy Smoking is prohibited by law in the classrooms and anywhere on the school grounds. Safety and Health Volunteers are responsible for their own personal safety and should notify the school of any pre-existing medical conditions. Prescription and any other medications that you normally carry with you must be handed in to the school nurse on arrival and collected on departure. If you require them, the nurse will dispense them to you in her office. Sign-in A sign-in book is located at office reception. Please sign this register every time you come to the school. This is important for insurance purposes and emergency situations. After signing the book, collect a Visitor's badge from the office. This must be worn at all times when you are on school premises. Remember to return the badge afterwards. Messages Teachers will communicate with volunteers via telephone, email or messages left at the office. Always ask for messages. You may communicate with teachers in the same way; the preferred method is to leave a memo in the relevant teacher's pigeonhole. These can be found at the end of the corridor in the staffroom block. Work hours We understand that your time commitment is entirely voluntary and therefore flexible. If your personal schedule should change and this affects your availability, please contact the Co-ordinator for Volunteers at the school on extension 402; alternatively, you could drop in to her office situated in F block. Role of the Co-ordinator The Co-ordinator is responsible for matching volunteer tutors with students, organising tutorial rooms, ensuring student attendance and overseeing volunteer tutor training. If you encounter any problems, contact her as above.", "hypothesis": "You can choose your own hours of work.", "label": "e"} +{"uid": "id_327", "premise": "Votes count.
The United Kingdom has had a full parliamentary democracy since 1928 when women were allowed to vote in general elections at age 21, the same as men. Women were first given the right to vote in 1918 after the First World War, but only if they were over the age of 30. In 1969 the voting age for men and women was reduced to 18. Today, no person can vote unless their name appears on the electoral register, and the earliest you can register is age 16. Citizens of the Commonwealth and those of the Irish Republic are eligible to vote in all public elections (general and local) as long as they are resident in the UK. British nationals who move abroad retain the right to vote in British and EU elections for a further 15 years. Some people are disenfranchised, including convicted prisoners (but not those on remand), non-UK EU citizens, Church of England archbishops and bishops, members of the House of Lords and people lacking the mental capacity to vote on polling day. However, all of the above people (convicted prisoners and those lacking mental capacity excepted) can vote in local elections, and all EU citizens can also vote in European elections, though only in one country and not two.", "hypothesis": "A 19-year-old female born in the Irish Republic is entitled to vote in a UK general election.", "label": "n"} +{"uid": "id_328", "premise": "Votes count. The United Kingdom has had a full parliamentary democracy since 1928 when women were allowed to vote in general elections at age 21, the same as men. Women were first given the right to vote in 1918 after the First World War, but only if they were over the age of 30. In 1969 the voting age for men and women was reduced to 18. Today, no person can vote unless their name appears on the electoral register, and the earliest you can register is age 16. Citizens of the Commonwealth and those of the Irish Republic are eligible to vote in all public elections (general and local) as long as they are resident in the UK. British nationals who move abroad retain the right to vote in British and EU elections for a further 15 years. Some people are disenfranchised, including convicted prisoners (but not those on remand), non-UK EU citizens, Church of England archbishops and bishops, members of the House of Lords and people lacking the mental capacity to vote on polling day. However, all of the above people (convicted prisoners and those lacking mental capacity excepted) can vote in local elections, and all EU citizens can also vote in European elections, though only in one country and not two.", "hypothesis": "Non-UK EU citizens over age 18 with mental capacity who are not prisoners are entitled to vote in a UK general election.", "label": "c"} +{"uid": "id_329", "premise": "Votes count. The United Kingdom has had a full parliamentary democracy since 1928 when women were allowed to vote in general elections at age 21, the same as men. Women were first given the right to vote in 1918 after the First World War, but only if they were over the age of 30. In 1969 the voting age for men and women was reduced to 18. Today, no person can vote unless their name appears on the electoral register, and the earliest you can register is age 16. Citizens of the Commonwealth and those of the Irish Republic are eligible to vote in all public elections (general and local) as long as they are resident in the UK. British nationals who move abroad retain the right to vote in British and EU elections for a further 15 years. 
Some people are disenfranchised, including convicted prisoners (but not those on remand), non-UK EU citizens, Church of England archbishops and bishops, members of the House of Lords and people lacking the mental capacity to vote on polling day. However, all of the above people (convicted prisoners and those lacking mental capacity excepted) can vote in local elections, and all EU citizens can also vote in European elections, though only in one country and not two.", "hypothesis": "A woman born in 1889 would not have been allowed to vote in the 1918 UK general election.", "label": "e"} +{"uid": "id_330", "premise": "Votes count. The United Kingdom has had a full parliamentary democracy since 1928 when women were allowed to vote in general elections at age 21, the same as men. Women were first given the right to vote in 1918 after the First World War, but only if they were over the age of 30. In 1969 the voting age for men and women was reduced to 18. Today, no person can vote unless their name appears on the electoral register, and the earliest you can register is age 16. Citizens of the Commonwealth and those of the Irish Republic are eligible to vote in all public elections (general and local) as long as they are resident in the UK. British nationals who move abroad retain the right to vote in British and EU elections for a further 15 years. Some people are disenfranchised, including convicted prisoners (but not those on remand), non-UK EU citizens, Church of England archbishops and bishops, members of the House of Lords and people lacking the mental capacity to vote on polling day. However, all of the above people (convicted prisoners and those lacking mental capacity excepted) can vote in local elections, and all EU citizens can also vote in European elections, though only in one country and not two.", "hypothesis": "A 55-year-old male born in Northern Ireland and domiciled in Spain five years ago is entitled to vote in a UK general election.", "label": "e"} +{"uid": "id_331", "premise": "Voyage of going: beyond the blue line. One feels a certain sympathy for Captain James Cook on the day in 1778 that he \"discovered\" Hawaii. Then on his third expedition to the Pacific, the British navigator had explored scores of islands across the breadth of the sea, from lush New Zealand to the lonely wastes of Easter Island This latest voyage had taken him thousands of miles north from the Society Islands to an archipelago so remote that even the ok! Polynesians back on Tahiti knew nothing about it. Imagine Cook's surprise, then, when the natives of Hawaii came paddling out in their canoes and greeted him in a familiar tongue, one he had heard on virtually every mote of inhabited land he had visited Marveling at the ubiquity of this Pacific language and culture, he later wondered in his journal: \"How shall we account for this Nation spreading it self so far over this Vast ocean? \" B. Answers have been slow in coming. But now a startling archaeological find on the island of Efate, in the Pacific nation of Vanuatu, has revealed an ancient seafaring people, the distant ancestors of today's Polynesians, taking their first steps into the unknown. The discoveries there have also opened a window into the shadowy work! of those early voyagers. At the same time, other pieces of this human puzzle are turning up in unlikely places. 
Climate data gleaned from slow-growing corals around the Pacific and from sediments in alpine lakes in South America may help explain how, more than a thousand years later, a second wave of seafarers beat their way across the entire Pacific. C. What we have is a first-or second-generation site containing the graves of some of the Pacific's first explorers, \" says Spriggs, professor of archaeology at the Australian National University and co-leader of an international team excavating the site. It came to light only by luck A backhoe operator, digging up topsoil on the grounds of a derelict coconut plantation, scraped open a grave the first of dozens in a burial ground some 3,000 years old It is the oldest cemetery ever found in the Pacific islands, and it harbors the bones of an ancient people archaeologists call the Lapita, a label that derives from a beach in New Caledonia where a landmark cache of their pottery was found in the 1950s. They were daring blue-water adventurers who roved the sea not just as expbrers but also as pioneers, bringing abng everything they would need to build new lives their families and livestock, taro seedlings and stone tools. D. Within the span of a few centuries the Lapita stretched the boundaries of theirworld from the jungle-clad vokanoes of Papua New Guinea to the bneliest coral outliers of Tonga, at feast 2,000 miles eastward in the Pacific. Abng the way they expbred millions of square miles of unknown sea, discovering and cobnizing scores of tropical islands never before seen by human eyes: Vanuatu, New Caledonia, Fiji, Samoa. E. What little is known or surmised about them has been pieced together from fragments of pottery, animal bones, obsidian flakes, and such oblique sources as comparative linguistics and geochemistry. Although their voyages can be traced back to the northern islands of Papua New Guinea, their language variants of which are still spoken across the Pacific came from Taiwan. And their peculiar style of pottery decoration, created by pressing a carved stamp into the clay, probably had its roots in the northern Philippines. With the discovery of the Lapita cemetery on Efate, the volume of data available to researchers has expanded dramatically. The bones of at feast 62 individuals have been uncovered so far including old men, young women, even babiesand more skeletons are known to be in the ground Archaeobgists were also thrilled to discover six complete Lapita pots. It's an important find, Spriggs says, for it conclusively identifies the remains as Lapita. \"It would be hard for anyone to argue that these aren't Lapita when you have human bones enshrined inside what is unmistakably a Lapita urn. \" F. Several lines of evidence also undergird Spriggs's conclusion that this was a community of pioneers making their first voyages into the remote reaches of Oceania. For one thing, the radiocarbon dating of bones and charcoal places them early in the Lapita expansion. For another, the chemical makeup of the obsidian flakes littering the site indicates that the rock wasn't local; instead it was imported from a large island in Papua New Guinea's Bismarck Archipelago, the springboard for the Lapita's thrust into the Pacific. A particularly intriguing clue comes from chemical tests on the teeth of several skeletons. DNA teased from these ancient bones may also help answer one of the most puzzling questions in Pacific anthropobgy: Did all Pacific islanders spring from one source or many? 
Was there only one outward migration from a single point in Asia, or several from different points? \"This represents the best opportunity we've had yet, \" says Spriggs, \"to find out who the Lapita actually were, where they came from, and who their cbsest descendants are today. G. \"There is one stubborn question for which archaeobgy has yet to provide any answers: How did the Lapita accomplish the ancient equivalent of a moon landing, many times over? No one has found one of their canoes or any rigging, which could reveal how the canoes were sailed Nor do the oral histories andtraditions of later Polynesians offer any insights, for they segue into myth long before they reach as far back in time as the Lapita. \" All we can say for certain is that the Lapita had canoes that were capable of ocean voyages, and they had the ability to sail them, \" says Geoff Irwin, a professor of archaeology at the University of Auckland and an avid yachtsman. Those sailing skills, he says, were developed and passed down over thousands of years by earlier mariners who worked their way through the archipelagoes of the western Pacific making short crossings to islands within sight of each other. Reaching Fiji, as they did a century or so later, meant crossing more than 500 miles of ocean, pressing on day after day into the great blue void of the Pacific. What gave them the courage to launch out on such a risky voyage? H. The Lapita's thrust into the Pacific was eastward, against the prevailing trade winds, Irwin notes. Those nagging headwinds, he argues, may have been the key to their success. \"They could sail out for days into the unknown and reconnoiter, secure in the knowledge that if they didn't find anything, they could turn about and catch a swift ride home on the trade winds. It's what made the whole thing work. \" Once out there, skilled seafarers would detect abundant leads to follow to land: seabirds and turtles, coconuts and twigs carried out to sea by the tides, and the afternoon pileup of clouds on the horizon that often betokens an island in the distance. Some islands may have broadcast their presence with far less subtlety than a cloud bank. Some of the most violent eruptions anywhere on the planet during the past 10,000 years occurred in Melanesia, which sits nervously in one of the most explosive volcanic regions on Earth. Even less spectacular eruptions would have sent plumes of smoke bilbwing into the stratosphere and rained ash for hundreds of miles. It's possible that the Lapita saw these signs of distant islands and later sailed off in their direction, knowing they would find land For returning explorers, successful or not, the geography of their own archipelagoes provided a safety net to keep them from overshooting their home ports and sailing off into eternity. I. However they did it, the Lapita spread themselves a third of the way across the Pacific, then called it quits for reasons known only to them. Ahead lay the vast emptiness of the central Pacific, and perhaps they were too thinly stretched to venture farther. They probably never numbered more than a few thousand in total, and in their rapid migration eastward they encountered hundreds of islands more than 300 in Fiji alone. 
Still, more than a millennium would pass before the Lapita's descendants, a people we now call the Polynesians, struck out in search of new territory.", "hypothesis": "Professor Spriggs and his research team went to the Efate to try to find the site of ancient cemetery.", "label": "c"} +{"uid": "id_332", "premise": "Voyage of going: beyond the blue line. One feels a certain sympathy for Captain James Cook on the day in 1778 that he \"discovered\" Hawaii. Then on his third expedition to the Pacific, the British navigator had explored scores of islands across the breadth of the sea, from lush New Zealand to the lonely wastes of Easter Island This latest voyage had taken him thousands of miles north from the Society Islands to an archipelago so remote that even the ok! Polynesians back on Tahiti knew nothing about it. Imagine Cook's surprise, then, when the natives of Hawaii came paddling out in their canoes and greeted him in a familiar tongue, one he had heard on virtually every mote of inhabited land he had visited Marveling at the ubiquity of this Pacific language and culture, he later wondered in his journal: \"How shall we account for this Nation spreading it self so far over this Vast ocean? \" B. Answers have been slow in coming. But now a startling archaeological find on the island of Efate, in the Pacific nation of Vanuatu, has revealed an ancient seafaring people, the distant ancestors of today's Polynesians, taking their first steps into the unknown. The discoveries there have also opened a window into the shadowy work! of those early voyagers. At the same time, other pieces of this human puzzle are turning up in unlikely places. Climate data gleaned from slow-growing corals around the Pacific and from sediments in alpine lakes in South America may help explain how, more than a thousand years later, a second wave of seafarers beat their way across the entire Pacific. C. What we have is a first-or second-generation site containing the graves of some of the Pacific's first explorers, \" says Spriggs, professor of archaeology at the Australian National University and co-leader of an international team excavating the site. It came to light only by luck A backhoe operator, digging up topsoil on the grounds of a derelict coconut plantation, scraped open a grave the first of dozens in a burial ground some 3,000 years old It is the oldest cemetery ever found in the Pacific islands, and it harbors the bones of an ancient people archaeologists call the Lapita, a label that derives from a beach in New Caledonia where a landmark cache of their pottery was found in the 1950s. They were daring blue-water adventurers who roved the sea not just as expbrers but also as pioneers, bringing abng everything they would need to build new lives their families and livestock, taro seedlings and stone tools. D. Within the span of a few centuries the Lapita stretched the boundaries of theirworld from the jungle-clad vokanoes of Papua New Guinea to the bneliest coral outliers of Tonga, at feast 2,000 miles eastward in the Pacific. Abng the way they expbred millions of square miles of unknown sea, discovering and cobnizing scores of tropical islands never before seen by human eyes: Vanuatu, New Caledonia, Fiji, Samoa. E. What little is known or surmised about them has been pieced together from fragments of pottery, animal bones, obsidian flakes, and such oblique sources as comparative linguistics and geochemistry. 
Although their voyages can be traced back to the northern islands of Papua New Guinea, their language variants of which are still spoken across the Pacific came from Taiwan. And their peculiar style of pottery decoration, created by pressing a carved stamp into the clay, probably had its roots in the northern Philippines. With the discovery of the Lapita cemetery on Efate, the volume of data available to researchers has expanded dramatically. The bones of at feast 62 individuals have been uncovered so far including old men, young women, even babiesand more skeletons are known to be in the ground Archaeobgists were also thrilled to discover six complete Lapita pots. It's an important find, Spriggs says, for it conclusively identifies the remains as Lapita. \"It would be hard for anyone to argue that these aren't Lapita when you have human bones enshrined inside what is unmistakably a Lapita urn. \" F. Several lines of evidence also undergird Spriggs's conclusion that this was a community of pioneers making their first voyages into the remote reaches of Oceania. For one thing, the radiocarbon dating of bones and charcoal places them early in the Lapita expansion. For another, the chemical makeup of the obsidian flakes littering the site indicates that the rock wasn't local; instead it was imported from a large island in Papua New Guinea's Bismarck Archipelago, the springboard for the Lapita's thrust into the Pacific. A particularly intriguing clue comes from chemical tests on the teeth of several skeletons. DNA teased from these ancient bones may also help answer one of the most puzzling questions in Pacific anthropobgy: Did all Pacific islanders spring from one source or many? Was there only one outward migration from a single point in Asia, or several from different points? \"This represents the best opportunity we've had yet, \" says Spriggs, \"to find out who the Lapita actually were, where they came from, and who their cbsest descendants are today. G. \"There is one stubborn question for which archaeobgy has yet to provide any answers: How did the Lapita accomplish the ancient equivalent of a moon landing, many times over? No one has found one of their canoes or any rigging, which could reveal how the canoes were sailed Nor do the oral histories andtraditions of later Polynesians offer any insights, for they segue into myth long before they reach as far back in time as the Lapita. \" All we can say for certain is that the Lapita had canoes that were capable of ocean voyages, and they had the ability to sail them, \" says Geoff Irwin, a professor of archaeology at the University of Auckland and an avid yachtsman. Those sailing skills, he says, were developed and passed down over thousands of years by earlier mariners who worked their way through the archipelagoes of the western Pacific making short crossings to islands within sight of each other. Reaching Fiji, as they did a century or so later, meant crossing more than 500 miles of ocean, pressing on day after day into the great blue void of the Pacific. What gave them the courage to launch out on such a risky voyage? H. The Lapita's thrust into the Pacific was eastward, against the prevailing trade winds, Irwin notes. Those nagging headwinds, he argues, may have been the key to their success. \"They could sail out for days into the unknown and reconnoiter, secure in the knowledge that if they didn't find anything, they could turn about and catch a swift ride home on the trade winds. It's what made the whole thing work. 
\" Once out there, skilled seafarers would detect abundant leads to follow to land: seabirds and turtles, coconuts and twigs carried out to sea by the tides, and the afternoon pileup of clouds on the horizon that often betokens an island in the distance. Some islands may have broadcast their presence with far less subtlety than a cloud bank. Some of the most violent eruptions anywhere on the planet during the past 10,000 years occurred in Melanesia, which sits nervously in one of the most explosive volcanic regions on Earth. Even less spectacular eruptions would have sent plumes of smoke bilbwing into the stratosphere and rained ash for hundreds of miles. It's possible that the Lapita saw these signs of distant islands and later sailed off in their direction, knowing they would find land For returning explorers, successful or not, the geography of their own archipelagoes provided a safety net to keep them from overshooting their home ports and sailing off into eternity. I. However they did it, the Lapita spread themselves a third of the way across the Pacific, then called it quits for reasons known only to them. Ahead lay the vast emptiness of the central Pacific, and perhaps they were too thinly stretched to venture farther. They probably never numbered more than a few thousand in total, and in their rapid migration eastward they encountered hundreds of islands more than 300 in Fiji alone. Still, more than a millennium would pass before the Lapita's descendants, a people we now call the Polynesians, struck out in search of new territory.", "hypothesis": "Captain cook depicted number of cultural aspects of Polynesians in his journal.", "label": "c"} +{"uid": "id_333", "premise": "Voyage of going: beyond the blue line. One feels a certain sympathy for Captain James Cook on the day in 1778 that he \"discovered\" Hawaii. Then on his third expedition to the Pacific, the British navigator had explored scores of islands across the breadth of the sea, from lush New Zealand to the lonely wastes of Easter Island This latest voyage had taken him thousands of miles north from the Society Islands to an archipelago so remote that even the ok! Polynesians back on Tahiti knew nothing about it. Imagine Cook's surprise, then, when the natives of Hawaii came paddling out in their canoes and greeted him in a familiar tongue, one he had heard on virtually every mote of inhabited land he had visited Marveling at the ubiquity of this Pacific language and culture, he later wondered in his journal: \"How shall we account for this Nation spreading it self so far over this Vast ocean? \" B. Answers have been slow in coming. But now a startling archaeological find on the island of Efate, in the Pacific nation of Vanuatu, has revealed an ancient seafaring people, the distant ancestors of today's Polynesians, taking their first steps into the unknown. The discoveries there have also opened a window into the shadowy work! of those early voyagers. At the same time, other pieces of this human puzzle are turning up in unlikely places. Climate data gleaned from slow-growing corals around the Pacific and from sediments in alpine lakes in South America may help explain how, more than a thousand years later, a second wave of seafarers beat their way across the entire Pacific. C. 
What we have is a first-or second-generation site containing the graves of some of the Pacific's first explorers, \" says Spriggs, professor of archaeology at the Australian National University and co-leader of an international team excavating the site. It came to light only by luck A backhoe operator, digging up topsoil on the grounds of a derelict coconut plantation, scraped open a grave the first of dozens in a burial ground some 3,000 years old It is the oldest cemetery ever found in the Pacific islands, and it harbors the bones of an ancient people archaeologists call the Lapita, a label that derives from a beach in New Caledonia where a landmark cache of their pottery was found in the 1950s. They were daring blue-water adventurers who roved the sea not just as expbrers but also as pioneers, bringing abng everything they would need to build new lives their families and livestock, taro seedlings and stone tools. D. Within the span of a few centuries the Lapita stretched the boundaries of theirworld from the jungle-clad vokanoes of Papua New Guinea to the bneliest coral outliers of Tonga, at feast 2,000 miles eastward in the Pacific. Abng the way they expbred millions of square miles of unknown sea, discovering and cobnizing scores of tropical islands never before seen by human eyes: Vanuatu, New Caledonia, Fiji, Samoa. E. What little is known or surmised about them has been pieced together from fragments of pottery, animal bones, obsidian flakes, and such oblique sources as comparative linguistics and geochemistry. Although their voyages can be traced back to the northern islands of Papua New Guinea, their language variants of which are still spoken across the Pacific came from Taiwan. And their peculiar style of pottery decoration, created by pressing a carved stamp into the clay, probably had its roots in the northern Philippines. With the discovery of the Lapita cemetery on Efate, the volume of data available to researchers has expanded dramatically. The bones of at feast 62 individuals have been uncovered so far including old men, young women, even babiesand more skeletons are known to be in the ground Archaeobgists were also thrilled to discover six complete Lapita pots. It's an important find, Spriggs says, for it conclusively identifies the remains as Lapita. \"It would be hard for anyone to argue that these aren't Lapita when you have human bones enshrined inside what is unmistakably a Lapita urn. \" F. Several lines of evidence also undergird Spriggs's conclusion that this was a community of pioneers making their first voyages into the remote reaches of Oceania. For one thing, the radiocarbon dating of bones and charcoal places them early in the Lapita expansion. For another, the chemical makeup of the obsidian flakes littering the site indicates that the rock wasn't local; instead it was imported from a large island in Papua New Guinea's Bismarck Archipelago, the springboard for the Lapita's thrust into the Pacific. A particularly intriguing clue comes from chemical tests on the teeth of several skeletons. DNA teased from these ancient bones may also help answer one of the most puzzling questions in Pacific anthropobgy: Did all Pacific islanders spring from one source or many? Was there only one outward migration from a single point in Asia, or several from different points? \"This represents the best opportunity we've had yet, \" says Spriggs, \"to find out who the Lapita actually were, where they came from, and who their cbsest descendants are today. G. 
\"There is one stubborn question for which archaeobgy has yet to provide any answers: How did the Lapita accomplish the ancient equivalent of a moon landing, many times over? No one has found one of their canoes or any rigging, which could reveal how the canoes were sailed Nor do the oral histories andtraditions of later Polynesians offer any insights, for they segue into myth long before they reach as far back in time as the Lapita. \" All we can say for certain is that the Lapita had canoes that were capable of ocean voyages, and they had the ability to sail them, \" says Geoff Irwin, a professor of archaeology at the University of Auckland and an avid yachtsman. Those sailing skills, he says, were developed and passed down over thousands of years by earlier mariners who worked their way through the archipelagoes of the western Pacific making short crossings to islands within sight of each other. Reaching Fiji, as they did a century or so later, meant crossing more than 500 miles of ocean, pressing on day after day into the great blue void of the Pacific. What gave them the courage to launch out on such a risky voyage? H. The Lapita's thrust into the Pacific was eastward, against the prevailing trade winds, Irwin notes. Those nagging headwinds, he argues, may have been the key to their success. \"They could sail out for days into the unknown and reconnoiter, secure in the knowledge that if they didn't find anything, they could turn about and catch a swift ride home on the trade winds. It's what made the whole thing work. \" Once out there, skilled seafarers would detect abundant leads to follow to land: seabirds and turtles, coconuts and twigs carried out to sea by the tides, and the afternoon pileup of clouds on the horizon that often betokens an island in the distance. Some islands may have broadcast their presence with far less subtlety than a cloud bank. Some of the most violent eruptions anywhere on the planet during the past 10,000 years occurred in Melanesia, which sits nervously in one of the most explosive volcanic regions on Earth. Even less spectacular eruptions would have sent plumes of smoke bilbwing into the stratosphere and rained ash for hundreds of miles. It's possible that the Lapita saw these signs of distant islands and later sailed off in their direction, knowing they would find land For returning explorers, successful or not, the geography of their own archipelagoes provided a safety net to keep them from overshooting their home ports and sailing off into eternity. I. However they did it, the Lapita spread themselves a third of the way across the Pacific, then called it quits for reasons known only to them. Ahead lay the vast emptiness of the central Pacific, and perhaps they were too thinly stretched to venture farther. They probably never numbered more than a few thousand in total, and in their rapid migration eastward they encountered hundreds of islands more than 300 in Fiji alone. Still, more than a millennium would pass before the Lapita's descendants, a people we now call the Polynesians, struck out in search of new territory.", "hypothesis": "Captain cook once expected the Hawaii might speak another language of people from other pacific islands.", "label": "e"} +{"uid": "id_334", "premise": "Voyage of going: beyond the blue line. One feels a certain sympathy for Captain James Cook on the day in 1778 that he \"discovered\" Hawaii. 
Then on his third expedition to the Pacific, the British navigator had explored scores of islands across the breadth of the sea, from lush New Zealand to the lonely wastes of Easter Island This latest voyage had taken him thousands of miles north from the Society Islands to an archipelago so remote that even the ok! Polynesians back on Tahiti knew nothing about it. Imagine Cook's surprise, then, when the natives of Hawaii came paddling out in their canoes and greeted him in a familiar tongue, one he had heard on virtually every mote of inhabited land he had visited Marveling at the ubiquity of this Pacific language and culture, he later wondered in his journal: \"How shall we account for this Nation spreading it self so far over this Vast ocean? \" B. Answers have been slow in coming. But now a startling archaeological find on the island of Efate, in the Pacific nation of Vanuatu, has revealed an ancient seafaring people, the distant ancestors of today's Polynesians, taking their first steps into the unknown. The discoveries there have also opened a window into the shadowy work! of those early voyagers. At the same time, other pieces of this human puzzle are turning up in unlikely places. Climate data gleaned from slow-growing corals around the Pacific and from sediments in alpine lakes in South America may help explain how, more than a thousand years later, a second wave of seafarers beat their way across the entire Pacific. C. What we have is a first-or second-generation site containing the graves of some of the Pacific's first explorers, \" says Spriggs, professor of archaeology at the Australian National University and co-leader of an international team excavating the site. It came to light only by luck A backhoe operator, digging up topsoil on the grounds of a derelict coconut plantation, scraped open a grave the first of dozens in a burial ground some 3,000 years old It is the oldest cemetery ever found in the Pacific islands, and it harbors the bones of an ancient people archaeologists call the Lapita, a label that derives from a beach in New Caledonia where a landmark cache of their pottery was found in the 1950s. They were daring blue-water adventurers who roved the sea not just as expbrers but also as pioneers, bringing abng everything they would need to build new lives their families and livestock, taro seedlings and stone tools. D. Within the span of a few centuries the Lapita stretched the boundaries of theirworld from the jungle-clad vokanoes of Papua New Guinea to the bneliest coral outliers of Tonga, at feast 2,000 miles eastward in the Pacific. Abng the way they expbred millions of square miles of unknown sea, discovering and cobnizing scores of tropical islands never before seen by human eyes: Vanuatu, New Caledonia, Fiji, Samoa. E. What little is known or surmised about them has been pieced together from fragments of pottery, animal bones, obsidian flakes, and such oblique sources as comparative linguistics and geochemistry. Although their voyages can be traced back to the northern islands of Papua New Guinea, their language variants of which are still spoken across the Pacific came from Taiwan. And their peculiar style of pottery decoration, created by pressing a carved stamp into the clay, probably had its roots in the northern Philippines. With the discovery of the Lapita cemetery on Efate, the volume of data available to researchers has expanded dramatically. 
The bones of at feast 62 individuals have been uncovered so far including old men, young women, even babiesand more skeletons are known to be in the ground Archaeobgists were also thrilled to discover six complete Lapita pots. It's an important find, Spriggs says, for it conclusively identifies the remains as Lapita. \"It would be hard for anyone to argue that these aren't Lapita when you have human bones enshrined inside what is unmistakably a Lapita urn. \" F. Several lines of evidence also undergird Spriggs's conclusion that this was a community of pioneers making their first voyages into the remote reaches of Oceania. For one thing, the radiocarbon dating of bones and charcoal places them early in the Lapita expansion. For another, the chemical makeup of the obsidian flakes littering the site indicates that the rock wasn't local; instead it was imported from a large island in Papua New Guinea's Bismarck Archipelago, the springboard for the Lapita's thrust into the Pacific. A particularly intriguing clue comes from chemical tests on the teeth of several skeletons. DNA teased from these ancient bones may also help answer one of the most puzzling questions in Pacific anthropobgy: Did all Pacific islanders spring from one source or many? Was there only one outward migration from a single point in Asia, or several from different points? \"This represents the best opportunity we've had yet, \" says Spriggs, \"to find out who the Lapita actually were, where they came from, and who their cbsest descendants are today. G. \"There is one stubborn question for which archaeobgy has yet to provide any answers: How did the Lapita accomplish the ancient equivalent of a moon landing, many times over? No one has found one of their canoes or any rigging, which could reveal how the canoes were sailed Nor do the oral histories andtraditions of later Polynesians offer any insights, for they segue into myth long before they reach as far back in time as the Lapita. \" All we can say for certain is that the Lapita had canoes that were capable of ocean voyages, and they had the ability to sail them, \" says Geoff Irwin, a professor of archaeology at the University of Auckland and an avid yachtsman. Those sailing skills, he says, were developed and passed down over thousands of years by earlier mariners who worked their way through the archipelagoes of the western Pacific making short crossings to islands within sight of each other. Reaching Fiji, as they did a century or so later, meant crossing more than 500 miles of ocean, pressing on day after day into the great blue void of the Pacific. What gave them the courage to launch out on such a risky voyage? H. The Lapita's thrust into the Pacific was eastward, against the prevailing trade winds, Irwin notes. Those nagging headwinds, he argues, may have been the key to their success. \"They could sail out for days into the unknown and reconnoiter, secure in the knowledge that if they didn't find anything, they could turn about and catch a swift ride home on the trade winds. It's what made the whole thing work. \" Once out there, skilled seafarers would detect abundant leads to follow to land: seabirds and turtles, coconuts and twigs carried out to sea by the tides, and the afternoon pileup of clouds on the horizon that often betokens an island in the distance. Some islands may have broadcast their presence with far less subtlety than a cloud bank. 
Some of the most violent eruptions anywhere on the planet during the past 10,000 years occurred in Melanesia, which sits nervously in one of the most explosive volcanic regions on Earth. Even less spectacular eruptions would have sent plumes of smoke billowing into the stratosphere and rained ash for hundreds of miles. It's possible that the Lapita saw these signs of distant islands and later sailed off in their direction, knowing they would find land. For returning explorers, successful or not, the geography of their own archipelagoes provided a safety net to keep them from overshooting their home ports and sailing off into eternity. I. However they did it, the Lapita spread themselves a third of the way across the Pacific, then called it quits for reasons known only to them. Ahead lay the vast emptiness of the central Pacific, and perhaps they were too thinly stretched to venture farther. They probably never numbered more than a few thousand in total, and in their rapid migration eastward they encountered hundreds of islands, more than 300 in Fiji alone. Still, more than a millennium would pass before the Lapita's descendants, a people we now call the Polynesians, struck out in search of new territory.", "hypothesis": "The Lapita were the first inhabitants in many Pacific islands.", "label": "e"} +{"uid": "id_335", "premise": "Voyage of going: beyond the blue line. One feels a certain sympathy for Captain James Cook on the day in 1778 that he \"discovered\" Hawaii. Then on his third expedition to the Pacific, the British navigator had explored scores of islands across the breadth of the sea, from lush New Zealand to the lonely wastes of Easter Island. This latest voyage had taken him thousands of miles north from the Society Islands to an archipelago so remote that even the old Polynesians back on Tahiti knew nothing about it. Imagine Cook's surprise, then, when the natives of Hawaii came paddling out in their canoes and greeted him in a familiar tongue, one he had heard on virtually every mote of inhabited land he had visited. Marveling at the ubiquity of this Pacific language and culture, he later wondered in his journal: \"How shall we account for this Nation spreading itself so far over this Vast ocean? \" B. Answers have been slow in coming. But now a startling archaeological find on the island of Efate, in the Pacific nation of Vanuatu, has revealed an ancient seafaring people, the distant ancestors of today's Polynesians, taking their first steps into the unknown. The discoveries there have also opened a window into the shadowy world of those early voyagers. At the same time, other pieces of this human puzzle are turning up in unlikely places. Climate data gleaned from slow-growing corals around the Pacific and from sediments in alpine lakes in South America may help explain how, more than a thousand years later, a second wave of seafarers beat their way across the entire Pacific. C. \"What we have is a first- or second-generation site containing the graves of some of the Pacific's first explorers, \" says Spriggs, professor of archaeology at the Australian National University and co-leader of an international team excavating the site.
It came to light only by luck A backhoe operator, digging up topsoil on the grounds of a derelict coconut plantation, scraped open a grave the first of dozens in a burial ground some 3,000 years old It is the oldest cemetery ever found in the Pacific islands, and it harbors the bones of an ancient people archaeologists call the Lapita, a label that derives from a beach in New Caledonia where a landmark cache of their pottery was found in the 1950s. They were daring blue-water adventurers who roved the sea not just as expbrers but also as pioneers, bringing abng everything they would need to build new lives their families and livestock, taro seedlings and stone tools. D. Within the span of a few centuries the Lapita stretched the boundaries of theirworld from the jungle-clad vokanoes of Papua New Guinea to the bneliest coral outliers of Tonga, at feast 2,000 miles eastward in the Pacific. Abng the way they expbred millions of square miles of unknown sea, discovering and cobnizing scores of tropical islands never before seen by human eyes: Vanuatu, New Caledonia, Fiji, Samoa. E. What little is known or surmised about them has been pieced together from fragments of pottery, animal bones, obsidian flakes, and such oblique sources as comparative linguistics and geochemistry. Although their voyages can be traced back to the northern islands of Papua New Guinea, their language variants of which are still spoken across the Pacific came from Taiwan. And their peculiar style of pottery decoration, created by pressing a carved stamp into the clay, probably had its roots in the northern Philippines. With the discovery of the Lapita cemetery on Efate, the volume of data available to researchers has expanded dramatically. The bones of at feast 62 individuals have been uncovered so far including old men, young women, even babiesand more skeletons are known to be in the ground Archaeobgists were also thrilled to discover six complete Lapita pots. It's an important find, Spriggs says, for it conclusively identifies the remains as Lapita. \"It would be hard for anyone to argue that these aren't Lapita when you have human bones enshrined inside what is unmistakably a Lapita urn. \" F. Several lines of evidence also undergird Spriggs's conclusion that this was a community of pioneers making their first voyages into the remote reaches of Oceania. For one thing, the radiocarbon dating of bones and charcoal places them early in the Lapita expansion. For another, the chemical makeup of the obsidian flakes littering the site indicates that the rock wasn't local; instead it was imported from a large island in Papua New Guinea's Bismarck Archipelago, the springboard for the Lapita's thrust into the Pacific. A particularly intriguing clue comes from chemical tests on the teeth of several skeletons. DNA teased from these ancient bones may also help answer one of the most puzzling questions in Pacific anthropobgy: Did all Pacific islanders spring from one source or many? Was there only one outward migration from a single point in Asia, or several from different points? \"This represents the best opportunity we've had yet, \" says Spriggs, \"to find out who the Lapita actually were, where they came from, and who their cbsest descendants are today. G. \"There is one stubborn question for which archaeobgy has yet to provide any answers: How did the Lapita accomplish the ancient equivalent of a moon landing, many times over? 
No one has found one of their canoes or any rigging, which could reveal how the canoes were sailed Nor do the oral histories andtraditions of later Polynesians offer any insights, for they segue into myth long before they reach as far back in time as the Lapita. \" All we can say for certain is that the Lapita had canoes that were capable of ocean voyages, and they had the ability to sail them, \" says Geoff Irwin, a professor of archaeology at the University of Auckland and an avid yachtsman. Those sailing skills, he says, were developed and passed down over thousands of years by earlier mariners who worked their way through the archipelagoes of the western Pacific making short crossings to islands within sight of each other. Reaching Fiji, as they did a century or so later, meant crossing more than 500 miles of ocean, pressing on day after day into the great blue void of the Pacific. What gave them the courage to launch out on such a risky voyage? H. The Lapita's thrust into the Pacific was eastward, against the prevailing trade winds, Irwin notes. Those nagging headwinds, he argues, may have been the key to their success. \"They could sail out for days into the unknown and reconnoiter, secure in the knowledge that if they didn't find anything, they could turn about and catch a swift ride home on the trade winds. It's what made the whole thing work. \" Once out there, skilled seafarers would detect abundant leads to follow to land: seabirds and turtles, coconuts and twigs carried out to sea by the tides, and the afternoon pileup of clouds on the horizon that often betokens an island in the distance. Some islands may have broadcast their presence with far less subtlety than a cloud bank. Some of the most violent eruptions anywhere on the planet during the past 10,000 years occurred in Melanesia, which sits nervously in one of the most explosive volcanic regions on Earth. Even less spectacular eruptions would have sent plumes of smoke bilbwing into the stratosphere and rained ash for hundreds of miles. It's possible that the Lapita saw these signs of distant islands and later sailed off in their direction, knowing they would find land For returning explorers, successful or not, the geography of their own archipelagoes provided a safety net to keep them from overshooting their home ports and sailing off into eternity. I. However they did it, the Lapita spread themselves a third of the way across the Pacific, then called it quits for reasons known only to them. Ahead lay the vast emptiness of the central Pacific, and perhaps they were too thinly stretched to venture farther. They probably never numbered more than a few thousand in total, and in their rapid migration eastward they encountered hundreds of islands more than 300 in Fiji alone. Still, more than a millennium would pass before the Lapita's descendants, a people we now call the Polynesians, struck out in search of new territory.", "hypothesis": "The unknown pots discovered in Efate had once been used for cooking.", "label": "n"} +{"uid": "id_336", "premise": "Voyage of going: beyond the blue line. One feels a certain sympathy for Captain James Cook on the day in 1778 that he \"discovered\" Hawaii. Then on his third expedition to the Pacific, the British navigator had explored scores of islands across the breadth of the sea, from lush New Zealand to the lonely wastes of Easter Island This latest voyage had taken him thousands of miles north from the Society Islands to an archipelago so remote that even the ok! 
Polynesians back on Tahiti knew nothing about it. Imagine Cook's surprise, then, when the natives of Hawaii came paddling out in their canoes and greeted him in a familiar tongue, one he had heard on virtually every mote of inhabited land he had visited Marveling at the ubiquity of this Pacific language and culture, he later wondered in his journal: \"How shall we account for this Nation spreading it self so far over this Vast ocean? \" B. Answers have been slow in coming. But now a startling archaeological find on the island of Efate, in the Pacific nation of Vanuatu, has revealed an ancient seafaring people, the distant ancestors of today's Polynesians, taking their first steps into the unknown. The discoveries there have also opened a window into the shadowy work! of those early voyagers. At the same time, other pieces of this human puzzle are turning up in unlikely places. Climate data gleaned from slow-growing corals around the Pacific and from sediments in alpine lakes in South America may help explain how, more than a thousand years later, a second wave of seafarers beat their way across the entire Pacific. C. What we have is a first-or second-generation site containing the graves of some of the Pacific's first explorers, \" says Spriggs, professor of archaeology at the Australian National University and co-leader of an international team excavating the site. It came to light only by luck A backhoe operator, digging up topsoil on the grounds of a derelict coconut plantation, scraped open a grave the first of dozens in a burial ground some 3,000 years old It is the oldest cemetery ever found in the Pacific islands, and it harbors the bones of an ancient people archaeologists call the Lapita, a label that derives from a beach in New Caledonia where a landmark cache of their pottery was found in the 1950s. They were daring blue-water adventurers who roved the sea not just as expbrers but also as pioneers, bringing abng everything they would need to build new lives their families and livestock, taro seedlings and stone tools. D. Within the span of a few centuries the Lapita stretched the boundaries of theirworld from the jungle-clad vokanoes of Papua New Guinea to the bneliest coral outliers of Tonga, at feast 2,000 miles eastward in the Pacific. Abng the way they expbred millions of square miles of unknown sea, discovering and cobnizing scores of tropical islands never before seen by human eyes: Vanuatu, New Caledonia, Fiji, Samoa. E. What little is known or surmised about them has been pieced together from fragments of pottery, animal bones, obsidian flakes, and such oblique sources as comparative linguistics and geochemistry. Although their voyages can be traced back to the northern islands of Papua New Guinea, their language variants of which are still spoken across the Pacific came from Taiwan. And their peculiar style of pottery decoration, created by pressing a carved stamp into the clay, probably had its roots in the northern Philippines. With the discovery of the Lapita cemetery on Efate, the volume of data available to researchers has expanded dramatically. The bones of at feast 62 individuals have been uncovered so far including old men, young women, even babiesand more skeletons are known to be in the ground Archaeobgists were also thrilled to discover six complete Lapita pots. It's an important find, Spriggs says, for it conclusively identifies the remains as Lapita. 
\"It would be hard for anyone to argue that these aren't Lapita when you have human bones enshrined inside what is unmistakably a Lapita urn. \" F. Several lines of evidence also undergird Spriggs's conclusion that this was a community of pioneers making their first voyages into the remote reaches of Oceania. For one thing, the radiocarbon dating of bones and charcoal places them early in the Lapita expansion. For another, the chemical makeup of the obsidian flakes littering the site indicates that the rock wasn't local; instead it was imported from a large island in Papua New Guinea's Bismarck Archipelago, the springboard for the Lapita's thrust into the Pacific. A particularly intriguing clue comes from chemical tests on the teeth of several skeletons. DNA teased from these ancient bones may also help answer one of the most puzzling questions in Pacific anthropobgy: Did all Pacific islanders spring from one source or many? Was there only one outward migration from a single point in Asia, or several from different points? \"This represents the best opportunity we've had yet, \" says Spriggs, \"to find out who the Lapita actually were, where they came from, and who their cbsest descendants are today. G. \"There is one stubborn question for which archaeobgy has yet to provide any answers: How did the Lapita accomplish the ancient equivalent of a moon landing, many times over? No one has found one of their canoes or any rigging, which could reveal how the canoes were sailed Nor do the oral histories andtraditions of later Polynesians offer any insights, for they segue into myth long before they reach as far back in time as the Lapita. \" All we can say for certain is that the Lapita had canoes that were capable of ocean voyages, and they had the ability to sail them, \" says Geoff Irwin, a professor of archaeology at the University of Auckland and an avid yachtsman. Those sailing skills, he says, were developed and passed down over thousands of years by earlier mariners who worked their way through the archipelagoes of the western Pacific making short crossings to islands within sight of each other. Reaching Fiji, as they did a century or so later, meant crossing more than 500 miles of ocean, pressing on day after day into the great blue void of the Pacific. What gave them the courage to launch out on such a risky voyage? H. The Lapita's thrust into the Pacific was eastward, against the prevailing trade winds, Irwin notes. Those nagging headwinds, he argues, may have been the key to their success. \"They could sail out for days into the unknown and reconnoiter, secure in the knowledge that if they didn't find anything, they could turn about and catch a swift ride home on the trade winds. It's what made the whole thing work. \" Once out there, skilled seafarers would detect abundant leads to follow to land: seabirds and turtles, coconuts and twigs carried out to sea by the tides, and the afternoon pileup of clouds on the horizon that often betokens an island in the distance. Some islands may have broadcast their presence with far less subtlety than a cloud bank. Some of the most violent eruptions anywhere on the planet during the past 10,000 years occurred in Melanesia, which sits nervously in one of the most explosive volcanic regions on Earth. Even less spectacular eruptions would have sent plumes of smoke bilbwing into the stratosphere and rained ash for hundreds of miles. 
It's possible that the Lapita saw these signs of distant islands and later sailed off in their direction, knowing they would find land. For returning explorers, successful or not, the geography of their own archipelagoes provided a safety net to keep them from overshooting their home ports and sailing off into eternity. I. However they did it, the Lapita spread themselves a third of the way across the Pacific, then called it quits for reasons known only to them. Ahead lay the vast emptiness of the central Pacific, and perhaps they were too thinly stretched to venture farther. They probably never numbered more than a few thousand in total, and in their rapid migration eastward they encountered hundreds of islands, more than 300 in Fiji alone. Still, more than a millennium would pass before the Lapita's descendants, a people we now call the Polynesians, struck out in search of new territory.", "hypothesis": "The urn buried in the Efate site was plain, as it was without any decoration.", "label": "c"}
+{"uid": "id_337", "premise": "Voyage of going: beyond the blue line. One feels a certain sympathy for Captain James Cook on the day in 1778 that he \"discovered\" Hawaii. Then on his third expedition to the Pacific, the British navigator had explored scores of islands across the breadth of the sea, from lush New Zealand to the lonely wastes of Easter Island. This latest voyage had taken him thousands of miles north from the Society Islands to an archipelago so remote that even the old Polynesians back on Tahiti knew nothing about it. Imagine Cook's surprise, then, when the natives of Hawaii came paddling out in their canoes and greeted him in a familiar tongue, one he had heard on virtually every mote of inhabited land he had visited. Marveling at the ubiquity of this Pacific language and culture, he later wondered in his journal: \"How shall we account for this Nation spreading it self so far over this Vast ocean? \" B. Answers have been slow in coming. But now a startling archaeological find on the island of Efate, in the Pacific nation of Vanuatu, has revealed an ancient seafaring people, the distant ancestors of today's Polynesians, taking their first steps into the unknown. The discoveries there have also opened a window into the shadowy world of those early voyagers. At the same time, other pieces of this human puzzle are turning up in unlikely places. Climate data gleaned from slow-growing corals around the Pacific and from sediments in alpine lakes in South America may help explain how, more than a thousand years later, a second wave of seafarers beat their way across the entire Pacific. C. \"What we have is a first- or second-generation site containing the graves of some of the Pacific's first explorers, \" says Spriggs, professor of archaeology at the Australian National University and co-leader of an international team excavating the site. It came to light only by luck. A backhoe operator, digging up topsoil on the grounds of a derelict coconut plantation, scraped open a grave, the first of dozens in a burial ground some 3,000 years old. It is the oldest cemetery ever found in the Pacific islands, and it harbors the bones of an ancient people archaeologists call the Lapita, a label that derives from a beach in New Caledonia where a landmark cache of their pottery was found in the 1950s. 
They were daring blue-water adventurers who roved the sea not just as explorers but also as pioneers, bringing along everything they would need to build new lives: their families and livestock, taro seedlings and stone tools. D. Within the span of a few centuries the Lapita stretched the boundaries of their world from the jungle-clad volcanoes of Papua New Guinea to the loneliest coral outliers of Tonga, at least 2,000 miles eastward in the Pacific. Along the way they explored millions of square miles of unknown sea, discovering and colonizing scores of tropical islands never before seen by human eyes: Vanuatu, New Caledonia, Fiji, Samoa. E. What little is known or surmised about them has been pieced together from fragments of pottery, animal bones, obsidian flakes, and such oblique sources as comparative linguistics and geochemistry. Although their voyages can be traced back to the northern islands of Papua New Guinea, their language, variants of which are still spoken across the Pacific, came from Taiwan. And their peculiar style of pottery decoration, created by pressing a carved stamp into the clay, probably had its roots in the northern Philippines. With the discovery of the Lapita cemetery on Efate, the volume of data available to researchers has expanded dramatically. The bones of at least 62 individuals have been uncovered so far, including old men, young women, even babies, and more skeletons are known to be in the ground. Archaeologists were also thrilled to discover six complete Lapita pots. It's an important find, Spriggs says, for it conclusively identifies the remains as Lapita. \"It would be hard for anyone to argue that these aren't Lapita when you have human bones enshrined inside what is unmistakably a Lapita urn. \" F. Several lines of evidence also undergird Spriggs's conclusion that this was a community of pioneers making their first voyages into the remote reaches of Oceania. For one thing, the radiocarbon dating of bones and charcoal places them early in the Lapita expansion. For another, the chemical makeup of the obsidian flakes littering the site indicates that the rock wasn't local; instead it was imported from a large island in Papua New Guinea's Bismarck Archipelago, the springboard for the Lapita's thrust into the Pacific. A particularly intriguing clue comes from chemical tests on the teeth of several skeletons. DNA teased from these ancient bones may also help answer one of the most puzzling questions in Pacific anthropology: Did all Pacific islanders spring from one source or many? Was there only one outward migration from a single point in Asia, or several from different points? \"This represents the best opportunity we've had yet, \" says Spriggs, \"to find out who the Lapita actually were, where they came from, and who their closest descendants are today. \" G. There is one stubborn question for which archaeology has yet to provide any answers: How did the Lapita accomplish the ancient equivalent of a moon landing, many times over? No one has found one of their canoes or any rigging, which could reveal how the canoes were sailed. Nor do the oral histories and traditions of later Polynesians offer any insights, for they segue into myth long before they reach as far back in time as the Lapita. \"All we can say for certain is that the Lapita had canoes that were capable of ocean voyages, and they had the ability to sail them, \" says Geoff Irwin, a professor of archaeology at the University of Auckland and an avid yachtsman. 
Those sailing skills, he says, were developed and passed down over thousands of years by earlier mariners who worked their way through the archipelagoes of the western Pacific, making short crossings to islands within sight of each other. Reaching Fiji, as they did a century or so later, meant crossing more than 500 miles of ocean, pressing on day after day into the great blue void of the Pacific. What gave them the courage to launch out on such a risky voyage? H. The Lapita's thrust into the Pacific was eastward, against the prevailing trade winds, Irwin notes. Those nagging headwinds, he argues, may have been the key to their success. \"They could sail out for days into the unknown and reconnoiter, secure in the knowledge that if they didn't find anything, they could turn about and catch a swift ride home on the trade winds. It's what made the whole thing work. \" Once out there, skilled seafarers would detect abundant leads to follow to land: seabirds and turtles, coconuts and twigs carried out to sea by the tides, and the afternoon pileup of clouds on the horizon that often betokens an island in the distance. Some islands may have broadcast their presence with far less subtlety than a cloud bank. Some of the most violent eruptions anywhere on the planet during the past 10,000 years occurred in Melanesia, which sits nervously in one of the most explosive volcanic regions on Earth. Even less spectacular eruptions would have sent plumes of smoke billowing into the stratosphere and rained ash for hundreds of miles. It's possible that the Lapita saw these signs of distant islands and later sailed off in their direction, knowing they would find land. For returning explorers, successful or not, the geography of their own archipelagoes provided a safety net to keep them from overshooting their home ports and sailing off into eternity. I. However they did it, the Lapita spread themselves a third of the way across the Pacific, then called it quits for reasons known only to them. Ahead lay the vast emptiness of the central Pacific, and perhaps they were too thinly stretched to venture farther. They probably never numbered more than a few thousand in total, and in their rapid migration eastward they encountered hundreds of islands, more than 300 in Fiji alone. Still, more than a millennium would pass before the Lapita's descendants, a people we now call the Polynesians, struck out in search of new territory.", "hypothesis": "The Lapita completed a journey of around 2,000 miles in a period of less than a century.", "label": "n"}
+{"uid": "id_338", "premise": "WATER HYACINTH: BEAUTIFUL YET DESTRUCTIVE Despite possessing vibrant purple flowers and being attractive to the eye, the water hyacinth has often been referred to as the most problematic aquatic plant in the worlds waters. Due to its aesthetic appeal, water hyacinth, which is native to South America, has been distributed to many different regions and now thrives in the southern states of the USA and many subtropical and tropical locations. It has also been observed to be relatively tolerant of cooler climates and is routinely sold as an ornamental plant for domestic use in a number of horticulture centres. Though the hyacinth species is distinctive in appearance, another aquatic floating plant water lettuce is sometimes mistakenly identified as water hyacinth. Water lettuce, however, does not have the same attractive flowers, has larger leaves and is less tolerant of cooler climates. 
Water hyacinth has rounded waxy, green leaves which grow up to around 6 inches in width and floating leaf stems which grow up to 12 inches in length. Flowers are typically between 2 to 3 inches in width and as many as 15 flowers, each purple on the outside and containing a yellow centre, may grow from each plant. Many of the problems associated with the water hyacinth are due to its incredible growth and reproduction capabilities, which have made it difficult to control and allow it to quickly dominate the environment in which it grows and spreads. Its growth patterns are characterised by a rapid formation of an impenetrable vegetation mass; botanists say that one plant can produce around 5000 seeds and in one study two plants were observed to produce 1200 plants in as little as 4 months. Following natures usual pattern, water hyacinth seeds are distributed outside of the immediate area by birds, fauna, wind and water currents, facilitating growth in surrounding areas previously free of the plant. Domination of environments by water hyacinth populations has a number of negative implications. For humans, difficulties may be faced in getting boats through areas of rivers and lakes where the plant is present and fishing and swimming opportunities may be limited. However, the implications for the ecosystem of the immediate environment may be of even greater concern. The density of the mass of water hyacinth populations can prevent adequate amounts of sunlight and oxygen reaching the water: as a result, significant numbers of fish may die, other species of plant growing below water level are compromised and the ecosystem of the immediate area can therefore become unbalanced. Furthermore, the conditions created by the presence of water hyacinth, while detrimental to most forms of life, are perfect for encouraging growth of deadly bacteria often found in poorly oxygenated areas of water. In the southern states of the USA, in Florida in particular, water hyacinth is now under maintenance control. The plant population can be limited in a number of ways: including use of herbicides, clearance equipment and bio-control insects. However, efforts to minimise the population of water hyacinth need to be continual and consistent; experts warning that unless control methods are upheld, the problem can easily reoccur. Some say inattention for as little as a twelve month period would allow numbers to quickly return to infestation level; hardly surprising given that the species is known to be able to double in as little as 12 days. Water hyacinth is thought to have been introduced into Africa in the 1800s; its presence at Lake Kyoga was first identified in 1988 and at Lake Victoria in 1989. In the mid 1990s, water hyacinth was estimated to dominate 10% of the latter lakes waters. However, by 1998, the plant was almost completely eliminated from East African waters; this being achieved predominantly by the use of bio-control insects, in this case snout beetles, a type of weevil which feeds only on the water hyacinth species of plant. Tens of thousands of the weevils were distributed throughout the lake areas of East Africa, their habit of feeding on the leaves and laying their eggs in the plants stalks eventually causing the plants to die and sink to the bottom of the lake. In addition, the plant population was removed using mechanical clearing equipment and by hand with the help of a machete. 
Despite earlier success, however, negative repercussions of human activity have caused the return of water hyacinth to East African waters. Ugandas Lake Kyoga, has recently once again experienced problems with infestation. Sewage and agricultural waste making their way into the waterways and thereby creating an excess of nutrients in the water have been the main contributing factors to the re-emergence of water hyacinth. In addition, high levels of nitrogen in rainfall, which enters the water cycle from the smoke created by wood burning cooking fires used in the region, also serves as nutrition to the increasing plant population. Restriction of human activity on lakes such as this, caused by the infestation of water hyacinth has enormous implications; villages such as Kayago, which is in close proximity to the lake, are often almost completely dependent on fishing activity for their economy and food source. While the infestation of water hyacinth in Lake Victoria at the time of writing stands at 0.5%, far below the 10% level experienced in the middle of the 1990s, experts fear that growth could once again become out of control. The main concern is that, as a result of changing weather conditions, the activity of the snout beetle weevils may be less effective than in the past. The region around Lake Victoria has experienced an extended period of drought and while the water hyacinth is capable of living and reproducing both in lakes and surrounding dry land, its predator, the snout beetle can only survive on water. Plant populations growing in lakeside locations are therefore under limited threat from the insect brought in to control them and are consequently able to reproduce in relative freedom.", "hypothesis": "The current problem of dominance of water hyacinth on Lake Kyoga is less serious than in the 1980s and early 1990s.", "label": "n"} +{"uid": "id_339", "premise": "WATER HYACINTH: BEAUTIFUL YET DESTRUCTIVE Despite possessing vibrant purple flowers and being attractive to the eye, the water hyacinth has often been referred to as the most problematic aquatic plant in the worlds waters. Due to its aesthetic appeal, water hyacinth, which is native to South America, has been distributed to many different regions and now thrives in the southern states of the USA and many subtropical and tropical locations. It has also been observed to be relatively tolerant of cooler climates and is routinely sold as an ornamental plant for domestic use in a number of horticulture centres. Though the hyacinth species is distinctive in appearance, another aquatic floating plant water lettuce is sometimes mistakenly identified as water hyacinth. Water lettuce, however, does not have the same attractive flowers, has larger leaves and is less tolerant of cooler climates. Water hyacinth has rounded waxy, green leaves which grow up to around 6 inches in width and floating leaf stems which grow up to 12 inches in length. Flowers are typically between 2 to 3 inches in width and as many as 15 flowers, each purple on the outside and containing a yellow centre, may grow from each plant. Many of the problems associated with the water hyacinth are due to its incredible growth and reproduction capabilities, which have made it difficult to control and allow it to quickly dominate the environment in which it grows and spreads. 
Its growth patterns are characterised by a rapid formation of an impenetrable vegetation mass; botanists say that one plant can produce around 5000 seeds and in one study two plants were observed to produce 1200 plants in as little as 4 months. Following natures usual pattern, water hyacinth seeds are distributed outside of the immediate area by birds, fauna, wind and water currents, facilitating growth in surrounding areas previously free of the plant. Domination of environments by water hyacinth populations has a number of negative implications. For humans, difficulties may be faced in getting boats through areas of rivers and lakes where the plant is present and fishing and swimming opportunities may be limited. However, the implications for the ecosystem of the immediate environment may be of even greater concern. The density of the mass of water hyacinth populations can prevent adequate amounts of sunlight and oxygen reaching the water: as a result, significant numbers of fish may die, other species of plant growing below water level are compromised and the ecosystem of the immediate area can therefore become unbalanced. Furthermore, the conditions created by the presence of water hyacinth, while detrimental to most forms of life, are perfect for encouraging growth of deadly bacteria often found in poorly oxygenated areas of water. In the southern states of the USA, in Florida in particular, water hyacinth is now under maintenance control. The plant population can be limited in a number of ways: including use of herbicides, clearance equipment and bio-control insects. However, efforts to minimise the population of water hyacinth need to be continual and consistent; experts warning that unless control methods are upheld, the problem can easily reoccur. Some say inattention for as little as a twelve month period would allow numbers to quickly return to infestation level; hardly surprising given that the species is known to be able to double in as little as 12 days. Water hyacinth is thought to have been introduced into Africa in the 1800s; its presence at Lake Kyoga was first identified in 1988 and at Lake Victoria in 1989. In the mid 1990s, water hyacinth was estimated to dominate 10% of the latter lakes waters. However, by 1998, the plant was almost completely eliminated from East African waters; this being achieved predominantly by the use of bio-control insects, in this case snout beetles, a type of weevil which feeds only on the water hyacinth species of plant. Tens of thousands of the weevils were distributed throughout the lake areas of East Africa, their habit of feeding on the leaves and laying their eggs in the plants stalks eventually causing the plants to die and sink to the bottom of the lake. In addition, the plant population was removed using mechanical clearing equipment and by hand with the help of a machete. Despite earlier success, however, negative repercussions of human activity have caused the return of water hyacinth to East African waters. Ugandas Lake Kyoga, has recently once again experienced problems with infestation. Sewage and agricultural waste making their way into the waterways and thereby creating an excess of nutrients in the water have been the main contributing factors to the re-emergence of water hyacinth. In addition, high levels of nitrogen in rainfall, which enters the water cycle from the smoke created by wood burning cooking fires used in the region, also serves as nutrition to the increasing plant population. 
Restriction of human activity on lakes such as this, caused by the infestation of water hyacinth has enormous implications; villages such as Kayago, which is in close proximity to the lake, are often almost completely dependent on fishing activity for their economy and food source. While the infestation of water hyacinth in Lake Victoria at the time of writing stands at 0.5%, far below the 10% level experienced in the middle of the 1990s, experts fear that growth could once again become out of control. The main concern is that, as a result of changing weather conditions, the activity of the snout beetle weevils may be less effective than in the past. The region around Lake Victoria has experienced an extended period of drought and while the water hyacinth is capable of living and reproducing both in lakes and surrounding dry land, its predator, the snout beetle can only survive on water. Plant populations growing in lakeside locations are therefore under limited threat from the insect brought in to control them and are consequently able to reproduce in relative freedom.", "hypothesis": "Presence of dense water hyacinth populations can encourage the development of certain harmful forms of life.", "label": "e"} +{"uid": "id_340", "premise": "WATER HYACINTH: BEAUTIFUL YET DESTRUCTIVE Despite possessing vibrant purple flowers and being attractive to the eye, the water hyacinth has often been referred to as the most problematic aquatic plant in the worlds waters. Due to its aesthetic appeal, water hyacinth, which is native to South America, has been distributed to many different regions and now thrives in the southern states of the USA and many subtropical and tropical locations. It has also been observed to be relatively tolerant of cooler climates and is routinely sold as an ornamental plant for domestic use in a number of horticulture centres. Though the hyacinth species is distinctive in appearance, another aquatic floating plant water lettuce is sometimes mistakenly identified as water hyacinth. Water lettuce, however, does not have the same attractive flowers, has larger leaves and is less tolerant of cooler climates. Water hyacinth has rounded waxy, green leaves which grow up to around 6 inches in width and floating leaf stems which grow up to 12 inches in length. Flowers are typically between 2 to 3 inches in width and as many as 15 flowers, each purple on the outside and containing a yellow centre, may grow from each plant. Many of the problems associated with the water hyacinth are due to its incredible growth and reproduction capabilities, which have made it difficult to control and allow it to quickly dominate the environment in which it grows and spreads. Its growth patterns are characterised by a rapid formation of an impenetrable vegetation mass; botanists say that one plant can produce around 5000 seeds and in one study two plants were observed to produce 1200 plants in as little as 4 months. Following natures usual pattern, water hyacinth seeds are distributed outside of the immediate area by birds, fauna, wind and water currents, facilitating growth in surrounding areas previously free of the plant. Domination of environments by water hyacinth populations has a number of negative implications. For humans, difficulties may be faced in getting boats through areas of rivers and lakes where the plant is present and fishing and swimming opportunities may be limited. However, the implications for the ecosystem of the immediate environment may be of even greater concern. 
The density of the mass of water hyacinth populations can prevent adequate amounts of sunlight and oxygen reaching the water: as a result, significant numbers of fish may die, other species of plant growing below water level are compromised and the ecosystem of the immediate area can therefore become unbalanced. Furthermore, the conditions created by the presence of water hyacinth, while detrimental to most forms of life, are perfect for encouraging growth of deadly bacteria often found in poorly oxygenated areas of water. In the southern states of the USA, in Florida in particular, water hyacinth is now under maintenance control. The plant population can be limited in a number of ways: including use of herbicides, clearance equipment and bio-control insects. However, efforts to minimise the population of water hyacinth need to be continual and consistent; experts warning that unless control methods are upheld, the problem can easily reoccur. Some say inattention for as little as a twelve month period would allow numbers to quickly return to infestation level; hardly surprising given that the species is known to be able to double in as little as 12 days. Water hyacinth is thought to have been introduced into Africa in the 1800s; its presence at Lake Kyoga was first identified in 1988 and at Lake Victoria in 1989. In the mid 1990s, water hyacinth was estimated to dominate 10% of the latter lakes waters. However, by 1998, the plant was almost completely eliminated from East African waters; this being achieved predominantly by the use of bio-control insects, in this case snout beetles, a type of weevil which feeds only on the water hyacinth species of plant. Tens of thousands of the weevils were distributed throughout the lake areas of East Africa, their habit of feeding on the leaves and laying their eggs in the plants stalks eventually causing the plants to die and sink to the bottom of the lake. In addition, the plant population was removed using mechanical clearing equipment and by hand with the help of a machete. Despite earlier success, however, negative repercussions of human activity have caused the return of water hyacinth to East African waters. Ugandas Lake Kyoga, has recently once again experienced problems with infestation. Sewage and agricultural waste making their way into the waterways and thereby creating an excess of nutrients in the water have been the main contributing factors to the re-emergence of water hyacinth. In addition, high levels of nitrogen in rainfall, which enters the water cycle from the smoke created by wood burning cooking fires used in the region, also serves as nutrition to the increasing plant population. Restriction of human activity on lakes such as this, caused by the infestation of water hyacinth has enormous implications; villages such as Kayago, which is in close proximity to the lake, are often almost completely dependent on fishing activity for their economy and food source. While the infestation of water hyacinth in Lake Victoria at the time of writing stands at 0.5%, far below the 10% level experienced in the middle of the 1990s, experts fear that growth could once again become out of control. The main concern is that, as a result of changing weather conditions, the activity of the snout beetle weevils may be less effective than in the past. 
The region around Lake Victoria has experienced an extended period of drought and while the water hyacinth is capable of living and reproducing both in lakes and surrounding dry land, its predator, the snout beetle can only survive on water. Plant populations growing in lakeside locations are therefore under limited threat from the insect brought in to control them and are consequently able to reproduce in relative freedom.", "hypothesis": "Sewage and waste created by farming have had more of an impact on the return of the water hyacinth population in Uganda than nitrogen- rich air.", "label": "e"} +{"uid": "id_341", "premise": "WEATHERING IN THE DESERT In the deserts, as elsewhere, rocks at the earths surface are changed by weathering, which may be defined as the disintegration of rocks where they lie. Weathering processes are either chemical, when alteration of some of the constituent particles is involved; or mechanical, when there is merely the physical breaking apart and fragmentation of rocks. Which process will dominate depends primarily on the mineralogy and texture of the rock and the local climate, but several individual processes usually work together to the common end of rock disintegration. The great daily changes in temperature of deserts have long been supposed to be responsible for the disintegration of rocks, either by the differential heating of the various rock-forming minerals or by differential heating between the outer and inner parts of rock masses. However, both field observations and laboratory experiments have led to a reassessment of the importance of exposure to the suns rays in desert weathering. Almost half a century ago Barton remarked that the buried parts of some of the ancient monuments in Egypt were more weathered than were those parts fully exposed to the suns rays, and attributed this to the effects of water absorption below the ground surface. Laboratory experiments have shown that rocks subjected to many cycles of large temperature oscillations (larger than those experienced in nature) display no evidence of fissuring or fragmentation, as a result. However, when marked fluctuations of temperature occur in moist conditions small rock fragments quickly form. The expansive action of crystallising salts is often alleged to exert sufficient force to disintegrate rocks. Few would dispute that this mechanism is capable of disrupting fissile or well-cleaved rocks or rocks already weakened by other weathering agencies; wood is splintered, terracotta tiles disintegrated and clays disturbed by the mechanism, but its importance when acting upon fresh and cohesive crystalline rocks remains uncertain. Weathering achieves more than the disintegration of rocks, though this is its most important geomorphic effect. It causes specific landforms to develop. Many boulders possess a superficial hard layer of iron oxide and/or silica, substances which have migrated in solution from the inside of the block towards the surface. Not only is the exterior thus case-hardened but the depleted interior disintegrates easily. When weathering penetrates the shell the inside is rapidly attacked and only the hard outer layer remains to give hollowed or tortoiseshell rocks. Another superficial layer, the precise nature of which is little understood, is the well-known desert varnish or patina, a shiny coat on the surface of rocks and pebbles and characteristic of arid environments. Some varnishes are colourless, others light brown, yet others so dark a brown as to be virtually black. 
Its origin is unknown but is significant, for it has been suggested that the varnish grows darker with the passage of time; obviously before such a criterion could be used with confidence as a chronological tool its origin must be known with precision. Its formation is so slow that in Egypt, for example, it has been estimated that a light brown coating requires between 2,000 and 5,000 years to develop, a fully formed blackish veneer between 20,000 and 50,000 years. The development of relatively impermeable soil horizons that are subsequently exposed at the surface because of erosion of once overlying, easily eroded materials, and which thus become surface crusts, is widespread in arid regions, although it is also known outside the deserts, and indeed many of the examples in arid lands probably originated in former periods of humid climate. The crusts prevent the waters of occasional torrential downpours from penetrating deeply into the soil, and thus they contribute to the rapid run-off associated with desert storms. Also, after erosion has cut through the crust and exposed underlying soil layers, the hard layer forms a resistant capping (duricrust) on plateaux and mesas, such as are common in many parts of arid and semi-arid Australia. Some duricrust layers have been used as time markers for landforms and geological formations. The necessary conditions for this are that the crust forms fairly rapidly, and that it is sufficiently distinct in appearance to preclude the possibility of confusion with other crusts formed at other times. The Barrilaco calcrete of Mexico for instance is believed to date from about 7,000 B. C. The main silcrete of the northern districts of South Australia is believed to date from the Lower Miocene, the laterite of northern Australia to be of the Lower or Middle Miocene age.", "hypothesis": "It is estimated that dark patina originated between 2,000 and 5,000 years ago.", "label": "c"} +{"uid": "id_342", "premise": "WEATHERING IN THE DESERT In the deserts, as elsewhere, rocks at the earths surface are changed by weathering, which may be defined as the disintegration of rocks where they lie. Weathering processes are either chemical, when alteration of some of the constituent particles is involved; or mechanical, when there is merely the physical breaking apart and fragmentation of rocks. Which process will dominate depends primarily on the mineralogy and texture of the rock and the local climate, but several individual processes usually work together to the common end of rock disintegration. The great daily changes in temperature of deserts have long been supposed to be responsible for the disintegration of rocks, either by the differential heating of the various rock-forming minerals or by differential heating between the outer and inner parts of rock masses. However, both field observations and laboratory experiments have led to a reassessment of the importance of exposure to the suns rays in desert weathering. Almost half a century ago Barton remarked that the buried parts of some of the ancient monuments in Egypt were more weathered than were those parts fully exposed to the suns rays, and attributed this to the effects of water absorption below the ground surface. Laboratory experiments have shown that rocks subjected to many cycles of large temperature oscillations (larger than those experienced in nature) display no evidence of fissuring or fragmentation, as a result. 
However, when marked fluctuations of temperature occur in moist conditions small rock fragments quickly form. The expansive action of crystallising salts is often alleged to exert sufficient force to disintegrate rocks. Few would dispute that this mechanism is capable of disrupting fissile or well-cleaved rocks or rocks already weakened by other weathering agencies; wood is splintered, terracotta tiles disintegrated and clays disturbed by the mechanism, but its importance when acting upon fresh and cohesive crystalline rocks remains uncertain. Weathering achieves more than the disintegration of rocks, though this is its most important geomorphic effect. It causes specific landforms to develop. Many boulders possess a superficial hard layer of iron oxide and/or silica, substances which have migrated in solution from the inside of the block towards the surface. Not only is the exterior thus case-hardened but the depleted interior disintegrates easily. When weathering penetrates the shell the inside is rapidly attacked and only the hard outer layer remains to give hollowed or tortoiseshell rocks. Another superficial layer, the precise nature of which is little understood, is the well-known desert varnish or patina, a shiny coat on the surface of rocks and pebbles and characteristic of arid environments. Some varnishes are colourless, others light brown, yet others so dark a brown as to be virtually black. Its origin is unknown but is significant, for it has been suggested that the varnish grows darker with the passage of time; obviously before such a criterion could be used with confidence as a chronological tool its origin must be known with precision. Its formation is so slow that in Egypt, for example, it has been estimated that a light brown coating requires between 2,000 and 5,000 years to develop, a fully formed blackish veneer between 20,000 and 50,000 years. The development of relatively impermeable soil horizons that are subsequently exposed at the surface because of erosion of once overlying, easily eroded materials, and which thus become surface crusts, is widespread in arid regions, although it is also known outside the deserts, and indeed many of the examples in arid lands probably originated in former periods of humid climate. The crusts prevent the waters of occasional torrential downpours from penetrating deeply into the soil, and thus they contribute to the rapid run-off associated with desert storms. Also, after erosion has cut through the crust and exposed underlying soil layers, the hard layer forms a resistant capping (duricrust) on plateaux and mesas, such as are common in many parts of arid and semi-arid Australia. Some duricrust layers have been used as time markers for landforms and geological formations. The necessary conditions for this are that the crust forms fairly rapidly, and that it is sufficiently distinct in appearance to preclude the possibility of confusion with other crusts formed at other times. The Barrilaco calcrete of Mexico for instance is believed to date from about 7,000 B. C. 
The main silcrete of the northern districts of South Australia is believed to date from the Lower Miocene, the laterite of northern Australia to be of the Lower or Middle Miocene age.", "hypothesis": "Desert rocks can become weathered when there is a chemical reaction within the rock.", "label": "e"} +{"uid": "id_343", "premise": "WEATHERING IN THE DESERT In the deserts, as elsewhere, rocks at the earths surface are changed by weathering, which may be defined as the disintegration of rocks where they lie. Weathering processes are either chemical, when alteration of some of the constituent particles is involved; or mechanical, when there is merely the physical breaking apart and fragmentation of rocks. Which process will dominate depends primarily on the mineralogy and texture of the rock and the local climate, but several individual processes usually work together to the common end of rock disintegration. The great daily changes in temperature of deserts have long been supposed to be responsible for the disintegration of rocks, either by the differential heating of the various rock-forming minerals or by differential heating between the outer and inner parts of rock masses. However, both field observations and laboratory experiments have led to a reassessment of the importance of exposure to the suns rays in desert weathering. Almost half a century ago Barton remarked that the buried parts of some of the ancient monuments in Egypt were more weathered than were those parts fully exposed to the suns rays, and attributed this to the effects of water absorption below the ground surface. Laboratory experiments have shown that rocks subjected to many cycles of large temperature oscillations (larger than those experienced in nature) display no evidence of fissuring or fragmentation, as a result. However, when marked fluctuations of temperature occur in moist conditions small rock fragments quickly form. The expansive action of crystallising salts is often alleged to exert sufficient force to disintegrate rocks. Few would dispute that this mechanism is capable of disrupting fissile or well-cleaved rocks or rocks already weakened by other weathering agencies; wood is splintered, terracotta tiles disintegrated and clays disturbed by the mechanism, but its importance when acting upon fresh and cohesive crystalline rocks remains uncertain. Weathering achieves more than the disintegration of rocks, though this is its most important geomorphic effect. It causes specific landforms to develop. Many boulders possess a superficial hard layer of iron oxide and/or silica, substances which have migrated in solution from the inside of the block towards the surface. Not only is the exterior thus case-hardened but the depleted interior disintegrates easily. When weathering penetrates the shell the inside is rapidly attacked and only the hard outer layer remains to give hollowed or tortoiseshell rocks. Another superficial layer, the precise nature of which is little understood, is the well-known desert varnish or patina, a shiny coat on the surface of rocks and pebbles and characteristic of arid environments. Some varnishes are colourless, others light brown, yet others so dark a brown as to be virtually black. Its origin is unknown but is significant, for it has been suggested that the varnish grows darker with the passage of time; obviously before such a criterion could be used with confidence as a chronological tool its origin must be known with precision. 
Its formation is so slow that in Egypt, for example, it has been estimated that a light brown coating requires between 2,000 and 5,000 years to develop, a fully formed blackish veneer between 20,000 and 50,000 years. The development of relatively impermeable soil horizons that are subsequently exposed at the surface because of erosion of once overlying, easily eroded materials, and which thus become surface crusts, is widespread in arid regions, although it is also known outside the deserts, and indeed many of the examples in arid lands probably originated in former periods of humid climate. The crusts prevent the waters of occasional torrential downpours from penetrating deeply into the soil, and thus they contribute to the rapid run-off associated with desert storms. Also, after erosion has cut through the crust and exposed underlying soil layers, the hard layer forms a resistant capping (duricrust) on plateaux and mesas, such as are common in many parts of arid and semi-arid Australia. Some duricrust layers have been used as time markers for landforms and geological formations. The necessary conditions for this are that the crust forms fairly rapidly, and that it is sufficiently distinct in appearance to preclude the possibility of confusion with other crusts formed at other times. The Barrilaco calcrete of Mexico for instance is believed to date from about 7,000 B. C. The main silcrete of the northern districts of South Australia is believed to date from the Lower Miocene, the laterite of northern Australia to be of the Lower or Middle Miocene age.", "hypothesis": "The parts of Egyptian monuments exposed to sunlight were found to be affected by the weather more than those below the ground.", "label": "c"} +{"uid": "id_344", "premise": "WEATHERING IN THE DESERT In the deserts, as elsewhere, rocks at the earths surface are changed by weathering, which may be defined as the disintegration of rocks where they lie. Weathering processes are either chemical, when alteration of some of the constituent particles is involved; or mechanical, when there is merely the physical breaking apart and fragmentation of rocks. Which process will dominate depends primarily on the mineralogy and texture of the rock and the local climate, but several individual processes usually work together to the common end of rock disintegration. The great daily changes in temperature of deserts have long been supposed to be responsible for the disintegration of rocks, either by the differential heating of the various rock-forming minerals or by differential heating between the outer and inner parts of rock masses. However, both field observations and laboratory experiments have led to a reassessment of the importance of exposure to the suns rays in desert weathering. Almost half a century ago Barton remarked that the buried parts of some of the ancient monuments in Egypt were more weathered than were those parts fully exposed to the suns rays, and attributed this to the effects of water absorption below the ground surface. Laboratory experiments have shown that rocks subjected to many cycles of large temperature oscillations (larger than those experienced in nature) display no evidence of fissuring or fragmentation, as a result. However, when marked fluctuations of temperature occur in moist conditions small rock fragments quickly form. The expansive action of crystallising salts is often alleged to exert sufficient force to disintegrate rocks. 
Few would dispute that this mechanism is capable of disrupting fissile or well-cleaved rocks or rocks already weakened by other weathering agencies; wood is splintered, terracotta tiles disintegrated and clays disturbed by the mechanism, but its importance when acting upon fresh and cohesive crystalline rocks remains uncertain. Weathering achieves more than the disintegration of rocks, though this is its most important geomorphic effect. It causes specific landforms to develop. Many boulders possess a superficial hard layer of iron oxide and/or silica, substances which have migrated in solution from the inside of the block towards the surface. Not only is the exterior thus case-hardened but the depleted interior disintegrates easily. When weathering penetrates the shell the inside is rapidly attacked and only the hard outer layer remains to give hollowed or tortoiseshell rocks. Another superficial layer, the precise nature of which is little understood, is the well-known desert varnish or patina, a shiny coat on the surface of rocks and pebbles and characteristic of arid environments. Some varnishes are colourless, others light brown, yet others so dark a brown as to be virtually black. Its origin is unknown but is significant, for it has been suggested that the varnish grows darker with the passage of time; obviously before such a criterion could be used with confidence as a chronological tool its origin must be known with precision. Its formation is so slow that in Egypt, for example, it has been estimated that a light brown coating requires between 2,000 and 5,000 years to develop, a fully formed blackish veneer between 20,000 and 50,000 years. The development of relatively impermeable soil horizons that are subsequently exposed at the surface because of erosion of once overlying, easily eroded materials, and which thus become surface crusts, is widespread in arid regions, although it is also known outside the deserts, and indeed many of the examples in arid lands probably originated in former periods of humid climate. The crusts prevent the waters of occasional torrential downpours from penetrating deeply into the soil, and thus they contribute to the rapid run-off associated with desert storms. Also, after erosion has cut through the crust and exposed underlying soil layers, the hard layer forms a resistant capping (duricrust) on plateaux and mesas, such as are common in many parts of arid and semi-arid Australia. Some duricrust layers have been used as time markers for landforms and geological formations. The necessary conditions for this are that the crust forms fairly rapidly, and that it is sufficiently distinct in appearance to preclude the possibility of confusion with other crusts formed at other times. The Barrilaco calcrete of Mexico for instance is believed to date from about 7,000 B. C. The main silcrete of the northern districts of South Australia is believed to date from the Lower Miocene, the laterite of northern Australia to be of the Lower or Middle Miocene age.", "hypothesis": "Duricrust layering is no longer used as an indicator of time because of the confusion with similar crusts.", "label": "n"} +{"uid": "id_345", "premise": "WEATHERING IN THE DESERT In the deserts, as elsewhere, rocks at the earths surface are changed by weathering, which may be defined as the disintegration of rocks where they lie. 
Weathering processes are either chemical, when alteration of some of the constituent particles is involved; or mechanical, when there is merely the physical breaking apart and fragmentation of rocks. Which process will dominate depends primarily on the mineralogy and texture of the rock and the local climate, but several individual processes usually work together to the common end of rock disintegration. The great daily changes in temperature of deserts have long been supposed to be responsible for the disintegration of rocks, either by the differential heating of the various rock-forming minerals or by differential heating between the outer and inner parts of rock masses. However, both field observations and laboratory experiments have led to a reassessment of the importance of exposure to the suns rays in desert weathering. Almost half a century ago Barton remarked that the buried parts of some of the ancient monuments in Egypt were more weathered than were those parts fully exposed to the suns rays, and attributed this to the effects of water absorption below the ground surface. Laboratory experiments have shown that rocks subjected to many cycles of large temperature oscillations (larger than those experienced in nature) display no evidence of fissuring or fragmentation, as a result. However, when marked fluctuations of temperature occur in moist conditions small rock fragments quickly form. The expansive action of crystallising salts is often alleged to exert sufficient force to disintegrate rocks. Few would dispute that this mechanism is capable of disrupting fissile or well-cleaved rocks or rocks already weakened by other weathering agencies; wood is splintered, terracotta tiles disintegrated and clays disturbed by the mechanism, but its importance when acting upon fresh and cohesive crystalline rocks remains uncertain. Weathering achieves more than the disintegration of rocks, though this is its most important geomorphic effect. It causes specific landforms to develop. Many boulders possess a superficial hard layer of iron oxide and/or silica, substances which have migrated in solution from the inside of the block towards the surface. Not only is the exterior thus case-hardened but the depleted interior disintegrates easily. When weathering penetrates the shell the inside is rapidly attacked and only the hard outer layer remains to give hollowed or tortoiseshell rocks. Another superficial layer, the precise nature of which is little understood, is the well-known desert varnish or patina, a shiny coat on the surface of rocks and pebbles and characteristic of arid environments. Some varnishes are colourless, others light brown, yet others so dark a brown as to be virtually black. Its origin is unknown but is significant, for it has been suggested that the varnish grows darker with the passage of time; obviously before such a criterion could be used with confidence as a chronological tool its origin must be known with precision. Its formation is so slow that in Egypt, for example, it has been estimated that a light brown coating requires between 2,000 and 5,000 years to develop, a fully formed blackish veneer between 20,000 and 50,000 years. 
The development of relatively impermeable soil horizons that are subsequently exposed at the surface because of erosion of once overlying, easily eroded materials, and which thus become surface crusts, is widespread in arid regions, although it is also known outside the deserts, and indeed many of the examples in arid lands probably originated in former periods of humid climate. The crusts prevent the waters of occasional torrential downpours from penetrating deeply into the soil, and thus they contribute to the rapid run-off associated with desert storms. Also, after erosion has cut through the crust and exposed underlying soil layers, the hard layer forms a resistant capping (duricrust) on plateaux and mesas, such as are common in many parts of arid and semi-arid Australia. Some duricrust layers have been used as time markers for landforms and geological formations. The necessary conditions for this are that the crust forms fairly rapidly, and that it is sufficiently distinct in appearance to preclude the possibility of confusion with other crusts formed at other times. The Barrilaco calcrete of Mexico for instance is believed to date from about 7,000 B. C. The main silcrete of the northern districts of South Australia is believed to date from the Lower Miocene, the laterite of northern Australia to be of the Lower or Middle Miocene age.", "hypothesis": "Because of surface crusts, water from torrential rains cannot be fully absorbed into the ground and as a result causes run offs in arid regions.", "label": "e"} +{"uid": "id_346", "premise": "WEATHERING IN THE DESERT In the deserts, as elsewhere, rocks at the earths surface are changed by weathering, which may be defined as the disintegration of rocks where they lie. Weathering processes are either chemical, when alteration of some of the constituent particles is involved; or mechanical, when there is merely the physical breaking apart and fragmentation of rocks. Which process will dominate depends primarily on the mineralogy and texture of the rock and the local climate, but several individual processes usually work together to the common end of rock disintegration. The great daily changes in temperature of deserts have long been supposed to be responsible for the disintegration of rocks, either by the differential heating of the various rock-forming minerals or by differential heating between the outer and inner parts of rock masses. However, both field observations and laboratory experiments have led to a reassessment of the importance of exposure to the suns rays in desert weathering. Almost half a century ago Barton remarked that the buried parts of some of the ancient monuments in Egypt were more weathered than were those parts fully exposed to the suns rays, and attributed this to the effects of water absorption below the ground surface. Laboratory experiments have shown that rocks subjected to many cycles of large temperature oscillations (larger than those experienced in nature) display no evidence of fissuring or fragmentation, as a result. However, when marked fluctuations of temperature occur in moist conditions small rock fragments quickly form. The expansive action of crystallising salts is often alleged to exert sufficient force to disintegrate rocks. 
Few would dispute that this mechanism is capable of disrupting fissile or well-cleaved rocks or rocks already weakened by other weathering agencies; wood is splintered, terracotta tiles disintegrated and clays disturbed by the mechanism, but its importance when acting upon fresh and cohesive crystalline rocks remains uncertain. Weathering achieves more than the disintegration of rocks, though this is its most important geomorphic effect. It causes specific landforms to develop. Many boulders possess a superficial hard layer of iron oxide and/or silica, substances which have migrated in solution from the inside of the block towards the surface. Not only is the exterior thus case-hardened but the depleted interior disintegrates easily. When weathering penetrates the shell the inside is rapidly attacked and only the hard outer layer remains to give hollowed or tortoiseshell rocks. Another superficial layer, the precise nature of which is little understood, is the well-known desert varnish or patina, a shiny coat on the surface of rocks and pebbles and characteristic of arid environments. Some varnishes are colourless, others light brown, yet others so dark a brown as to be virtually black. Its origin is unknown but is significant, for it has been suggested that the varnish grows darker with the passage of time; obviously before such a criterion could be used with confidence as a chronological tool its origin must be known with precision. Its formation is so slow that in Egypt, for example, it has been estimated that a light brown coating requires between 2,000 and 5,000 years to develop, a fully formed blackish veneer between 20,000 and 50,000 years. The development of relatively impermeable soil horizons that are subsequently exposed at the surface because of erosion of once overlying, easily eroded materials, and which thus become surface crusts, is widespread in arid regions, although it is also known outside the deserts, and indeed many of the examples in arid lands probably originated in former periods of humid climate. The crusts prevent the waters of occasional torrential downpours from penetrating deeply into the soil, and thus they contribute to the rapid run-off associated with desert storms. Also, after erosion has cut through the crust and exposed underlying soil layers, the hard layer forms a resistant capping (duricrust) on plateaux and mesas, such as are common in many parts of arid and semi-arid Australia. Some duricrust layers have been used as time markers for landforms and geological formations. The necessary conditions for this are that the crust forms fairly rapidly, and that it is sufficiently distinct in appearance to preclude the possibility of confusion with other crusts formed at other times. The Barrilaco calcrete of Mexico for instance is believed to date from about 7,000 B. C. The main silcrete of the northern districts of South Australia is believed to date from the Lower Miocene, the laterite of northern Australia to be of the Lower or Middle Miocene age.", "hypothesis": "Granite which has been subjected to huge temperature swings tends not to exhibit any signs of disintegration as a result.", "label": "n"} +{"uid": "id_347", "premise": "WEST THAMES COLLEGE BACKGROUND INFORMATION FOR CANDIDATES West Thames College (initially known as Hounslow Borough College) came into existence in 1976 following the merger of Isleworth Polytechnic with part of Chiswick Polytechnic. 
Both parent colleges, in various guises, enjoyed a long tradition of service to the community dating back to the 1890s. The college is located at London Road, Isleworth, on a site occupied by the Victorian house of the Pears family, Spring Grove House. An earlier house of the same name on this site had been the home of Sir Joseph Banks, the botanist who named Botany Bay with Captain Cook in 1770. Later he founded Kew Gardens. Situated at the heart of West London, West Thames College is ideally placed to serve the training and education needs of local industry and local people. But its influence reaches much further than the immediate locality. Under its former name, Hounslow Borough College, it had already established a regional, national and international reputation for excellence. In fact, about eight per cent of its students come from continental Europe and further afield, whilst a further 52 per cent are from outside the immediate area. Since 1 April 1993, when it became independent of the local authority and adopted its new title, West Thames College has continued to build on that first class reputation. These days there is no such thing as a typical student. More than half of West Thames colleges 6000 students are over 19 years old. Some of these will be attending college part-time under their employers training schemes. Others will want to learn new skills purely out of interest, or out of a desire to improve their promotion chances, or they may want a change in career. The college is also very popular with 16-18 year olds, who see it as a practical alternative to a further two years at school. They want to study in the more adult atmosphere the college provides. They can choose from a far wider range of subjects than it would be practical for a sixth form to offer. If they want to go straight into employment they can still study at college to gain qualifications relevant to the job, either on a day-release basis or through Network or the Modern Apprenticeship Scheme.", "hypothesis": "Chiswick Polytechnic was closed at the same time West Thames College was opened.", "label": "n"} +{"uid": "id_348", "premise": "WEST THAMES COLLEGE BACKGROUND INFORMATION FOR CANDIDATES West Thames College (initially known as Hounslow Borough College) came into existence in 1976 following the merger of Isleworth Polytechnic with part of Chiswick Polytechnic. Both parent colleges, in various guises, enjoyed a long tradition of service to the community dating back to the 1890s. The college is located at London Road, Isleworth, on a site occupied by the Victorian house of the Pears family, Spring Grove House. An earlier house of the same name on this site had been the home of Sir Joseph Banks, the botanist who named Botany Bay with Captain Cook in 1770. Later he founded Kew Gardens. Situated at the heart of West London, West Thames College is ideally placed to serve the training and education needs of local industry and local people. But its influence reaches much further than the immediate locality. Under its former name, Hounslow Borough College, it had already established a regional, national and international reputation for excellence. In fact, about eight per cent of its students come from continental Europe and further afield, whilst a further 52 per cent are from outside the immediate area. Since 1 April 1993, when it became independent of the local authority and adopted its new title, West Thames College has continued to build on that first class reputation. 
These days there is no such thing as a typical student. More than half of West Thames colleges 6000 students are over 19 years old. Some of these will be attending college part-time under their employers training schemes. Others will want to learn new skills purely out of interest, or out of a desire to improve their promotion chances, or they may want a change in career. The college is also very popular with 16-18 year olds, who see it as a practical alternative to a further two years at school. They want to study in the more adult atmosphere the college provides. They can choose from a far wider range of subjects than it would be practical for a sixth form to offer. If they want to go straight into employment they can still study at college to gain qualifications relevant to the job, either on a day-release basis or through Network or the Modern Apprenticeship Scheme.", "hypothesis": "Students under the age of 16 cannot attend any of the courses offered by the college.", "label": "n"} +{"uid": "id_349", "premise": "WEST THAMES COLLEGE BACKGROUND INFORMATION FOR CANDIDATES West Thames College (initially known as Hounslow Borough College) came into existence in 1976 following the merger of Isleworth Polytechnic with part of Chiswick Polytechnic. Both parent colleges, in various guises, enjoyed a long tradition of service to the community dating back to the 1890s. The college is located at London Road, Isleworth, on a site occupied by the Victorian house of the Pears family, Spring Grove House. An earlier house of the same name on this site had been the home of Sir Joseph Banks, the botanist who named Botany Bay with Captain Cook in 1770. Later he founded Kew Gardens. Situated at the heart of West London, West Thames College is ideally placed to serve the training and education needs of local industry and local people. But its influence reaches much further than the immediate locality. Under its former name, Hounslow Borough College, it had already established a regional, national and international reputation for excellence. In fact, about eight per cent of its students come from continental Europe and further afield, whilst a further 52 per cent are from outside the immediate area. Since 1 April 1993, when it became independent of the local authority and adopted its new title, West Thames College has continued to build on that first class reputation. These days there is no such thing as a typical student. More than half of West Thames colleges 6000 students are over 19 years old. Some of these will be attending college part-time under their employers training schemes. Others will want to learn new skills purely out of interest, or out of a desire to improve their promotion chances, or they may want a change in career. The college is also very popular with 16-18 year olds, who see it as a practical alternative to a further two years at school. They want to study in the more adult atmosphere the college provides. They can choose from a far wider range of subjects than it would be practical for a sixth form to offer. 
If they want to go straight into employment they can still study at college to gain qualifications relevant to the job, either on a day-release basis or through Network or the Modern Apprenticeship Scheme.", "hypothesis": "There are fewer subjects to study in the sixth form of a school than at the college.", "label": "e"} +{"uid": "id_350", "premise": "WEST THAMES COLLEGE BACKGROUND INFORMATION FOR CANDIDATES West Thames College (initially known as Hounslow Borough College) came into existence in 1976 following the merger of Isleworth Polytechnic with part of Chiswick Polytechnic. Both parent colleges, in various guises, enjoyed a long tradition of service to the community dating back to the 1890s. The college is located at London Road, Isleworth, on a site occupied by the Victorian house of the Pears family, Spring Grove House. An earlier house of the same name on this site had been the home of Sir Joseph Banks, the botanist who named Botany Bay with Captain Cook in 1770. Later he founded Kew Gardens. Situated at the heart of West London, West Thames College is ideally placed to serve the training and education needs of local industry and local people. But its influence reaches much further than the immediate locality. Under its former name, Hounslow Borough College, it had already established a regional, national and international reputation for excellence. In fact, about eight per cent of its students come from continental Europe and further afield, whilst a further 52 per cent are from outside the immediate area. Since 1 April 1993, when it became independent of the local authority and adopted its new title, West Thames College has continued to build on that first class reputation. These days there is no such thing as a typical student. More than half of West Thames colleges 6000 students are over 19 years old. Some of these will be attending college part-time under their employers training schemes. Others will want to learn new skills purely out of interest, or out of a desire to improve their promotion chances, or they may want a change in career. The college is also very popular with 16-18 year olds, who see it as a practical alternative to a further two years at school. They want to study in the more adult atmosphere the college provides. They can choose from a far wider range of subjects than it would be practical for a sixth form to offer. If they want to go straight into employment they can still study at college to gain qualifications relevant to the job, either on a day-release basis or through Network or the Modern Apprenticeship Scheme.", "hypothesis": "There are currently 6000 students over the age of 19 attending the college.", "label": "c"} +{"uid": "id_351", "premise": "WEST THAMES COLLEGE BACKGROUND INFORMATION FOR CANDIDATES West Thames College (initially known as Hounslow Borough College) came into existence in 1976 following the merger of Isleworth Polytechnic with part of Chiswick Polytechnic. Both parent colleges, in various guises, enjoyed a long tradition of service to the community dating back to the 1890s. The college is located at London Road, Isleworth, on a site occupied by the Victorian house of the Pears family, Spring Grove House. An earlier house of the same name on this site had been the home of Sir Joseph Banks, the botanist who named Botany Bay with Captain Cook in 1770. Later he founded Kew Gardens. Situated at the heart of West London, West Thames College is ideally placed to serve the training and education needs of local industry and local people. 
But its influence reaches much further than the immediate locality. Under its former name, Hounslow Borough College, it had already established a regional, national and international reputation for excellence. In fact, about eight per cent of its students come from continental Europe and further afield, whilst a further 52 per cent are from outside the immediate area. Since 1 April 1993, when it became independent of the local authority and adopted its new title, West Thames College has continued to build on that first class reputation. These days there is no such thing as a typical student. More than half of West Thames colleges 6000 students are over 19 years old. Some of these will be attending college part-time under their employers training schemes. Others will want to learn new skills purely out of interest, or out of a desire to improve their promotion chances, or they may want a change in career. The college is also very popular with 16-18 year olds, who see it as a practical alternative to a further two years at school. They want to study in the more adult atmosphere the college provides. They can choose from a far wider range of subjects than it would be practical for a sixth form to offer. If they want to go straight into employment they can still study at college to gain qualifications relevant to the job, either on a day-release basis or through Network or the Modern Apprenticeship Scheme.", "hypothesis": "The college offers a more mature environment in which to learn than a school.", "label": "e"} +{"uid": "id_352", "premise": "WEST THAMES COLLEGE BACKGROUND INFORMATION FOR CANDIDATES West Thames College (initially known as Hounslow Borough College) came into existence in 1976 following the merger of Isleworth Polytechnic with part of Chiswick Polytechnic. Both parent colleges, in various guises, enjoyed a long tradition of service to the community dating back to the 1890s. The college is located at London Road, Isleworth, on a site occupied by the Victorian house of the Pears family, Spring Grove House. An earlier house of the same name on this site had been the home of Sir Joseph Banks, the botanist who named Botany Bay with Captain Cook in 1770. Later he founded Kew Gardens. Situated at the heart of West London, West Thames College is ideally placed to serve the training and education needs of local industry and local people. But its influence reaches much further than the immediate locality. Under its former name, Hounslow Borough College, it had already established a regional, national and international reputation for excellence. In fact, about eight per cent of its students come from continental Europe and further afield, whilst a further 52 per cent are from outside the immediate area. Since 1 April 1993, when it became independent of the local authority and adopted its new title, West Thames College has continued to build on that first class reputation. These days there is no such thing as a typical student. More than half of West Thames colleges 6000 students are over 19 years old. Some of these will be attending college part-time under their employers training schemes. Others will want to learn new skills purely out of interest, or out of a desire to improve their promotion chances, or they may want a change in career. The college is also very popular with 16-18 year olds, who see it as a practical alternative to a further two years at school. They want to study in the more adult atmosphere the college provides. 
They can choose from a far wider range of subjects than it would be practical for a sixth form to offer. If they want to go straight into employment they can still study at college to gain qualifications relevant to the job, either on a day-release basis or through Network or the Modern Apprenticeship Scheme.", "hypothesis": "The college changed its name to West Thames College in 1993.", "label": "e"} +{"uid": "id_353", "premise": "WEST THAMES COLLEGE BACKGROUND INFORMATION FOR CANDIDATES West Thames College (initially known as Hounslow Borough College) came into existence in 1976 following the merger of Isleworth Polytechnic with part of Chiswick Polytechnic. Both parent colleges, in various guises, enjoyed a long tradition of service to the community dating back to the 1890s. The college is located at London Road, Isleworth, on a site occupied by the Victorian house of the Pears family, Spring Grove House. An earlier house of the same name on this site had been the home of Sir Joseph Banks, the botanist who named Botany Bay with Captain Cook in 1770. Later he founded Kew Gardens. Situated at the heart of West London, West Thames College is ideally placed to serve the training and education needs of local industry and local people. But its influence reaches much further than the immediate locality. Under its former name, Hounslow Borough College, it had already established a regional, national and international reputation for excellence. In fact, about eight per cent of its students come from continental Europe and further afield, whilst a further 52 per cent are from outside the immediate area. Since 1 April 1993, when it became independent of the local authority and adopted its new title, West Thames College has continued to build on that first class reputation. These days there is no such thing as a typical student. More than half of West Thames colleges 6000 students are over 19 years old. Some of these will be attending college part-time under their employers training schemes. Others will want to learn new skills purely out of interest, or out of a desire to improve their promotion chances, or they may want a change in career. The college is also very popular with 16-18 year olds, who see it as a practical alternative to a further two years at school. They want to study in the more adult atmosphere the college provides. They can choose from a far wider range of subjects than it would be practical for a sixth form to offer. If they want to go straight into employment they can still study at college to gain qualifications relevant to the job, either on a day-release basis or through Network or the Modern Apprenticeship Scheme.", "hypothesis": "Most of the students at the college come from outside the local area.", "label": "e"} +{"uid": "id_354", "premise": "WESTLEY GENERAL HOSPITAL GUIDE FOR PATIENTS When you come to hospital for a planned stay, please remember that space is limited. We also advise you to bring an overnight bag even if you are only expecting to spend a day in hospital. A Clothing Please bring a selection of light clothing and personal belongings that may include: night clothes, a track suit, a sweater or fleece, a bathrobe, slippers or socks, glasses, contact lenses, dentures, a hearing aid, bottled drinks (plastic only), tissues, books and magazines, contact details of friends, cash to purchase items during your stay. B Toiletries Please bring a selection with you including a shaving kit if you are male. 
The hospital also runs a shop and trolley service from which extra items (additional toiletries, magazines, stamps, newspapers etc. ) can be purchased. C Valuables We strongly advise you not to bring any valuables with you as their security cannot be guaranteed. A closet is provided for some personal items. D Electrical appliances We ask that you do not bring electrical appliances with you. TV, radio and payphones are provided. E Medicines Please bring all your current medication with you, preferably in their original containers. On arrival the nursing staff will ask about your history and allergies. F Maternity Please bring the appropriate baby clothes and feeding equipment. For further information, please contact the Maternity Unit on 740648. G What Not to Bring Please do not bring any valuables (jewellery), personal computers, radios, TVs. The hospital cannot be held responsible for the loss of any items during your stay. Please note that the hospital does not allow the use of mobile telephones due to possible interference with patient monitoring equipment. H Smoking and Drinking Policy Smoking and alcohol are strictly prohibited in Westley Hospital. Patients wishing to smoke must do so outdoors. No alcohol is allowed on the premises. I Visiting Hours For details about when your friends and family can visit, see the list in your room or ward or check our website.", "hypothesis": "Dont bring any money to the hospital.", "label": "c"} +{"uid": "id_355", "premise": "WESTLEY GENERAL HOSPITAL GUIDE FOR PATIENTS When you come to hospital for a planned stay, please remember that space is limited. We also advise you to bring an overnight bag even if you are only expecting to spend a day in hospital. A Clothing Please bring a selection of light clothing and personal belongings that may include: night clothes, a track suit, a sweater or fleece, a bathrobe, slippers or socks, glasses, contact lenses, dentures, a hearing aid, bottled drinks (plastic only), tissues, books and magazines, contact details of friends, cash to purchase items during your stay. B Toiletries Please bring a selection with you including a shaving kit if you are male. The hospital also runs a shop and trolley service from which extra items (additional toiletries, magazines, stamps, newspapers etc. ) can be purchased. C Valuables We strongly advise you not to bring any valuables with you as their security cannot be guaranteed. A closet is provided for some personal items. D Electrical appliances We ask that you do not bring electrical appliances with you. TV, radio and payphones are provided. E Medicines Please bring all your current medication with you, preferably in their original containers. On arrival the nursing staff will ask about your history and allergies. F Maternity Please bring the appropriate baby clothes and feeding equipment. For further information, please contact the Maternity Unit on 740648. G What Not to Bring Please do not bring any valuables (jewellery), personal computers, radios, TVs. The hospital cannot be held responsible for the loss of any items during your stay. Please note that the hospital does not allow the use of mobile telephones due to possible interference with patient monitoring equipment. H Smoking and Drinking Policy Smoking and alcohol are strictly prohibited in Westley Hospital. Patients wishing to smoke must do so outdoors. No alcohol is allowed on the premises. 
I Visiting Hours For details about when your friends and family can visit, see the list in your room or ward or check our website.", "hypothesis": "Radios can interfere with hospital electronic equipment.", "label": "n"} +{"uid": "id_356", "premise": "WESTLEY GENERAL HOSPITAL GUIDE FOR PATIENTS When you come to hospital for a planned stay, please remember that space is limited. We also advise you to bring an overnight bag even if you are only expecting to spend a day in hospital. A Clothing Please bring a selection of light clothing and personal belongings that may include: night clothes, a track suit, a sweater or fleece, a bathrobe, slippers or socks, glasses, contact lenses, dentures, a hearing aid, bottled drinks (plastic only), tissues, books and magazines, contact details of friends, cash to purchase items during your stay. B Toiletries Please bring a selection with you including a shaving kit if you are male. The hospital also runs a shop and trolley service from which extra items (additional toiletries, magazines, stamps, newspapers etc. ) can be purchased. C Valuables We strongly advise you not to bring any valuables with you as their security cannot be guaranteed. A closet is provided for some personal items. D Electrical appliances We ask that you do not bring electrical appliances with you. TV, radio and payphones are provided. E Medicines Please bring all your current medication with you, preferably in their original containers. On arrival the nursing staff will ask about your history and allergies. F Maternity Please bring the appropriate baby clothes and feeding equipment. For further information, please contact the Maternity Unit on 740648. G What Not to Bring Please do not bring any valuables (jewellery), personal computers, radios, TVs. The hospital cannot be held responsible for the loss of any items during your stay. Please note that the hospital does not allow the use of mobile telephones due to possible interference with patient monitoring equipment. H Smoking and Drinking Policy Smoking and alcohol are strictly prohibited in Westley Hospital. Patients wishing to smoke must do so outdoors. No alcohol is allowed on the premises. I Visiting Hours For details about when your friends and family can visit, see the list in your room or ward or check our website.", "hypothesis": "Leave any false teeth at home.", "label": "c"} +{"uid": "id_357", "premise": "WESTLEY GENERAL HOSPITAL GUIDE FOR PATIENTS When you come to hospital for a planned stay, please remember that space is limited. We also advise you to bring an overnight bag even if you are only expecting to spend a day in hospital. A Clothing Please bring a selection of light clothing and personal belongings that may include: night clothes, a track suit, a sweater or fleece, a bathrobe, slippers or socks, glasses, contact lenses, dentures, a hearing aid, bottled drinks (plastic only), tissues, books and magazines, contact details of friends, cash to purchase items during your stay. B Toiletries Please bring a selection with you including a shaving kit if you are male. The hospital also runs a shop and trolley service from which extra items (additional toiletries, magazines, stamps, newspapers etc. ) can be purchased. C Valuables We strongly advise you not to bring any valuables with you as their security cannot be guaranteed. A closet is provided for some personal items. D Electrical appliances We ask that you do not bring electrical appliances with you. TV, radio and payphones are provided. 
E Medicines Please bring all your current medication with you, preferably in their original containers. On arrival the nursing staff will ask about your history and allergies. F Maternity Please bring the appropriate baby clothes and feeding equipment. For further information, please contact the Maternity Unit on 740648. G What Not to Bring Please do not bring any valuables (jewellery), personal computers, radios, TVs. The hospital cannot be held responsible for the loss of any items during your stay. Please note that the hospital does not allow the use of mobile telephones due to possible interference with patient monitoring equipment. H Smoking and Drinking Policy Smoking and alcohol are strictly prohibited in Westley Hospital. Patients wishing to smoke must do so outdoors. No alcohol is allowed on the premises. I Visiting Hours For details about when your friends and family can visit, see the list in your room or ward or check our website.", "hypothesis": "Telephone services are provided through coin or card operated telephones.", "label": "e"} +{"uid": "id_358", "premise": "WESTLEY GENERAL HOSPITAL GUIDE FOR PATIENTS When you come to hospital for a planned stay, please remember that space is limited. We also advise you to bring an overnight bag even if you are only expecting to spend a day in hospital. A Clothing Please bring a selection of light clothing and personal belongings that may include: night clothes, a track suit, a sweater or fleece, a bathrobe, slippers or socks, glasses, contact lenses, dentures, a hearing aid, bottled drinks (plastic only), tissues, books and magazines, contact details of friends, cash to purchase items during your stay. B Toiletries Please bring a selection with you including a shaving kit if you are male. The hospital also runs a shop and trolley service from which extra items (additional toiletries, magazines, stamps, newspapers etc. ) can be purchased. C Valuables We strongly advise you not to bring any valuables with you as their security cannot be guaranteed. A closet is provided for some personal items. D Electrical appliances We ask that you do not bring electrical appliances with you. TV, radio and payphones are provided. E Medicines Please bring all your current medication with you, preferably in their original containers. On arrival the nursing staff will ask about your history and allergies. F Maternity Please bring the appropriate baby clothes and feeding equipment. For further information, please contact the Maternity Unit on 740648. G What Not to Bring Please do not bring any valuables (jewellery), personal computers, radios, TVs. The hospital cannot be held responsible for the loss of any items during your stay. Please note that the hospital does not allow the use of mobile telephones due to possible interference with patient monitoring equipment. H Smoking and Drinking Policy Smoking and alcohol are strictly prohibited in Westley Hospital. Patients wishing to smoke must do so outdoors. No alcohol is allowed on the premises. I Visiting Hours For details about when your friends and family can visit, see the list in your room or ward or check our website.", "hypothesis": "You should pack a bag to stay for the night even if you intend only to be a day patient.", "label": "e"} +{"uid": "id_359", "premise": "WESTLEY SCHOOL OF ENGLISH Information for Students Timings The school is open Mon Fri from 7.30 am to 9.00 pm and on Saturday from 9.00 am to 12.30 pm. 
CLASS TIMINGS (Mon Fri) Lesson 1 8.45 am 10.15 am Lesson 2 10.45 am 12.15 pm Lesson 3 2.00 pm 3.30 pm Computer Room The school has a fully equipped computer lab with a free 24-hour internet connection. Students may use the computers at any time during school opening hours unless any class or activity is scheduled. In the evenings there is a booking system for the computers. Please read the rules for this in the computer room. Be advised that, due to the risk of viruses, students are not allowed to bring in and use their own disks or CDs. Self Access and Language Lab The lab is open and available for all students during school opening hours. There are tapes and self-study materials available for all levels. In the break times and the evenings there is a teacher on duty who can assist students with accessing material. Cafeteria The school cafeteria is open from 8.15 am to 5.00 pm. The cafeteria only sells hot food at lunchtime. A selection of sandwiches, snacks and hot or cold drinks are available at other times during the rest of the day. Attendance All students who come to the UK on student visas are required by law to attend a minimum of 85% of their scheduled courses. The school is required to inform the Department of Immigration of any student not fulfilling his visa obligations. A minimum attendance of 85% is also required for students to receive their course certificate. Fees All fees must be paid in full before the start of any course. A non-returnable deposit of 10% will secure a reservation on a course but the balance must be paid before classes begin.", "hypothesis": "The police will visit any student not completing the required attendance levels", "label": "n"} +{"uid": "id_360", "premise": "WESTLEY SCHOOL OF ENGLISH Information for Students Timings The school is open Mon Fri from 7.30 am to 9.00 pm and on Saturday from 9.00 am to 12.30 pm. CLASS TIMINGS (Mon Fri) Lesson 1 8.45 am 10.15 am Lesson 2 10.45 am 12.15 pm Lesson 3 2.00 pm 3.30 pm Computer Room The school has a fully equipped computer lab with a free 24-hour internet connection. Students may use the computers at any time during school opening hours unless any class or activity is scheduled. In the evenings there is a booking system for the computers. Please read the rules for this in the computer room. Be advised that, due to the risk of viruses, students are not allowed to bring in and use their own disks or CDs. Self Access and Language Lab The lab is open and available for all students during school opening hours. There are tapes and self-study materials available for all levels. In the break times and the evenings there is a teacher on duty who can assist students with accessing material. Cafeteria The school cafeteria is open from 8.15 am to 5.00 pm. The cafeteria only sells hot food at lunchtime. A selection of sandwiches, snacks and hot or cold drinks are available at other times during the rest of the day. Attendance All students who come to the UK on student visas are required by law to attend a minimum of 85% of their scheduled courses. The school is required to inform the Department of Immigration of any student not fulfilling his visa obligations. A minimum attendance of 85% is also required for students to receive their course certificate. Fees All fees must be paid in full before the start of any course. 
A non-returnable deposit of 10% will secure a reservation on a course but the balance must be paid before classes begin.", "hypothesis": "Students can go into the Language Lab at 8.30 on Thursday mornings.", "label": "e"} +{"uid": "id_361", "premise": "WESTLEY SCHOOL OF ENGLISH Information for Students Timings The school is open Mon Fri from 7.30 am to 9.00 pm and on Saturday from 9.00 am to 12.30 pm. CLASS TIMINGS (Mon Fri) Lesson 1 8.45 am 10.15 am Lesson 2 10.45 am 12.15 pm Lesson 3 2.00 pm 3.30 pm Computer Room The school has a fully equipped computer lab with a free 24-hour internet connection. Students may use the computers at any time during school opening hours unless any class or activity is scheduled. In the evenings there is a booking system for the computers. Please read the rules for this in the computer room. Be advised that, due to the risk of viruses, students are not allowed to bring in and use their own disks or CDs. Self Access and Language Lab The lab is open and available for all students during school opening hours. There are tapes and self-study materials available for all levels. In the break times and the evenings there is a teacher on duty who can assist students with accessing material. Cafeteria The school cafeteria is open from 8.15 am to 5.00 pm. The cafeteria only sells hot food at lunchtime. A selection of sandwiches, snacks and hot or cold drinks are available at other times during the rest of the day. Attendance All students who come to the UK on student visas are required by law to attend a minimum of 85% of their scheduled courses. The school is required to inform the Department of Immigration of any student not fulfilling his visa obligations. A minimum attendance of 85% is also required for students to receive their course certificate. Fees All fees must be paid in full before the start of any course. A non-returnable deposit of 10% will secure a reservation on a course but the balance must be paid before classes begin.", "hypothesis": "Students may not use their own floppy discs in the schools computers.", "label": "e"} +{"uid": "id_362", "premise": "WESTLEY SCHOOL OF ENGLISH Information for Students Timings The school is open Mon Fri from 7.30 am to 9.00 pm and on Saturday from 9.00 am to 12.30 pm. CLASS TIMINGS (Mon Fri) Lesson 1 8.45 am 10.15 am Lesson 2 10.45 am 12.15 pm Lesson 3 2.00 pm 3.30 pm Computer Room The school has a fully equipped computer lab with a free 24-hour internet connection. Students may use the computers at any time during school opening hours unless any class or activity is scheduled. In the evenings there is a booking system for the computers. Please read the rules for this in the computer room. Be advised that, due to the risk of viruses, students are not allowed to bring in and use their own disks or CDs. Self Access and Language Lab The lab is open and available for all students during school opening hours. There are tapes and self-study materials available for all levels. In the break times and the evenings there is a teacher on duty who can assist students with accessing material. Cafeteria The school cafeteria is open from 8.15 am to 5.00 pm. The cafeteria only sells hot food at lunchtime. A selection of sandwiches, snacks and hot or cold drinks are available at other times during the rest of the day. Attendance All students who come to the UK on student visas are required by law to attend a minimum of 85% of their scheduled courses. 
The school is required to inform the Department of Immigration of any student not fulfilling his visa obligations. A minimum attendance of 85% is also required for students to receive their course certificate. Fees All fees must be paid in full before the start of any course. A non-returnable deposit of 10% will secure a reservation on a course but the balance must be paid before classes begin.", "hypothesis": "Students can have a cooked breakfast in the cafe before their morning classes.", "label": "c"} +{"uid": "id_363", "premise": "WESTWINDS FARM CAMPSITE Open April September (Booking is advised for holidays in July and August to guarantee a place. ) Jim and Meg Oaks welcome you to the campsite. We hope you will enjoy your stay here. We ask all campers to show due care and consideration whilst staying here and to observe the following camp rules. Keep the campsite clean and tidy: dispose of litter in the bins provided; leave the showers, toilets and washing area in the same state as you found them; ensure your site is clear of all litter when you leave it. Dont obstruct rights of way. Keep cars, bikes, etc. off the road. Let sleeping campers have some peace. Dont make any noise after 10 oclock at night or before 7.30 in the morning. Dogs must be kept on a lead. Owners of dogs that disturb other campers by barking through the night will be asked to leave. Disorderly behaviour will not be tolerated. The lighting of fires is strictly prohibited. Ball games are not allowed on the campsite. There is plenty of room for ball games in the park opposite the campsite. Radios, portable music equipment, etc. must not be played at high volume. The management reserves the right to refuse admittance.", "hypothesis": "You should book ahead for the busier times of the year.", "label": "e"} +{"uid": "id_364", "premise": "WESTWINDS FARM CAMPSITE Open April September (Booking is advised for holidays in July and August to guarantee a place. ) Jim and Meg Oaks welcome you to the campsite. We hope you will enjoy your stay here. We ask all campers to show due care and consideration whilst staying here and to observe the following camp rules. Keep the campsite clean and tidy: dispose of litter in the bins provided; leave the showers, toilets and washing area in the same state as you found them; ensure your site is clear of all litter when you leave it. Dont obstruct rights of way. Keep cars, bikes, etc. off the road. Let sleeping campers have some peace. Dont make any noise after 10 oclock at night or before 7.30 in the morning. Dogs must be kept on a lead. Owners of dogs that disturb other campers by barking through the night will be asked to leave. Disorderly behaviour will not be tolerated. The lighting of fires is strictly prohibited. Ball games are not allowed on the campsite. There is plenty of room for ball games in the park opposite the campsite. Radios, portable music equipment, etc. must not be played at high volume. The management reserves the right to refuse admittance.", "hypothesis": "The owners of the campsite may not allow you to camp there.", "label": "e"} +{"uid": "id_365", "premise": "WESTWINDS FARM CAMPSITE Open April September (Booking is advised for holidays in July and August to guarantee a place. ) Jim and Meg Oaks welcome you to the campsite. We hope you will enjoy your stay here. We ask all campers to show due care and consideration whilst staying here and to observe the following camp rules. 
Keep the campsite clean and tidy: dispose of litter in the bins provided; leave the showers, toilets and washing area in the same state as you found them; ensure your site is clear of all litter when you leave it. Dont obstruct rights of way. Keep cars, bikes, etc. off the road. Let sleeping campers have some peace. Dont make any noise after 10 oclock at night or before 7.30 in the morning. Dogs must be kept on a lead. Owners of dogs that disturb other campers by barking through the night will be asked to leave. Disorderly behaviour will not be tolerated. The lighting of fires is strictly prohibited. Ball games are not allowed on the campsite. There is plenty of room for ball games in the park opposite the campsite. Radios, portable music equipment, etc. must not be played at high volume. The management reserves the right to refuse admittance.", "hypothesis": "You are not allowed to cook food on open fires.", "label": "e"} +{"uid": "id_366", "premise": "WESTWINDS FARM CAMPSITE Open April September (Booking is advised for holidays in July and August to guarantee a place. ) Jim and Meg Oaks welcome you to the campsite. We hope you will enjoy your stay here. We ask all campers to show due care and consideration whilst staying here and to observe the following camp rules. Keep the campsite clean and tidy: dispose of litter in the bins provided; leave the showers, toilets and washing area in the same state as you found them; ensure your site is clear of all litter when you leave it. Dont obstruct rights of way. Keep cars, bikes, etc. off the road. Let sleeping campers have some peace. Dont make any noise after 10 oclock at night or before 7.30 in the morning. Dogs must be kept on a lead. Owners of dogs that disturb other campers by barking through the night will be asked to leave. Disorderly behaviour will not be tolerated. The lighting of fires is strictly prohibited. Ball games are not allowed on the campsite. There is plenty of room for ball games in the park opposite the campsite. Radios, portable music equipment, etc. must not be played at high volume. The management reserves the right to refuse admittance.", "hypothesis": "No dogs are allowed on the campsite.", "label": "c"} +{"uid": "id_367", "premise": "WESTWINDS FARM CAMPSITE Open April September (Booking is advised for holidays in July and August to guarantee a place. ) Jim and Meg Oaks welcome you to the campsite. We hope you will enjoy your stay here. We ask all campers to show due care and consideration whilst staying here and to observe the following camp rules. Keep the campsite clean and tidy: dispose of litter in the bins provided; leave the showers, toilets and washing area in the same state as you found them; ensure your site is clear of all litter when you leave it. Dont obstruct rights of way. Keep cars, bikes, etc. off the road. Let sleeping campers have some peace. Dont make any noise after 10 oclock at night or before 7.30 in the morning. Dogs must be kept on a lead. Owners of dogs that disturb other campers by barking through the night will be asked to leave. Disorderly behaviour will not be tolerated. The lighting of fires is strictly prohibited. Ball games are not allowed on the campsite. There is plenty of room for ball games in the park opposite the campsite. Radios, portable music equipment, etc. must not be played at high volume. 
The management reserves the right to refuse admittance.", "hypothesis": "The campsite is open all year round.", "label": "c"} +{"uid": "id_368", "premise": "WESTWINDS FARM CAMPSITE Open April September (Booking is advised for holidays in July and August to guarantee a place. ) Jim and Meg Oaks welcome you to the campsite. We hope you will enjoy your stay here. We ask all campers to show due care and consideration whilst staying here and to observe the following camp rules. Keep the campsite clean and tidy: dispose of litter in the bins provided; leave the showers, toilets and washing area in the same state as you found them; ensure your site is clear of all litter when you leave it. Dont obstruct rights of way. Keep cars, bikes, etc. off the road. Let sleeping campers have some peace. Dont make any noise after 10 oclock at night or before 7.30 in the morning. Dogs must be kept on a lead. Owners of dogs that disturb other campers by barking through the night will be asked to leave. Disorderly behaviour will not be tolerated. The lighting of fires is strictly prohibited. Ball games are not allowed on the campsite. There is plenty of room for ball games in the park opposite the campsite. Radios, portable music equipment, etc. must not be played at high volume. The management reserves the right to refuse admittance.", "hypothesis": "The minimum stay at the campsite is two nights.", "label": "n"} +{"uid": "id_369", "premise": "WESTWINDS FARM CAMPSITE Open April September (Booking is advised for holidays in July and August to guarantee a place. ) Jim and Meg Oaks welcome you to the campsite. We hope you will enjoy your stay here. We ask all campers to show due care and consideration whilst staying here and to observe the following camp rules. Keep the campsite clean and tidy: dispose of litter in the bins provided; leave the showers, toilets and washing area in the same state as you found them; ensure your site is clear of all litter when you leave it. Dont obstruct rights of way. Keep cars, bikes, etc. off the road. Let sleeping campers have some peace. Dont make any noise after 10 oclock at night or before 7.30 in the morning. Dogs must be kept on a lead. Owners of dogs that disturb other campers by barking through the night will be asked to leave. Disorderly behaviour will not be tolerated. The lighting of fires is strictly prohibited. Ball games are not allowed on the campsite. There is plenty of room for ball games in the park opposite the campsite. Radios, portable music equipment, etc. must not be played at high volume. The management reserves the right to refuse admittance.", "hypothesis": "The entrance to the campsite is locked after 10 p. m.", "label": "n"} +{"uid": "id_370", "premise": "WHATS ON IN WINTER The Great Outdoors Sundays, June and July ORIENTEERING Where: various bush and farm locations Orienteering is an outdoor activity that combines adventure and sport with navigational skills through the bush. Take a hike or mountain-bike ride through a set course in a different bush or farm location on each excursion with guidance from a compass and a map. Each course is within an hours drive of the CBD. This is a fun, easy way to enhance fitness for the whole family, ages 7-70. To learn more about orienteering or sign up for a course, visit wa. orienteering. asn. au or call 9215 0700. 
Mountain Designs Adventure Race Australia 4 July Where: bush camp and forest retreat Adventure Race Australia heightens the thrill of adventure racing, combining biking, running, trekking, kayaking, rock climbing and other adventure sports to test physical strength, endurance and willpower. The race caters to both inexperienced and seasoned racers with a Raw course for beginners and a Hardcore course for racers who want an extra challenge. To get involved go to adventureaustralia. com. au Film Frenzy 21 June & 19 July MEMORABLE MOVIES IN MIDLAND Where: Town Hall Take a trip down memory lane at the Memorable Movies gathering, held once a month. This June the memorable movie is Roman Holiday, the 1953 classic starring Gregory Peck and Audrey Hepburn. Then in July there is a school holiday special presentation of The Worlds Fastest Indian, a true-life story of motorcycle enthusiast and world-record breaker Burt Munro, starring Anthony Hopkins. Festivals and Fairs 17 to 19 June HILLARYS ANTIQUE AND VINTAGE FAIR Where: Hillarys Boat Harbour The Antique and Vintage Fair will showcase hidden treasures from the past, including fascinating items from antique furniture to retro fashion. Antique valuers will also be on the premises to give expert advice on buying and selling as attendees peruse the various stalls underneath one giant tent. Music Magic 29 to 30 July A TRIBUTE TO LOUIS ARMSTRONG Where: Concert Hall Louis Armstrong revolutionised American jazz and dominated the scene for more than 60 years. He defines the jazz style and is a legendary figure in music history. Conductor Benjamin Northey will accompany trumpeter James Morrison to pay tribute to the famous musician by playing some of his most well-known and beloved hits. Go to waso. com. au for more details. All the Rest Until 18 October WHODUNNIT? EXHIBITION Where: Scitech Become a detective for a day at the Whodunnit? Exhibition. The exhibition is a fabricated crime scene in a zoo: someone has shot and killed a security guard, and a famous white rhino is missing. Guests use forensic science to obtain evidence and solve the crimes.", "hypothesis": "The motorcyclist was wearing a helmet when he crashed and died.", "label": "c"} +{"uid": "id_371", "premise": "WHATS ON IN WINTER The Great Outdoors Sundays, June and July ORIENTEERING Where: various bush and farm locations Orienteering is an outdoor activity that combines adventure and sport with navigational skills through the bush. Take a hike or mountain-bike ride through a set course in a different bush or farm location on each excursion with guidance from a compass and a map. Each course is within an hours drive of the CBD. This is a fun, easy way to enhance fitness for the whole family, ages 7-70. To learn more about orienteering or sign up for a course, visit wa. orienteering. asn. au or call 9215 0700. Mountain Designs Adventure Race Australia 4 July Where: bush camp and forest retreat Adventure Race Australia heightens the thrill of adventure racing, combining biking, running, trekking, kayaking, rock climbing and other adventure sports to test physical strength, endurance and willpower. The race caters to both inexperienced and seasoned racers with a Raw course for beginners and a Hardcore course for racers who want an extra challenge. To get involved go to adventureaustralia. com. au Film Frenzy 21 June & 19 July MEMORABLE MOVIES IN MIDLAND Where: Town Hall Take a trip down memory lane at the Memorable Movies gathering, held once a month. 
This June the memorable movie is Roman Holiday, the 1953 classic starring Gregory Peck and Audrey Hepburn. Then in July there is a school holiday special presentation of The Worlds Fastest Indian, a true-life story of motorcycle enthusiast and world-record breaker Burt Munro, starring Anthony Hopkins. Festivals and Fairs 17 to 19 June HILLARYS ANTIQUE AND VINTAGE FAIR Where: Hillarys Boat Harbour The Antique and Vintage Fair will showcase hidden treasures from the past, including fascinating items from antique furniture to retro fashion. Antique valuers will also be on the premises to give expert advice on buying and selling as attendees peruse the various stalls underneath one giant tent. Music Magic 29 to 30 July A TRIBUTE TO LOUIS ARMSTRONG Where: Concert Hall Louis Armstrong revolutionised American jazz and dominated the scene for more than 60 years. He defines the jazz style and is a legendary figure in music history. Conductor Benjamin Northey will accompany trumpeter James Morrison to pay tribute to the famous musician by playing some of his most well-known and beloved hits. Go to waso. com. au for more details. All the Rest Until 18 October WHODUNNIT? EXHIBITION Where: Scitech Become a detective for a day at the Whodunnit? Exhibition. The exhibition is a fabricated crime scene in a zoo: someone has shot and killed a security guard, and a famous white rhino is missing. Guests use forensic science to obtain evidence and solve the crimes.", "hypothesis": "This incident took place in Onondaga, New York.", "label": "e"} +{"uid": "id_372", "premise": "WHATS ON IN WINTER The Great Outdoors Sundays, June and July ORIENTEERING Where: various bush and farm locations Orienteering is an outdoor activity that combines adventure and sport with navigational skills through the bush. Take a hike or mountain-bike ride through a set course in a different bush or farm location on each excursion with guidance from a compass and a map. Each course is within an hours drive of the CBD. This is a fun, easy way to enhance fitness for the whole family, ages 7-70. To learn more about orienteering or sign up for a course, visit wa. orienteering. asn. au or call 9215 0700. Mountain Designs Adventure Race Australia 4 July Where: bush camp and forest retreat Adventure Race Australia heightens the thrill of adventure racing, combining biking, running, trekking, kayaking, rock climbing and other adventure sports to test physical strength, endurance and willpower. The race caters to both inexperienced and seasoned racers with a Raw course for beginners and a Hardcore course for racers who want an extra challenge. To get involved go to adventureaustralia. com. au Film Frenzy 21 June & 19 July MEMORABLE MOVIES IN MIDLAND Where: Town Hall Take a trip down memory lane at the Memorable Movies gathering, held once a month. This June the memorable movie is Roman Holiday, the 1953 classic starring Gregory Peck and Audrey Hepburn. Then in July there is a school holiday special presentation of The Worlds Fastest Indian, a true-life story of motorcycle enthusiast and world-record breaker Burt Munro, starring Anthony Hopkins. Festivals and Fairs 17 to 19 June HILLARYS ANTIQUE AND VINTAGE FAIR Where: Hillarys Boat Harbour The Antique and Vintage Fair will showcase hidden treasures from the past, including fascinating items from antique furniture to retro fashion. Antique valuers will also be on the premises to give expert advice on buying and selling as attendees peruse the various stalls underneath one giant tent. 
Music Magic 29 to 30 July A TRIBUTE TO LOUIS ARMSTRONG Where: Concert Hall Louis Armstrong revolutionised American jazz and dominated the scene for more than 60 years. He defines the jazz style and is a legendary figure in music history. Conductor Benjamin Northey will accompany trumpeter James Morrison to pay tribute to the famous musician by playing some of his most well-known and beloved hits. Go to waso. com. au for more details. All the Rest Until 18 October WHODUNNIT? EXHIBITION Where: Scitech Become a detective for a day at the Whodunnit? Exhibition. The exhibition is a fabricated crime scene in a zoo: someone has shot and killed a security guard, and a famous white rhino is missing. Guests use forensic science to obtain evidence and solve the crimes.", "hypothesis": "New York State requires motorcyclists to wear helmets.", "label": "e"} +{"uid": "id_373", "premise": "WHATS ON IN WINTER The Great Outdoors Sundays, June and July ORIENTEERING Where: various bush and farm locations Orienteering is an outdoor activity that combines adventure and sport with navigational skills through the bush. Take a hike or mountain-bike ride through a set course in a different bush or farm location on each excursion with guidance from a compass and a map. Each course is within an hours drive of the CBD. This is a fun, easy way to enhance fitness for the whole family, ages 7-70. To learn more about orienteering or sign up for a course, visit wa. orienteering. asn. au or call 9215 0700. Mountain Designs Adventure Race Australia 4 July Where: bush camp and forest retreat Adventure Race Australia heightens the thrill of adventure racing, combining biking, running, trekking, kayaking, rock climbing and other adventure sports to test physical strength, endurance and willpower. The race caters to both inexperienced and seasoned racers with a Raw course for beginners and a Hardcore course for racers who want an extra challenge. To get involved go to adventureaustralia. com. au Film Frenzy 21 June & 19 July MEMORABLE MOVIES IN MIDLAND Where: Town Hall Take a trip down memory lane at the Memorable Movies gathering, held once a month. This June the memorable movie is Roman Holiday, the 1953 classic starring Gregory Peck and Audrey Hepburn. Then in July there is a school holiday special presentation of The Worlds Fastest Indian, a true-life story of motorcycle enthusiast and world-record breaker Burt Munro, starring Anthony Hopkins. Festivals and Fairs 17 to 19 June HILLARYS ANTIQUE AND VINTAGE FAIR Where: Hillarys Boat Harbour The Antique and Vintage Fair will showcase hidden treasures from the past, including fascinating items from antique furniture to retro fashion. Antique valuers will also be on the premises to give expert advice on buying and selling as attendees peruse the various stalls underneath one giant tent. Music Magic 29 to 30 July A TRIBUTE TO LOUIS ARMSTRONG Where: Concert Hall Louis Armstrong revolutionised American jazz and dominated the scene for more than 60 years. He defines the jazz style and is a legendary figure in music history. Conductor Benjamin Northey will accompany trumpeter James Morrison to pay tribute to the famous musician by playing some of his most well-known and beloved hits. Go to waso. com. au for more details. All the Rest Until 18 October WHODUNNIT? EXHIBITION Where: Scitech Become a detective for a day at the Whodunnit? Exhibition. The exhibition is a fabricated crime scene in a zoo: someone has shot and killed a security guard, and a famous white rhino is missing. 
Guests use forensic science to obtain evidence and solve the crimes.", "hypothesis": "More than a hundred motorcyclists were taking part in this protest ride.", "label": "n"} +{"uid": "id_374", "premise": "WHATS ON IN WINTER The Great Outdoors Sundays, June and July ORIENTEERING Where: various bush and farm locations Orienteering is an outdoor activity that combines adventure and sport with navigational skills through the bush. Take a hike or mountain-bike ride through a set course in a different bush or farm location on each excursion with guidance from a compass and a map. Each course is within an hours drive of the CBD. This is a fun, easy way to enhance fitness for the whole family, ages 7-70. To learn more about orienteering or sign up for a course, visit wa. orienteering. asn. au or call 9215 0700. Mountain Designs Adventure Race Australia 4 July Where: bush camp and forest retreat Adventure Race Australia heightens the thrill of adventure racing, combining biking, running, trekking, kayaking, rock climbing and other adventure sports to test physical strength, endurance and willpower. The race caters to both inexperienced and seasoned racers with a Raw course for beginners and a Hardcore course for racers who want an extra challenge. To get involved go to adventureaustralia. com. au Film Frenzy 21 June & 19 July MEMORABLE MOVIES IN MIDLAND Where: Town Hall Take a trip down memory lane at the Memorable Movies gathering, held once a month. This June the memorable movie is Roman Holiday, the 1953 classic starring Gregory Peck and Audrey Hepburn. Then in July there is a school holiday special presentation of The Worlds Fastest Indian, a true-life story of motorcycle enthusiast and world-record breaker Burt Munro, starring Anthony Hopkins. Festivals and Fairs 17 to 19 June HILLARYS ANTIQUE AND VINTAGE FAIR Where: Hillarys Boat Harbour The Antique and Vintage Fair will showcase hidden treasures from the past, including fascinating items from antique furniture to retro fashion. Antique valuers will also be on the premises to give expert advice on buying and selling as attendees peruse the various stalls underneath one giant tent. Music Magic 29 to 30 July A TRIBUTE TO LOUIS ARMSTRONG Where: Concert Hall Louis Armstrong revolutionised American jazz and dominated the scene for more than 60 years. He defines the jazz style and is a legendary figure in music history. Conductor Benjamin Northey will accompany trumpeter James Morrison to pay tribute to the famous musician by playing some of his most well-known and beloved hits. Go to waso. com. au for more details. All the Rest Until 18 October WHODUNNIT? EXHIBITION Where: Scitech Become a detective for a day at the Whodunnit? Exhibition. The exhibition is a fabricated crime scene in a zoo: someone has shot and killed a security guard, and a famous white rhino is missing. Guests use forensic science to obtain evidence and solve the crimes.", "hypothesis": "Protests in the USA against compulsory use of motorcycle helmets have at times been successful.", "label": "e"} +{"uid": "id_375", "premise": "WHATS ON IN WINTER The Great Outdoors Sundays, June and July ORIENTEERING Where: various bush and farm locations Orienteering is an outdoor activity that combines adventure and sport with navigational skills through the bush. Take a hike or mountain-bike ride through a set course in a different bush or farm location on each excursion with guidance from a compass and a map. Each course is within an hours drive of the CBD. 
This is a fun, easy way to enhance fitness for the whole family, ages 7-70. To learn more about orienteering or sign up for a course, visit wa. orienteering. asn. au or call 9215 0700. Mountain Designs Adventure Race Australia 4 July Where: bush camp and forest retreat Adventure Race Australia heightens the thrill of adventure racing, combining biking, running, trekking, kayaking, rock climbing and other adventure sports to test physical strength, endurance and willpower. The race caters to both inexperienced and seasoned racers with a Raw course for beginners and a Hardcore course for racers who want an extra challenge. To get involved go to adventureaustralia. com. au Film Frenzy 21 June & 19 July MEMORABLE MOVIES IN MIDLAND Where: Town Hall Take a trip down memory lane at the Memorable Movies gathering, held once a month. This June the memorable movie is Roman Holiday, the 1953 classic starring Gregory Peck and Audrey Hepburn. Then in July there is a school holiday special presentation of The Worlds Fastest Indian, a true-life story of motorcycle enthusiast and world-record breaker Burt Munro, starring Anthony Hopkins. Festivals and Fairs 17 to 19 June HILLARYS ANTIQUE AND VINTAGE FAIR Where: Hillarys Boat Harbour The Antique and Vintage Fair will showcase hidden treasures from the past, including fascinating items from antique furniture to retro fashion. Antique valuers will also be on the premises to give expert advice on buying and selling as attendees peruse the various stalls underneath one giant tent. Music Magic 29 to 30 July A TRIBUTE TO LOUIS ARMSTRONG Where: Concert Hall Louis Armstrong revolutionised American jazz and dominated the scene for more than 60 years. He defines the jazz style and is a legendary figure in music history. Conductor Benjamin Northey will accompany trumpeter James Morrison to pay tribute to the famous musician by playing some of his most well-known and beloved hits. Go to waso. com. au for more details. All the Rest Until 18 October WHODUNNIT? EXHIBITION Where: Scitech Become a detective for a day at the Whodunnit? Exhibition. The exhibition is a fabricated crime scene in a zoo: someone has shot and killed a security guard, and a famous white rhino is missing. Guests use forensic science to obtain evidence and solve the crimes.", "hypothesis": "All states in the USA require motorcyclists to wear helmets.", "label": "c"} +{"uid": "id_376", "premise": "WHEN the subject is global warming, the villain is usually America . Although it produces a quarter of the greenhouse gases that are heating up the planet, it refuses to regulate them. When other countries agreed on an international treaty to do so he Kyoto protocolAmerica failed to ratify it. But not all American officialdom is happy with the federal government's stance. In fact,12 states disagree so fiercely that they are suing to force it to curb emissions of carbon dioxide, the most common greenhouse gas. The Supreme Court heard argument in the case on November 29th. The outcome will not be known for months, but the political wind seems to be shifting in favour of firmer action to counter climate change. The Clean Air Act charges the Environmental Protection Agency (EPA) with regulating air pollution from vehicles. But the EPA argues that Congress did not intend to include CO2 under that heading, and that to do so would extend the EPA's authority to an unreasonable extent. Furthermore, it contends that regulating emissions would not do good unless all or most other countries did the same. 
That is in keeping with the policies of President George Bush, who opposes mandatory curbs on emissions and believes that any international accord on global warming should apply to all countries unlike the Kyoto protocol, which exempts poor ones, including big polluters such as China and India . Ten states, among them gas-guzzling Texas and car-making Michigan, also back the EPA. The plaintiffs comprise 12 states, three cities, various NGOs, and American Samoa, a Pacific territory in danger of vanishing beneath the rising ocean. They are supported by a further six states, two power companies, a ski resort, and assorted clergymen, Indian tribes and agitated grandees such as Madeleine Albright, a former secretary of state. They point out that under the administration of Bill Clinton, the EPA decided that it did have the authority to regulate CO2. The act, they note, says the EPA should regulate any air pollutant that \"may reasonably be interpreted to endanger public health or welfare\". It goes on to define public welfare to include \"effects on soils, water, crops, vegetation, manmade materials, animals, wildlife, weather, visibility, and climate\". The Supreme Court may give a mixed ruling, decreeing that carbon dioxide is indeed a pollutant, but one the EPA is free to ignore or regulate as it pleases. Or it might dismiss the complaint on the grounds that the plaintiffs did not have the right to lodge it in the first place. In theory, they must prove that the EPA's foot-dragging has caused them some specific harm that regulation might remedy a tall order in a field as fraught with uncertainty as climatology. Even if the court found in the plaintiffs' favour, rapid change is unlikely. By the time the EPA had implemented such a ruling, Congress would probably have superseded it with a new law. That is the point, environmental groups say. They want Congress to pass a law tackling global warming, and hope that a favourable court ruling will jolly the politicians along. Moreover, the case has a bearing on several other bitterly-contested lawsuits. Carmakers, for example, are trying to get the courts to strike down a Californian state law based on certain provisions of the Clean Air Act that require them to reduce their vehicles' CO2 emissions. If the Supreme Court decides that the act does not apply to CO2, then the Californian law would also be in jeopardy. That, in turn, would scupper the decision of ten other states to adopt the same standard. However the Supreme Court rules, many state governments are determined to tackle climate change. California is in the vanguard. Its legislature has passed a law that will cap and then reduce industrial emissions of greenhouse gases. Seven eastern states have formed the Regional Greenhouse Gas Initiative, which will treat emissions from power plants the same way. Almost 400 mayors have signed an agreement to cut their cities' emissions in line with Kyoto . Many businesses, even some power companies, would rather see regulation now than prolonged uncertainty. And several of the leading contenders for 2008's presidential election are much keener on emissions caps than Mr Bush. Change is in the air.", "hypothesis": "An American island is in danger of disappearing beneath the rising ocean.", "label": "n"} +{"uid": "id_377", "premise": "WHEN the subject is global warming, the villain is usually America . Although it produces a quarter of the greenhouse gases that are heating up the planet, it refuses to regulate them. 
When other countries agreed on an international treaty to do so he Kyoto protocolAmerica failed to ratify it. But not all American officialdom is happy with the federal government's stance. In fact,12 states disagree so fiercely that they are suing to force it to curb emissions of carbon dioxide, the most common greenhouse gas. The Supreme Court heard argument in the case on November 29th. The outcome will not be known for months, but the political wind seems to be shifting in favour of firmer action to counter climate change. The Clean Air Act charges the Environmental Protection Agency (EPA) with regulating air pollution from vehicles. But the EPA argues that Congress did not intend to include CO2 under that heading, and that to do so would extend the EPA's authority to an unreasonable extent. Furthermore, it contends that regulating emissions would not do good unless all or most other countries did the same. That is in keeping with the policies of President George Bush, who opposes mandatory curbs on emissions and believes that any international accord on global warming should apply to all countries unlike the Kyoto protocol, which exempts poor ones, including big polluters such as China and India . Ten states, among them gas-guzzling Texas and car-making Michigan, also back the EPA. The plaintiffs comprise 12 states, three cities, various NGOs, and American Samoa, a Pacific territory in danger of vanishing beneath the rising ocean. They are supported by a further six states, two power companies, a ski resort, and assorted clergymen, Indian tribes and agitated grandees such as Madeleine Albright, a former secretary of state. They point out that under the administration of Bill Clinton, the EPA decided that it did have the authority to regulate CO2. The act, they note, says the EPA should regulate any air pollutant that \"may reasonably be interpreted to endanger public health or welfare\". It goes on to define public welfare to include \"effects on soils, water, crops, vegetation, manmade materials, animals, wildlife, weather, visibility, and climate\". The Supreme Court may give a mixed ruling, decreeing that carbon dioxide is indeed a pollutant, but one the EPA is free to ignore or regulate as it pleases. Or it might dismiss the complaint on the grounds that the plaintiffs did not have the right to lodge it in the first place. In theory, they must prove that the EPA's foot-dragging has caused them some specific harm that regulation might remedy a tall order in a field as fraught with uncertainty as climatology. Even if the court found in the plaintiffs' favour, rapid change is unlikely. By the time the EPA had implemented such a ruling, Congress would probably have superseded it with a new law. That is the point, environmental groups say. They want Congress to pass a law tackling global warming, and hope that a favourable court ruling will jolly the politicians along. Moreover, the case has a bearing on several other bitterly-contested lawsuits. Carmakers, for example, are trying to get the courts to strike down a Californian state law based on certain provisions of the Clean Air Act that require them to reduce their vehicles' CO2 emissions. If the Supreme Court decides that the act does not apply to CO2, then the Californian law would also be in jeopardy. That, in turn, would scupper the decision of ten other states to adopt the same standard. However the Supreme Court rules, many state governments are determined to tackle climate change. California is in the vanguard. 
Its legislature has passed a law that will cap and then reduce industrial emissions of greenhouse gases. Seven eastern states have formed the Regional Greenhouse Gas Initiative, which will treat emissions from power plants the same way. Almost 400 mayors have signed an agreement to cut their cities' emissions in line with Kyoto . Many businesses, even some power companies, would rather see regulation now than prolonged uncertainty. And several of the leading contenders for 2008's presidential election are much keener on emissions caps than Mr Bush. Change is in the air.", "hypothesis": "Texas and Michigan are among the 12 states which call for regulating air pollution.", "label": "c"} +{"uid": "id_378", "premise": "WHEN the subject is global warming, the villain is usually America . Although it produces a quarter of the greenhouse gases that are heating up the planet, it refuses to regulate them. When other countries agreed on an international treaty to do so he Kyoto protocolAmerica failed to ratify it. But not all American officialdom is happy with the federal government's stance. In fact,12 states disagree so fiercely that they are suing to force it to curb emissions of carbon dioxide, the most common greenhouse gas. The Supreme Court heard argument in the case on November 29th. The outcome will not be known for months, but the political wind seems to be shifting in favour of firmer action to counter climate change. The Clean Air Act charges the Environmental Protection Agency (EPA) with regulating air pollution from vehicles. But the EPA argues that Congress did not intend to include CO2 under that heading, and that to do so would extend the EPA's authority to an unreasonable extent. Furthermore, it contends that regulating emissions would not do good unless all or most other countries did the same. That is in keeping with the policies of President George Bush, who opposes mandatory curbs on emissions and believes that any international accord on global warming should apply to all countries unlike the Kyoto protocol, which exempts poor ones, including big polluters such as China and India . Ten states, among them gas-guzzling Texas and car-making Michigan, also back the EPA. The plaintiffs comprise 12 states, three cities, various NGOs, and American Samoa, a Pacific territory in danger of vanishing beneath the rising ocean. They are supported by a further six states, two power companies, a ski resort, and assorted clergymen, Indian tribes and agitated grandees such as Madeleine Albright, a former secretary of state. They point out that under the administration of Bill Clinton, the EPA decided that it did have the authority to regulate CO2. The act, they note, says the EPA should regulate any air pollutant that \"may reasonably be interpreted to endanger public health or welfare\". It goes on to define public welfare to include \"effects on soils, water, crops, vegetation, manmade materials, animals, wildlife, weather, visibility, and climate\". The Supreme Court may give a mixed ruling, decreeing that carbon dioxide is indeed a pollutant, but one the EPA is free to ignore or regulate as it pleases. Or it might dismiss the complaint on the grounds that the plaintiffs did not have the right to lodge it in the first place. In theory, they must prove that the EPA's foot-dragging has caused them some specific harm that regulation might remedy a tall order in a field as fraught with uncertainty as climatology. Even if the court found in the plaintiffs' favour, rapid change is unlikely. 
By the time the EPA had implemented such a ruling, Congress would probably have superseded it with a new law. That is the point, environmental groups say. They want Congress to pass a law tackling global warming, and hope that a favourable court ruling will jolly the politicians along. Moreover, the case has a bearing on several other bitterly-contested lawsuits. Carmakers, for example, are trying to get the courts to strike down a Californian state law based on certain provisions of the Clean Air Act that require them to reduce their vehicles' CO2 emissions. If the Supreme Court decides that the act does not apply to CO2, then the Californian law would also be in jeopardy. That, in turn, would scupper the decision of ten other states to adopt the same standard. However the Supreme Court rules, many state governments are determined to tackle climate change. California is in the vanguard. Its legislature has passed a law that will cap and then reduce industrial emissions of greenhouse gases. Seven eastern states have formed the Regional Greenhouse Gas Initiative, which will treat emissions from power plants the same way. Almost 400 mayors have signed an agreement to cut their cities' emissions in line with Kyoto . Many businesses, even some power companies, would rather see regulation now than prolonged uncertainty. And several of the leading contenders for 2008's presidential election are much keener on emissions caps than Mr Bush. Change is in the air.", "hypothesis": "The Supreme Court's ruling may influence the results of other lawsuits.", "label": "e"} +{"uid": "id_379", "premise": "WHEN the subject is global warming, the villain is usually America . Although it produces a quarter of the greenhouse gases that are heating up the planet, it refuses to regulate them. When other countries agreed on an international treaty to do so he Kyoto protocolAmerica failed to ratify it. But not all American officialdom is happy with the federal government's stance. In fact,12 states disagree so fiercely that they are suing to force it to curb emissions of carbon dioxide, the most common greenhouse gas. The Supreme Court heard argument in the case on November 29th. The outcome will not be known for months, but the political wind seems to be shifting in favour of firmer action to counter climate change. The Clean Air Act charges the Environmental Protection Agency (EPA) with regulating air pollution from vehicles. But the EPA argues that Congress did not intend to include CO2 under that heading, and that to do so would extend the EPA's authority to an unreasonable extent. Furthermore, it contends that regulating emissions would not do good unless all or most other countries did the same. That is in keeping with the policies of President George Bush, who opposes mandatory curbs on emissions and believes that any international accord on global warming should apply to all countries unlike the Kyoto protocol, which exempts poor ones, including big polluters such as China and India . Ten states, among them gas-guzzling Texas and car-making Michigan, also back the EPA. The plaintiffs comprise 12 states, three cities, various NGOs, and American Samoa, a Pacific territory in danger of vanishing beneath the rising ocean. They are supported by a further six states, two power companies, a ski resort, and assorted clergymen, Indian tribes and agitated grandees such as Madeleine Albright, a former secretary of state. 
They point out that under the administration of Bill Clinton, the EPA decided that it did have the authority to regulate CO2. The act, they note, says the EPA should regulate any air pollutant that \"may reasonably be interpreted to endanger public health or welfare\". It goes on to define public welfare to include \"effects on soils, water, crops, vegetation, manmade materials, animals, wildlife, weather, visibility, and climate\". The Supreme Court may give a mixed ruling, decreeing that carbon dioxide is indeed a pollutant, but one the EPA is free to ignore or regulate as it pleases. Or it might dismiss the complaint on the grounds that the plaintiffs did not have the right to lodge it in the first place. In theory, they must prove that the EPA's foot-dragging has caused them some specific harm that regulation might remedy a tall order in a field as fraught with uncertainty as climatology. Even if the court found in the plaintiffs' favour, rapid change is unlikely. By the time the EPA had implemented such a ruling, Congress would probably have superseded it with a new law. That is the point, environmental groups say. They want Congress to pass a law tackling global warming, and hope that a favourable court ruling will jolly the politicians along. Moreover, the case has a bearing on several other bitterly-contested lawsuits. Carmakers, for example, are trying to get the courts to strike down a Californian state law based on certain provisions of the Clean Air Act that require them to reduce their vehicles' CO2 emissions. If the Supreme Court decides that the act does not apply to CO2, then the Californian law would also be in jeopardy. That, in turn, would scupper the decision of ten other states to adopt the same standard. However the Supreme Court rules, many state governments are determined to tackle climate change. California is in the vanguard. Its legislature has passed a law that will cap and then reduce industrial emissions of greenhouse gases. Seven eastern states have formed the Regional Greenhouse Gas Initiative, which will treat emissions from power plants the same way. Almost 400 mayors have signed an agreement to cut their cities' emissions in line with Kyoto . Many businesses, even some power companies, would rather see regulation now than prolonged uncertainty. And several of the leading contenders for 2008's presidential election are much keener on emissions caps than Mr Bush. Change is in the air.", "hypothesis": "The plaintiffs can prove that the EPA foot-dragging has caused them harm that the regulation might remedy.", "label": "e"} +{"uid": "id_380", "premise": "Walking on water The availability of groundwater has always been taken for granted by Australians. Groundwater supplies have in prior times been perceived as a resource of infinite bounds the prevailing mindset was out of sight out of mind. This has all changed with the modern epoch. Persistent neglect has resulted in numerous complications for groundwater users and many interest groups have great stake in its management and allocation. Over-allocation of surface water and persistent water shortages mean that reliance of groundwater supplies is expected to swell. The main point of concern now is whether or not a groundwater source can deliver a sustainable yield. This relies on a proper management of discharge (outflow) and recharge (inflow) rates. Discharge occurs when humans extract water as well as through vegetation and evaporation into the atmosphere. 
Sustainable use therefore depends on more than keeping within the recharge rate: if humans use water at precisely the recharge rate, discharge through other ways can be adversely affected. Queensland has been one of the most active states in managing groundwater supplies. This is because the territory sits atop the Great Artesian Basin (GAB) an expansive underwater aquifer that covers nearly one-fifth of the Australian continent. This resource has long been used by indigenous people and outback communities, particularly in times of drought (when surface water could dry up for hundreds of kilometres on end). Since farmers at Kerribee pioneered the use of bores in the country, the number has spiralled beyond sustainable levels and caused water pressure and flow rates across the region to decline. Furthermore, estimates indicate that 80% of GAB outflow is wasted because of inefficient and out-dated delivery systems. Open drains used to keep livestock hydrated are a particular scourge much water is lost due to seepage and evaporation. A number of initiatives have been undertaken to help stem this problem. The Queensland government declared in 2005 a moratorium on issuing new licences for water extraction from GAB. A strategy group known as the Great Artesian Basin Consultative Council has also published a management plan that involved capping some bores (to prevent further declines in pressure) and rehabilitating hundreds of other bores and bore drains with troughs and polyester piping (to prevent water seeping into the earth). It is now also apparent that corruption of groundwater supplies by humans is going to be an issue to contend with. In 2006, thousands of Sydney residents had their groundwater usage curtailed due to industrial pollution of the Botany Stands aquifer. Bore water for any domestic purposes has since been off limits due to chemical seepage from an estimated 8 industrial sites. Nevertheless, groundwater plans continue apace. Development of a controversial desalination plant has been postponed indefinitely while the feasibility of exploiting two aquifers near Sydney is explored. Authorities intend to use the aquifers to provide up to 30 gigalitres of water a year during dry spells and then leave them alone to replenish during higher rainfall years. But the proposed scheme it riddled with difficulties: low flow rates are hampering extraction: replenishment rates are lower than expected, and salinity imbalances caused by the procedure could wreak havoc on efforts to preserve wetland flora and fauna ecosystems that rely on a plentiful, clean and steady supply of water from the aquifers. It is not too late to turn groundwater into a sustainable resource. Groundwater is renewable through surface run off (and, a much slower rate, in organic springs where it is literally drip fed through rock on its way to aquifers). At present however, experts believe excessive amounts of groundwater are being squandered on aesthetic projects such as keeping parks, gardens and golf courses green. Aside from more judicious use of groundwater, many experts also believe that we need to look at harnessing other potential sources in order to meet our water needs. During rainy seasons for example urban areas are inundated with storm water and flash flooding that can bring cities to a standstill. 
Better storm water control mechanisms could potentially capture and preserve this rainwater for use at a later date.", "hypothesis": "Australians have always seen groundwater as a precious resource.", "label": "c"} +{"uid": "id_381", "premise": "Walking on water The availability of groundwater has always been taken for granted by Australians. Groundwater supplies have in prior times been perceived as a resource of infinite bounds the prevailing mindset was out of sight out of mind. This has all changed with the modern epoch. Persistent neglect has resulted in numerous complications for groundwater users and many interest groups have great stake in its management and allocation. Over-allocation of surface water and persistent water shortages mean that reliance of groundwater supplies is expected to swell. The main point of concern now is whether or not a groundwater source can deliver a sustainable yield. This relies on a proper management of discharge (outflow) and recharge (inflow) rates. Discharge occurs when humans extract water as well as through vegetation and evaporation into the atmosphere. Sustainable use therefore depends on more than keeping within the recharge rate: if humans use water at precisely the recharge rate, discharge through other ways can be adversely affected. Queensland has been one of the most active states in managing groundwater supplies. This is because the territory sits atop the Great Artesian Basin (GAB) an expansive underwater aquifer that covers nearly one-fifth of the Australian continent. This resource has long been used by indigenous people and outback communities, particularly in times of drought (when surface water could dry up for hundreds of kilometres on end). Since farmers at Kerribee pioneered the use of bores in the country, the number has spiralled beyond sustainable levels and caused water pressure and flow rates across the region to decline. Furthermore, estimates indicate that 80% of GAB outflow is wasted because of inefficient and out-dated delivery systems. Open drains used to keep livestock hydrated are a particular scourge much water is lost due to seepage and evaporation. A number of initiatives have been undertaken to help stem this problem. The Queensland government declared in 2005 a moratorium on issuing new licences for water extraction from GAB. A strategy group known as the Great Artesian Basin Consultative Council has also published a management plan that involved capping some bores (to prevent further declines in pressure) and rehabilitating hundreds of other bores and bore drains with troughs and polyester piping (to prevent water seeping into the earth). It is now also apparent that corruption of groundwater supplies by humans is going to be an issue to contend with. In 2006, thousands of Sydney residents had their groundwater usage curtailed due to industrial pollution of the Botany Stands aquifer. Bore water for any domestic purposes has since been off limits due to chemical seepage from an estimated 8 industrial sites. Nevertheless, groundwater plans continue apace. Development of a controversial desalination plant has been postponed indefinitely while the feasibility of exploiting two aquifers near Sydney is explored. Authorities intend to use the aquifers to provide up to 30 gigalitres of water a year during dry spells and then leave them alone to replenish during higher rainfall years. 
But the proposed scheme it riddled with difficulties: low flow rates are hampering extraction: replenishment rates are lower than expected, and salinity imbalances caused by the procedure could wreak havoc on efforts to preserve wetland flora and fauna ecosystems that rely on a plentiful, clean and steady supply of water from the aquifers. It is not too late to turn groundwater into a sustainable resource. Groundwater is renewable through surface run off (and, a much slower rate, in organic springs where it is literally drip fed through rock on its way to aquifers). At present however, experts believe excessive amounts of groundwater are being squandered on aesthetic projects such as keeping parks, gardens and golf courses green. Aside from more judicious use of groundwater, many experts also believe that we need to look at harnessing other potential sources in order to meet our water needs. During rainy seasons for example urban areas are inundated with storm water and flash flooding that can bring cities to a standstill. Better storm water control mechanisms could potentially capture and preserve this rainwater for use at a later date.", "hypothesis": "Using water at the recharge rate or lower will ensure sustainable use.", "label": "c"} +{"uid": "id_382", "premise": "Walking on water The availability of groundwater has always been taken for granted by Australians. Groundwater supplies have in prior times been perceived as a resource of infinite bounds the prevailing mindset was out of sight out of mind. This has all changed with the modern epoch. Persistent neglect has resulted in numerous complications for groundwater users and many interest groups have great stake in its management and allocation. Over-allocation of surface water and persistent water shortages mean that reliance of groundwater supplies is expected to swell. The main point of concern now is whether or not a groundwater source can deliver a sustainable yield. This relies on a proper management of discharge (outflow) and recharge (inflow) rates. Discharge occurs when humans extract water as well as through vegetation and evaporation into the atmosphere. Sustainable use therefore depends on more than keeping within the recharge rate: if humans use water at precisely the recharge rate, discharge through other ways can be adversely affected. Queensland has been one of the most active states in managing groundwater supplies. This is because the territory sits atop the Great Artesian Basin (GAB) an expansive underwater aquifer that covers nearly one-fifth of the Australian continent. This resource has long been used by indigenous people and outback communities, particularly in times of drought (when surface water could dry up for hundreds of kilometres on end). Since farmers at Kerribee pioneered the use of bores in the country, the number has spiralled beyond sustainable levels and caused water pressure and flow rates across the region to decline. Furthermore, estimates indicate that 80% of GAB outflow is wasted because of inefficient and out-dated delivery systems. Open drains used to keep livestock hydrated are a particular scourge much water is lost due to seepage and evaporation. A number of initiatives have been undertaken to help stem this problem. The Queensland government declared in 2005 a moratorium on issuing new licences for water extraction from GAB. 
A strategy group known as the Great Artesian Basin Consultative Council has also published a management plan that involved capping some bores (to prevent further declines in pressure) and rehabilitating hundreds of other bores and bore drains with troughs and polyester piping (to prevent water seeping into the earth). It is now also apparent that corruption of groundwater supplies by humans is going to be an issue to contend with. In 2006, thousands of Sydney residents had their groundwater usage curtailed due to industrial pollution of the Botany Stands aquifer. Bore water for any domestic purposes has since been off limits due to chemical seepage from an estimated 8 industrial sites. Nevertheless, groundwater plans continue apace. Development of a controversial desalination plant has been postponed indefinitely while the feasibility of exploiting two aquifers near Sydney is explored. Authorities intend to use the aquifers to provide up to 30 gigalitres of water a year during dry spells and then leave them alone to replenish during higher rainfall years. But the proposed scheme it riddled with difficulties: low flow rates are hampering extraction: replenishment rates are lower than expected, and salinity imbalances caused by the procedure could wreak havoc on efforts to preserve wetland flora and fauna ecosystems that rely on a plentiful, clean and steady supply of water from the aquifers. It is not too late to turn groundwater into a sustainable resource. Groundwater is renewable through surface run off (and, a much slower rate, in organic springs where it is literally drip fed through rock on its way to aquifers). At present however, experts believe excessive amounts of groundwater are being squandered on aesthetic projects such as keeping parks, gardens and golf courses green. Aside from more judicious use of groundwater, many experts also believe that we need to look at harnessing other potential sources in order to meet our water needs. During rainy seasons for example urban areas are inundated with storm water and flash flooding that can bring cities to a standstill. Better storm water control mechanisms could potentially capture and preserve this rainwater for use at a later date.", "hypothesis": "Use of groundwater is predicted to increase.", "label": "e"} +{"uid": "id_383", "premise": "Walking on water The availability of groundwater has always been taken for granted by Australians. Groundwater supplies have in prior times been perceived as a resource of infinite bounds the prevailing mindset was out of sight out of mind. This has all changed with the modern epoch. Persistent neglect has resulted in numerous complications for groundwater users and many interest groups have great stake in its management and allocation. Over-allocation of surface water and persistent water shortages mean that reliance of groundwater supplies is expected to swell. The main point of concern now is whether or not a groundwater source can deliver a sustainable yield. This relies on a proper management of discharge (outflow) and recharge (inflow) rates. Discharge occurs when humans extract water as well as through vegetation and evaporation into the atmosphere. Sustainable use therefore depends on more than keeping within the recharge rate: if humans use water at precisely the recharge rate, discharge through other ways can be adversely affected. Queensland has been one of the most active states in managing groundwater supplies. 
This is because the territory sits atop the Great Artesian Basin (GAB) an expansive underwater aquifer that covers nearly one-fifth of the Australian continent. This resource has long been used by indigenous people and outback communities, particularly in times of drought (when surface water could dry up for hundreds of kilometres on end). Since farmers at Kerribee pioneered the use of bores in the country, the number has spiralled beyond sustainable levels and caused water pressure and flow rates across the region to decline. Furthermore, estimates indicate that 80% of GAB outflow is wasted because of inefficient and out-dated delivery systems. Open drains used to keep livestock hydrated are a particular scourge much water is lost due to seepage and evaporation. A number of initiatives have been undertaken to help stem this problem. The Queensland government declared in 2005 a moratorium on issuing new licences for water extraction from GAB. A strategy group known as the Great Artesian Basin Consultative Council has also published a management plan that involved capping some bores (to prevent further declines in pressure) and rehabilitating hundreds of other bores and bore drains with troughs and polyester piping (to prevent water seeping into the earth). It is now also apparent that corruption of groundwater supplies by humans is going to be an issue to contend with. In 2006, thousands of Sydney residents had their groundwater usage curtailed due to industrial pollution of the Botany Stands aquifer. Bore water for any domestic purposes has since been off limits due to chemical seepage from an estimated 8 industrial sites. Nevertheless, groundwater plans continue apace. Development of a controversial desalination plant has been postponed indefinitely while the feasibility of exploiting two aquifers near Sydney is explored. Authorities intend to use the aquifers to provide up to 30 gigalitres of water a year during dry spells and then leave them alone to replenish during higher rainfall years. But the proposed scheme it riddled with difficulties: low flow rates are hampering extraction: replenishment rates are lower than expected, and salinity imbalances caused by the procedure could wreak havoc on efforts to preserve wetland flora and fauna ecosystems that rely on a plentiful, clean and steady supply of water from the aquifers. It is not too late to turn groundwater into a sustainable resource. Groundwater is renewable through surface run off (and, a much slower rate, in organic springs where it is literally drip fed through rock on its way to aquifers). At present however, experts believe excessive amounts of groundwater are being squandered on aesthetic projects such as keeping parks, gardens and golf courses green. Aside from more judicious use of groundwater, many experts also believe that we need to look at harnessing other potential sources in order to meet our water needs. During rainy seasons for example urban areas are inundated with storm water and flash flooding that can bring cities to a standstill. Better storm water control mechanisms could potentially capture and preserve this rainwater for use at a later date.", "hypothesis": "Humans cannot alter the recharge rate of groundwater.", "label": "n"} +{"uid": "id_384", "premise": "Walking with dinosaurs Peter L. Falkingham and his colleagues at Manchester University are developing techniques which look set to revolutionize our understanding of how dinosaurs and other extinct animals behaved. 
The media image of palaeontologists who study prehistoric life is often of field workers camped in the desert in the hot sun, carefully picking away at the rock surrounding a large dinosaur bone. But Peter Falkingham has done little of that for a while now. Instead, he devotes himself to his computer. Not because he has become inundated with paperwork, but because he is a new kind of palaeontologist: a computational palaeontologist. What few people may consider is that uncovering a skeleton, or discovering a new species, is where the research begins, not where it ends. What we really want to understand is how the extinct animals and plants behaved in their natural habitats. Drs Bill Sellers and Phil Manning from the University of Manchester use a genetic algorithm a kind of computer code that can change itself and evolve to explore how extinct animals like dinosaurs, and our own early ancestors, walked and stalked. The fossilized bones of a complete dinosaur skeleton can tell scientists a lot about the animal, but they do not make up the complete picture and the computer can try to fill the gap. The computer model is given a digitized skeleton, and the locations of known muscles. The model then randomly activates the muscles. This, perhaps unsurprisingly, results almost without fail in the animal falling on its face. So the computer alters the activation pattern and tries again ... usually to similar effect. The modeled dinosaurs quickly evolve. If there is any improvement, the computer discards the old pattern and adopts the new one as the base for alteration. Eventually, the muscle activation pattern evolves a stable way of moving, the best possible solution is reached, and the dinosaur can walk, run, chase or graze. Assuming natural selection evolves the best possible solution too, the modeled animal should be moving in a manner similar to its now-extinct counterpart. And indeed, using the same method for living animals (humans, emu and ostriches) similar top speeds were achieved on the computer as in reality. By comparing their cyberspace results with real measurements of living species, the Manchester team of palaeontologists can be confident in the results computed showing how extinct prehistoric animals such as dinosaurs moved. The Manchester University team have used the computer simulations to produce a model of a giant meat-eating dinosaur. lt is called an acrocanthosaurus which literally means high spined lizard because of the spines which run along its backbone. It is not really known why they are there but scientists have speculated they could have supported a hump that stored fat and water reserves. There are also those who believe that the spines acted as a support for a sail. Of these, one half think it was used as a display and could be flushed with blood and the other half think it was used as a temperature-regulating device. It may have been a mixture of the two. The skull seems out of proportion with its thick, heavy body because it is so narrow and the jaws are delicate and fine. The feet are also worthy of note as they look surprisingly small in contrast to the animal as a whole. It has a deep broad tail and powerful leg muscles to aid locomotion. It walked on its back legs and its front legs were much shorter with powerful claws. Falkingham himself is investigating fossilized tracks, or footprints, using computer simulations to help analyze how extinct animals moved. 
Modern-day trackers who study the habitats of wild animals can tell you what animal made a track, whether that animal was walking or running, sometimes even the sex of the animal. But a fossil track poses a more considerable challenge to interpret in the same way. A crucial consideration is knowing what the environment including the mud, or sediment, upon which the animal walked was like millions of years ago when the track was made. Experiments can answer these questions but the number of variables is staggering. To physically recreate each scenario with a box of mud is extremely time-consuming and difficult to repeat accurately. This is where computer simulation comes in. Falkingham uses computational techniques to model a volume of mud and control the moisture content, consistency, and other conditions to simulate the mud of prehistoric times. A footprint is then made in the digital mud by a virtual foot. This footprint can be chopped up and viewed from any angle and stress values can be extracted and calculated from inside it. By running hundreds of these simulations simultaneously on supercomputers, Falkingham can start to understand what types of footprint would be expected if an animal moved in a certain way over a given kind of ground. Looking at the variation in the virtual tracks, researchers can make sense of fossil tracks with greater confidence. The application of computational techniques in palaeontology is becoming more prevalent every year. As computer power continues to increase, the range of problems that can be tackled and questions that can be answered will only expand.", "hypothesis": "Research carried out into the composition of prehistoric mud has been found to be inaccurate.", "label": "n"} +{"uid": "id_385", "premise": "Walking with dinosaurs Peter L. Falkingham and his colleagues at Manchester University are developing techniques which look set to revolutionize our understanding of how dinosaurs and other extinct animals behaved. The media image of palaeontologists who study prehistoric life is often of field workers camped in the desert in the hot sun, carefully picking away at the rock surrounding a large dinosaur bone. But Peter Falkingham has done little of that for a while now. Instead, he devotes himself to his computer. Not because he has become inundated with paperwork, but because he is a new kind of palaeontologist: a computational palaeontologist. What few people may consider is that uncovering a skeleton, or discovering a new species, is where the research begins, not where it ends. What we really want to understand is how the extinct animals and plants behaved in their natural habitats. Drs Bill Sellers and Phil Manning from the University of Manchester use a genetic algorithm a kind of computer code that can change itself and evolve to explore how extinct animals like dinosaurs, and our own early ancestors, walked and stalked. The fossilized bones of a complete dinosaur skeleton can tell scientists a lot about the animal, but they do not make up the complete picture and the computer can try to fill the gap. The computer model is given a digitized skeleton, and the locations of known muscles. The model then randomly activates the muscles. This, perhaps unsurprisingly, results almost without fail in the animal falling on its face. So the computer alters the activation pattern and tries again ... usually to similar effect. The modeled dinosaurs quickly evolve. 
If there is any improvement, the computer discards the old pattern and adopts the new one as the base for alteration. Eventually, the muscle activation pattern evolves a stable way of moving, the best possible solution is reached, and the dinosaur can walk, run, chase or graze. Assuming natural selection evolves the best possible solution too, the modeled animal should be moving in a manner similar to its now-extinct counterpart. And indeed, using the same method for living animals (humans, emu and ostriches) similar top speeds were achieved on the computer as in reality. By comparing their cyberspace results with real measurements of living species, the Manchester team of palaeontologists can be confident in the results computed showing how extinct prehistoric animals such as dinosaurs moved. The Manchester University team have used the computer simulations to produce a model of a giant meat-eating dinosaur. lt is called an acrocanthosaurus which literally means high spined lizard because of the spines which run along its backbone. It is not really known why they are there but scientists have speculated they could have supported a hump that stored fat and water reserves. There are also those who believe that the spines acted as a support for a sail. Of these, one half think it was used as a display and could be flushed with blood and the other half think it was used as a temperature-regulating device. It may have been a mixture of the two. The skull seems out of proportion with its thick, heavy body because it is so narrow and the jaws are delicate and fine. The feet are also worthy of note as they look surprisingly small in contrast to the animal as a whole. It has a deep broad tail and powerful leg muscles to aid locomotion. It walked on its back legs and its front legs were much shorter with powerful claws. Falkingham himself is investigating fossilized tracks, or footprints, using computer simulations to help analyze how extinct animals moved. Modern-day trackers who study the habitats of wild animals can tell you what animal made a track, whether that animal was walking or running, sometimes even the sex of the animal. But a fossil track poses a more considerable challenge to interpret in the same way. A crucial consideration is knowing what the environment including the mud, or sediment, upon which the animal walked was like millions of years ago when the track was made. Experiments can answer these questions but the number of variables is staggering. To physically recreate each scenario with a box of mud is extremely time-consuming and difficult to repeat accurately. This is where computer simulation comes in. Falkingham uses computational techniques to model a volume of mud and control the moisture content, consistency, and other conditions to simulate the mud of prehistoric times. A footprint is then made in the digital mud by a virtual foot. This footprint can be chopped up and viewed from any angle and stress values can be extracted and calculated from inside it. By running hundreds of these simulations simultaneously on supercomputers, Falkingham can start to understand what types of footprint would be expected if an animal moved in a certain way over a given kind of ground. Looking at the variation in the virtual tracks, researchers can make sense of fossil tracks with greater confidence. The application of computational techniques in palaeontology is becoming more prevalent every year. 
As computer power continues to increase, the range of problems that can be tackled and questions that can be answered will only expand.", "hypothesis": "When the Sellers and Manning computer model was used for people, it showed them moving faster than they are physically able to.", "label": "c"} +{"uid": "id_386", "premise": "Walking with dinosaurs Peter L. Falkingham and his colleagues at Manchester University are developing techniques which look set to revolutionize our understanding of how dinosaurs and other extinct animals behaved. The media image of palaeontologists who study prehistoric life is often of field workers camped in the desert in the hot sun, carefully picking away at the rock surrounding a large dinosaur bone. But Peter Falkingham has done little of that for a while now. Instead, he devotes himself to his computer. Not because he has become inundated with paperwork, but because he is a new kind of palaeontologist: a computational palaeontologist. What few people may consider is that uncovering a skeleton, or discovering a new species, is where the research begins, not where it ends. What we really want to understand is how the extinct animals and plants behaved in their natural habitats. Drs Bill Sellers and Phil Manning from the University of Manchester use a genetic algorithm a kind of computer code that can change itself and evolve to explore how extinct animals like dinosaurs, and our own early ancestors, walked and stalked. The fossilized bones of a complete dinosaur skeleton can tell scientists a lot about the animal, but they do not make up the complete picture and the computer can try to fill the gap. The computer model is given a digitized skeleton, and the locations of known muscles. The model then randomly activates the muscles. This, perhaps unsurprisingly, results almost without fail in the animal falling on its face. So the computer alters the activation pattern and tries again ... usually to similar effect. The modeled dinosaurs quickly evolve. If there is any improvement, the computer discards the old pattern and adopts the new one as the base for alteration. Eventually, the muscle activation pattern evolves a stable way of moving, the best possible solution is reached, and the dinosaur can walk, run, chase or graze. Assuming natural selection evolves the best possible solution too, the modeled animal should be moving in a manner similar to its now-extinct counterpart. And indeed, using the same method for living animals (humans, emu and ostriches) similar top speeds were achieved on the computer as in reality. By comparing their cyberspace results with real measurements of living species, the Manchester team of palaeontologists can be confident in the results computed showing how extinct prehistoric animals such as dinosaurs moved. The Manchester University team have used the computer simulations to produce a model of a giant meat-eating dinosaur. lt is called an acrocanthosaurus which literally means high spined lizard because of the spines which run along its backbone. It is not really known why they are there but scientists have speculated they could have supported a hump that stored fat and water reserves. There are also those who believe that the spines acted as a support for a sail. Of these, one half think it was used as a display and could be flushed with blood and the other half think it was used as a temperature-regulating device. It may have been a mixture of the two. 
The skull seems out of proportion with its thick, heavy body because it is so narrow and the jaws are delicate and fine. The feet are also worthy of note as they look surprisingly small in contrast to the animal as a whole. It has a deep broad tail and powerful leg muscles to aid locomotion. It walked on its back legs and its front legs were much shorter with powerful claws. Falkingham himself is investigating fossilized tracks, or footprints, using computer simulations to help analyze how extinct animals moved. Modern-day trackers who study the habitats of wild animals can tell you what animal made a track, whether that animal was walking or running, sometimes even the sex of the animal. But a fossil track poses a more considerable challenge to interpret in the same way. A crucial consideration is knowing what the environment including the mud, or sediment, upon which the animal walked was like millions of years ago when the track was made. Experiments can answer these questions but the number of variables is staggering. To physically recreate each scenario with a box of mud is extremely time-consuming and difficult to repeat accurately. This is where computer simulation comes in. Falkingham uses computational techniques to model a volume of mud and control the moisture content, consistency, and other conditions to simulate the mud of prehistoric times. A footprint is then made in the digital mud by a virtual foot. This footprint can be chopped up and viewed from any angle and stress values can be extracted and calculated from inside it. By running hundreds of these simulations simultaneously on supercomputers, Falkingham can start to understand what types of footprint would be expected if an animal moved in a certain way over a given kind of ground. Looking at the variation in the virtual tracks, researchers can make sense of fossil tracks with greater confidence. The application of computational techniques in palaeontology is becoming more prevalent every year. As computer power continues to increase, the range of problems that can be tackled and questions that can be answered will only expand.", "hypothesis": "Some palaeontologists have expressed reservations about the conclusions reached by the Manchester team concerning the movement of dinosaurs.", "label": "n"} +{"uid": "id_387", "premise": "Walking with dinosaurs Peter L. Falkingham and his colleagues at Manchester University are developing techniques which look set to revolutionize our understanding of how dinosaurs and other extinct animals behaved. The media image of palaeontologists who study prehistoric life is often of field workers camped in the desert in the hot sun, carefully picking away at the rock surrounding a large dinosaur bone. But Peter Falkingham has done little of that for a while now. Instead, he devotes himself to his computer. Not because he has become inundated with paperwork, but because he is a new kind of palaeontologist: a computational palaeontologist. What few people may consider is that uncovering a skeleton, or discovering a new species, is where the research begins, not where it ends. What we really want to understand is how the extinct animals and plants behaved in their natural habitats. Drs Bill Sellers and Phil Manning from the University of Manchester use a genetic algorithm a kind of computer code that can change itself and evolve to explore how extinct animals like dinosaurs, and our own early ancestors, walked and stalked. 
The fossilized bones of a complete dinosaur skeleton can tell scientists a lot about the animal, but they do not make up the complete picture and the computer can try to fill the gap. The computer model is given a digitized skeleton, and the locations of known muscles. The model then randomly activates the muscles. This, perhaps unsurprisingly, results almost without fail in the animal falling on its face. So the computer alters the activation pattern and tries again ... usually to similar effect. The modeled dinosaurs quickly evolve. If there is any improvement, the computer discards the old pattern and adopts the new one as the base for alteration. Eventually, the muscle activation pattern evolves a stable way of moving, the best possible solution is reached, and the dinosaur can walk, run, chase or graze. Assuming natural selection evolves the best possible solution too, the modeled animal should be moving in a manner similar to its now-extinct counterpart. And indeed, using the same method for living animals (humans, emu and ostriches) similar top speeds were achieved on the computer as in reality. By comparing their cyberspace results with real measurements of living species, the Manchester team of palaeontologists can be confident in the results computed showing how extinct prehistoric animals such as dinosaurs moved. The Manchester University team have used the computer simulations to produce a model of a giant meat-eating dinosaur. lt is called an acrocanthosaurus which literally means high spined lizard because of the spines which run along its backbone. It is not really known why they are there but scientists have speculated they could have supported a hump that stored fat and water reserves. There are also those who believe that the spines acted as a support for a sail. Of these, one half think it was used as a display and could be flushed with blood and the other half think it was used as a temperature-regulating device. It may have been a mixture of the two. The skull seems out of proportion with its thick, heavy body because it is so narrow and the jaws are delicate and fine. The feet are also worthy of note as they look surprisingly small in contrast to the animal as a whole. It has a deep broad tail and powerful leg muscles to aid locomotion. It walked on its back legs and its front legs were much shorter with powerful claws. Falkingham himself is investigating fossilized tracks, or footprints, using computer simulations to help analyze how extinct animals moved. Modern-day trackers who study the habitats of wild animals can tell you what animal made a track, whether that animal was walking or running, sometimes even the sex of the animal. But a fossil track poses a more considerable challenge to interpret in the same way. A crucial consideration is knowing what the environment including the mud, or sediment, upon which the animal walked was like millions of years ago when the track was made. Experiments can answer these questions but the number of variables is staggering. To physically recreate each scenario with a box of mud is extremely time-consuming and difficult to repeat accurately. This is where computer simulation comes in. Falkingham uses computational techniques to model a volume of mud and control the moisture content, consistency, and other conditions to simulate the mud of prehistoric times. A footprint is then made in the digital mud by a virtual foot. 
This footprint can be chopped up and viewed from any angle and stress values can be extracted and calculated from inside it. By running hundreds of these simulations simultaneously on supercomputers, Falkingham can start to understand what types of footprint would be expected if an animal moved in a certain way over a given kind of ground. Looking at the variation in the virtual tracks, researchers can make sense of fossil tracks with greater confidence. The application of computational techniques in palaeontology is becoming more prevalent every year. As computer power continues to increase, the range of problems that can be tackled and questions that can be answered will only expand.", "hypothesis": "An experienced tracker can analyse fossil footprints as easily as those made by live animals.", "label": "c"} +{"uid": "id_388", "premise": "Walking with dinosaurs Peter L. Falkingham and his colleagues at Manchester University are developing techniques which look set to revolutionize our understanding of how dinosaurs and other extinct animals behaved. The media image of palaeontologists who study prehistoric life is often of field workers camped in the desert in the hot sun, carefully picking away at the rock surrounding a large dinosaur bone. But Peter Falkingham has done little of that for a while now. Instead, he devotes himself to his computer. Not because he has become inundated with paperwork, but because he is a new kind of palaeontologist: a computational palaeontologist. What few people may consider is that uncovering a skeleton, or discovering a new species, is where the research begins, not where it ends. What we really want to understand is how the extinct animals and plants behaved in their natural habitats. Drs Bill Sellers and Phil Manning from the University of Manchester use a genetic algorithm a kind of computer code that can change itself and evolve to explore how extinct animals like dinosaurs, and our own early ancestors, walked and stalked. The fossilized bones of a complete dinosaur skeleton can tell scientists a lot about the animal, but they do not make up the complete picture and the computer can try to fill the gap. The computer model is given a digitized skeleton, and the locations of known muscles. The model then randomly activates the muscles. This, perhaps unsurprisingly, results almost without fail in the animal falling on its face. So the computer alters the activation pattern and tries again ... usually to similar effect. The modeled dinosaurs quickly evolve. If there is any improvement, the computer discards the old pattern and adopts the new one as the base for alteration. Eventually, the muscle activation pattern evolves a stable way of moving, the best possible solution is reached, and the dinosaur can walk, run, chase or graze. Assuming natural selection evolves the best possible solution too, the modeled animal should be moving in a manner similar to its now-extinct counterpart. And indeed, using the same method for living animals (humans, emu and ostriches) similar top speeds were achieved on the computer as in reality. By comparing their cyberspace results with real measurements of living species, the Manchester team of palaeontologists can be confident in the results computed showing how extinct prehistoric animals such as dinosaurs moved. The Manchester University team have used the computer simulations to produce a model of a giant meat-eating dinosaur. 
lt is called an acrocanthosaurus which literally means high spined lizard because of the spines which run along its backbone. It is not really known why they are there but scientists have speculated they could have supported a hump that stored fat and water reserves. There are also those who believe that the spines acted as a support for a sail. Of these, one half think it was used as a display and could be flushed with blood and the other half think it was used as a temperature-regulating device. It may have been a mixture of the two. The skull seems out of proportion with its thick, heavy body because it is so narrow and the jaws are delicate and fine. The feet are also worthy of note as they look surprisingly small in contrast to the animal as a whole. It has a deep broad tail and powerful leg muscles to aid locomotion. It walked on its back legs and its front legs were much shorter with powerful claws. Falkingham himself is investigating fossilized tracks, or footprints, using computer simulations to help analyze how extinct animals moved. Modern-day trackers who study the habitats of wild animals can tell you what animal made a track, whether that animal was walking or running, sometimes even the sex of the animal. But a fossil track poses a more considerable challenge to interpret in the same way. A crucial consideration is knowing what the environment including the mud, or sediment, upon which the animal walked was like millions of years ago when the track was made. Experiments can answer these questions but the number of variables is staggering. To physically recreate each scenario with a box of mud is extremely time-consuming and difficult to repeat accurately. This is where computer simulation comes in. Falkingham uses computational techniques to model a volume of mud and control the moisture content, consistency, and other conditions to simulate the mud of prehistoric times. A footprint is then made in the digital mud by a virtual foot. This footprint can be chopped up and viewed from any angle and stress values can be extracted and calculated from inside it. By running hundreds of these simulations simultaneously on supercomputers, Falkingham can start to understand what types of footprint would be expected if an animal moved in a certain way over a given kind of ground. Looking at the variation in the virtual tracks, researchers can make sense of fossil tracks with greater confidence. The application of computational techniques in palaeontology is becoming more prevalent every year. As computer power continues to increase, the range of problems that can be tackled and questions that can be answered will only expand.", "hypothesis": "In his study of prehistoric life, Peter Falkinghom rarely spends time on outdoor research.", "label": "e"} +{"uid": "id_389", "premise": "Walking with dinosaurs Peter L. Falkingham and his colleagues at Manchester University are developing techniques which look set to revolutionize our understanding of how dinosaurs and other extinct animals behaved. The media image of palaeontologists who study prehistoric life is often of field workers camped in the desert in the hot sun, carefully picking away at the rock surrounding a large dinosaur bone. But Peter Falkingham has done little of that for a while now. Instead, he devotes himself to his computer. Not because he has become inundated with paperwork, but because he is a new kind of palaeontologist: a computational palaeontologist. 
What few people may consider is that uncovering a skeleton, or discovering a new species, is where the research begins, not where it ends. What we really want to understand is how the extinct animals and plants behaved in their natural habitats. Drs Bill Sellers and Phil Manning from the University of Manchester use a genetic algorithm a kind of computer code that can change itself and evolve to explore how extinct animals like dinosaurs, and our own early ancestors, walked and stalked. The fossilized bones of a complete dinosaur skeleton can tell scientists a lot about the animal, but they do not make up the complete picture and the computer can try to fill the gap. The computer model is given a digitized skeleton, and the locations of known muscles. The model then randomly activates the muscles. This, perhaps unsurprisingly, results almost without fail in the animal falling on its face. So the computer alters the activation pattern and tries again ... usually to similar effect. The modeled dinosaurs quickly evolve. If there is any improvement, the computer discards the old pattern and adopts the new one as the base for alteration. Eventually, the muscle activation pattern evolves a stable way of moving, the best possible solution is reached, and the dinosaur can walk, run, chase or graze. Assuming natural selection evolves the best possible solution too, the modeled animal should be moving in a manner similar to its now-extinct counterpart. And indeed, using the same method for living animals (humans, emu and ostriches) similar top speeds were achieved on the computer as in reality. By comparing their cyberspace results with real measurements of living species, the Manchester team of palaeontologists can be confident in the results computed showing how extinct prehistoric animals such as dinosaurs moved. The Manchester University team have used the computer simulations to produce a model of a giant meat-eating dinosaur. lt is called an acrocanthosaurus which literally means high spined lizard because of the spines which run along its backbone. It is not really known why they are there but scientists have speculated they could have supported a hump that stored fat and water reserves. There are also those who believe that the spines acted as a support for a sail. Of these, one half think it was used as a display and could be flushed with blood and the other half think it was used as a temperature-regulating device. It may have been a mixture of the two. The skull seems out of proportion with its thick, heavy body because it is so narrow and the jaws are delicate and fine. The feet are also worthy of note as they look surprisingly small in contrast to the animal as a whole. It has a deep broad tail and powerful leg muscles to aid locomotion. It walked on its back legs and its front legs were much shorter with powerful claws. Falkingham himself is investigating fossilized tracks, or footprints, using computer simulations to help analyze how extinct animals moved. Modern-day trackers who study the habitats of wild animals can tell you what animal made a track, whether that animal was walking or running, sometimes even the sex of the animal. But a fossil track poses a more considerable challenge to interpret in the same way. A crucial consideration is knowing what the environment including the mud, or sediment, upon which the animal walked was like millions of years ago when the track was made. Experiments can answer these questions but the number of variables is staggering. 
To physically recreate each scenario with a box of mud is extremely time-consuming and difficult to repeat accurately. This is where computer simulation comes in. Falkingham uses computational techniques to model a volume of mud and control the moisture content, consistency, and other conditions to simulate the mud of prehistoric times. A footprint is then made in the digital mud by a virtual foot. This footprint can be chopped up and viewed from any angle and stress values can be extracted and calculated from inside it. By running hundreds of these simulations simultaneously on supercomputers, Falkingham can start to understand what types of footprint would be expected if an animal moved in a certain way over a given kind of ground. Looking at the variation in the virtual tracks, researchers can make sense of fossil tracks with greater confidence. The application of computational techniques in palaeontology is becoming more prevalent every year. As computer power continues to increase, the range of problems that can be tackled and questions that can be answered will only expand.", "hypothesis": "Several attempts are usually needed before the computer model of a dinosaur used by Sellers and Manning manages to stay upright.", "label": "e"} +{"uid": "id_390", "premise": "Walnuts cost more than peanuts. Walnuts cost less than pistachios.", "hypothesis": "Pistachios cost more than both peanuts and walnuts.", "label": "e"} +{"uid": "id_391", "premise": "Water Filter. An ingenious invention is set to bring clean water to the third world, and while the science may be cutting edge, the materials are extremely down to earth. A handful of clay yesterdays coffee grounds and some cow manure are the ingredients that could bring clean, safe drinking water to much of the third world. B. The simple new technology, developed by ANU materials scientist Mr. Tony Flynn, allows water filters to be made from commonly available materials and fired on the ground using cow manure as the source of heat, without the need for a kiln. The filters have been tested and shown to remove common pathogens (disease-producing organisms) including E-coli. Unlike other water filtering devices, the filters are simple and inexpensive to make. They are very simple to explain and demonstrate and can be made by anyone, anywhere, says Mr. Flynn. They dont require any western technology. All you need is terracotta clay, a compliant cow and a match. C. The production of the filters is extremely simple. Take a handful of dry, crushed clay, mix it with a handful of organic material, such as used tea leaves, coffee grounds or rice hulls, add enough water to make a stiff biscuit-like mixture and form a cylindrical pot that has one end closed, then dry it in the sun. According to Mr. Flynn, used coffee grounds have given the best results to date. Next, surround the pots with straw; put them in a mound of cow manure, light the straw and then top up the burning manure as required. In less than 60 minutes the filters are finished. The walls of the finished pot should be about as thick as an adults index. The properties of cow manure are vital as the fuel can reach a temperature of 700 degrees in half an hour and will be up to 950 degrees after another 20 to 30 minutes. The manure makes a good fuel because itis very high in organic material that bums readily and quickly; the manure has to be dry and is best used exactly as found in the field, there is no need to break it up or process it any further. D. 
A potter's kiln is an expensive item and could take up to four or five hours to get up to 800 degrees. It needs expensive or scarce fuel, such as gas or wood to heat it and experience to run it. With no technology, no insulation and nothing other than a pile of cow manure and a match, none of these restrictions apply, Mr. Flynn says. E. It is also helpful that, like terracotta clay and organic material, cow dung is freely available across the developing world. A cow is a natural fuel factory. My understanding is that cow dung as a fuel would be pretty much the same wherever you would find it. Just as using manure as a fuel for domestic uses is not a new idea, the porosity of clay is something that potters have known about for years, and something that, as a former ceramics lecturer in the ANU School of Art, Mr. Flynn is well aware of. The difference is that rather than viewing the porous nature of the material as a problem (after all, not many people want a pot that won't hold water), his filters capitalize on this property. F. Other commercial ceramic filters do exist, but, even if available, with prices starting at US$5 each, they are often outside the budgets of most people in the developing world. The filtration process is simple, but effective. The basic principle is that there are passages through the filter that are wide enough for water droplets to pass through, but too narrow for pathogens. Tests with the deadly E-coli bacterium have seen the filters remove 96.4 to 99.8 per cent of the pathogen, well within safe levels. Using only one filter, it takes two hours to filter a litre of water. The use of organic material, which burns away after firing, helps produce the structure in which pathogens will become trapped. It overcomes the potential problems of finer clays that may not let water through and also means that cracks are soon halted. And like clay and cow dung, it is universally available. G. The invention was born out of a World Vision project involving the Manatuto community in East Timor. The charity wanted to help set up a small industry manufacturing water filters, but initial research found the local clay to be too fine, a problem solved by the addition of organic material. While the problems of producing a working ceramic filter in East Timor were overcome, the solution was kiln-based and particular to that community's materials and couldn't be applied elsewhere. Manure firing, with no requirement for a kiln, has made this zero-technology approach available anywhere it is needed. With all the components being widely available, Mr. Flynn says there is no reason the technology couldn't be applied throughout the developing world, and with no plans to patent his idea, there will be no legal obstacles to it being adopted in any community that needs it. Everyone has a right to clean water; these filters have the potential to enable anyone in the world to drink water safely, says Mr. Flynn.", "hypothesis": "E-coli is the most difficult bacteria to combat.", "label": "n"} +{"uid": "id_392", "premise": "Water Filter. An ingenious invention is set to bring clean water to the third world, and while the science may be cutting edge, the materials are extremely down to earth. A handful of clay, yesterday's coffee grounds and some cow manure are the ingredients that could bring clean, safe drinking water to much of the third world. B. The simple new technology, developed by ANU materials scientist Mr.
Tony Flynn, allows water filters to be made from commonly available materials and fired on the ground using cow manure as the source of heat, without the need for a kiln. The filters have been tested and shown to remove common pathogens (disease-producing organisms) including E-coli. Unlike other water filtering devices, the filters are simple and inexpensive to make. They are very simple to explain and demonstrate and can be made by anyone, anywhere, says Mr. Flynn. They don't require any western technology. All you need is terracotta clay, a compliant cow and a match. C. The production of the filters is extremely simple. Take a handful of dry, crushed clay, mix it with a handful of organic material, such as used tea leaves, coffee grounds or rice hulls, add enough water to make a stiff biscuit-like mixture and form a cylindrical pot that has one end closed, then dry it in the sun. According to Mr. Flynn, used coffee grounds have given the best results to date. Next, surround the pots with straw; put them in a mound of cow manure, light the straw and then top up the burning manure as required. In less than 60 minutes the filters are finished. The walls of the finished pot should be about as thick as an adult's index finger. The properties of cow manure are vital as the fuel can reach a temperature of 700 degrees in half an hour and will be up to 950 degrees after another 20 to 30 minutes. The manure makes a good fuel because it is very high in organic material that burns readily and quickly; the manure has to be dry and is best used exactly as found in the field; there is no need to break it up or process it any further. D. A potter's kiln is an expensive item and could take up to four or five hours to get up to 800 degrees. It needs expensive or scarce fuel, such as gas or wood to heat it and experience to run it. With no technology, no insulation and nothing other than a pile of cow manure and a match, none of these restrictions apply, Mr. Flynn says. E. It is also helpful that, like terracotta clay and organic material, cow dung is freely available across the developing world. A cow is a natural fuel factory. My understanding is that cow dung as a fuel would be pretty much the same wherever you would find it. Just as using manure as a fuel for domestic uses is not a new idea, the porosity of clay is something that potters have known about for years, and something that, as a former ceramics lecturer in the ANU School of Art, Mr. Flynn is well aware of. The difference is that rather than viewing the porous nature of the material as a problem (after all, not many people want a pot that won't hold water), his filters capitalize on this property. F. Other commercial ceramic filters do exist, but, even if available, with prices starting at US$5 each, they are often outside the budgets of most people in the developing world. The filtration process is simple, but effective. The basic principle is that there are passages through the filter that are wide enough for water droplets to pass through, but too narrow for pathogens. Tests with the deadly E-coli bacterium have seen the filters remove 96.4 to 99.8 per cent of the pathogen, well within safe levels. Using only one filter, it takes two hours to filter a litre of water. The use of organic material, which burns away after firing, helps produce the structure in which pathogens will become trapped. It overcomes the potential problems of finer clays that may not let water through and also means that cracks are soon halted.
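Taken at face value, the figures quoted above lend themselves to a quick back-of-envelope check: two hours per litre works out at about 12 litres per filter per day, and a 96.4 to 99.8 per cent removal rate leaves between 3.6 and 0.2 per cent of the original E-coli load in the filtered water. A short sketch using only the numbers given in the passage:

# Back-of-envelope figures from the passage: one filter takes 2 hours per litre,
# and removes between 96.4% and 99.8% of E. coli.
HOURS_PER_LITRE = 2.0
REMOVAL_RANGE = (0.964, 0.998)

litres_per_day = 24.0 / HOURS_PER_LITRE
worst_remaining = 1.0 - REMOVAL_RANGE[0]
best_remaining = 1.0 - REMOVAL_RANGE[1]

print(f"throughput per filter: {litres_per_day:.0f} litres/day")
print(f"E. coli remaining: {worst_remaining:.1%} (worst case) to {best_remaining:.1%} (best case)")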
And like clay and cow dung, it is universally available. G. The invention was born out of a World Vision project involving the Manatuto community in East Timor The charity wanted to help set up a small industry manufacturing water filters, but initial research found the local clay to be too fine a problem solved by the addition of organic material. While the AF problems of producing a working ceramic filter in East Timor were overcome, the solution was kiln-based and particular to that communitys materials and couldnt be applied elsewhere. Manure firing, with no requirement for a kiln, has made this zero technology approach available anywhere it is needed. With all the components being widely available, Mr. Flynn says there is no reason the technology couldnt be applied throughout the developing world, and with no plans to patent his idea, there will be no legal obstacles to it being adopted in any community that needs it. Everyone has a right to clean water, these filters have the potential to enable anyone in the world to drink water safely, says Mr. Flynn.", "hypothesis": "Clay was initially found to be unsuitable for pot making.", "label": "e"} +{"uid": "id_393", "premise": "Water Filter. An ingenious invention is set to bring clean water to the third world, and while the science may be cutting edge, the materials are extremely down to earth. A handful of clay yesterdays coffee grounds and some cow manure are the ingredients that could bring clean, safe drinking water to much of the third world. B. The simple new technology, developed by ANU materials scientist Mr. Tony Flynn, allows water filters to be made from commonly available materials and fired on the ground using cow manure as the source of heat, without the need for a kiln. The filters have been tested and shown to remove common pathogens (disease-producing organisms) including E-coli. Unlike other water filtering devices, the filters are simple and inexpensive to make. They are very simple to explain and demonstrate and can be made by anyone, anywhere, says Mr. Flynn. They dont require any western technology. All you need is terracotta clay, a compliant cow and a match. C. The production of the filters is extremely simple. Take a handful of dry, crushed clay, mix it with a handful of organic material, such as used tea leaves, coffee grounds or rice hulls, add enough water to make a stiff biscuit-like mixture and form a cylindrical pot that has one end closed, then dry it in the sun. According to Mr. Flynn, used coffee grounds have given the best results to date. Next, surround the pots with straw; put them in a mound of cow manure, light the straw and then top up the burning manure as required. In less than 60 minutes the filters are finished. The walls of the finished pot should be about as thick as an adults index. The properties of cow manure are vital as the fuel can reach a temperature of 700 degrees in half an hour and will be up to 950 degrees after another 20 to 30 minutes. The manure makes a good fuel because itis very high in organic material that bums readily and quickly; the manure has to be dry and is best used exactly as found in the field, there is no need to break it up or process it any further. D. A potters din is an expensive item and can could take up to four or five hours to get upto 800 degrees. It needs expensive or scarce fuel, such as gas or wood to heat it and experience to run it. 
With no technology, no insulation and nothing other than a pile of cow manure and a match, none of these restrictions apply, Mr. Flynn says. E. It is also helpful that, like terracotta clay and organic material, cow dung is freely available across the developing world. A cow is a natural fuel factory. My understanding is that cow dung as a fuel would be pretty much the same wherever you would find it. Just as using manure as a fuel for domestic uses is not a new idea, the porosity of clay is something that potters have known about for years, and something that as a former ceramics lecturer in the ANU School of Art, Mr. Flynn is well aware of. The difference is that rather than viewing the porous nature of the material as a problem after all not many people want a pot that wont hold water his filters capitalize on this property. F. Other commercial ceramic filters do exist, but, even if available, with prices starting at US$5 each, they are often outside the budgets of most people in the developing world. The filtration process is simple, but effective. The basic principle is that there are passages through the filter that are wide enough for water droplets to pass through, but too narrow for pathogens. Tests with the deadly E-coli bacterium have seen the filters remove 96.4 to 99.8 per cent of the pathogen well within safe levels. Using only one filter it takes two hours to filter a litre of water. The use of organic material, which burns away after firing, helps produce the structure in which pathogens will become trapped. It overcomes the potential problems of finer clays that may not let water through and also means that cracks are soon halted. And like clay and cow dung, it is universally available. G. The invention was born out of a World Vision project involving the Manatuto community in East Timor The charity wanted to help set up a small industry manufacturing water filters, but initial research found the local clay to be too fine a problem solved by the addition of organic material. While the AF problems of producing a working ceramic filter in East Timor were overcome, the solution was kiln-based and particular to that communitys materials and couldnt be applied elsewhere. Manure firing, with no requirement for a kiln, has made this zero technology approach available anywhere it is needed. With all the components being widely available, Mr. Flynn says there is no reason the technology couldnt be applied throughout the developing world, and with no plans to patent his idea, there will be no legal obstacles to it being adopted in any community that needs it. Everyone has a right to clean water, these filters have the potential to enable anyone in the world to drink water safely, says Mr. Flynn.", "hypothesis": "Coffee grounds are twice as effective as other materials.", "label": "n"} +{"uid": "id_394", "premise": "Water Filter. An ingenious invention is set to bring clean water to the third world, and while the science may be cutting edge, the materials are extremely down to earth. A handful of clay yesterdays coffee grounds and some cow manure are the ingredients that could bring clean, safe drinking water to much of the third world. B. The simple new technology, developed by ANU materials scientist Mr. Tony Flynn, allows water filters to be made from commonly available materials and fired on the ground using cow manure as the source of heat, without the need for a kiln. 
The filters have been tested and shown to remove common pathogens (disease-producing organisms) including E-coli. Unlike other water filtering devices, the filters are simple and inexpensive to make. They are very simple to explain and demonstrate and can be made by anyone, anywhere, says Mr. Flynn. They dont require any western technology. All you need is terracotta clay, a compliant cow and a match. C. The production of the filters is extremely simple. Take a handful of dry, crushed clay, mix it with a handful of organic material, such as used tea leaves, coffee grounds or rice hulls, add enough water to make a stiff biscuit-like mixture and form a cylindrical pot that has one end closed, then dry it in the sun. According to Mr. Flynn, used coffee grounds have given the best results to date. Next, surround the pots with straw; put them in a mound of cow manure, light the straw and then top up the burning manure as required. In less than 60 minutes the filters are finished. The walls of the finished pot should be about as thick as an adults index. The properties of cow manure are vital as the fuel can reach a temperature of 700 degrees in half an hour and will be up to 950 degrees after another 20 to 30 minutes. The manure makes a good fuel because itis very high in organic material that bums readily and quickly; the manure has to be dry and is best used exactly as found in the field, there is no need to break it up or process it any further. D. A potters din is an expensive item and can could take up to four or five hours to get upto 800 degrees. It needs expensive or scarce fuel, such as gas or wood to heat it and experience to run it. With no technology, no insulation and nothing other than a pile of cow manure and a match, none of these restrictions apply, Mr. Flynn says. E. It is also helpful that, like terracotta clay and organic material, cow dung is freely available across the developing world. A cow is a natural fuel factory. My understanding is that cow dung as a fuel would be pretty much the same wherever you would find it. Just as using manure as a fuel for domestic uses is not a new idea, the porosity of clay is something that potters have known about for years, and something that as a former ceramics lecturer in the ANU School of Art, Mr. Flynn is well aware of. The difference is that rather than viewing the porous nature of the material as a problem after all not many people want a pot that wont hold water his filters capitalize on this property. F. Other commercial ceramic filters do exist, but, even if available, with prices starting at US$5 each, they are often outside the budgets of most people in the developing world. The filtration process is simple, but effective. The basic principle is that there are passages through the filter that are wide enough for water droplets to pass through, but too narrow for pathogens. Tests with the deadly E-coli bacterium have seen the filters remove 96.4 to 99.8 per cent of the pathogen well within safe levels. Using only one filter it takes two hours to filter a litre of water. The use of organic material, which burns away after firing, helps produce the structure in which pathogens will become trapped. It overcomes the potential problems of finer clays that may not let water through and also means that cracks are soon halted. And like clay and cow dung, it is universally available. G. 
The invention was born out of a World Vision project involving the Manatuto community in East Timor The charity wanted to help set up a small industry manufacturing water filters, but initial research found the local clay to be too fine a problem solved by the addition of organic material. While the AF problems of producing a working ceramic filter in East Timor were overcome, the solution was kiln-based and particular to that communitys materials and couldnt be applied elsewhere. Manure firing, with no requirement for a kiln, has made this zero technology approach available anywhere it is needed. With all the components being widely available, Mr. Flynn says there is no reason the technology couldnt be applied throughout the developing world, and with no plans to patent his idea, there will be no legal obstacles to it being adopted in any community that needs it. Everyone has a right to clean water, these filters have the potential to enable anyone in the world to drink water safely, says Mr. Flynn.", "hypothesis": "It takes half an hour for the manure to reach 950 degrees.", "label": "c"} +{"uid": "id_395", "premise": "Water and chips break new ground Computers have been shrinking ever since their conception almost two centuries ago, and the trend is set to continue with the latest developments in microchip manufacturing. The earliest prototype of a mechanical computer was called the Difference Engine, and was invented by an eccentric Victorian called Charles Babbage. It weighed over 15 tons and had 26,000 parts. Colossus, the first electronic computer, did not appear until the end of WWTI, and with its 1,500 vacuum tubes was even more complex and much heavier than its mechanical predecessor. It was only when the silicon-based microchip was invented in the early 1950s that computers started to become more compact. The first microchip computers were very complex and had more than 100,000 transistors, or electronic switches; however, they were still rather bulky and measured several metres across. Nowadays microchips are measured in nanometres (nm)that is, in billionths of a metreand the search for even smaller microchips continues as scientists work on new methods of microchip production. Today, most microchips are shaped by a process called lithographic etching, which uses ultraviolet (UV) light. A beam of UV light with a wavelength of only 193 nm is projected through a lens on to an etching mask, a micro device with slits, or long narrow cuts. When the UV light hits the surface of silicon chips, it removes microscopic layers of silicon to create patterns for the microchips circuits. Microchips with features as small as 65 nm can be created with this wavelength. However, lithographic etching is unable to make chips much smaller than 65 nm due to the fundamental properties of light. If the slit in the mask were made narrower, the air and nitrogen used in the space between the lens and the etching mask would diffuse the light, causing a blurred image. This means that 193-nm UV light cannot be used to produce microchips with features smaller than 65 nm. Manufacturers know that they need to go even smaller for the technological demands of this century, and they are looking for new methods of making microchips. One approach to solving the problem is to use microscopic mirrors to focus X-rays rather than ultraviolet light. X-rays with a wavelength of less than 25 nm can be created, allowing engineers to make components smaller than 15 nm. The process is known as X-ray lithography etching. 
However, this technology is extremely expensive, so manufacturers are continuing to search for a cheaper alternative. A technology called immersion lithography might be the solution. Although liquids are not commonly associated with computers, a tiny drop of water may be all it takes to make microprocessors smaller and more powerful. Intel and IBM, who made the first microprocessors, have recently developed a unique method of microchip production, which uses water droplets to enable manufacturers to shrink the chipsand at a reasonable price! The new microchip is produced by using a drop of water to narrow the gap between the light source and the etching mask, and shorten the wavelength of the UV light to less than 34 nm. This process can be used to manufacture microchips as small as 45 nm, or possibly even smaller. Initially, engineers feared that air bubbles and other contaminants in water drops would distort the light and ruin the microchip etching process, and the first experiments proved these fears to be well-founded. The problem was overcome by using high-purity water, free of air and other substances. Scientists are also experimenting with liquids other than waterdenser liquids such as hydrofluoric acidwhich may allow the wavelength to be shrunk still further, thus producing even smaller chips. IBM have already successfully implemented immersion lithography on some of their production lines and created a fully-functioning microprocessor. IBM also claim that they are able to produce microchips with very few defects. Although immersion lithography is very new, it is highly promising as it will make the production of 45 nm and 32 nm chips commercially viable. It is a significant milestone in chip manufacturing and will help to bring the costs of the chip down without fundamentally changing the microchip production processes. In the near future, the ground-breaking technology of immersion lithography will enable computer manufacturers to make powerful microchips that will be used in electronic devices smaller than a coin. This will open up new opportunities in the ever-shrinking world of digital technology.", "hypothesis": "The first electronic computer weighed more than the first mechanical prototype.", "label": "e"} +{"uid": "id_396", "premise": "Water and chips break new ground Computers have been shrinking ever since their conception almost two centuries ago, and the trend is set to continue with the latest developments in microchip manufacturing. The earliest prototype of a mechanical computer was called the Difference Engine, and was invented by an eccentric Victorian called Charles Babbage. It weighed over 15 tons and had 26,000 parts. Colossus, the first electronic computer, did not appear until the end of WWTI, and with its 1,500 vacuum tubes was even more complex and much heavier than its mechanical predecessor. It was only when the silicon-based microchip was invented in the early 1950s that computers started to become more compact. The first microchip computers were very complex and had more than 100,000 transistors, or electronic switches; however, they were still rather bulky and measured several metres across. Nowadays microchips are measured in nanometres (nm)that is, in billionths of a metreand the search for even smaller microchips continues as scientists work on new methods of microchip production. Today, most microchips are shaped by a process called lithographic etching, which uses ultraviolet (UV) light. 
A beam of UV light with a wavelength of only 193 nm is projected through a lens on to an etching mask, a micro device with slits, or long narrow cuts. When the UV light hits the surface of silicon chips, it removes microscopic layers of silicon to create patterns for the microchips circuits. Microchips with features as small as 65 nm can be created with this wavelength. However, lithographic etching is unable to make chips much smaller than 65 nm due to the fundamental properties of light. If the slit in the mask were made narrower, the air and nitrogen used in the space between the lens and the etching mask would diffuse the light, causing a blurred image. This means that 193-nm UV light cannot be used to produce microchips with features smaller than 65 nm. Manufacturers know that they need to go even smaller for the technological demands of this century, and they are looking for new methods of making microchips. One approach to solving the problem is to use microscopic mirrors to focus X-rays rather than ultraviolet light. X-rays with a wavelength of less than 25 nm can be created, allowing engineers to make components smaller than 15 nm. The process is known as X-ray lithography etching. However, this technology is extremely expensive, so manufacturers are continuing to search for a cheaper alternative. A technology called immersion lithography might be the solution. Although liquids are not commonly associated with computers, a tiny drop of water may be all it takes to make microprocessors smaller and more powerful. Intel and IBM, who made the first microprocessors, have recently developed a unique method of microchip production, which uses water droplets to enable manufacturers to shrink the chipsand at a reasonable price! The new microchip is produced by using a drop of water to narrow the gap between the light source and the etching mask, and shorten the wavelength of the UV light to less than 34 nm. This process can be used to manufacture microchips as small as 45 nm, or possibly even smaller. Initially, engineers feared that air bubbles and other contaminants in water drops would distort the light and ruin the microchip etching process, and the first experiments proved these fears to be well-founded. The problem was overcome by using high-purity water, free of air and other substances. Scientists are also experimenting with liquids other than waterdenser liquids such as hydrofluoric acidwhich may allow the wavelength to be shrunk still further, thus producing even smaller chips. IBM have already successfully implemented immersion lithography on some of their production lines and created a fully-functioning microprocessor. IBM also claim that they are able to produce microchips with very few defects. Although immersion lithography is very new, it is highly promising as it will make the production of 45 nm and 32 nm chips commercially viable. It is a significant milestone in chip manufacturing and will help to bring the costs of the chip down without fundamentally changing the microchip production processes. In the near future, the ground-breaking technology of immersion lithography will enable computer manufacturers to make powerful microchips that will be used in electronic devices smaller than a coin. 
This will open up new opportunities in the ever-shrinking world of digital technology.", "hypothesis": "Computers started to shrink with the invention of the microchip.", "label": "e"} +{"uid": "id_397", "premise": "Water and chips break new ground Computers have been shrinking ever since their conception almost two centuries ago, and the trend is set to continue with the latest developments in microchip manufacturing. The earliest prototype of a mechanical computer was called the Difference Engine, and was invented by an eccentric Victorian called Charles Babbage. It weighed over 15 tons and had 26,000 parts. Colossus, the first electronic computer, did not appear until the end of WWTI, and with its 1,500 vacuum tubes was even more complex and much heavier than its mechanical predecessor. It was only when the silicon-based microchip was invented in the early 1950s that computers started to become more compact. The first microchip computers were very complex and had more than 100,000 transistors, or electronic switches; however, they were still rather bulky and measured several metres across. Nowadays microchips are measured in nanometres (nm)that is, in billionths of a metreand the search for even smaller microchips continues as scientists work on new methods of microchip production. Today, most microchips are shaped by a process called lithographic etching, which uses ultraviolet (UV) light. A beam of UV light with a wavelength of only 193 nm is projected through a lens on to an etching mask, a micro device with slits, or long narrow cuts. When the UV light hits the surface of silicon chips, it removes microscopic layers of silicon to create patterns for the microchips circuits. Microchips with features as small as 65 nm can be created with this wavelength. However, lithographic etching is unable to make chips much smaller than 65 nm due to the fundamental properties of light. If the slit in the mask were made narrower, the air and nitrogen used in the space between the lens and the etching mask would diffuse the light, causing a blurred image. This means that 193-nm UV light cannot be used to produce microchips with features smaller than 65 nm. Manufacturers know that they need to go even smaller for the technological demands of this century, and they are looking for new methods of making microchips. One approach to solving the problem is to use microscopic mirrors to focus X-rays rather than ultraviolet light. X-rays with a wavelength of less than 25 nm can be created, allowing engineers to make components smaller than 15 nm. The process is known as X-ray lithography etching. However, this technology is extremely expensive, so manufacturers are continuing to search for a cheaper alternative. A technology called immersion lithography might be the solution. Although liquids are not commonly associated with computers, a tiny drop of water may be all it takes to make microprocessors smaller and more powerful. Intel and IBM, who made the first microprocessors, have recently developed a unique method of microchip production, which uses water droplets to enable manufacturers to shrink the chipsand at a reasonable price! The new microchip is produced by using a drop of water to narrow the gap between the light source and the etching mask, and shorten the wavelength of the UV light to less than 34 nm. This process can be used to manufacture microchips as small as 45 nm, or possibly even smaller. 
Initially, engineers feared that air bubbles and other contaminants in water drops would distort the light and ruin the microchip etching process, and the first experiments proved these fears to be well-founded. The problem was overcome by using high-purity water, free of air and other substances. Scientists are also experimenting with liquids other than waterdenser liquids such as hydrofluoric acidwhich may allow the wavelength to be shrunk still further, thus producing even smaller chips. IBM have already successfully implemented immersion lithography on some of their production lines and created a fully-functioning microprocessor. IBM also claim that they are able to produce microchips with very few defects. Although immersion lithography is very new, it is highly promising as it will make the production of 45 nm and 32 nm chips commercially viable. It is a significant milestone in chip manufacturing and will help to bring the costs of the chip down without fundamentally changing the microchip production processes. In the near future, the ground-breaking technology of immersion lithography will enable computer manufacturers to make powerful microchips that will be used in electronic devices smaller than a coin. This will open up new opportunities in the ever-shrinking world of digital technology.", "hypothesis": "In early 1950s engineers used ultraviolet rays to build the first microchip.", "label": "n"} +{"uid": "id_398", "premise": "Water and chips break new ground Computers have been shrinking ever since their conception almost two centuries ago, and the trend is set to continue with the latest developments in microchip manufacturing. The earliest prototype of a mechanical computer was called the Difference Engine, and was invented by an eccentric Victorian called Charles Babbage. It weighed over 15 tons and had 26,000 parts. Colossus, the first electronic computer, did not appear until the end of WWTI, and with its 1,500 vacuum tubes was even more complex and much heavier than its mechanical predecessor. It was only when the silicon-based microchip was invented in the early 1950s that computers started to become more compact. The first microchip computers were very complex and had more than 100,000 transistors, or electronic switches; however, they were still rather bulky and measured several metres across. Nowadays microchips are measured in nanometres (nm)that is, in billionths of a metreand the search for even smaller microchips continues as scientists work on new methods of microchip production. Today, most microchips are shaped by a process called lithographic etching, which uses ultraviolet (UV) light. A beam of UV light with a wavelength of only 193 nm is projected through a lens on to an etching mask, a micro device with slits, or long narrow cuts. When the UV light hits the surface of silicon chips, it removes microscopic layers of silicon to create patterns for the microchips circuits. Microchips with features as small as 65 nm can be created with this wavelength. However, lithographic etching is unable to make chips much smaller than 65 nm due to the fundamental properties of light. If the slit in the mask were made narrower, the air and nitrogen used in the space between the lens and the etching mask would diffuse the light, causing a blurred image. This means that 193-nm UV light cannot be used to produce microchips with features smaller than 65 nm. 
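The limitation described above amounts to the smallest printable feature scaling with the wavelength of the light used. The sketch below captures that scaling with a simple proportional model calibrated to the passage's own pairing of 193 nm light with 65 nm features; the refractive index of water used to estimate the shortened in-liquid wavelength for immersion lithography (about 1.44) is an assumed value, not a figure given in the passage.

# Rough scaling model: minimum feature size proportional to wavelength,
# calibrated to the passage's figures (193 nm UV light -> ~65 nm features).
# The refractive index of water at this wavelength (~1.44) is an assumed value.

K = 65.0 / 193.0   # proportionality constant taken from the passage's numbers

def min_feature_nm(wavelength_nm):
    return K * wavelength_nm

def effective_wavelength_nm(vacuum_wavelength_nm, refractive_index):
    """Light slows down in a denser medium, shortening its wavelength."""
    return vacuum_wavelength_nm / refractive_index

eff_nm = effective_wavelength_nm(193.0, 1.44)     # shortened wavelength inside water
dry = min_feature_nm(193.0)                        # conventional lithographic etching
wet = min_feature_nm(eff_nm)                       # immersion lithography with water
xray = min_feature_nm(25.0)                        # X-ray lithography etching

print(f"dry 193 nm UV -> ~{dry:.0f} nm features")
print(f"immersion in water (~{eff_nm:.0f} nm effective wavelength) -> ~{wet:.0f} nm features")
print(f"25 nm X-rays -> ~{xray:.0f} nm features")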
Manufacturers know that they need to go even smaller for the technological demands of this century, and they are looking for new methods of making microchips. One approach to solving the problem is to use microscopic mirrors to focus X-rays rather than ultraviolet light. X-rays with a wavelength of less than 25 nm can be created, allowing engineers to make components smaller than 15 nm. The process is known as X-ray lithography etching. However, this technology is extremely expensive, so manufacturers are continuing to search for a cheaper alternative. A technology called immersion lithography might be the solution. Although liquids are not commonly associated with computers, a tiny drop of water may be all it takes to make microprocessors smaller and more powerful. Intel and IBM, who made the first microprocessors, have recently developed a unique method of microchip production, which uses water droplets to enable manufacturers to shrink the chipsand at a reasonable price! The new microchip is produced by using a drop of water to narrow the gap between the light source and the etching mask, and shorten the wavelength of the UV light to less than 34 nm. This process can be used to manufacture microchips as small as 45 nm, or possibly even smaller. Initially, engineers feared that air bubbles and other contaminants in water drops would distort the light and ruin the microchip etching process, and the first experiments proved these fears to be well-founded. The problem was overcome by using high-purity water, free of air and other substances. Scientists are also experimenting with liquids other than waterdenser liquids such as hydrofluoric acidwhich may allow the wavelength to be shrunk still further, thus producing even smaller chips. IBM have already successfully implemented immersion lithography on some of their production lines and created a fully-functioning microprocessor. IBM also claim that they are able to produce microchips with very few defects. Although immersion lithography is very new, it is highly promising as it will make the production of 45 nm and 32 nm chips commercially viable. It is a significant milestone in chip manufacturing and will help to bring the costs of the chip down without fundamentally changing the microchip production processes. In the near future, the ground-breaking technology of immersion lithography will enable computer manufacturers to make powerful microchips that will be used in electronic devices smaller than a coin. This will open up new opportunities in the ever-shrinking world of digital technology.", "hypothesis": "X-ray lithography is an inexpensive alternative technology to lithographic etching.", "label": "c"} +{"uid": "id_399", "premise": "Water and chips break new ground Computers have been shrinking ever since their conception almost two centuries ago, and the trend is set to continue with the latest developments in microchip manufacturing. The earliest prototype of a mechanical computer was called the Difference Engine, and was invented by an eccentric Victorian called Charles Babbage. It weighed over 15 tons and had 26,000 parts. Colossus, the first electronic computer, did not appear until the end of WWTI, and with its 1,500 vacuum tubes was even more complex and much heavier than its mechanical predecessor. It was only when the silicon-based microchip was invented in the early 1950s that computers started to become more compact. 
The first microchip computers were very complex and had more than 100,000 transistors, or electronic switches; however, they were still rather bulky and measured several metres across. Nowadays microchips are measured in nanometres (nm)that is, in billionths of a metreand the search for even smaller microchips continues as scientists work on new methods of microchip production. Today, most microchips are shaped by a process called lithographic etching, which uses ultraviolet (UV) light. A beam of UV light with a wavelength of only 193 nm is projected through a lens on to an etching mask, a micro device with slits, or long narrow cuts. When the UV light hits the surface of silicon chips, it removes microscopic layers of silicon to create patterns for the microchips circuits. Microchips with features as small as 65 nm can be created with this wavelength. However, lithographic etching is unable to make chips much smaller than 65 nm due to the fundamental properties of light. If the slit in the mask were made narrower, the air and nitrogen used in the space between the lens and the etching mask would diffuse the light, causing a blurred image. This means that 193-nm UV light cannot be used to produce microchips with features smaller than 65 nm. Manufacturers know that they need to go even smaller for the technological demands of this century, and they are looking for new methods of making microchips. One approach to solving the problem is to use microscopic mirrors to focus X-rays rather than ultraviolet light. X-rays with a wavelength of less than 25 nm can be created, allowing engineers to make components smaller than 15 nm. The process is known as X-ray lithography etching. However, this technology is extremely expensive, so manufacturers are continuing to search for a cheaper alternative. A technology called immersion lithography might be the solution. Although liquids are not commonly associated with computers, a tiny drop of water may be all it takes to make microprocessors smaller and more powerful. Intel and IBM, who made the first microprocessors, have recently developed a unique method of microchip production, which uses water droplets to enable manufacturers to shrink the chipsand at a reasonable price! The new microchip is produced by using a drop of water to narrow the gap between the light source and the etching mask, and shorten the wavelength of the UV light to less than 34 nm. This process can be used to manufacture microchips as small as 45 nm, or possibly even smaller. Initially, engineers feared that air bubbles and other contaminants in water drops would distort the light and ruin the microchip etching process, and the first experiments proved these fears to be well-founded. The problem was overcome by using high-purity water, free of air and other substances. Scientists are also experimenting with liquids other than waterdenser liquids such as hydrofluoric acidwhich may allow the wavelength to be shrunk still further, thus producing even smaller chips. IBM have already successfully implemented immersion lithography on some of their production lines and created a fully-functioning microprocessor. IBM also claim that they are able to produce microchips with very few defects. Although immersion lithography is very new, it is highly promising as it will make the production of 45 nm and 32 nm chips commercially viable. 
It is a significant milestone in chip manufacturing and will help to bring the costs of the chip down without fundamentally changing the microchip production processes. In the near future, the ground-breaking technology of immersion lithography will enable computer manufacturers to make powerful microchips that will be used in electronic devices smaller than a coin. This will open up new opportunities in the ever-shrinking world of digital technology.", "hypothesis": "Immersion lithography has enabled microchip manufacturers to produce higher quality computer chips.", "label": "e"} +{"uid": "id_400", "premise": "Water occupies 71% of our planet. About 96.5% of the water found on Earth is not readily available for human consumption, and resides in the oceans and seas. Out of the remaining 3.5%, 1.7% can be found in groundwater, 1.7% in glaciers and ice caps in Antarctica and Greenland and 0.001% in the air as vapour and clouds. Water moves continually through the water cycle of evaporation and transpiration, condensation, precipitation and runoff, usually reaching the sea. Whereas evaporation refers to the phase shift of any liquid to gas, transpiration is the process of water movement through a plant. Runoff is the flow of water over the Earth's surface. It's created when too much rain falls, and there is no more room in the soil to absorb more water. One of the reasons seas are salty is because they contain large amounts of highly soluble salts (such as sodium and chloride) which were washed away by runoff water on its way to the sea.", "hypothesis": "Taking into account the water found in ice caps in Greenland and Antarctica, water constitutes over 71% of our planet.", "label": "c"} +{"uid": "id_401", "premise": "Water occupies 71% of our planet. About 96.5% of the water found on Earth is not readily available for human consumption, and resides in the oceans and seas. Out of the remaining 3.5%, 1.7% can be found in groundwater, 1.7% in glaciers and ice caps in Antarctica and Greenland and 0.001% in the air as vapour and clouds. Water moves continually through the water cycle of evaporation and transpiration, condensation, precipitation and runoff, usually reaching the sea. Whereas evaporation refers to the phase shift of any liquid to gas, transpiration is the process of water movement through a plant. Runoff is the flow of water over the Earth's surface. It's created when too much rain falls, and there is no more room in the soil to absorb more water. One of the reasons seas are salty is because they contain large amounts of highly soluble salts (such as sodium and chloride) which were washed away by runoff water on its way to the sea.", "hypothesis": "Water content in glaciers is not considered to be groundwater.", "label": "e"} +{"uid": "id_402", "premise": "Water occupies 71% of our planet. About 96.5% of the water found on Earth is not readily available for human consumption, and resides in the oceans and seas. Out of the remaining 3.5%, 1.7% can be found in groundwater, 1.7% in glaciers and ice caps in Antarctica and Greenland and 0.001% in the air as vapour and clouds. Water moves continually through the water cycle of evaporation and transpiration, condensation, precipitation and runoff, usually reaching the sea. Whereas evaporation refers to the phase shift of any liquid to gas, transpiration is the process of water movement through a plant. Runoff is the flow of water over the Earth's surface. 
It's created when too much rain falls, and there is no more room in the soil to absorb more water. One of the reasons seas are salty is because they contain large amounts of highly soluble salts (such as sodium and chloride) which were washed away by runoff water on its way to the sea.", "hypothesis": "The runoff stage in the water cycle takes the longest amount of time.", "label": "n"} +{"uid": "id_403", "premise": "Water stress and scarcity Water stress and scarcity occur when there is an imbalance between the availability of water and the demand for water. When we hear people talking about water stress and scarcity, we often think of drought but this is only one of several causes. Alex Karpov, a representative from the WHO explains some of the other issues that also impact the availability of fresh water, The deterioration of ground water and surface water quality, competition for water between different segments of society, for example, between agricultural, industrial, and domestic users, and even social and financial barriers, are all causes of water stress and scarcity today. While approximately three quarters of the earth are covered with water only a small proportion of it is available as fresh water. Of the available fresh water supplies, nearly 70% is withdrawn and used for irrigation to produce food, and the demand just keep growing. Although there is currently no global scarcity of water, more and more regions of the world are chronically short of water. At present, 1.1 billion people have little choice but to use potentially harmful sources of water, and 2.6 billion people, which is around half the developing world, lack access to adequate sanitation. As Kathie Coles, an executive from the charity World of Water, describes, the situation will deteriorate. Over the next 20 years, an estimated 1.8 billion people will be living in countries or regions with an absolute water scarcity, and two-thirds of the world population may be under pressure conditions. This situation will only worsen, as rapidly growing urban areas place heavy pressure on water supplies. Of course, there have been different initiatives put into place around the world to help with water stress and scarcity. With larger scale projects, such as the construction of piper water systems, remain important objectives of many development agencies, a shortage of time and finances will leave hundreds of millions of people without access to safe water in the foreseeable future. Georgina Ronaldson, a spokesperson for the World Bank, recently announced a way to deal with the current difficulties. To help developing countries, various concerned organisations have developed the Safe Water System (SWS), which is an adaptable and flexible intervention that employs scientific methods appropriate for the developing world. The SWS has been criticised in various corners as being too amateurish, but Ronaldson continues to justify the approach. The use of relevant technologies is important, an in many places around the world, water provision efforts suffer from a lack of technical knowledge to effectively manage or adapt a system to a communitys changing needs. The SWS is a community-based, integrative approach to improving health and quality of life through increased access to improve water, sanitation and hygiene. Darren Stanford, a water quality engineer, explains the important three step methodology. The first is an assessment of water delivery system from catchment to consumer. 
The second is implementing appropriate interventions, which can include protection of source waters, improvements to the water deliver systems, introduction of SWS, improved sanitation and hygiene education. The third is the evaluation of the impact of the intervention of the health and quality of life of the consumers. One example of how poor water access can affect local populations is the problems of guinea worms in remote parts of Africa. This is a preventable parasitic infection that affects poor communities that lack safe drinking water. The infection is transmitted to people who drink water containing copepods (tiny water fleas) that are infected with the larvae of guinea worms. Once ingested these larvae take up to one year to grow into adult worms; the female worms then emerge from the skin anywhere on the body. Will Goodman, a doctor with WHO, says that this can affect communities in different ways. The emergence of the adult female worm can be very painful, slow and disabling and prevents people from working in their fields, tending their animals, going to school, and caring for their families. Currently many organisations are helping the last nine endemic countries (all in Sub-Saharan Africa) to eradicate guinea worm. Since the Guinea Worm Eradication programme began, the incidence of guinea worm has declined from 1.5 million cases per year in 20 endemic countries to 25,018 reported cases in 2008 from the nine remaining endemic countries. The eradication efforts make use of simple intervention for providing safe drinking water including using cloth filters and pipe filters to strain the infected copepods from water, applying chemicals to the water supplies to kill the larvae, and preventing infected people from entering and contaminating the water supplies, as the worms emerge from their skins. Providing borehole wells and other supplies of water in endemic village is another important component of the eradication efforts. The provision of borehole well is one of the principal aims of SWS. Many existing dug wells in communities only pierce the topsoil, do not reach deep enough and are therefore readily affected by drought or by the natural declines from summer to autumn in the water table. SWS borehole wells can pierce the bedrock and access a deeper aquifer with water that is not affected by surface drought. These are also unaffected by guinea worm infestation and water is much safer for human consumption.", "hypothesis": "Majority of water available on earth is drinkable.", "label": "c"} +{"uid": "id_404", "premise": "Water stress and scarcity Water stress and scarcity occur when there is an imbalance between the availability of water and the demand for water. When we hear people talking about water stress and scarcity, we often think of drought but this is only one of several causes. Alex Karpov, a representative from the WHO explains some of the other issues that also impact the availability of fresh water, The deterioration of ground water and surface water quality, competition for water between different segments of society, for example, between agricultural, industrial, and domestic users, and even social and financial barriers, are all causes of water stress and scarcity today. While approximately three quarters of the earth are covered with water only a small proportion of it is available as fresh water. Of the available fresh water supplies, nearly 70% is withdrawn and used for irrigation to produce food, and the demand just keep growing. 
Although there is currently no global scarcity of water, more and more regions of the world are chronically short of water. At present, 1.1 billion people have little choice but to use potentially harmful sources of water, and 2.6 billion people, which is around half the developing world, lack access to adequate sanitation. As Kathie Coles, an executive from the charity World of Water, describes, the situation will deteriorate. Over the next 20 years, an estimated 1.8 billion people will be living in countries or regions with an absolute water scarcity, and two-thirds of the world population may be under pressure conditions. This situation will only worsen, as rapidly growing urban areas place heavy pressure on water supplies. Of course, there have been different initiatives put into place around the world to help with water stress and scarcity. With larger scale projects, such as the construction of piper water systems, remain important objectives of many development agencies, a shortage of time and finances will leave hundreds of millions of people without access to safe water in the foreseeable future. Georgina Ronaldson, a spokesperson for the World Bank, recently announced a way to deal with the current difficulties. To help developing countries, various concerned organisations have developed the Safe Water System (SWS), which is an adaptable and flexible intervention that employs scientific methods appropriate for the developing world. The SWS has been criticised in various corners as being too amateurish, but Ronaldson continues to justify the approach. The use of relevant technologies is important, an in many places around the world, water provision efforts suffer from a lack of technical knowledge to effectively manage or adapt a system to a communitys changing needs. The SWS is a community-based, integrative approach to improving health and quality of life through increased access to improve water, sanitation and hygiene. Darren Stanford, a water quality engineer, explains the important three step methodology. The first is an assessment of water delivery system from catchment to consumer. The second is implementing appropriate interventions, which can include protection of source waters, improvements to the water deliver systems, introduction of SWS, improved sanitation and hygiene education. The third is the evaluation of the impact of the intervention of the health and quality of life of the consumers. One example of how poor water access can affect local populations is the problems of guinea worms in remote parts of Africa. This is a preventable parasitic infection that affects poor communities that lack safe drinking water. The infection is transmitted to people who drink water containing copepods (tiny water fleas) that are infected with the larvae of guinea worms. Once ingested these larvae take up to one year to grow into adult worms; the female worms then emerge from the skin anywhere on the body. Will Goodman, a doctor with WHO, says that this can affect communities in different ways. The emergence of the adult female worm can be very painful, slow and disabling and prevents people from working in their fields, tending their animals, going to school, and caring for their families. Currently many organisations are helping the last nine endemic countries (all in Sub-Saharan Africa) to eradicate guinea worm. 
Since the Guinea Worm Eradication programme began, the incidence of guinea worm has declined from 1.5 million cases per year in 20 endemic countries to 25,018 reported cases in 2008 from the nine remaining endemic countries. The eradication efforts make use of simple intervention for providing safe drinking water including using cloth filters and pipe filters to strain the infected copepods from water, applying chemicals to the water supplies to kill the larvae, and preventing infected people from entering and contaminating the water supplies, as the worms emerge from their skins. Providing borehole wells and other supplies of water in endemic village is another important component of the eradication efforts. The provision of borehole well is one of the principal aims of SWS. Many existing dug wells in communities only pierce the topsoil, do not reach deep enough and are therefore readily affected by drought or by the natural declines from summer to autumn in the water table. SWS borehole wells can pierce the bedrock and access a deeper aquifer with water that is not affected by surface drought. These are also unaffected by guinea worm infestation and water is much safer for human consumption.", "hypothesis": "SWS focuses on providing boreholes to eradicate the problem of guinea worms.", "label": "e"} +{"uid": "id_405", "premise": "Water stress and scarcity Water stress and scarcity occur when there is an imbalance between the availability of water and the demand for water. When we hear people talking about water stress and scarcity, we often think of drought but this is only one of several causes. Alex Karpov, a representative from the WHO explains some of the other issues that also impact the availability of fresh water, The deterioration of ground water and surface water quality, competition for water between different segments of society, for example, between agricultural, industrial, and domestic users, and even social and financial barriers, are all causes of water stress and scarcity today. While approximately three quarters of the earth are covered with water only a small proportion of it is available as fresh water. Of the available fresh water supplies, nearly 70% is withdrawn and used for irrigation to produce food, and the demand just keep growing. Although there is currently no global scarcity of water, more and more regions of the world are chronically short of water. At present, 1.1 billion people have little choice but to use potentially harmful sources of water, and 2.6 billion people, which is around half the developing world, lack access to adequate sanitation. As Kathie Coles, an executive from the charity World of Water, describes, the situation will deteriorate. Over the next 20 years, an estimated 1.8 billion people will be living in countries or regions with an absolute water scarcity, and two-thirds of the world population may be under pressure conditions. This situation will only worsen, as rapidly growing urban areas place heavy pressure on water supplies. Of course, there have been different initiatives put into place around the world to help with water stress and scarcity. With larger scale projects, such as the construction of piper water systems, remain important objectives of many development agencies, a shortage of time and finances will leave hundreds of millions of people without access to safe water in the foreseeable future. Georgina Ronaldson, a spokesperson for the World Bank, recently announced a way to deal with the current difficulties. 
To help developing countries, various concerned organisations have developed the Safe Water System (SWS), which is an adaptable and flexible intervention that employs scientific methods appropriate for the developing world. The SWS has been criticised in various corners as being too amateurish, but Ronaldson continues to justify the approach. The use of relevant technologies is important, an in many places around the world, water provision efforts suffer from a lack of technical knowledge to effectively manage or adapt a system to a communitys changing needs. The SWS is a community-based, integrative approach to improving health and quality of life through increased access to improve water, sanitation and hygiene. Darren Stanford, a water quality engineer, explains the important three step methodology. The first is an assessment of water delivery system from catchment to consumer. The second is implementing appropriate interventions, which can include protection of source waters, improvements to the water deliver systems, introduction of SWS, improved sanitation and hygiene education. The third is the evaluation of the impact of the intervention of the health and quality of life of the consumers. One example of how poor water access can affect local populations is the problems of guinea worms in remote parts of Africa. This is a preventable parasitic infection that affects poor communities that lack safe drinking water. The infection is transmitted to people who drink water containing copepods (tiny water fleas) that are infected with the larvae of guinea worms. Once ingested these larvae take up to one year to grow into adult worms; the female worms then emerge from the skin anywhere on the body. Will Goodman, a doctor with WHO, says that this can affect communities in different ways. The emergence of the adult female worm can be very painful, slow and disabling and prevents people from working in their fields, tending their animals, going to school, and caring for their families. Currently many organisations are helping the last nine endemic countries (all in Sub-Saharan Africa) to eradicate guinea worm. Since the Guinea Worm Eradication programme began, the incidence of guinea worm has declined from 1.5 million cases per year in 20 endemic countries to 25,018 reported cases in 2008 from the nine remaining endemic countries. The eradication efforts make use of simple intervention for providing safe drinking water including using cloth filters and pipe filters to strain the infected copepods from water, applying chemicals to the water supplies to kill the larvae, and preventing infected people from entering and contaminating the water supplies, as the worms emerge from their skins. Providing borehole wells and other supplies of water in endemic village is another important component of the eradication efforts. The provision of borehole well is one of the principal aims of SWS. Many existing dug wells in communities only pierce the topsoil, do not reach deep enough and are therefore readily affected by drought or by the natural declines from summer to autumn in the water table. SWS borehole wells can pierce the bedrock and access a deeper aquifer with water that is not affected by surface drought. 
These are also unaffected by guinea worm infestation and water is much safer for human consumption.", "hypothesis": "Guinea worm is only found in Africa.", "label": "n"} +{"uid": "id_406", "premise": "Water stress and scarcity Water stress and scarcity occur when there is an imbalance between the availability of water and the demand for water. When we hear people talking about water stress and scarcity, we often think of drought but this is only one of several causes. Alex Karpov, a representative from the WHO explains some of the other issues that also impact the availability of fresh water, The deterioration of ground water and surface water quality, competition for water between different segments of society, for example, between agricultural, industrial, and domestic users, and even social and financial barriers, are all causes of water stress and scarcity today. While approximately three quarters of the earth are covered with water only a small proportion of it is available as fresh water. Of the available fresh water supplies, nearly 70% is withdrawn and used for irrigation to produce food, and the demand just keep growing. Although there is currently no global scarcity of water, more and more regions of the world are chronically short of water. At present, 1.1 billion people have little choice but to use potentially harmful sources of water, and 2.6 billion people, which is around half the developing world, lack access to adequate sanitation. As Kathie Coles, an executive from the charity World of Water, describes, the situation will deteriorate. Over the next 20 years, an estimated 1.8 billion people will be living in countries or regions with an absolute water scarcity, and two-thirds of the world population may be under pressure conditions. This situation will only worsen, as rapidly growing urban areas place heavy pressure on water supplies. Of course, there have been different initiatives put into place around the world to help with water stress and scarcity. With larger scale projects, such as the construction of piper water systems, remain important objectives of many development agencies, a shortage of time and finances will leave hundreds of millions of people without access to safe water in the foreseeable future. Georgina Ronaldson, a spokesperson for the World Bank, recently announced a way to deal with the current difficulties. To help developing countries, various concerned organisations have developed the Safe Water System (SWS), which is an adaptable and flexible intervention that employs scientific methods appropriate for the developing world. The SWS has been criticised in various corners as being too amateurish, but Ronaldson continues to justify the approach. The use of relevant technologies is important, an in many places around the world, water provision efforts suffer from a lack of technical knowledge to effectively manage or adapt a system to a communitys changing needs. The SWS is a community-based, integrative approach to improving health and quality of life through increased access to improve water, sanitation and hygiene. Darren Stanford, a water quality engineer, explains the important three step methodology. The first is an assessment of water delivery system from catchment to consumer. The second is implementing appropriate interventions, which can include protection of source waters, improvements to the water deliver systems, introduction of SWS, improved sanitation and hygiene education. 
The third is the evaluation of the impact of the intervention of the health and quality of life of the consumers. One example of how poor water access can affect local populations is the problems of guinea worms in remote parts of Africa. This is a preventable parasitic infection that affects poor communities that lack safe drinking water. The infection is transmitted to people who drink water containing copepods (tiny water fleas) that are infected with the larvae of guinea worms. Once ingested these larvae take up to one year to grow into adult worms; the female worms then emerge from the skin anywhere on the body. Will Goodman, a doctor with WHO, says that this can affect communities in different ways. The emergence of the adult female worm can be very painful, slow and disabling and prevents people from working in their fields, tending their animals, going to school, and caring for their families. Currently many organisations are helping the last nine endemic countries (all in Sub-Saharan Africa) to eradicate guinea worm. Since the Guinea Worm Eradication programme began, the incidence of guinea worm has declined from 1.5 million cases per year in 20 endemic countries to 25,018 reported cases in 2008 from the nine remaining endemic countries. The eradication efforts make use of simple intervention for providing safe drinking water including using cloth filters and pipe filters to strain the infected copepods from water, applying chemicals to the water supplies to kill the larvae, and preventing infected people from entering and contaminating the water supplies, as the worms emerge from their skins. Providing borehole wells and other supplies of water in endemic village is another important component of the eradication efforts. The provision of borehole well is one of the principal aims of SWS. Many existing dug wells in communities only pierce the topsoil, do not reach deep enough and are therefore readily affected by drought or by the natural declines from summer to autumn in the water table. SWS borehole wells can pierce the bedrock and access a deeper aquifer with water that is not affected by surface drought. These are also unaffected by guinea worm infestation and water is much safer for human consumption.", "hypothesis": "One of main reasons behind declining availability of water is demand from different working segments of society.", "label": "e"} +{"uid": "id_407", "premise": "Water stress and scarcity Water stress and scarcity occur when there is an imbalance between the availability of water and the demand for water. When we hear people talking about water stress and scarcity, we often think of drought but this is only one of several causes. Alex Karpov, a representative from the WHO explains some of the other issues that also impact the availability of fresh water, The deterioration of ground water and surface water quality, competition for water between different segments of society, for example, between agricultural, industrial, and domestic users, and even social and financial barriers, are all causes of water stress and scarcity today. While approximately three quarters of the earth are covered with water only a small proportion of it is available as fresh water. Of the available fresh water supplies, nearly 70% is withdrawn and used for irrigation to produce food, and the demand just keep growing. Although there is currently no global scarcity of water, more and more regions of the world are chronically short of water. 
At present, 1.1 billion people have little choice but to use potentially harmful sources of water, and 2.6 billion people, which is around half the developing world, lack access to adequate sanitation. As Kathie Coles, an executive from the charity World of Water, describes, the situation will deteriorate. Over the next 20 years, an estimated 1.8 billion people will be living in countries or regions with an absolute water scarcity, and two-thirds of the world population may be under pressure conditions. This situation will only worsen, as rapidly growing urban areas place heavy pressure on water supplies. Of course, there have been different initiatives put into place around the world to help with water stress and scarcity. With larger scale projects, such as the construction of piper water systems, remain important objectives of many development agencies, a shortage of time and finances will leave hundreds of millions of people without access to safe water in the foreseeable future. Georgina Ronaldson, a spokesperson for the World Bank, recently announced a way to deal with the current difficulties. To help developing countries, various concerned organisations have developed the Safe Water System (SWS), which is an adaptable and flexible intervention that employs scientific methods appropriate for the developing world. The SWS has been criticised in various corners as being too amateurish, but Ronaldson continues to justify the approach. The use of relevant technologies is important, an in many places around the world, water provision efforts suffer from a lack of technical knowledge to effectively manage or adapt a system to a communitys changing needs. The SWS is a community-based, integrative approach to improving health and quality of life through increased access to improve water, sanitation and hygiene. Darren Stanford, a water quality engineer, explains the important three step methodology. The first is an assessment of water delivery system from catchment to consumer. The second is implementing appropriate interventions, which can include protection of source waters, improvements to the water deliver systems, introduction of SWS, improved sanitation and hygiene education. The third is the evaluation of the impact of the intervention of the health and quality of life of the consumers. One example of how poor water access can affect local populations is the problems of guinea worms in remote parts of Africa. This is a preventable parasitic infection that affects poor communities that lack safe drinking water. The infection is transmitted to people who drink water containing copepods (tiny water fleas) that are infected with the larvae of guinea worms. Once ingested these larvae take up to one year to grow into adult worms; the female worms then emerge from the skin anywhere on the body. Will Goodman, a doctor with WHO, says that this can affect communities in different ways. The emergence of the adult female worm can be very painful, slow and disabling and prevents people from working in their fields, tending their animals, going to school, and caring for their families. Currently many organisations are helping the last nine endemic countries (all in Sub-Saharan Africa) to eradicate guinea worm. Since the Guinea Worm Eradication programme began, the incidence of guinea worm has declined from 1.5 million cases per year in 20 endemic countries to 25,018 reported cases in 2008 from the nine remaining endemic countries. 
The eradication efforts make use of simple intervention for providing safe drinking water including using cloth filters and pipe filters to strain the infected copepods from water, applying chemicals to the water supplies to kill the larvae, and preventing infected people from entering and contaminating the water supplies, as the worms emerge from their skins. Providing borehole wells and other supplies of water in endemic village is another important component of the eradication efforts. The provision of borehole well is one of the principal aims of SWS. Many existing dug wells in communities only pierce the topsoil, do not reach deep enough and are therefore readily affected by drought or by the natural declines from summer to autumn in the water table. SWS borehole wells can pierce the bedrock and access a deeper aquifer with water that is not affected by surface drought. These are also unaffected by guinea worm infestation and water is much safer for human consumption.", "hypothesis": "The SWS has been appreciated worldwide.", "label": "c"} +{"uid": "id_408", "premise": "Water stress and scarcity Water stress and scarcity occur when there is an imbalance between the availability of water and the demand for water. When we hear people talking about water stress and scarcity, we often think of drought but this is only one of several causes. Alex Karpov, a representative from the WHO explains some of the other issues that also impact the availability of fresh water, The deterioration of ground water and surface water quality, competition for water between different segments of society, for example, between agricultural, industrial, and domestic users, and even social and financial barriers, are all causes of water stress and scarcity today. While approximately three quarters of the earth are covered with water only a small proportion of it is available as fresh water. Of the available fresh water supplies, nearly 70% is withdrawn and used for irrigation to produce food, and the demand just keep growing. Although there is currently no global scarcity of water, more and more regions of the world are chronically short of water. At present, 1.1 billion people have little choice but to use potentially harmful sources of water, and 2.6 billion people, which is around half the developing world, lack access to adequate sanitation. As Kathie Coles, an executive from the charity World of Water, describes, the situation will deteriorate. Over the next 20 years, an estimated 1.8 billion people will be living in countries or regions with an absolute water scarcity, and two-thirds of the world population may be under pressure conditions. This situation will only worsen, as rapidly growing urban areas place heavy pressure on water supplies. Of course, there have been different initiatives put into place around the world to help with water stress and scarcity. With larger scale projects, such as the construction of piper water systems, remain important objectives of many development agencies, a shortage of time and finances will leave hundreds of millions of people without access to safe water in the foreseeable future. Georgina Ronaldson, a spokesperson for the World Bank, recently announced a way to deal with the current difficulties. To help developing countries, various concerned organisations have developed the Safe Water System (SWS), which is an adaptable and flexible intervention that employs scientific methods appropriate for the developing world. 
The SWS has been criticised in various corners as being too amateurish, but Ronaldson continues to justify the approach. The use of relevant technologies is important, an in many places around the world, water provision efforts suffer from a lack of technical knowledge to effectively manage or adapt a system to a communitys changing needs. The SWS is a community-based, integrative approach to improving health and quality of life through increased access to improve water, sanitation and hygiene. Darren Stanford, a water quality engineer, explains the important three step methodology. The first is an assessment of water delivery system from catchment to consumer. The second is implementing appropriate interventions, which can include protection of source waters, improvements to the water deliver systems, introduction of SWS, improved sanitation and hygiene education. The third is the evaluation of the impact of the intervention of the health and quality of life of the consumers. One example of how poor water access can affect local populations is the problems of guinea worms in remote parts of Africa. This is a preventable parasitic infection that affects poor communities that lack safe drinking water. The infection is transmitted to people who drink water containing copepods (tiny water fleas) that are infected with the larvae of guinea worms. Once ingested these larvae take up to one year to grow into adult worms; the female worms then emerge from the skin anywhere on the body. Will Goodman, a doctor with WHO, says that this can affect communities in different ways. The emergence of the adult female worm can be very painful, slow and disabling and prevents people from working in their fields, tending their animals, going to school, and caring for their families. Currently many organisations are helping the last nine endemic countries (all in Sub-Saharan Africa) to eradicate guinea worm. Since the Guinea Worm Eradication programme began, the incidence of guinea worm has declined from 1.5 million cases per year in 20 endemic countries to 25,018 reported cases in 2008 from the nine remaining endemic countries. The eradication efforts make use of simple intervention for providing safe drinking water including using cloth filters and pipe filters to strain the infected copepods from water, applying chemicals to the water supplies to kill the larvae, and preventing infected people from entering and contaminating the water supplies, as the worms emerge from their skins. Providing borehole wells and other supplies of water in endemic village is another important component of the eradication efforts. The provision of borehole well is one of the principal aims of SWS. Many existing dug wells in communities only pierce the topsoil, do not reach deep enough and are therefore readily affected by drought or by the natural declines from summer to autumn in the water table. SWS borehole wells can pierce the bedrock and access a deeper aquifer with water that is not affected by surface drought. These are also unaffected by guinea worm infestation and water is much safer for human consumption.", "hypothesis": "SWS has been implemented in more than 30 countries.", "label": "n"} +{"uid": "id_409", "premise": "Water, the most common liquid used for cleaning, has a property called surface tension. Molecules in the body of the water are surrounded by other molecules, but at the surface a tension is created as molecules are only surrounded by other molecules on the waterside. 
This tension inhibits the cleaning process, as it slows the wetting of surface due to tension causing the water to bead up. This is where water droplets hold their shape and do not spread. For effective cleaning to take place surface tension must be reduced so that water can spread. Surface active agents, or surfactants, are chemicals, which are able to do this effectively.", "hypothesis": "The molecules on the waterside hinder the cleaning process.", "label": "c"} +{"uid": "id_410", "premise": "Water, the most common liquid used for cleaning, has a property called surface tension. Molecules in the body of the water are surrounded by other molecules, but at the surface a tension is created as molecules are only surrounded by other molecules on the waterside. This tension inhibits the cleaning process, as it slows the wetting of surface due to tension causing the water to bead up. This is where water droplets hold their shape and do not spread. For effective cleaning to take place surface tension must be reduced so that water can spread. Surface active agents, or surfactants, are chemicals, which are able to do this effectively.", "hypothesis": "Surface-active agents, or surfactants, are only used for cleaning.", "label": "n"} +{"uid": "id_411", "premise": "Water, the most common liquid used for cleaning, has a property called surface tension. Molecules in the body of the water are surrounded by other molecules, but at the surface a tension is created as molecules are only surrounded by other molecules on the waterside. This tension inhibits the cleaning process, as it slows the wetting of surface due to tension causing the water to bead up. This is where water droplets hold their shape and do not spread. For effective cleaning to take place surface tension must be reduced so that water can spread. Surface active agents, or surfactants, are chemicals, which are able to do this effectively.", "hypothesis": "Water is the only known liquid used for cleaning.", "label": "c"} +{"uid": "id_412", "premise": "Water, the most common liquid used for cleaning, has a property called surface tension. Molecules in the body of the water are surrounded by other molecules, but at the surface a tension is created as molecules are only surrounded by other molecules on the waterside. This tension inhibits the cleaning process, as it slows the wetting of surface due to tension causing the water to bead up. This is where water droplets hold their shape and do not spread. For effective cleaning to take place surface tension must be reduced so that water can spread. Surface active agents, or surfactants, are chemicals, which are able to do this effectively.", "hypothesis": "If surfactant chemicals are added to water when cleaning a surface, surface tension will occur.", "label": "c"} +{"uid": "id_413", "premise": "Water, the most common liquid used for cleaning, has a property called surface tension. Molecules in the body of the water are surrounded by other molecules, but at the surface a tension is created as molecules are only surrounded by other molecules on the waterside. This tension inhibits the cleaning process, as it slows the wetting of surface due to tension causing the water to bead up. This is where water droplets hold their shape and do not spread. For effective cleaning to take place surface tension must be reduced so that water can spread. 
Surface-active agents, or surfactants, are chemicals, which are able to do this effectively.", "hypothesis": "Surface-active agents, or surfactants, are only used for cleaning.", "label": "n"} +{"uid": "id_414", "premise": "Water, the most common liquid used for cleaning, has a property called surface tension. Molecules in the body of the water are surrounded by other molecules, but at the surface a tension is created as molecules are only surrounded by other molecules on the waterside. This tension inhibits the cleaning process, as it slows the wetting of surface due to tension causing the water to bead up. This is where water droplets hold their shape and do not spread. For effective cleaning to take place surface tension must be reduced so that water can spread. Surface-active agents, or surfactants, are chemicals, which are able to do this effectively.", "hypothesis": "Water is the only known liquid used for cleaning.", "label": "c"} +{"uid": "id_415", "premise": "Water, the most common liquid used for cleaning, has a property called surface tension. Molecules in the body of the water are surrounded by other molecules, but at the surface a tension is created as molecules are only surrounded by other molecules on the waterside. This tension inhibits the cleaning process, as it slows the wetting of surface due to tension causing the water to bead up. This is where water droplets hold their shape and do not spread. For effective cleaning to take place surface tension must be reduced so that water can spread. Surface-active agents, or surfactants, are chemicals, which are able to do this effectively.", "hypothesis": "If surfactant chemicals are added to water when cleaning a surface, surface tension will occur.", "label": "c"} +{"uid": "id_416", "premise": "Water, the most common liquid used for cleaning, has a property called surface tension. Molecules in the body of the water are surrounded by other molecules, but at the surface a tension is created as molecules are only surrounded by other molecules on the waterside. This tension inhibits the cleaning process, as it slows the wetting of surface due to tension causing the water to bead up. This is where water droplets hold their shape and do not spread. For effective cleaning to take place surface tension must be reduced so that water can spread. Surface-active agents, or surfactants, are chemicals, which are able to do this effectively.", "hypothesis": "The molecules on the waterside hinder the cleaning process.", "label": "e"} +{"uid": "id_417", "premise": "Water, the most common liquid used for cleaning, has a property called surface tension. Molecules in the body of the water are surrounded by other molecules, but at the surface a tension is created as molecules are only surrounded by other molecules on the waterside. This tension inhibits the cleaning process, as it slows the wetting of surfaces due to tension causing the water to bead up. This is where water droplets hold their shape and do not spread. For effective cleaning to take place surface tension must be reduced so that water can spread. Surface-active agents, or surfactants, are chemicals which are able to do this effectively.", "hypothesis": "Water is the only known liquid used for cleaning.", "label": "c"} +{"uid": "id_418", "premise": "Water, the most common liquid used for cleaning, has a property called surface tension. 
Molecules in the body of the water are surrounded by other molecules, but at the surface a tension is created as molecules are only surrounded by other molecules on the waterside. This tension inhibits the cleaning process, as it slows the wetting of surfaces due to tension causing the water to bead up. This is where water droplets hold their shape and do not spread. For effective cleaning to take place surface tension must be reduced so that water can spread. Surface-active agents, or surfactants, are chemicals which are able to do this effectively.", "hypothesis": "the molecules on the waterside hinder the cleaning process.", "label": "e"} +{"uid": "id_419", "premise": "Water, the most common liquid used for cleaning, has a property called surface tension. Molecules in the body of the water are surrounded by other molecules, but at the surface a tension is created as molecules are only surrounded by other molecules on the waterside. This tension inhibits the cleaning process, as it slows the wetting of surfaces due to tension causing the water to bead up. This is where water droplets hold their shape and do not spread. For effective cleaning to take place surface tension must be reduced so that water can spread. Surface-active agents, or surfactants, are chemicals which are able to do this effectively.", "hypothesis": "surface active agents, or surfactants, are only used for cleaning.", "label": "n"} +{"uid": "id_420", "premise": "Waterfalls Waterfalls are places where rivers or streams direct their flow over vertical drops. They have always been a lure for their scenic beauty or, in the case of the biggest, their ability to showcase natures might and majesty. Niagara Falls, on the border of Canada and America (discharging the most water of all), is a magnet for visitors, as is Victoria Falls, also straddling an international boundary between Zimbabwe and Zambia, and presenting the single largest sheet of falling water in the world. Similarly, the remoteness and inaccessibility of the highest waterfall, Angel Falls, located deep in the middle of the Venezuelan jungle, has not stopped it from becoming one of the countrys top tourist attractions. There are many possible causes of waterfalls, but a common one is differences in rock type. When a river flows over a resistant rock bed, erosion is slow, but with the complex geological faulting of the Earths surface, softer patches of rock can be exposed. The water cuts into this, resulting in a minor turbulence at the boundary, stirring up pebbles and grit from the riverbed, which increases the erosive capacity of the current. And so a process begins whereby the river takes on two tiers, or levels, and a waterfall is born. Other more abrupt causes of waterfalls are earthquakes or landslides, which create fault lines in the land, or divert watercourses, respectively. Additionally, during past ice ages, glaciers scoured out many deep basins. These glaciers may have disappeared, but their feeder rivers can continue to flow as waterfalls into the remaining depressions. Obviously then, waterfalls come in a variety of shapes and sizes, as different as the local geology in which they are found, and this has resulted in an abundance of descriptive terms. The word cataract refers simply to a large powerful waterfall, while a cascade descends a series of rock steps. 
If these steps are very distinct, it is a tiered waterfall, and if each step is larger still, of approximately the same size, and with a significant pool of water at each base, it is known as a multi-step waterfall. If the falling water engages with the rock face, it often widens, to be called a horsetail waterfall, while if it does not touch the rock face at all, it is a plunge waterfall often the most picturesque. Regardless of such differences, all waterfalls have in common a vertical height and average flow of water. These features, taken together, are a measure of the waterfalls power, quantified using a ten-point logarithmic scale. Giant falls, such as Niagara, are graded at the very top of this scale, find smaller falls, which may occur in town creeks, at the bottom. Another common feature of larger falls is a plunge pool. This is caused by the rubble at the base of the falls, which is stirred and broken into smaller pieces. In the never-ending eddies and whirlpools, these pieces scour out a deep underwater basin. An interesting consequence is that such falls are in the process of retreat, since the softer material at the lower face suffers undercutting. This gives rise to rock shelters behind the falling water, which steadily become larger until the roof collapses, and the waterfall retreats significantly backward into the Earth. Of course, to people at large, a waterfall seems fixed and forever. Erosion is indeed a slow process; however, given a sufficiently powerful waterfall and the right sort of rock, the retreat can be over a meter a year. This would be clearly observable over a persons life time, and a fast-motion view, spanning several decades, would see an essentially unchanged height of falling water burrowing backwards with surprising evenness. Since this motion is towards higher elevations or through more hilly terrain, a host of geological features can be laid in the waterfalls retreating path. Victoria Falls are a prime example, with its lower reaches characterised by spectacular islands, gorges, and rock formations. This retreat occasionally causes problems, as can be seen with Niagara Falls. In just over ten millennia, the falls have moved almost 11 kilometres upstream. Since the Niagara river marks the border of Canada and America, as agreed in 1819, the detectable retreat of these falls since that time technically means that the Canadian frontier has advanced forward at the expense of America, although this argument has obviously caused dispute. More practically, with so much infrastructure, such as hotels, roads, bridges, and scenic viewpoints, all rigidly established, it remains important to limit the erosion. For this reason, the exposed ridges of the falls have been extensively strengthened, and underwater barriers installed to divert the more erosive of river currents. The most ambitious erosion-control measure took place in 1969 on Niagaras American Falls, whose retreat was nibbling away at American territory. The branch of the Niagara river which feeds these subsidiary falls was dammed, allowing the main Horseshoe Falls to absorb the excess flow. The then-completely-dry-and-exposed river bottom and cliff face allowed a team of US-army engineers to use bolts, cement, and brackets, to strengthen any unstable rock. 
Five months later, the temporary dam was destroyed with explosives, returning water to the falls, but with the inexorable erosion process having been slowed considerably.", "hypothesis": "Glaciers have produced the most waterfalls.", "label": "n"} +{"uid": "id_421", "premise": "Waterfalls Waterfalls are places where rivers or streams direct their flow over vertical drops. They have always been a lure for their scenic beauty or, in the case of the biggest, their ability to showcase natures might and majesty. Niagara Falls, on the border of Canada and America (discharging the most water of all), is a magnet for visitors, as is Victoria Falls, also straddling an international boundary between Zimbabwe and Zambia, and presenting the single largest sheet of falling water in the world. Similarly, the remoteness and inaccessibility of the highest waterfall, Angel Falls, located deep in the middle of the Venezuelan jungle, has not stopped it from becoming one of the countrys top tourist attractions. There are many possible causes of waterfalls, but a common one is differences in rock type. When a river flows over a resistant rock bed, erosion is slow, but with the complex geological faulting of the Earths surface, softer patches of rock can be exposed. The water cuts into this, resulting in a minor turbulence at the boundary, stirring up pebbles and grit from the riverbed, which increases the erosive capacity of the current. And so a process begins whereby the river takes on two tiers, or levels, and a waterfall is born. Other more abrupt causes of waterfalls are earthquakes or landslides, which create fault lines in the land, or divert watercourses, respectively. Additionally, during past ice ages, glaciers scoured out many deep basins. These glaciers may have disappeared, but their feeder rivers can continue to flow as waterfalls into the remaining depressions. Obviously then, waterfalls come in a variety of shapes and sizes, as different as the local geology in which they are found, and this has resulted in an abundance of descriptive terms. The word cataract refers simply to a large powerful waterfall, while a cascade descends a series of rock steps. If these steps are very distinct, it is a tiered waterfall, and if each step is larger still, of approximately the same size, and with a significant pool of water at each base, it is known as a multi-step waterfall. If the falling water engages with the rock face, it often widens, to be called a horsetail waterfall, while if it does not touch the rock face at all, it is a plunge waterfall often the most picturesque. Regardless of such differences, all waterfalls have in common a vertical height and average flow of water. These features, taken together, are a measure of the waterfalls power, quantified using a ten-point logarithmic scale. Giant falls, such as Niagara, are graded at the very top of this scale, find smaller falls, which may occur in town creeks, at the bottom. Another common feature of larger falls is a plunge pool. This is caused by the rubble at the base of the falls, which is stirred and broken into smaller pieces. In the never-ending eddies and whirlpools, these pieces scour out a deep underwater basin. An interesting consequence is that such falls are in the process of retreat, since the softer material at the lower face suffers undercutting. This gives rise to rock shelters behind the falling water, which steadily become larger until the roof collapses, and the waterfall retreats significantly backward into the Earth. 
Of course, to people at large, a waterfall seems fixed and forever. Erosion is indeed a slow process; however, given a sufficiently powerful waterfall and the right sort of rock, the retreat can be over a meter a year. This would be clearly observable over a persons life time, and a fast-motion view, spanning several decades, would see an essentially unchanged height of falling water burrowing backwards with surprising evenness. Since this motion is towards higher elevations or through more hilly terrain, a host of geological features can be laid in the waterfalls retreating path. Victoria Falls are a prime example, with its lower reaches characterised by spectacular islands, gorges, and rock formations. This retreat occasionally causes problems, as can be seen with Niagara Falls. In just over ten millennia, the falls have moved almost 11 kilometres upstream. Since the Niagara river marks the border of Canada and America, as agreed in 1819, the detectable retreat of these falls since that time technically means that the Canadian frontier has advanced forward at the expense of America, although this argument has obviously caused dispute. More practically, with so much infrastructure, such as hotels, roads, bridges, and scenic viewpoints, all rigidly established, it remains important to limit the erosion. For this reason, the exposed ridges of the falls have been extensively strengthened, and underwater barriers installed to divert the more erosive of river currents. The most ambitious erosion-control measure took place in 1969 on Niagaras American Falls, whose retreat was nibbling away at American territory. The branch of the Niagara river which feeds these subsidiary falls was dammed, allowing the main Horseshoe Falls to absorb the excess flow. The then-completely-dry-and-exposed river bottom and cliff face allowed a team of US-army engineers to use bolts, cement, and brackets, to strengthen any unstable rock. Five months later, the temporary dam was destroyed with explosives, returning water to the falls, but with the inexorable erosion process having been slowed considerably.", "hypothesis": "Landslides can create waterfalls faster than erosion.", "label": "e"} +{"uid": "id_422", "premise": "Waterfalls Waterfalls are places where rivers or streams direct their flow over vertical drops. They have always been a lure for their scenic beauty or, in the case of the biggest, their ability to showcase natures might and majesty. Niagara Falls, on the border of Canada and America (discharging the most water of all), is a magnet for visitors, as is Victoria Falls, also straddling an international boundary between Zimbabwe and Zambia, and presenting the single largest sheet of falling water in the world. Similarly, the remoteness and inaccessibility of the highest waterfall, Angel Falls, located deep in the middle of the Venezuelan jungle, has not stopped it from becoming one of the countrys top tourist attractions. There are many possible causes of waterfalls, but a common one is differences in rock type. When a river flows over a resistant rock bed, erosion is slow, but with the complex geological faulting of the Earths surface, softer patches of rock can be exposed. The water cuts into this, resulting in a minor turbulence at the boundary, stirring up pebbles and grit from the riverbed, which increases the erosive capacity of the current. And so a process begins whereby the river takes on two tiers, or levels, and a waterfall is born. 
Other more abrupt causes of waterfalls are earthquakes or landslides, which create fault lines in the land, or divert watercourses, respectively. Additionally, during past ice ages, glaciers scoured out many deep basins. These glaciers may have disappeared, but their feeder rivers can continue to flow as waterfalls into the remaining depressions. Obviously then, waterfalls come in a variety of shapes and sizes, as different as the local geology in which they are found, and this has resulted in an abundance of descriptive terms. The word cataract refers simply to a large powerful waterfall, while a cascade descends a series of rock steps. If these steps are very distinct, it is a tiered waterfall, and if each step is larger still, of approximately the same size, and with a significant pool of water at each base, it is known as a multi-step waterfall. If the falling water engages with the rock face, it often widens, to be called a horsetail waterfall, while if it does not touch the rock face at all, it is a plunge waterfall often the most picturesque. Regardless of such differences, all waterfalls have in common a vertical height and average flow of water. These features, taken together, are a measure of the waterfalls power, quantified using a ten-point logarithmic scale. Giant falls, such as Niagara, are graded at the very top of this scale, find smaller falls, which may occur in town creeks, at the bottom. Another common feature of larger falls is a plunge pool. This is caused by the rubble at the base of the falls, which is stirred and broken into smaller pieces. In the never-ending eddies and whirlpools, these pieces scour out a deep underwater basin. An interesting consequence is that such falls are in the process of retreat, since the softer material at the lower face suffers undercutting. This gives rise to rock shelters behind the falling water, which steadily become larger until the roof collapses, and the waterfall retreats significantly backward into the Earth. Of course, to people at large, a waterfall seems fixed and forever. Erosion is indeed a slow process; however, given a sufficiently powerful waterfall and the right sort of rock, the retreat can be over a meter a year. This would be clearly observable over a persons life time, and a fast-motion view, spanning several decades, would see an essentially unchanged height of falling water burrowing backwards with surprising evenness. Since this motion is towards higher elevations or through more hilly terrain, a host of geological features can be laid in the waterfalls retreating path. Victoria Falls are a prime example, with its lower reaches characterised by spectacular islands, gorges, and rock formations. This retreat occasionally causes problems, as can be seen with Niagara Falls. In just over ten millennia, the falls have moved almost 11 kilometres upstream. Since the Niagara river marks the border of Canada and America, as agreed in 1819, the detectable retreat of these falls since that time technically means that the Canadian frontier has advanced forward at the expense of America, although this argument has obviously caused dispute. More practically, with so much infrastructure, such as hotels, roads, bridges, and scenic viewpoints, all rigidly established, it remains important to limit the erosion. For this reason, the exposed ridges of the falls have been extensively strengthened, and underwater barriers installed to divert the more erosive of river currents. 
The most ambitious erosion-control measure took place in 1969 on Niagaras American Falls, whose retreat was nibbling away at American territory. The branch of the Niagara river which feeds these subsidiary falls was dammed, allowing the main Horseshoe Falls to absorb the excess flow. The then-completely-dry-and-exposed river bottom and cliff face allowed a team of US-army engineers to use bolts, cement, and brackets, to strengthen any unstable rock. Five months later, the temporary dam was destroyed with explosives, returning water to the falls, but with the inexorable erosion process having been slowed considerably.", "hypothesis": "Niagara, Victoria, and Angel Falls are on international boundaries.", "label": "c"} +{"uid": "id_423", "premise": "Waterfalls Waterfalls are places where rivers or streams direct their flow over vertical drops. They have always been a lure for their scenic beauty or, in the case of the biggest, their ability to showcase natures might and majesty. Niagara Falls, on the border of Canada and America (discharging the most water of all), is a magnet for visitors, as is Victoria Falls, also straddling an international boundary between Zimbabwe and Zambia, and presenting the single largest sheet of falling water in the world. Similarly, the remoteness and inaccessibility of the highest waterfall, Angel Falls, located deep in the middle of the Venezuelan jungle, has not stopped it from becoming one of the countrys top tourist attractions. There are many possible causes of waterfalls, but a common one is differences in rock type. When a river flows over a resistant rock bed, erosion is slow, but with the complex geological faulting of the Earths surface, softer patches of rock can be exposed. The water cuts into this, resulting in a minor turbulence at the boundary, stirring up pebbles and grit from the riverbed, which increases the erosive capacity of the current. And so a process begins whereby the river takes on two tiers, or levels, and a waterfall is born. Other more abrupt causes of waterfalls are earthquakes or landslides, which create fault lines in the land, or divert watercourses, respectively. Additionally, during past ice ages, glaciers scoured out many deep basins. These glaciers may have disappeared, but their feeder rivers can continue to flow as waterfalls into the remaining depressions. Obviously then, waterfalls come in a variety of shapes and sizes, as different as the local geology in which they are found, and this has resulted in an abundance of descriptive terms. The word cataract refers simply to a large powerful waterfall, while a cascade descends a series of rock steps. If these steps are very distinct, it is a tiered waterfall, and if each step is larger still, of approximately the same size, and with a significant pool of water at each base, it is known as a multi-step waterfall. If the falling water engages with the rock face, it often widens, to be called a horsetail waterfall, while if it does not touch the rock face at all, it is a plunge waterfall often the most picturesque. Regardless of such differences, all waterfalls have in common a vertical height and average flow of water. These features, taken together, are a measure of the waterfalls power, quantified using a ten-point logarithmic scale. Giant falls, such as Niagara, are graded at the very top of this scale, find smaller falls, which may occur in town creeks, at the bottom. Another common feature of larger falls is a plunge pool. 
This is caused by the rubble at the base of the falls, which is stirred and broken into smaller pieces. In the never-ending eddies and whirlpools, these pieces scour out a deep underwater basin. An interesting consequence is that such falls are in the process of retreat, since the softer material at the lower face suffers undercutting. This gives rise to rock shelters behind the falling water, which steadily become larger until the roof collapses, and the waterfall retreats significantly backward into the Earth. Of course, to people at large, a waterfall seems fixed and forever. Erosion is indeed a slow process; however, given a sufficiently powerful waterfall and the right sort of rock, the retreat can be over a meter a year. This would be clearly observable over a persons life time, and a fast-motion view, spanning several decades, would see an essentially unchanged height of falling water burrowing backwards with surprising evenness. Since this motion is towards higher elevations or through more hilly terrain, a host of geological features can be laid in the waterfalls retreating path. Victoria Falls are a prime example, with its lower reaches characterised by spectacular islands, gorges, and rock formations. This retreat occasionally causes problems, as can be seen with Niagara Falls. In just over ten millennia, the falls have moved almost 11 kilometres upstream. Since the Niagara river marks the border of Canada and America, as agreed in 1819, the detectable retreat of these falls since that time technically means that the Canadian frontier has advanced forward at the expense of America, although this argument has obviously caused dispute. More practically, with so much infrastructure, such as hotels, roads, bridges, and scenic viewpoints, all rigidly established, it remains important to limit the erosion. For this reason, the exposed ridges of the falls have been extensively strengthened, and underwater barriers installed to divert the more erosive of river currents. The most ambitious erosion-control measure took place in 1969 on Niagaras American Falls, whose retreat was nibbling away at American territory. The branch of the Niagara river which feeds these subsidiary falls was dammed, allowing the main Horseshoe Falls to absorb the excess flow. The then-completely-dry-and-exposed river bottom and cliff face allowed a team of US-army engineers to use bolts, cement, and brackets, to strengthen any unstable rock. Five months later, the temporary dam was destroyed with explosives, returning water to the falls, but with the inexorable erosion process having been slowed considerably.", "hypothesis": "Niagara is a Grade Ten waterfall.", "label": "e"} +{"uid": "id_424", "premise": "Waterfalls Waterfalls are places where rivers or streams direct their flow over vertical drops. They have always been a lure for their scenic beauty or, in the case of the biggest, their ability to showcase natures might and majesty. Niagara Falls, on the border of Canada and America (discharging the most water of all), is a magnet for visitors, as is Victoria Falls, also straddling an international boundary between Zimbabwe and Zambia, and presenting the single largest sheet of falling water in the world. Similarly, the remoteness and inaccessibility of the highest waterfall, Angel Falls, located deep in the middle of the Venezuelan jungle, has not stopped it from becoming one of the countrys top tourist attractions. There are many possible causes of waterfalls, but a common one is differences in rock type. 
When a river flows over a resistant rock bed, erosion is slow, but with the complex geological faulting of the Earths surface, softer patches of rock can be exposed. The water cuts into this, resulting in a minor turbulence at the boundary, stirring up pebbles and grit from the riverbed, which increases the erosive capacity of the current. And so a process begins whereby the river takes on two tiers, or levels, and a waterfall is born. Other more abrupt causes of waterfalls are earthquakes or landslides, which create fault lines in the land, or divert watercourses, respectively. Additionally, during past ice ages, glaciers scoured out many deep basins. These glaciers may have disappeared, but their feeder rivers can continue to flow as waterfalls into the remaining depressions. Obviously then, waterfalls come in a variety of shapes and sizes, as different as the local geology in which they are found, and this has resulted in an abundance of descriptive terms. The word cataract refers simply to a large powerful waterfall, while a cascade descends a series of rock steps. If these steps are very distinct, it is a tiered waterfall, and if each step is larger still, of approximately the same size, and with a significant pool of water at each base, it is known as a multi-step waterfall. If the falling water engages with the rock face, it often widens, to be called a horsetail waterfall, while if it does not touch the rock face at all, it is a plunge waterfall often the most picturesque. Regardless of such differences, all waterfalls have in common a vertical height and average flow of water. These features, taken together, are a measure of the waterfalls power, quantified using a ten-point logarithmic scale. Giant falls, such as Niagara, are graded at the very top of this scale, find smaller falls, which may occur in town creeks, at the bottom. Another common feature of larger falls is a plunge pool. This is caused by the rubble at the base of the falls, which is stirred and broken into smaller pieces. In the never-ending eddies and whirlpools, these pieces scour out a deep underwater basin. An interesting consequence is that such falls are in the process of retreat, since the softer material at the lower face suffers undercutting. This gives rise to rock shelters behind the falling water, which steadily become larger until the roof collapses, and the waterfall retreats significantly backward into the Earth. Of course, to people at large, a waterfall seems fixed and forever. Erosion is indeed a slow process; however, given a sufficiently powerful waterfall and the right sort of rock, the retreat can be over a meter a year. This would be clearly observable over a persons life time, and a fast-motion view, spanning several decades, would see an essentially unchanged height of falling water burrowing backwards with surprising evenness. Since this motion is towards higher elevations or through more hilly terrain, a host of geological features can be laid in the waterfalls retreating path. Victoria Falls are a prime example, with its lower reaches characterised by spectacular islands, gorges, and rock formations. This retreat occasionally causes problems, as can be seen with Niagara Falls. In just over ten millennia, the falls have moved almost 11 kilometres upstream. 
Since the Niagara river marks the border of Canada and America, as agreed in 1819, the detectable retreat of these falls since that time technically means that the Canadian frontier has advanced forward at the expense of America, although this argument has obviously caused dispute. More practically, with so much infrastructure, such as hotels, roads, bridges, and scenic viewpoints, all rigidly established, it remains important to limit the erosion. For this reason, the exposed ridges of the falls have been extensively strengthened, and underwater barriers installed to divert the more erosive of river currents. The most ambitious erosion-control measure took place in 1969 on Niagaras American Falls, whose retreat was nibbling away at American territory. The branch of the Niagara river which feeds these subsidiary falls was dammed, allowing the main Horseshoe Falls to absorb the excess flow. The then-completely-dry-and-exposed river bottom and cliff face allowed a team of US-army engineers to use bolts, cement, and brackets, to strengthen any unstable rock. Five months later, the temporary dam was destroyed with explosives, returning water to the falls, but with the inexorable erosion process having been slowed considerably.", "hypothesis": "A tiered waterfall has the largest steps.", "label": "c"} +{"uid": "id_425", "premise": "Waterways. At the height of the Industrial Revolution in the mid-19th century, huge quantities of coal had to be transported from the pithead for iron smelting, manufacturing and domestic use. Coastal shipping, navigable rivers and horse-drawn carts were either slow or restrictive in comparison to the new purpose-built canals. A horse could pull a narrowboat weighing 50 times as much as a cart. The UK soon developed a national network of canals and by the middle of the 19th century almost all major towns and cities had a canal. At the same time, there was controversy as to the rival merits of transporting coal by canal or by railway. Stephensons locomotive could transport vast quantities of coal and other goods more quickly than by canal and also offered a new means of passenger transport. The canal network was doomed, and investment was redirected into railways, with local lines laid down in the coal districts developed into a national system for the whole of the country. Road haulage in the 20th century brought more competition for canals, and only a few remained open until the Second World War. Further declines were inevitable, and the use of canals for industrial purposes was minimal in the 1960s. However, interest in canals for leisure purposes had begun to grow, and some were restored and reopened by volun- teers in the 1970s. This trend has continued, with canals attracting government funding for restoration projects. Canals are now a major tourist industry, with more than 10 million visitors per year and 30,000 craft. Today there are more boats on the canals than at the height of the Industrial Revolution.", "hypothesis": "There are more narrowboats on the canals today than at the height of the Industrial Revolution.", "label": "n"} +{"uid": "id_426", "premise": "Waterways. At the height of the Industrial Revolution in the mid-19th century, huge quantities of coal had to be transported from the pithead for iron smelting, manufacturing and domestic use. Coastal shipping, navigable rivers and horse-drawn carts were either slow or restrictive in comparison to the new purpose-built canals. A horse could pull a narrowboat weighing 50 times as much as a cart. 
The UK soon developed a national network of canals and by the middle of the 19th century almost all major towns and cities had a canal. At the same time, there was controversy as to the rival merits of transporting coal by canal or by railway. Stephensons locomotive could transport vast quantities of coal and other goods more quickly than by canal and also offered a new means of passenger transport. The canal network was doomed, and investment was redirected into railways, with local lines laid down in the coal districts developed into a national system for the whole of the country. Road haulage in the 20th century brought more competition for canals, and only a few remained open until the Second World War. Further declines were inevitable, and the use of canals for industrial purposes was minimal in the 1960s. However, interest in canals for leisure purposes had begun to grow, and some were restored and reopened by volun- teers in the 1970s. This trend has continued, with canals attracting government funding for restoration projects. Canals are now a major tourist industry, with more than 10 million visitors per year and 30,000 craft. Today there are more boats on the canals than at the height of the Industrial Revolution.", "hypothesis": "A network of canals was in place before a national system of railways.", "label": "e"} +{"uid": "id_427", "premise": "Waterways. At the height of the Industrial Revolution in the mid-19th century, huge quantities of coal had to be transported from the pithead for iron smelting, manufacturing and domestic use. Coastal shipping, navigable rivers and horse-drawn carts were either slow or restrictive in comparison to the new purpose-built canals. A horse could pull a narrowboat weighing 50 times as much as a cart. The UK soon developed a national network of canals and by the middle of the 19th century almost all major towns and cities had a canal. At the same time, there was controversy as to the rival merits of transporting coal by canal or by railway. Stephensons locomotive could transport vast quantities of coal and other goods more quickly than by canal and also offered a new means of passenger transport. The canal network was doomed, and investment was redirected into railways, with local lines laid down in the coal districts developed into a national system for the whole of the country. Road haulage in the 20th century brought more competition for canals, and only a few remained open until the Second World War. Further declines were inevitable, and the use of canals for industrial purposes was minimal in the 1960s. However, interest in canals for leisure purposes had begun to grow, and some were restored and reopened by volun- teers in the 1970s. This trend has continued, with canals attracting government funding for restoration projects. Canals are now a major tourist industry, with more than 10 million visitors per year and 30,000 craft. Today there are more boats on the canals than at the height of the Industrial Revolution.", "hypothesis": "The 1960s saw more interest in canals for leisure than for industry.", "label": "n"} +{"uid": "id_428", "premise": "Waterways. At the height of the Industrial Revolution in the mid-19th century, huge quantities of coal had to be transported from the pithead for iron smelting, manufacturing and domestic use. Coastal shipping, navigable rivers and horse-drawn carts were either slow or restrictive in comparison to the new purpose-built canals. A horse could pull a narrowboat weighing 50 times as much as a cart. 
The UK soon developed a national network of canals and by the middle of the 19th century almost all major towns and cities had a canal. At the same time, there was controversy as to the rival merits of transporting coal by canal or by railway. Stephensons locomotive could transport vast quantities of coal and other goods more quickly than by canal and also offered a new means of passenger transport. The canal network was doomed, and investment was redirected into railways, with local lines laid down in the coal districts developed into a national system for the whole of the country. Road haulage in the 20th century brought more competition for canals, and only a few remained open until the Second World War. Further declines were inevitable, and the use of canals for industrial purposes was minimal in the 1960s. However, interest in canals for leisure purposes had begun to grow, and some were restored and reopened by volun- teers in the 1970s. This trend has continued, with canals attracting government funding for restoration projects. Canals are now a major tourist industry, with more than 10 million visitors per year and 30,000 craft. Today there are more boats on the canals than at the height of the Industrial Revolution.", "hypothesis": "Stephensons locomotive succeeded because it could transport vast quantities of coal.", "label": "c"} +{"uid": "id_429", "premise": "Waves become swell when they leave the area of wind in which they were generated. Long after the wind that created it has stopped blowing, swell can continue to travel for thousands of miles and have a life span dependent on its wave length and the extent of ocean. The longer the wave the faster it travels and given sufficient sea room the longer it continues to travel. Wind can generate waves that travel faster than the wind itself and after a few hours of blowing the wave can be a long way ahead of the wind. At sea the arrival of a swell can be an indication of bad weather to come. If a long low swell arrives and it steadily increases in height then you should prepare for an approaching gale. If the swell remains long and low then it is likely that the wind that generated it is a long way away and you will escape it. Sometimes a swell generated far away crosses the waves generated by another wind. This can lead to a confused and in the extreme a dangerous sea state.", "hypothesis": "The sentence wind can generate waves that travel faster than the wind itself and after a few hours of blowing the wave can be a long way ahead of the wind would be more correct if it read Wind can generate waves that travel faster than the wind itself and after a few hours of blowing the swell can be a long way ahead of the wind.", "label": "e"} +{"uid": "id_430", "premise": "Waves become swell when they leave the area of wind in which they were generated. Long after the wind that created it has stopped blowing, swell can continue to travel for thousands of miles and have a life span dependent on its wave length and the extent of ocean. The longer the wave the faster it travels and given sufficient sea room the longer it continues to travel. Wind can generate waves that travel faster than the wind itself and after a few hours of blowing the wave can be a long way ahead of the wind. At sea the arrival of a swell can be an indication of bad weather to come. If a long low swell arrives and it steadily increases in height then you should prepare for an approaching gale. 
If the swell remains long and low then it is likely that the wind that generated it is a long way away and you will escape it. Sometimes a swell generated far away crosses the waves generated by another wind. This can lead to a confused and in the extreme a dangerous sea state.", "hypothesis": "The views expressed in the passage are a statement of the findings of experimental investigations.", "label": "n"} +{"uid": "id_431", "premise": "We are such optimists and opportunists that we find it hard not to adopt every new technology as soon as it comes along. As a result, we tend to discover the adverse consequences of these new practices the hard way. When problems emerge, as they inevitably seem to do, we set about a search for a better technology to help solve or alleviate the problems created by the first. However, some commentators argue that the debate over the introduction of new technology to genetically modify crops was not about an existing technology but about a proposed one, and for once they claim we tried to identify the benefits and risks before running blindly into them. The example is held up as a new way of assessing technologies before adopting them, and governments are urged to require companies to test and environmentally model new technologies before they are introduced. The difficulty with such a recommendation to governments is that not all will adopt them and most new technologies are introduced by multinational companies that exist beyond the control of one or a few governments. These companies therefore can choose to avoid new controls over their commercial activities by simply taking their developmental work elsewhere.", "hypothesis": "Some governments are already requiring companies to test and environmentally model the impact of new technologies before introducing them.", "label": "n"} +{"uid": "id_432", "premise": "We are such optimists and opportunists that we find it hard not to adopt every new technology as soon as it comes along. As a result, we tend to discover the adverse consequences of these new practices the hard way. When problems emerge, as they inevitably seem to do, we set about a search for a better technology to help solve or alleviate the problems created by the first. However, some commentators argue that the debate over the introduction of new technology to genetically modify crops was not about an existing technology but about a proposed one, and for once they claim we tried to identify the benefits and risks before running blindly into them. The example is held up as a new way of assessing technologies before adopting them, and governments are urged to require companies to test and environmentally model new technologies before they are introduced. The difficulty with such a recommendation to governments is that not all will adopt them and most new technologies are introduced by multinational companies that exist beyond the control of one or a few governments. These companies therefore can choose to avoid new controls over their commercial activities by simply taking their developmental work elsewhere.", "hypothesis": "If our government were to adopt the recommendation then we could look forward to no longer lurching from one failed technology to the next.", "label": "c"} +{"uid": "id_433", "premise": "We are such optimists and opportunists that we find it hard not to adopt every new technology as soon as it comes along. As a result, we tend to discover the adverse consequences of these new practices the hard way. 
When problems emerge, as they inevitably seem to do, we set about a search for a better technology to help solve or alleviate the problems created by the first. However, some commentators argue that the debate over the introduction of new technology to genetically modify crops was not about an existing technology but about a proposed one, and for once they claim we tried to identify the benefits and risks before running blindly into them. The example is held up as a new way of assessing technologies before adopting them, and governments are urged to require companies to test and environmentally model new technologies before they are introduced. The difficulty with such a recommendation to governments is that not all will adopt them and most new technologies are introduced by multinational companies that exist beyond the control of one or a few governments. These companies therefore can choose to avoid new controls over their commercial activities by simply taking their developmental work elsewhere.", "hypothesis": "Environmental problems such as acid rain or ozone depletion might have been avoided had the new approach been adopted in the past.", "label": "e"} +{"uid": "id_434", "premise": "We freeze some moments in time. Every culture has its frozen moments, events so important and personal that they transcend the normal flow of news. Americans of a certain age, for example, know precisely where they were and what they were doing when they learned that President Franklin D. Roosevelt had died. Another generation has absolute clarity of John F. Kennedys assassination. And no one who was older than a baby on 11 th September, 2001, will ever forget hearing about, or seeing, aeroplanes flying into skyscrapers. In 1945, people gathered around radios for the immediate news and stayed with the radio to hear more about their fallen leader and about the man who took his place. Newspapers printed extra editions and filled their columns with detail for days and weeks afterward. Magazines stepped back from the breaking news and offered perspective. 11 th September, 2001, followed a similarly grim pattern. We watched again and again the awful events. Consumers of news learned about the attacks, thanks to the television networks that showed the horror so graphically. Then we learned some of the hows and whys, as print publications and thoughtful broadcasters worked to bring depth to events that defied mere words. Journalists did some of their finest work and made me proud to be one of them. But something else, something profound, was happening this time around: news was being produced by regular people who had something to say and show, and not solely by the official news organisations that had traditionally decided how the first draft of history would look. This time, the first draft of history was being written in part, by the former audience. It was possible, it was inevitable, because of new publishing tools available on the Internet.", "hypothesis": "The author of this passage is a journalist.", "label": "e"} +{"uid": "id_435", "premise": "We have all heard about bullying in schools, but bullying in the workplace is a huge problem in the UK which results in nearly 19 million days of lost output per year and costs the country 6 billion pounds annually. Workplace bullying is the abuse of a position of power by one individual over another. 
Otherwise known as harassment, intimidation, aggression, coercive management and by other euphemisms, bullying in the workplace can take many forms involving gender, race or age. In a nutshell, workplace bullying means behaviour that is humiliating or offensive towards some individual. This kind of bullying ranges from violence to less obvious actions like deliberately ignoring a fellow worker.", "hypothesis": "Deliberately ignoring a colleague is a form of bullying.", "label": "e"} +{"uid": "id_436", "premise": "We have all heard about bullying in schools, but bullying in the workplace is a huge problem in the UK which results in nearly 19 million days of lost output per year and costs the country 6 billion pounds annually. Workplace bullying is the abuse of a position of power by one individual over another. Otherwise known as harassment, intimidation, aggression, coercive management and by other euphemisms, bullying in the workplace can take many forms involving gender, race or age. In a nutshell, workplace bullying means behaviour that is humiliating or offensive towards some individual. This kind of bullying ranges from violence to less obvious actions like deliberately ignoring a fellow worker.", "hypothesis": "Bullying in the workplace hinders UK economic output.", "label": "e"} +{"uid": "id_437", "premise": "We have all heard about bullying in schools, but bullying in the workplace is a huge problem in the UK which results in nearly 19 million days of lost output per year and costs the country 6 billion pounds annually. Workplace bullying is the abuse of a position of power by one individual over another. Otherwise known as harassment, intimidation, aggression, coercive management and by other euphemisms, bullying in the workplace can take many forms involving gender, race or age. In a nutshell, workplace bullying means behaviour that is humiliating or offensive towards some individual. This kind of bullying ranges from violence to less obvious actions like deliberately ignoring a fellow worker.", "hypothesis": "Another name for workplace bullying is coercive management.", "label": "e"} +{"uid": "id_438", "premise": "We have all heard about bullying in schools, but bullying in the workplace is a huge problem in the UK which results in nearly 19 million days of lost output per year and costs the country 6 billion pounds annually. Workplace bullying is the abuse of a position of power by one individual over another. Otherwise known as harassment, intimidation, aggression, coercive management and by other euphemisms, bullying in the workplace can take many forms involving gender, race or age. In a nutshell, workplace bullying means behaviour that is humiliating or offensive towards some individual. This kind of bullying ranges from violence to less obvious actions like deliberately ignoring a fellow worker.", "hypothesis": "Bullying in the workplace is sometimes caused by religious intolerance.", "label": "n"} +{"uid": "id_439", "premise": "We know the city where HIV first emerged It is easy to see why AIDS seemed so mysterious and frightening when US medics first encountered it 35 years ago. The condition robbed young, healthy people of their strong immune system, leaving them weak and vulnerable. And it seemed to come out of nowhere. Today we know much more how and why HIV the virus that leads to AIDS has become a global pandemic. Unsurprisingly, sex workers unwittingly played a part. 
But no less important were the roles of trade, the collapse of colonialism, and 20th Century sociopolitical reform. HIV did not really appear out of nowhere, of course. It probably began as a virus affecting monkeys and apes in west central Africa. From there it jumped species into humans on several occasions, perhaps because people ate infected bushmeat. Some people carry a version of HIV closely related to that seen in sooty mangabey monkeys, for instance. But HIV that came from monkeys has not become a global problem. We are more closely related to apes, like gorillas and chimpanzees, than we are to monkeys. But even when HIV has passed into human populations from these apes, it has not necessarily turned into a widespread health issue. HIV originating from apes typically belongs to a type of virus called HIV-1. One is called HIV-1 group O, and human cases are largely confined to west Africa. In fact, only one form of HIV has spread far and wide after jumping to humans. This version, which probably originated from chimpanzees, is called HIV-1 group M (for major). More than 90% of HIV infections belong in group M. Which raises an obvious question: whats so special about HIV-1 group M? A study published in 2014 suggests a surprising answer: there might be nothing particularly special about group M. It is not especially infectious, as you might expect. Instead, it seems that this form of HIV simply took advantage of events. Ecological rather than evolutionary factors drove its rapid spread, says Nuno Faria at the University of Oxford in the UK. Faria and his colleagues built a family tree of HIV, by looking at a diverse array of HIV genomes collected from about 800 infected people from central Africa. Genomes pick up new mutations at a fairly steady rate, so by comparing two genome sequences and counting the differences they could work out when the two last shared a common ancestor. This technique is widely used, for example to establish that our common ancestor with chimpanzees lived at least 7 million years ago. RNA viruses such as HIV evolve approximately 1 million times faster than human DNA, says Faria. This means the HIV molecular clock ticks very fast indeed. It ticks so fast, Faria and his colleagues found that the HIV genomes all shared a common ancestor that existed no more than 100 years ago. The HIV-1 group M pandemic probably first began in the 1920s. Then the team went further. Because they knew where each of the HIV samples had been collected, they could place the origin of the pandemic in a specific city: Kinshasa, now the capital of the Democratic Republic of Congo. At this point, the researchers changed tack. They turned to historical records to work out why HIV infections in an African city in the 1920s could ultimately spark a pandemic. A likely sequence of events quickly became obvious. In the 1920s, DR Congo was a Belgian colony and Kinshasa then known as Leopoldville had just been made the capital. The city became a very attractive destination for young working men seeking their fortunes, and for sex workers only too willing to help them spend their earnings. The virus spread quickly through the population. It did not remain confined to the city. The researchers discovered that the capital of the Belgian Congo was, in the 1920s, one of the best connected cities in Africa. Taking full advantage of an extensive rail network used by hundreds of thousands of people each year, the virus spread to cities 900 miles (1500km) away in just 20 years. 
Everything was in place for an explosion in infection rates in the 1960s. The beginning of that decade brought another change. Belgian Congo gained its independence, and became an attractive source of employment to French speakers elsewhere in the world, including Haiti. When these young Haitians returned home a few years later they took a particular form of HIV-1 group M, called subtype B, to the western side of the Atlantic. It arrived in the US in the 1970s, just as sexual liberation and homophobic attitudes were leading to concentrations of gay men in cosmopolitan cities like New York and San Francisco. Once more, HIV took advantage of the sociopolitical situation to spread quickly through the US and Europe. There is no reason to believe that other subtypes would not have spread as quickly as subtype B, given similar ecological circumstances, says Faria. The story of the spread of HIV is not over yet. For instance, in 2015 there was an outbreak in the US state of Indiana, associated with drug injecting. The US Centers for Disease Control and Prevention has been analyzing the HIV genome sequences and data about location and time of infection, says Yonatan Grad at the Harvard School of Public Health in Boston, Massachusetts. These data help to understand the extent of the outbreak, and will further help to understand when public health interventions have worked. This approach can work for other pathogens. In 2014, Grad and his colleague Marc Lipsitch published an investigation into the spread of drug-resistant gonorrhoea across the US. Because we had representative sequences from individuals in different cities at different times and with different sexual orientations, we could show the spread was from the west of the country to the east, says Lipsitch. Whats more, they could confirm that the drug-resistant form of gonorrhoea appeared to have circulated predominantly in men who have sex with men. That could prompt increased screening in these at-risk populations, in an effort to reduce further spread. In other words, there is real power to studying pathogens like HIV and gonorrhoea through the prism of human society.", "hypothesis": "Humans are not closely related to monkey.", "label": "n"} +{"uid": "id_440", "premise": "We know the city where HIV first emerged It is easy to see why AIDS seemed so mysterious and frightening when US medics first encountered it 35 years ago. The condition robbed young, healthy people of their strong immune system, leaving them weak and vulnerable. And it seemed to come out of nowhere. Today we know much more how and why HIV the virus that leads to AIDS has become a global pandemic. Unsurprisingly, sex workers unwittingly played a part. But no less important were the roles of trade, the collapse of colonialism, and 20th Century sociopolitical reform. HIV did not really appear out of nowhere, of course. It probably began as a virus affecting monkeys and apes in west central Africa. From there it jumped species into humans on several occasions, perhaps because people ate infected bushmeat. Some people carry a version of HIV closely related to that seen in sooty mangabey monkeys, for instance. But HIV that came from monkeys has not become a global problem. We are more closely related to apes, like gorillas and chimpanzees, than we are to monkeys. But even when HIV has passed into human populations from these apes, it has not necessarily turned into a widespread health issue. HIV originating from apes typically belongs to a type of virus called HIV-1. 
One is called HIV-1 group O, and human cases are largely confined to west Africa. In fact, only one form of HIV has spread far and wide after jumping to humans. This version, which probably originated from chimpanzees, is called HIV-1 group M (for major). More than 90% of HIV infections belong in group M. Which raises an obvious question: whats so special about HIV-1 group M? A study published in 2014 suggests a surprising answer: there might be nothing particularly special about group M. It is not especially infectious, as you might expect. Instead, it seems that this form of HIV simply took advantage of events. Ecological rather than evolutionary factors drove its rapid spread, says Nuno Faria at the University of Oxford in the UK. Faria and his colleagues built a family tree of HIV, by looking at a diverse array of HIV genomes collected from about 800 infected people from central Africa. Genomes pick up new mutations at a fairly steady rate, so by comparing two genome sequences and counting the differences they could work out when the two last shared a common ancestor. This technique is widely used, for example to establish that our common ancestor with chimpanzees lived at least 7 million years ago. RNA viruses such as HIV evolve approximately 1 million times faster than human DNA, says Faria. This means the HIV molecular clock ticks very fast indeed. It ticks so fast, Faria and his colleagues found that the HIV genomes all shared a common ancestor that existed no more than 100 years ago. The HIV-1 group M pandemic probably first began in the 1920s. Then the team went further. Because they knew where each of the HIV samples had been collected, they could place the origin of the pandemic in a specific city: Kinshasa, now the capital of the Democratic Republic of Congo. At this point, the researchers changed tack. They turned to historical records to work out why HIV infections in an African city in the 1920s could ultimately spark a pandemic. A likely sequence of events quickly became obvious. In the 1920s, DR Congo was a Belgian colony and Kinshasa then known as Leopoldville had just been made the capital. The city became a very attractive destination for young working men seeking their fortunes, and for sex workers only too willing to help them spend their earnings. The virus spread quickly through the population. It did not remain confined to the city. The researchers discovered that the capital of the Belgian Congo was, in the 1920s, one of the best connected cities in Africa. Taking full advantage of an extensive rail network used by hundreds of thousands of people each year, the virus spread to cities 900 miles (1500km) away in just 20 years. Everything was in place for an explosion in infection rates in the 1960s. The beginning of that decade brought another change. Belgian Congo gained its independence, and became an attractive source of employment to French speakers elsewhere in the world, including Haiti. When these young Haitians returned home a few years later they took a particular form of HIV-1 group M, called subtype B, to the western side of the Atlantic. It arrived in the US in the 1970s, just as sexual liberation and homophobic attitudes were leading to concentrations of gay men in cosmopolitan cities like New York and San Francisco. Once more, HIV took advantage of the sociopolitical situation to spread quickly through the US and Europe. 
There is no reason to believe that other subtypes would not have spread as quickly as subtype B, given similar ecological circumstances, says Faria. The story of the spread of HIV is not over yet. For instance, in 2015 there was an outbreak in the US state of Indiana, associated with drug injecting. The US Centers for Disease Control and Prevention has been analyzing the HIV genome sequences and data about location and time of infection, says Yonatan Grad at the Harvard School of Public Health in Boston, Massachusetts. These data help to understand the extent of the outbreak, and will further help to understand when public health interventions have worked. This approach can work for other pathogens. In 2014, Grad and his colleague Marc Lipsitch published an investigation into the spread of drug-resistant gonorrhoea across the US. Because we had representative sequences from individuals in different cities at different times and with different sexual orientations, we could show the spread was from the west of the country to the east, says Lipsitch. Whats more, they could confirm that the drug-resistant form of gonorrhoea appeared to have circulated predominantly in men who have sex with men. That could prompt increased screening in these at-risk populations, in an effort to reduce further spread. In other words, there is real power to studying pathogens like HIV and gonorrhoea through the prism of human society.", "hypothesis": "It is believed that HIV appeared out of nowhere.", "label": "c"} +{"uid": "id_441", "premise": "We know the city where HIV first emerged It is easy to see why AIDS seemed so mysterious and frightening when US medics first encountered it 35 years ago. The condition robbed young, healthy people of their strong immune system, leaving them weak and vulnerable. And it seemed to come out of nowhere. Today we know much more how and why HIV the virus that leads to AIDS has become a global pandemic. Unsurprisingly, sex workers unwittingly played a part. But no less important were the roles of trade, the collapse of colonialism, and 20th Century sociopolitical reform. HIV did not really appear out of nowhere, of course. It probably began as a virus affecting monkeys and apes in west central Africa. From there it jumped species into humans on several occasions, perhaps because people ate infected bushmeat. Some people carry a version of HIV closely related to that seen in sooty mangabey monkeys, for instance. But HIV that came from monkeys has not become a global problem. We are more closely related to apes, like gorillas and chimpanzees, than we are to monkeys. But even when HIV has passed into human populations from these apes, it has not necessarily turned into a widespread health issue. HIV originating from apes typically belongs to a type of virus called HIV-1. One is called HIV-1 group O, and human cases are largely confined to west Africa. In fact, only one form of HIV has spread far and wide after jumping to humans. This version, which probably originated from chimpanzees, is called HIV-1 group M (for major). More than 90% of HIV infections belong in group M. Which raises an obvious question: whats so special about HIV-1 group M? A study published in 2014 suggests a surprising answer: there might be nothing particularly special about group M. It is not especially infectious, as you might expect. Instead, it seems that this form of HIV simply took advantage of events. 
Ecological rather than evolutionary factors drove its rapid spread, says Nuno Faria at the University of Oxford in the UK. Faria and his colleagues built a family tree of HIV, by looking at a diverse array of HIV genomes collected from about 800 infected people from central Africa. Genomes pick up new mutations at a fairly steady rate, so by comparing two genome sequences and counting the differences they could work out when the two last shared a common ancestor. This technique is widely used, for example to establish that our common ancestor with chimpanzees lived at least 7 million years ago. RNA viruses such as HIV evolve approximately 1 million times faster than human DNA, says Faria. This means the HIV molecular clock ticks very fast indeed. It ticks so fast, Faria and his colleagues found that the HIV genomes all shared a common ancestor that existed no more than 100 years ago. The HIV-1 group M pandemic probably first began in the 1920s. Then the team went further. Because they knew where each of the HIV samples had been collected, they could place the origin of the pandemic in a specific city: Kinshasa, now the capital of the Democratic Republic of Congo. At this point, the researchers changed tack. They turned to historical records to work out why HIV infections in an African city in the 1920s could ultimately spark a pandemic. A likely sequence of events quickly became obvious. In the 1920s, DR Congo was a Belgian colony and Kinshasa then known as Leopoldville had just been made the capital. The city became a very attractive destination for young working men seeking their fortunes, and for sex workers only too willing to help them spend their earnings. The virus spread quickly through the population. It did not remain confined to the city. The researchers discovered that the capital of the Belgian Congo was, in the 1920s, one of the best connected cities in Africa. Taking full advantage of an extensive rail network used by hundreds of thousands of people each year, the virus spread to cities 900 miles (1500km) away in just 20 years. Everything was in place for an explosion in infection rates in the 1960s. The beginning of that decade brought another change. Belgian Congo gained its independence, and became an attractive source of employment to French speakers elsewhere in the world, including Haiti. When these young Haitians returned home a few years later they took a particular form of HIV-1 group M, called subtype B, to the western side of the Atlantic. It arrived in the US in the 1970s, just as sexual liberation and homophobic attitudes were leading to concentrations of gay men in cosmopolitan cities like New York and San Francisco. Once more, HIV took advantage of the sociopolitical situation to spread quickly through the US and Europe. There is no reason to believe that other subtypes would not have spread as quickly as subtype B, given similar ecological circumstances, says Faria. The story of the spread of HIV is not over yet. For instance, in 2015 there was an outbreak in the US state of Indiana, associated with drug injecting. The US Centers for Disease Control and Prevention has been analyzing the HIV genome sequences and data about location and time of infection, says Yonatan Grad at the Harvard School of Public Health in Boston, Massachusetts. These data help to understand the extent of the outbreak, and will further help to understand when public health interventions have worked. This approach can work for other pathogens. 
In 2014, Grad and his colleague Marc Lipsitch published an investigation into the spread of drug-resistant gonorrhoea across the US. Because we had representative sequences from individuals in different cities at different times and with different sexual orientations, we could show the spread was from the west of the country to the east, says Lipsitch. Whats more, they could confirm that the drug-resistant form of gonorrhoea appeared to have circulated predominantly in men who have sex with men. That could prompt increased screening in these at-risk populations, in an effort to reduce further spread. In other words, there is real power to studying pathogens like HIV and gonorrhoea through the prism of human society.", "hypothesis": "HIV-1 group O originated in 1920s.", "label": "n"} +{"uid": "id_442", "premise": "We know the city where HIV first emerged It is easy to see why AIDS seemed so mysterious and frightening when US medics first encountered it 35 years ago. The condition robbed young, healthy people of their strong immune system, leaving them weak and vulnerable. And it seemed to come out of nowhere. Today we know much more how and why HIV the virus that leads to AIDS has become a global pandemic. Unsurprisingly, sex workers unwittingly played a part. But no less important were the roles of trade, the collapse of colonialism, and 20th Century sociopolitical reform. HIV did not really appear out of nowhere, of course. It probably began as a virus affecting monkeys and apes in west central Africa. From there it jumped species into humans on several occasions, perhaps because people ate infected bushmeat. Some people carry a version of HIV closely related to that seen in sooty mangabey monkeys, for instance. But HIV that came from monkeys has not become a global problem. We are more closely related to apes, like gorillas and chimpanzees, than we are to monkeys. But even when HIV has passed into human populations from these apes, it has not necessarily turned into a widespread health issue. HIV originating from apes typically belongs to a type of virus called HIV-1. One is called HIV-1 group O, and human cases are largely confined to west Africa. In fact, only one form of HIV has spread far and wide after jumping to humans. This version, which probably originated from chimpanzees, is called HIV-1 group M (for major). More than 90% of HIV infections belong in group M. Which raises an obvious question: whats so special about HIV-1 group M? A study published in 2014 suggests a surprising answer: there might be nothing particularly special about group M. It is not especially infectious, as you might expect. Instead, it seems that this form of HIV simply took advantage of events. Ecological rather than evolutionary factors drove its rapid spread, says Nuno Faria at the University of Oxford in the UK. Faria and his colleagues built a family tree of HIV, by looking at a diverse array of HIV genomes collected from about 800 infected people from central Africa. Genomes pick up new mutations at a fairly steady rate, so by comparing two genome sequences and counting the differences they could work out when the two last shared a common ancestor. This technique is widely used, for example to establish that our common ancestor with chimpanzees lived at least 7 million years ago. RNA viruses such as HIV evolve approximately 1 million times faster than human DNA, says Faria. This means the HIV molecular clock ticks very fast indeed. 
It ticks so fast, Faria and his colleagues found that the HIV genomes all shared a common ancestor that existed no more than 100 years ago. The HIV-1 group M pandemic probably first began in the 1920s. Then the team went further. Because they knew where each of the HIV samples had been collected, they could place the origin of the pandemic in a specific city: Kinshasa, now the capital of the Democratic Republic of Congo. At this point, the researchers changed tack. They turned to historical records to work out why HIV infections in an African city in the 1920s could ultimately spark a pandemic. A likely sequence of events quickly became obvious. In the 1920s, DR Congo was a Belgian colony and Kinshasa then known as Leopoldville had just been made the capital. The city became a very attractive destination for young working men seeking their fortunes, and for sex workers only too willing to help them spend their earnings. The virus spread quickly through the population. It did not remain confined to the city. The researchers discovered that the capital of the Belgian Congo was, in the 1920s, one of the best connected cities in Africa. Taking full advantage of an extensive rail network used by hundreds of thousands of people each year, the virus spread to cities 900 miles (1500km) away in just 20 years. Everything was in place for an explosion in infection rates in the 1960s. The beginning of that decade brought another change. Belgian Congo gained its independence, and became an attractive source of employment to French speakers elsewhere in the world, including Haiti. When these young Haitians returned home a few years later they took a particular form of HIV-1 group M, called subtype B, to the western side of the Atlantic. It arrived in the US in the 1970s, just as sexual liberation and homophobic attitudes were leading to concentrations of gay men in cosmopolitan cities like New York and San Francisco. Once more, HIV took advantage of the sociopolitical situation to spread quickly through the US and Europe. There is no reason to believe that other subtypes would not have spread as quickly as subtype B, given similar ecological circumstances, says Faria. The story of the spread of HIV is not over yet. For instance, in 2015 there was an outbreak in the US state of Indiana, associated with drug injecting. The US Centers for Disease Control and Prevention has been analyzing the HIV genome sequences and data about location and time of infection, says Yonatan Grad at the Harvard School of Public Health in Boston, Massachusetts. These data help to understand the extent of the outbreak, and will further help to understand when public health interventions have worked. This approach can work for other pathogens. In 2014, Grad and his colleague Marc Lipsitch published an investigation into the spread of drug-resistant gonorrhoea across the US. Because we had representative sequences from individuals in different cities at different times and with different sexual orientations, we could show the spread was from the west of the country to the east, says Lipsitch. Whats more, they could confirm that the drug-resistant form of gonorrhoea appeared to have circulated predominantly in men who have sex with men. That could prompt increased screening in these at-risk populations, in an effort to reduce further spread. 
In other words, there is real power to studying pathogens like HIV and gonorrhoea through the prism of human society.", "hypothesis": "HIV-1 group M has something special.", "label": "c"} +{"uid": "id_443", "premise": "We know the city where HIV first emerged It is easy to see why AIDS seemed so mysterious and frightening when US medics first encountered it 35 years ago. The condition robbed young, healthy people of their strong immune system, leaving them weak and vulnerable. And it seemed to come out of nowhere. Today we know much more how and why HIV the virus that leads to AIDS has become a global pandemic. Unsurprisingly, sex workers unwittingly played a part. But no less important were the roles of trade, the collapse of colonialism, and 20th Century sociopolitical reform. HIV did not really appear out of nowhere, of course. It probably began as a virus affecting monkeys and apes in west central Africa. From there it jumped species into humans on several occasions, perhaps because people ate infected bushmeat. Some people carry a version of HIV closely related to that seen in sooty mangabey monkeys, for instance. But HIV that came from monkeys has not become a global problem. We are more closely related to apes, like gorillas and chimpanzees, than we are to monkeys. But even when HIV has passed into human populations from these apes, it has not necessarily turned into a widespread health issue. HIV originating from apes typically belongs to a type of virus called HIV-1. One is called HIV-1 group O, and human cases are largely confined to west Africa. In fact, only one form of HIV has spread far and wide after jumping to humans. This version, which probably originated from chimpanzees, is called HIV-1 group M (for major). More than 90% of HIV infections belong in group M. Which raises an obvious question: whats so special about HIV-1 group M? A study published in 2014 suggests a surprising answer: there might be nothing particularly special about group M. It is not especially infectious, as you might expect. Instead, it seems that this form of HIV simply took advantage of events. Ecological rather than evolutionary factors drove its rapid spread, says Nuno Faria at the University of Oxford in the UK. Faria and his colleagues built a family tree of HIV, by looking at a diverse array of HIV genomes collected from about 800 infected people from central Africa. Genomes pick up new mutations at a fairly steady rate, so by comparing two genome sequences and counting the differences they could work out when the two last shared a common ancestor. This technique is widely used, for example to establish that our common ancestor with chimpanzees lived at least 7 million years ago. RNA viruses such as HIV evolve approximately 1 million times faster than human DNA, says Faria. This means the HIV molecular clock ticks very fast indeed. It ticks so fast, Faria and his colleagues found that the HIV genomes all shared a common ancestor that existed no more than 100 years ago. The HIV-1 group M pandemic probably first began in the 1920s. Then the team went further. Because they knew where each of the HIV samples had been collected, they could place the origin of the pandemic in a specific city: Kinshasa, now the capital of the Democratic Republic of Congo. At this point, the researchers changed tack. They turned to historical records to work out why HIV infections in an African city in the 1920s could ultimately spark a pandemic. A likely sequence of events quickly became obvious. 
In the 1920s, DR Congo was a Belgian colony and Kinshasa then known as Leopoldville had just been made the capital. The city became a very attractive destination for young working men seeking their fortunes, and for sex workers only too willing to help them spend their earnings. The virus spread quickly through the population. It did not remain confined to the city. The researchers discovered that the capital of the Belgian Congo was, in the 1920s, one of the best connected cities in Africa. Taking full advantage of an extensive rail network used by hundreds of thousands of people each year, the virus spread to cities 900 miles (1500km) away in just 20 years. Everything was in place for an explosion in infection rates in the 1960s. The beginning of that decade brought another change. Belgian Congo gained its independence, and became an attractive source of employment to French speakers elsewhere in the world, including Haiti. When these young Haitians returned home a few years later they took a particular form of HIV-1 group M, called subtype B, to the western side of the Atlantic. It arrived in the US in the 1970s, just as sexual liberation and homophobic attitudes were leading to concentrations of gay men in cosmopolitan cities like New York and San Francisco. Once more, HIV took advantage of the sociopolitical situation to spread quickly through the US and Europe. There is no reason to believe that other subtypes would not have spread as quickly as subtype B, given similar ecological circumstances, says Faria. The story of the spread of HIV is not over yet. For instance, in 2015 there was an outbreak in the US state of Indiana, associated with drug injecting. The US Centers for Disease Control and Prevention has been analyzing the HIV genome sequences and data about location and time of infection, says Yonatan Grad at the Harvard School of Public Health in Boston, Massachusetts. These data help to understand the extent of the outbreak, and will further help to understand when public health interventions have worked. This approach can work for other pathogens. In 2014, Grad and his colleague Marc Lipsitch published an investigation into the spread of drug-resistant gonorrhoea across the US. Because we had representative sequences from individuals in different cities at different times and with different sexual orientations, we could show the spread was from the west of the country to the east, says Lipsitch. Whats more, they could confirm that the drug-resistant form of gonorrhoea appeared to have circulated predominantly in men who have sex with men. That could prompt increased screening in these at-risk populations, in an effort to reduce further spread. In other words, there is real power to studying pathogens like HIV and gonorrhoea through the prism of human society.", "hypothesis": "Human DNA evolves approximately 1 million times slower than HIV.", "label": "e"} +{"uid": "id_444", "premise": "We know the city where HIV first emerged It is easy to see why AIDS seemed so mysterious and frightening when US medics first encountered it 35 years ago. The condition robbed young, healthy people of their strong immune system, leaving them weak and vulnerable. And it seemed to come out of nowhere. Today we know much more how and why HIV the virus that leads to AIDS has become a global pandemic. Unsurprisingly, sex workers unwittingly played a part. But no less important were the roles of trade, the collapse of colonialism, and 20th Century sociopolitical reform. 
HIV did not really appear out of nowhere, of course. It probably began as a virus affecting monkeys and apes in west central Africa. From there it jumped species into humans on several occasions, perhaps because people ate infected bushmeat. Some people carry a version of HIV closely related to that seen in sooty mangabey monkeys, for instance. But HIV that came from monkeys has not become a global problem. We are more closely related to apes, like gorillas and chimpanzees, than we are to monkeys. But even when HIV has passed into human populations from these apes, it has not necessarily turned into a widespread health issue. HIV originating from apes typically belongs to a type of virus called HIV-1. One is called HIV-1 group O, and human cases are largely confined to west Africa. In fact, only one form of HIV has spread far and wide after jumping to humans. This version, which probably originated from chimpanzees, is called HIV-1 group M (for major). More than 90% of HIV infections belong in group M. Which raises an obvious question: whats so special about HIV-1 group M? A study published in 2014 suggests a surprising answer: there might be nothing particularly special about group M. It is not especially infectious, as you might expect. Instead, it seems that this form of HIV simply took advantage of events. Ecological rather than evolutionary factors drove its rapid spread, says Nuno Faria at the University of Oxford in the UK. Faria and his colleagues built a family tree of HIV, by looking at a diverse array of HIV genomes collected from about 800 infected people from central Africa. Genomes pick up new mutations at a fairly steady rate, so by comparing two genome sequences and counting the differences they could work out when the two last shared a common ancestor. This technique is widely used, for example to establish that our common ancestor with chimpanzees lived at least 7 million years ago. RNA viruses such as HIV evolve approximately 1 million times faster than human DNA, says Faria. This means the HIV molecular clock ticks very fast indeed. It ticks so fast, Faria and his colleagues found that the HIV genomes all shared a common ancestor that existed no more than 100 years ago. The HIV-1 group M pandemic probably first began in the 1920s. Then the team went further. Because they knew where each of the HIV samples had been collected, they could place the origin of the pandemic in a specific city: Kinshasa, now the capital of the Democratic Republic of Congo. At this point, the researchers changed tack. They turned to historical records to work out why HIV infections in an African city in the 1920s could ultimately spark a pandemic. A likely sequence of events quickly became obvious. In the 1920s, DR Congo was a Belgian colony and Kinshasa then known as Leopoldville had just been made the capital. The city became a very attractive destination for young working men seeking their fortunes, and for sex workers only too willing to help them spend their earnings. The virus spread quickly through the population. It did not remain confined to the city. The researchers discovered that the capital of the Belgian Congo was, in the 1920s, one of the best connected cities in Africa. Taking full advantage of an extensive rail network used by hundreds of thousands of people each year, the virus spread to cities 900 miles (1500km) away in just 20 years. Everything was in place for an explosion in infection rates in the 1960s. The beginning of that decade brought another change. 
Belgian Congo gained its independence, and became an attractive source of employment to French speakers elsewhere in the world, including Haiti. When these young Haitians returned home a few years later they took a particular form of HIV-1 group M, called subtype B, to the western side of the Atlantic. It arrived in the US in the 1970s, just as sexual liberation and homophobic attitudes were leading to concentrations of gay men in cosmopolitan cities like New York and San Francisco. Once more, HIV took advantage of the sociopolitical situation to spread quickly through the US and Europe. There is no reason to believe that other subtypes would not have spread as quickly as subtype B, given similar ecological circumstances, says Faria. The story of the spread of HIV is not over yet. For instance, in 2015 there was an outbreak in the US state of Indiana, associated with drug injecting. The US Centers for Disease Control and Prevention has been analyzing the HIV genome sequences and data about location and time of infection, says Yonatan Grad at the Harvard School of Public Health in Boston, Massachusetts. These data help to understand the extent of the outbreak, and will further help to understand when public health interventions have worked. This approach can work for other pathogens. In 2014, Grad and his colleague Marc Lipsitch published an investigation into the spread of drug-resistant gonorrhoea across the US. Because we had representative sequences from individuals in different cities at different times and with different sexual orientations, we could show the spread was from the west of the country to the east, says Lipsitch. Whats more, they could confirm that the drug-resistant form of gonorrhoea appeared to have circulated predominantly in men who have sex with men. That could prompt increased screening in these at-risk populations, in an effort to reduce further spread. In other words, there is real power to studying pathogens like HIV and gonorrhoea through the prism of human society.", "hypothesis": "The most important role in developing AIDS as a pandemia was played by sex workers.", "label": "c"} +{"uid": "id_445", "premise": "We know the city where HIV first emerged It is easy to see why AIDS seemed so mysterious and frightening when US medics first encountered it 35 years ago. The condition robbed young, healthy people of their strong immune system, leaving them weak and vulnerable. And it seemed to come out of nowhere. Today we know much more how and why HIV the virus that leads to AIDS has become a global pandemic. Unsurprisingly, sex workers unwittingly played a part. But no less important were the roles of trade, the collapse of colonialism, and 20th Century sociopolitical reform. HIV did not really appear out of nowhere, of course. It probably began as a virus affecting monkeys and apes in west central Africa. From there it jumped species into humans on several occasions, perhaps because people ate infected bushmeat. Some people carry a version of HIV closely related to that seen in sooty mangabey monkeys, for instance. But HIV that came from monkeys has not become a global problem. We are more closely related to apes, like gorillas and chimpanzees, than we are to monkeys. But even when HIV has passed into human populations from these apes, it has not necessarily turned into a widespread health issue. HIV originating from apes typically belongs to a type of virus called HIV-1. One is called HIV-1 group O, and human cases are largely confined to west Africa. 
In fact, only one form of HIV has spread far and wide after jumping to humans. This version, which probably originated from chimpanzees, is called HIV-1 group M (for major). More than 90% of HIV infections belong in group M. Which raises an obvious question: whats so special about HIV-1 group M? A study published in 2014 suggests a surprising answer: there might be nothing particularly special about group M. It is not especially infectious, as you might expect. Instead, it seems that this form of HIV simply took advantage of events. Ecological rather than evolutionary factors drove its rapid spread, says Nuno Faria at the University of Oxford in the UK. Faria and his colleagues built a family tree of HIV, by looking at a diverse array of HIV genomes collected from about 800 infected people from central Africa. Genomes pick up new mutations at a fairly steady rate, so by comparing two genome sequences and counting the differences they could work out when the two last shared a common ancestor. This technique is widely used, for example to establish that our common ancestor with chimpanzees lived at least 7 million years ago. RNA viruses such as HIV evolve approximately 1 million times faster than human DNA, says Faria. This means the HIV molecular clock ticks very fast indeed. It ticks so fast, Faria and his colleagues found that the HIV genomes all shared a common ancestor that existed no more than 100 years ago. The HIV-1 group M pandemic probably first began in the 1920s. Then the team went further. Because they knew where each of the HIV samples had been collected, they could place the origin of the pandemic in a specific city: Kinshasa, now the capital of the Democratic Republic of Congo. At this point, the researchers changed tack. They turned to historical records to work out why HIV infections in an African city in the 1920s could ultimately spark a pandemic. A likely sequence of events quickly became obvious. In the 1920s, DR Congo was a Belgian colony and Kinshasa then known as Leopoldville had just been made the capital. The city became a very attractive destination for young working men seeking their fortunes, and for sex workers only too willing to help them spend their earnings. The virus spread quickly through the population. It did not remain confined to the city. The researchers discovered that the capital of the Belgian Congo was, in the 1920s, one of the best connected cities in Africa. Taking full advantage of an extensive rail network used by hundreds of thousands of people each year, the virus spread to cities 900 miles (1500km) away in just 20 years. Everything was in place for an explosion in infection rates in the 1960s. The beginning of that decade brought another change. Belgian Congo gained its independence, and became an attractive source of employment to French speakers elsewhere in the world, including Haiti. When these young Haitians returned home a few years later they took a particular form of HIV-1 group M, called subtype B, to the western side of the Atlantic. It arrived in the US in the 1970s, just as sexual liberation and homophobic attitudes were leading to concentrations of gay men in cosmopolitan cities like New York and San Francisco. Once more, HIV took advantage of the sociopolitical situation to spread quickly through the US and Europe. There is no reason to believe that other subtypes would not have spread as quickly as subtype B, given similar ecological circumstances, says Faria. The story of the spread of HIV is not over yet. 
For instance, in 2015 there was an outbreak in the US state of Indiana, associated with drug injecting. The US Centers for Disease Control and Prevention has been analyzing the HIV genome sequences and data about location and time of infection, says Yonatan Grad at the Harvard School of Public Health in Boston, Massachusetts. These data help to understand the extent of the outbreak, and will further help to understand when public health interventions have worked. This approach can work for other pathogens. In 2014, Grad and his colleague Marc Lipsitch published an investigation into the spread of drug-resistant gonorrhoea across the US. Because we had representative sequences from individuals in different cities at different times and with different sexual orientations, we could show the spread was from the west of the country to the east, says Lipsitch. Whats more, they could confirm that the drug-resistant form of gonorrhoea appeared to have circulated predominantly in men who have sex with men. That could prompt increased screening in these at-risk populations, in an effort to reduce further spread. In other words, there is real power to studying pathogens like HIV and gonorrhoea through the prism of human society.", "hypothesis": "Scientists believe that HIV already existed in 1920s.", "label": "e"} +{"uid": "id_446", "premise": "We know the city where HIV first emerged It is easy to see why AIDS seemed so mysterious and frightening when US medics first encountered it 35 years ago. The condition robbed young, healthy people of their strong immune system, leaving them weak and vulnerable. And it seemed to come out of nowhere. Today we know much more how and why HIV the virus that leads to AIDS has become a global pandemic. Unsurprisingly, sex workers unwittingly played a part. But no less important were the roles of trade, the collapse of colonialism, and 20th Century sociopolitical reform. HIV did not really appear out of nowhere, of course. It probably began as a virus affecting monkeys and apes in west central Africa. From there it jumped species into humans on several occasions, perhaps because people ate infected bushmeat. Some people carry a version of HIV closely related to that seen in sooty mangabey monkeys, for instance. But HIV that came from monkeys has not become a global problem. We are more closely related to apes, like gorillas and chimpanzees, than we are to monkeys. But even when HIV has passed into human populations from these apes, it has not necessarily turned into a widespread health issue. HIV originating from apes typically belongs to a type of virus called HIV-1. One is called HIV-1 group O, and human cases are largely confined to west Africa. In fact, only one form of HIV has spread far and wide after jumping to humans. This version, which probably originated from chimpanzees, is called HIV-1 group M (for major). More than 90% of HIV infections belong in group M. Which raises an obvious question: whats so special about HIV-1 group M? A study published in 2014 suggests a surprising answer: there might be nothing particularly special about group M. It is not especially infectious, as you might expect. Instead, it seems that this form of HIV simply took advantage of events. Ecological rather than evolutionary factors drove its rapid spread, says Nuno Faria at the University of Oxford in the UK. Faria and his colleagues built a family tree of HIV, by looking at a diverse array of HIV genomes collected from about 800 infected people from central Africa. 
Genomes pick up new mutations at a fairly steady rate, so by comparing two genome sequences and counting the differences they could work out when the two last shared a common ancestor. This technique is widely used, for example to establish that our common ancestor with chimpanzees lived at least 7 million years ago. RNA viruses such as HIV evolve approximately 1 million times faster than human DNA, says Faria. This means the HIV molecular clock ticks very fast indeed. It ticks so fast, Faria and his colleagues found that the HIV genomes all shared a common ancestor that existed no more than 100 years ago. The HIV-1 group M pandemic probably first began in the 1920s. Then the team went further. Because they knew where each of the HIV samples had been collected, they could place the origin of the pandemic in a specific city: Kinshasa, now the capital of the Democratic Republic of Congo. At this point, the researchers changed tack. They turned to historical records to work out why HIV infections in an African city in the 1920s could ultimately spark a pandemic. A likely sequence of events quickly became obvious. In the 1920s, DR Congo was a Belgian colony and Kinshasa then known as Leopoldville had just been made the capital. The city became a very attractive destination for young working men seeking their fortunes, and for sex workers only too willing to help them spend their earnings. The virus spread quickly through the population. It did not remain confined to the city. The researchers discovered that the capital of the Belgian Congo was, in the 1920s, one of the best connected cities in Africa. Taking full advantage of an extensive rail network used by hundreds of thousands of people each year, the virus spread to cities 900 miles (1500km) away in just 20 years. Everything was in place for an explosion in infection rates in the 1960s. The beginning of that decade brought another change. Belgian Congo gained its independence, and became an attractive source of employment to French speakers elsewhere in the world, including Haiti. When these young Haitians returned home a few years later they took a particular form of HIV-1 group M, called subtype B, to the western side of the Atlantic. It arrived in the US in the 1970s, just as sexual liberation and homophobic attitudes were leading to concentrations of gay men in cosmopolitan cities like New York and San Francisco. Once more, HIV took advantage of the sociopolitical situation to spread quickly through the US and Europe. There is no reason to believe that other subtypes would not have spread as quickly as subtype B, given similar ecological circumstances, says Faria. The story of the spread of HIV is not over yet. For instance, in 2015 there was an outbreak in the US state of Indiana, associated with drug injecting. The US Centers for Disease Control and Prevention has been analyzing the HIV genome sequences and data about location and time of infection, says Yonatan Grad at the Harvard School of Public Health in Boston, Massachusetts. These data help to understand the extent of the outbreak, and will further help to understand when public health interventions have worked. This approach can work for other pathogens. In 2014, Grad and his colleague Marc Lipsitch published an investigation into the spread of drug-resistant gonorrhoea across the US. 
Because we had representative sequences from individuals in different cities at different times and with different sexual orientations, we could show the spread was from the west of the country to the east, says Lipsitch. Whats more, they could confirm that the drug-resistant form of gonorrhoea appeared to have circulated predominantly in men who have sex with men. That could prompt increased screening in these at-risk populations, in an effort to reduce further spread. In other words, there is real power to studying pathogens like HIV and gonorrhoea through the prism of human society.", "hypothesis": "AIDS were first encountered 35 years ago.", "label": "e"} +{"uid": "id_447", "premise": "We like to think of ourselves as unique but we are in fact 99.9 per cent genetically identical. DNA, which comprises the chemical code, governs the construction and function of every cell in our body. The Human Genome Project mapped the sequence for human DNA and provided a blueprint of the DNA shared by every person. But what of the 0.1 per cent that is not common to all mankind and was left out of the Human Genome Project blueprint? It is responsible for all individual idiosyncrasies and the differences between racial and ethnic groups. If it were not for this minute percentage there would be no individual differences. We would be clones. Individual differences could be greatly increased if we were to think the unthinkable and allow genetic engineering of the human DNA. This would involve inserting genes from one cell into another and changing that cells DNA and its characteristics. In theory it would be possible to take the DNA from an entirely different species and insert it into human cells. Such radical modifications could certainly make us much more unique.", "hypothesis": "A word that means the same as blueprint is design.", "label": "e"} +{"uid": "id_448", "premise": "We like to think of ourselves as unique but we are in fact 99.9 per cent genetically identical. DNA, which comprises the chemical code, governs the construction and function of every cell in our body. The Human Genome Project mapped the sequence for human DNA and provided a blueprint of the DNA shared by every person. But what of the 0.1 per cent that is not common to all mankind and was left out of the Human Genome Project blueprint? It is responsible for all individual idiosyncrasies and the differences between racial and ethnic groups. If it were not for this minute percentage there would be no individual differences. We would be clones. Individual differences could be greatly increased if we were to think the unthinkable and allow genetic engineering of the human DNA. This would involve inserting genes from one cell into another and changing that cells DNA and its characteristics. In theory it would be possible to take the DNA from an entirely different species and insert it into human cells. Such radical modifications could certainly make us much more unique.", "hypothesis": "It can be inferred from the passage that a DNA molecule is contained in the nucleus of every cell in our body.", "label": "c"} +{"uid": "id_449", "premise": "We like to think of ourselves as unique but we are in fact 99.9 per cent genetically identical. DNA, which comprises the chemical code, governs the construction and function of every cell in our body. The Human Genome Project mapped the sequence for human DNA and provided a blueprint of the DNA shared by every person. 
But what of the 0.1 per cent that is not common to all mankind and was left out of the Human Genome Project blueprint? It is responsible for all individual idiosyncrasies and the differences between racial and ethnic groups. If it were not for this minute percentage there would be no individual differences. We would be clones. Individual differences could be greatly increased if we were to think the unthinkable and allow genetic engineering of the human DNA. This would involve inserting genes from one cell into another and changing that cells DNA and its characteristics. In theory it would be possible to take the DNA from an entirely different species and insert it into human cells. Such radical modifications could certainly make us much more unique.", "hypothesis": "The Human Genome Project is mentioned in the project in relation to cloning.", "label": "c"} +{"uid": "id_450", "premise": "We like to think of ourselves as unique but we are in fact 99.9 per cent genetically identical. DNA, which comprises the chemical code, governs the construction and function of every cell in our body. The Human Genome Project mapped the sequence for human DNA and provided a blueprint of the DNA shared by every person. But what of the 0.1 per cent that is not common to all mankind and was left out of the Human Genome Project blueprint? It is responsible for all individual idiosyncrasies and the differences between racial and ethnic groups. If it were not for this minute percentage there would be no individual differences. We would be clones. Individual differences could be greatly increased if we were to think the unthinkable and allow genetic engineering of the human DNA. This would involve inserting genes from one cell into another and changing that cells DNA and its characteristics. In theory it would be possible to take the DNA from an entirely different species and insert it into human cells. Such radical modifications could certainly make us much more unique.", "hypothesis": "It can be inferred from the passage that the author does not approve of the genetic engineering of human DNA.", "label": "n"} +{"uid": "id_451", "premise": "We like to think of ourselves as unique but we are in fact 99.9 per cent genetically identical. DNA, which comprises the chemical code, governs the construction and function of every cell in our body. The Human Genome Project mapped the sequence for human DNA and provided a blueprint of the DNA shared by every person. But what of the 0.1 per cent that is not common to all mankind and was left out of the Human Genome Project blueprint? It is responsible for all individual idiosyncrasies and the differences between racial and ethnic groups. If it were not for this minute percentage there would be no individual differences. We would be clones. Individual differences could be greatly increased if we were to think the unthinkable and allow genetic engineering of the human DNA. This would involve inserting genes from one cell into another and changing that cells DNA and its characteristics. In theory it would be possible to take the DNA from an entirely different species and insert it into human cells. 
Such radical modifications could certainly make us much more unique.", "hypothesis": "In the context of the passage idiosyncrasies means unconventional behaviour.", "label": "c"} +{"uid": "id_452", "premise": "We suffer a suspension of judgement when we hand over a card to purchase something and spend funds that we intended to use for something essential or unintentionally create an unauthorized overdraft. These spur-of-the-moment lapses are more likely to occur when we pay for something electronically or with credit than with hard cash. This is because of a widely held perception that electronic money and credit are somehow not as real or valuable as notes and coins. Retailers play on this emotional weakness with offers of in- store cards and buy now play later deals. But, nowhere is our Achilles heel exploited more than on the internet where it is impossible to pay with ready money and perhaps the sites that have perfected this form of exploitation are those that offer gambling. The sites regulated by the Gaming Commission have safeguards but the unregulated sites set out to encourage people to stake more to recover their losses and do not provide facilities to allow the gambler to set limits on how much they will fritter.", "hypothesis": "Sites unregulated by the Gaming Commission are unlicensed.", "label": "n"} +{"uid": "id_453", "premise": "We suffer a suspension of judgement when we hand over a card to purchase something and spend funds that we intended to use for something essential or unintentionally create an unauthorized overdraft. These spur-of-the-moment lapses are more likely to occur when we pay for something electronically or with credit than with hard cash. This is because of a widely held perception that electronic money and credit are somehow not as real or valuable as notes and coins. Retailers play on this emotional weakness with offers of in- store cards and buy now play later deals. But, nowhere is our Achilles heel exploited more than on the internet where it is impossible to pay with ready money and perhaps the sites that have perfected this form of exploitation are those that offer gambling. The sites regulated by the Gaming Commission have safeguards but the unregulated sites set out to encourage people to stake more to recover their losses and do not provide facilities to allow the gambler to set limits on how much they will fritter.", "hypothesis": "We suffer a suspension of judgement when we hand over a card to purchase something.", "label": "c"} +{"uid": "id_454", "premise": "We suffer a suspension of judgement when we hand over a card to purchase something and spend funds that we intended to use for something essential or unintentionally create an unauthorized overdraft. These spur-of-the-moment lapses are more likely to occur when we pay for something electronically or with credit than with hard cash. This is because of a widely held perception that electronic money and credit are somehow not as real or valuable as notes and coins. Retailers play on this emotional weakness with offers of in- store cards and buy now play later deals. But, nowhere is our Achilles heel exploited more than on the internet where it is impossible to pay with ready money and perhaps the sites that have perfected this form of exploitation are those that offer gambling. 
The sites regulated by the Gaming Commission have safeguards but the unregulated sites set out to encourage people to stake more to recover their losses and do not provide facilities to allow the gambler to set limits on how much they will fritter.", "hypothesis": "Electronic money and credit have a lower psychological value than cash in your hand.", "label": "e"} +{"uid": "id_455", "premise": "Weather forecast sometimes has a better history record than economists. They make a great contribution to the revenue. For example, a recent weather forecast said there would be a storm in a resort resulting thousand dollars books decrease that day while actually the resort enjoyed a sunny day. Weather forecast would bring marketing mix alteration to various super market or retail shops when they review the forecast", "hypothesis": "Economists forecasts are prone to some biggest errors.", "label": "n"} +{"uid": "id_456", "premise": "Weather forecast sometimes has a better history record than economists. They make a great contribution to the revenue. For example, a recent weather forecast said there would be a storm in a resort resulting thousand dollars books decrease that day while actually the resort enjoyed a sunny day. Weather forecast would bring marketing mix alteration to various super market or retail shops when they review the forecast", "hypothesis": "Travellers or people having holidays pay little attention to the weather.", "label": "n"} +{"uid": "id_457", "premise": "Weather forecast sometimes has a better history record than economists. They make a great contribution to the revenue. For example, a recent weather forecast said there would be a storm in a resort resulting thousand dollars books decrease that day while actually the resort enjoyed a sunny day. Weather forecast would bring marketing mix alteration to various super market or retail shops when they review the forecast", "hypothesis": "It would be very beneficial for super markets and retailers to be informed of the weather forecast.", "label": "e"} +{"uid": "id_458", "premise": "Weighty problem. The World Health Organization (WHO) reports that obesity has reached epidemic proportions worldwide, with three times as many overweight adults as there were 20 years ago. Almost one-quarter of the adult population of the UK are now classed as obese, and they are over-represented in their use of NHS services. The most widely used tool to assess obesity is body mass index (BMI), which divides 2 weight in kilograms by height in metres squared to give the units kg/m . A BMI of 25 or above is defined as overweight or pre-obese, and a BMI of 30 or more is defined as obese. People with a BMI of 40 or above are morbidly obese; they are at severe risk of developing co-morbidities like cardiovascular disease and type 2 diabetes, which reduce life expectancy and increase hospital stay. Patients weighing more than 20 stone are described as bariatric the word originated from the Greek word baros meaning heavy and iatrics meaning medical treatment. The large size of bariatric patients often leads to poor mobility, with implications for manual handling, equipment, beds, chairs and space. Most bariatric patients will have a BMI in excess of 40, though not all bariatric people will be morbidly obese nor will every person with a BMI over 30 be obese. For example, a six-foot-five rugby player weighing 21 stone (BMI = 35) with a muscular build and a good weight distribution might be athletic. 
In these larger people the waist-to-hip ratio can serve as a more reliable indicator of a weight problem. A ratio of 1.0 or more is consistent with an excess of fat around the waist and the need to lose weight. From a health perspective, maximum safe waist measurements are reported as 40 inches (102 cm) for men and 35 inches (89 cm) for women irrespective of fat distribution.", "hypothesis": "A patient with a waist-to-hip ratio of 1.1 is obese.", "label": "n"} +{"uid": "id_459", "premise": "Weighty problem. The World Health Organization (WHO) reports that obesity has reached epidemic proportions worldwide, with three times as many overweight adults as there were 20 years ago. Almost one-quarter of the adult population of the UK are now classed as obese, and they are over-represented in their use of NHS services. The most widely used tool to assess obesity is body mass index (BMI), which divides 2 weight in kilograms by height in metres squared to give the units kg/m . A BMI of 25 or above is defined as overweight or pre-obese, and a BMI of 30 or more is defined as obese. People with a BMI of 40 or above are morbidly obese; they are at severe risk of developing co-morbidities like cardiovascular disease and type 2 diabetes, which reduce life expectancy and increase hospital stay. Patients weighing more than 20 stone are described as bariatric the word originated from the Greek word baros meaning heavy and iatrics meaning medical treatment. The large size of bariatric patients often leads to poor mobility, with implications for manual handling, equipment, beds, chairs and space. Most bariatric patients will have a BMI in excess of 40, though not all bariatric people will be morbidly obese nor will every person with a BMI over 30 be obese. For example, a six-foot-five rugby player weighing 21 stone (BMI = 35) with a muscular build and a good weight distribution might be athletic. In these larger people the waist-to-hip ratio can serve as a more reliable indicator of a weight problem. A ratio of 1.0 or more is consistent with an excess of fat around the waist and the need to lose weight. From a health perspective, maximum safe waist measurements are reported as 40 inches (102 cm) for men and 35 inches (89 cm) for women irrespective of fat distribution.", "hypothesis": "A patient with a waist measurement over 40 inches is obese.", "label": "n"} +{"uid": "id_460", "premise": "Weighty problem. The World Health Organization (WHO) reports that obesity has reached epidemic proportions worldwide, with three times as many overweight adults as there were 20 years ago. Almost one-quarter of the adult population of the UK are now classed as obese, and they are over-represented in their use of NHS services. The most widely used tool to assess obesity is body mass index (BMI), which divides 2 weight in kilograms by height in metres squared to give the units kg/m . A BMI of 25 or above is defined as overweight or pre-obese, and a BMI of 30 or more is defined as obese. People with a BMI of 40 or above are morbidly obese; they are at severe risk of developing co-morbidities like cardiovascular disease and type 2 diabetes, which reduce life expectancy and increase hospital stay. Patients weighing more than 20 stone are described as bariatric the word originated from the Greek word baros meaning heavy and iatrics meaning medical treatment. The large size of bariatric patients often leads to poor mobility, with implications for manual handling, equipment, beds, chairs and space. 
Most bariatric patients will have a BMI in excess of 40, though not all bariatric people will be morbidly obese nor will every person with a BMI over 30 be obese. For example, a six-foot-five rugby player weighing 21 stone (BMI = 35) with a muscular build and a good weight distribution might be athletic. In these larger people the waist-to-hip ratio can serve as a more reliable indicator of a weight problem. A ratio of 1.0 or more is consistent with an excess of fat around the waist and the need to lose weight. From a health perspective, maximum safe waist measurements are reported as 40 inches (102 cm) for men and 35 inches (89 cm) for women irrespective of fat distribution.", "hypothesis": "A patient with a BMI of between 25.0 and 29.9 is pre-obese.", "label": "e"} +{"uid": "id_461", "premise": "Weighty problem. The World Health Organization (WHO) reports that obesity has reached epidemic proportions worldwide, with three times as many overweight adults as there were 20 years ago. Almost one-quarter of the adult population of the UK are now classed as obese, and they are over-represented in their use of NHS services. The most widely used tool to assess obesity is body mass index (BMI), which divides 2 weight in kilograms by height in metres squared to give the units kg/m . A BMI of 25 or above is defined as overweight or pre-obese, and a BMI of 30 or more is defined as obese. People with a BMI of 40 or above are morbidly obese; they are at severe risk of developing co-morbidities like cardiovascular disease and type 2 diabetes, which reduce life expectancy and increase hospital stay. Patients weighing more than 20 stone are described as bariatric the word originated from the Greek word baros meaning heavy and iatrics meaning medical treatment. The large size of bariatric patients often leads to poor mobility, with implications for manual handling, equipment, beds, chairs and space. Most bariatric patients will have a BMI in excess of 40, though not all bariatric people will be morbidly obese nor will every person with a BMI over 30 be obese. For example, a six-foot-five rugby player weighing 21 stone (BMI = 35) with a muscular build and a good weight distribution might be athletic. In these larger people the waist-to-hip ratio can serve as a more reliable indicator of a weight problem. A ratio of 1.0 or more is consistent with an excess of fat around the waist and the need to lose weight. From a health perspective, maximum safe waist measurements are reported as 40 inches (102 cm) for men and 35 inches (89 cm) for women irrespective of fat distribution.", "hypothesis": "At least one-quarter of obese adults use NHS services.", "label": "c"} +{"uid": "id_462", "premise": "Well-regulated, ethical practices should always be an area of primary concern for any business. In an environment where multinational conglomerates predominate, owners of small businesses may feel anonymous enough to become flexible about their code of ethics. However, the increasingly inescapable attention of the media allows an unprecedented number of individuals to access news and information with greater speed than ever before unethical practices can become a matter of public knowledge overnight, with devastating consequences. Codes of ethical practice should apply not only to clients, but to employees, who are just as able to draw inappropriate behaviour on the part of their employers to the public's attention. 
In today's society, businesses of any size must be able to demonstrate transparency and accountability in their dealings with employees, clients, and the public alike.", "hypothesis": "Employees of a company should be subject to ethical codes of practice.", "label": "e"} +{"uid": "id_463", "premise": "Well-regulated, ethical practices should always be an area of primary concern for any business. In an environment where multinational conglomerates predominate, owners of small businesses may feel anonymous enough to become flexible about their code of ethics. However, the increasingly inescapable attention of the media allows an unprecedented number of individuals to access news and information with greater speed than ever before unethical practices can become a matter of public knowledge overnight, with devastating consequences. Codes of ethical practice should apply not only to clients, but to employees, who are just as able to draw inappropriate behaviour on the part of their employers to the public's attention. In today's society, businesses of any size must be able to demonstrate transparency and accountability in their dealings with employees, clients, and the public alike.", "hypothesis": "Unethical practices are only a problem if the public becomes aware of them.", "label": "n"} +{"uid": "id_464", "premise": "Well-regulated, ethical practices should always be an area of primary concern for any business. In an environment where multinational conglomerates predominate, owners of small businesses may feel anonymous enough to become flexible about their code of ethics. However, the increasingly inescapable attention of the media allows an unprecedented number of individuals to access news and information with greater speed than ever before unethical practices can become a matter of public knowledge overnight, with devastating consequences. Codes of ethical practice should apply not only to clients, but to employees, who are just as able to draw inappropriate behaviour on the part of their employers to the public's attention. In today's society, businesses of any size must be able to demonstrate transparency and accountability in their dealings with employees, clients, and the public alike.", "hypothesis": "More people than ever before have access to information about companies' ethical practices.", "label": "e"} +{"uid": "id_465", "premise": "Westley Business School Preparation Courses for Students 80% of the students who take our courses are mature students who have not done any formal study for several years. Many of the courses at the Westley Business School require a good knowledge of various skills. If you feel you need some extra preparation before your course, look below and see if any of our preparation courses suit your needs. All courses take place in August, and for enrolled students all the courses listed below are free. Course 1 STATISTICS A grounding in statistics is a must for any prospective business student. This is a one week course (Mon Fri) consisting of one lecture every night. The tutor will ensure that by the end of the course, you will have had a thorough introduction to all the statistical skills that you will need to start your course at Westley Business School. Each lecture runs from 6pm to 9pm. Course 2 ESSAY WRITING This is a self-study pack containing guidance, practice and tests. At the end of the course (it should take about 10 hours of self-study) you will receive a 1 hour tutorial with the essay writing tutor who will go over your work with you. 
Course 3 BASIC MATHS This is a one-off lecture of 3 hours aimed at reviewing all the basic maths that you will vaguely remember from school! This course is run on a first come, first served basis and there are only 20 places (every Monday in August from 5.45pm 8.45pm) so dont be late. Course 4 COMPUTING This 2 week course (Mon Fri 6.30pm 8.30pm) will give students all the basic computer skills that they will need for their courses at Westley Business School. There are two courses running concurrently with only 10 PLACES in each so book early! NB UNLESS OTHERWISE STATED, YOU MUST BOOK IN ADVANCE FOR THESE COURSES AT THE MAIN WESTLEY BUSINESS SCHOOL RECEPTION", "hypothesis": "Students registered at Westley Business College dont have to pay for the preparation course.", "label": "e"} +{"uid": "id_466", "premise": "Westley Business School Preparation Courses for Students 80% of the students who take our courses are mature students who have not done any formal study for several years. Many of the courses at the Westley Business School require a good knowledge of various skills. If you feel you need some extra preparation before your course, look below and see if any of our preparation courses suit your needs. All courses take place in August, and for enrolled students all the courses listed below are free. Course 1 STATISTICS A grounding in statistics is a must for any prospective business student. This is a one week course (Mon Fri) consisting of one lecture every night. The tutor will ensure that by the end of the course, you will have had a thorough introduction to all the statistical skills that you will need to start your course at Westley Business School. Each lecture runs from 6pm to 9pm. Course 2 ESSAY WRITING This is a self-study pack containing guidance, practice and tests. At the end of the course (it should take about 10 hours of self-study) you will receive a 1 hour tutorial with the essay writing tutor who will go over your work with you. Course 3 BASIC MATHS This is a one-off lecture of 3 hours aimed at reviewing all the basic maths that you will vaguely remember from school! This course is run on a first come, first served basis and there are only 20 places (every Monday in August from 5.45pm 8.45pm) so dont be late. Course 4 COMPUTING This 2 week course (Mon Fri 6.30pm 8.30pm) will give students all the basic computer skills that they will need for their courses at Westley Business School. There are two courses running concurrently with only 10 PLACES in each so book early! NB UNLESS OTHERWISE STATED, YOU MUST BOOK IN ADVANCE FOR THESE COURSES AT THE MAIN WESTLEY BUSINESS SCHOOL RECEPTION", "hypothesis": "Most students at Westley Business School are older than the average college student.", "label": "e"} +{"uid": "id_467", "premise": "Westley Business School Preparation Courses for Students 80% of the students who take our courses are mature students who have not done any formal study for several years. Many of the courses at the Westley Business School require a good knowledge of various skills. If you feel you need some extra preparation before your course, look below and see if any of our preparation courses suit your needs. All courses take place in August, and for enrolled students all the courses listed below are free. Course 1 STATISTICS A grounding in statistics is a must for any prospective business student. This is a one week course (Mon Fri) consisting of one lecture every night. 
The tutor will ensure that by the end of the course, you will have had a thorough introduction to all the statistical skills that you will need to start your course at Westley Business School. Each lecture runs from 6pm to 9pm. Course 2 ESSAY WRITING This is a self-study pack containing guidance, practice and tests. At the end of the course (it should take about 10 hours of self-study) you will receive a 1 hour tutorial with the essay writing tutor who will go over your work with you. Course 3 BASIC MATHS This is a one-off lecture of 3 hours aimed at reviewing all the basic maths that you will vaguely remember from school! This course is run on a first come, first served basis and there are only 20 places (every Monday in August from 5.45pm 8.45pm) so dont be late. Course 4 COMPUTING This 2 week course (Mon Fri 6.30pm 8.30pm) will give students all the basic computer skills that they will need for their courses at Westley Business School. There are two courses running concurrently with only 10 PLACES in each so book early! NB UNLESS OTHERWISE STATED, YOU MUST BOOK IN ADVANCE FOR THESE COURSES AT THE MAIN WESTLEY BUSINESS SCHOOL RECEPTION", "hypothesis": "All taught courses are held in the Westley Business School main building.", "label": "n"} +{"uid": "id_468", "premise": "Westley Central Surgery Information Opening Hours Monday to Friday 8.30 am 6.00 pm Saturday 9.00 am 10.00 am (emergencies only) Surgeries Ten-minute appointments are given, although longer periods can be allocated on request. Morning surgery is between 8.30 am and 11.00 am, and afternoon surgery between 3.00 pm and 5.30 pm. These times may change during holiday periods and for staff training. We will always see you the same day for an urgent problem, although we cannot guarantee that this will be with the doctor of your choice. An urgent appointment is intended for matters that cannot wait until the next available routine appointment. Giving our staff an outline of the nature of the problem may help them organize the most appropriate response. We will often ask the doctor to ring you back to help decide the most appropriate way to deal with your problem. If you are unable to attend an appointment, please let us know so that we can offer the appointment to someone else. Results of Tests If you are asked to phone for results, please ring between 11.30 and 12.30. Please allow at least three working days for the results to be available. X-ray results take two weeks to arrive back at the surgery. Prescriptions Please allow at least two full days notice of your prescription requirements. With every prescription issued a printed sheet is given showing details of all your medicines. Please retain this. When you require a further prescription, please use this sheet as a tick list to request the medicines you require or obtain a request slip from reception. You can come in to order your prescription or post or fax your request. If you would like us to post your prescription to you, please include a stamped, self-addressed envelope. We do not accept telephone requests for repeat prescriptions as this can result in errors. Home Visits If you require a doctor to visit you at home, please ring the surgery before 10.00 am if possible. The doctors usually visit patients between 12.00 pm and 3.00 pm. New Patients To register with the Practice, please attend reception with your medical card if you have it, as well as the details of your previous doctor. 
You will be encouraged to attend a New Patients Health Check with one of our practice nurses. Emergency calls To speak to the doctor urgently you can ring the main surgery telephone number or ring the emergency mobile phone. For the mobile, please allow 25 seconds for connection. If the mobile phone is in use, or the doctor is in an area of poor reception, your call will be transferred to an answer phone. The emergency doctor will be alerted and will call you back. Practice Area Unfortunately we can only accept registration from patients who live within our practice area. If you move outside this area, you will be asked to register with another doctor. If you are in any doubt as to whether you are in our area, please speak to the reception staff. Charges There is a charge for some medical services that fall outside those provided by the NHS. These services include private sick notes, passport forms, holiday cancellation forms, insurance reports and employment medicals. Some travel vaccinations are also charged for and we charge for issuing a private prescription.", "hypothesis": "Ten minutes is the maximum available length for an appointment.", "label": "c"} +{"uid": "id_469", "premise": "Westley Central Surgery Information Opening Hours Monday to Friday 8.30 am 6.00 pm Saturday 9.00 am 10.00 am (emergencies only) Surgeries Ten-minute appointments are given, although longer periods can be allocated on request. Morning surgery is between 8.30 am and 11.00 am, and afternoon surgery between 3.00 pm and 5.30 pm. These times may change during holiday periods and for staff training. We will always see you the same day for an urgent problem, although we cannot guarantee that this will be with the doctor of your choice. An urgent appointment is intended for matters that cannot wait until the next available routine appointment. Giving our staff an outline of the nature of the problem may help them organize the most appropriate response. We will often ask the doctor to ring you back to help decide the most appropriate way to deal with your problem. If you are unable to attend an appointment, please let us know so that we can offer the appointment to someone else. Results of Tests If you are asked to phone for results, please ring between 11.30 and 12.30. Please allow at least three working days for the results to be available. X-ray results take two weeks to arrive back at the surgery. Prescriptions Please allow at least two full days notice of your prescription requirements. With every prescription issued a printed sheet is given showing details of all your medicines. Please retain this. When you require a further prescription, please use this sheet as a tick list to request the medicines you require or obtain a request slip from reception. You can come in to order your prescription or post or fax your request. If you would like us to post your prescription to you, please include a stamped, self-addressed envelope. We do not accept telephone requests for repeat prescriptions as this can result in errors. Home Visits If you require a doctor to visit you at home, please ring the surgery before 10.00 am if possible. The doctors usually visit patients between 12.00 pm and 3.00 pm. New Patients To register with the Practice, please attend reception with your medical card if you have it, as well as the details of your previous doctor. You will be encouraged to attend a New Patients Health Check with one of our practice nurses. 
Emergency calls To speak to the doctor urgently you can ring the main surgery telephone number or ring the emergency mobile phone. For the mobile, please allow 25 seconds for connection. If the mobile phone is in use, or the doctor is in an area of poor reception, your call will be transferred to an answer phone. The emergency doctor will be alerted and will call you back. Practice Area Unfortunately we can only accept registration from patients who live within our practice area. If you move outside this area, you will be asked to register with another doctor. If you are in any doubt as to whether you are in our area, please speak to the reception staff. Charges There is a charge for some medical services that fall outside those provided by the NHS. These services include private sick notes, passport forms, holiday cancellation forms, insurance reports and employment medicals. Some travel vaccinations are also charged for and we charge for issuing a private prescription.", "hypothesis": "You cannot order a repeat prescription over the phone.", "label": "e"} +{"uid": "id_470", "premise": "Westley Central Surgery Information Opening Hours Monday to Friday 8.30 am 6.00 pm Saturday 9.00 am 10.00 am (emergencies only) Surgeries Ten-minute appointments are given, although longer periods can be allocated on request. Morning surgery is between 8.30 am and 11.00 am, and afternoon surgery between 3.00 pm and 5.30 pm. These times may change during holiday periods and for staff training. We will always see you the same day for an urgent problem, although we cannot guarantee that this will be with the doctor of your choice. An urgent appointment is intended for matters that cannot wait until the next available routine appointment. Giving our staff an outline of the nature of the problem may help them organize the most appropriate response. We will often ask the doctor to ring you back to help decide the most appropriate way to deal with your problem. If you are unable to attend an appointment, please let us know so that we can offer the appointment to someone else. Results of Tests If you are asked to phone for results, please ring between 11.30 and 12.30. Please allow at least three working days for the results to be available. X-ray results take two weeks to arrive back at the surgery. Prescriptions Please allow at least two full days notice of your prescription requirements. With every prescription issued a printed sheet is given showing details of all your medicines. Please retain this. When you require a further prescription, please use this sheet as a tick list to request the medicines you require or obtain a request slip from reception. You can come in to order your prescription or post or fax your request. If you would like us to post your prescription to you, please include a stamped, self-addressed envelope. We do not accept telephone requests for repeat prescriptions as this can result in errors. Home Visits If you require a doctor to visit you at home, please ring the surgery before 10.00 am if possible. The doctors usually visit patients between 12.00 pm and 3.00 pm. New Patients To register with the Practice, please attend reception with your medical card if you have it, as well as the details of your previous doctor. You will be encouraged to attend a New Patients Health Check with one of our practice nurses. Emergency calls To speak to the doctor urgently you can ring the main surgery telephone number or ring the emergency mobile phone. 
For the mobile, please allow 25 seconds for connection. If the mobile phone is in use, or the doctor is in an area of poor reception, your call will be transferred to an answer phone. The emergency doctor will be alerted and will call you back. Practice Area Unfortunately we can only accept registration from patients who live within our practice area. If you move outside this area, you will be asked to register with another doctor. If you are in any doubt as to whether you are in our area, please speak to the reception staff. Charges There is a charge for some medical services that fall outside those provided by the NHS. These services include private sick notes, passport forms, holiday cancellation forms, insurance reports and employment medicals. Some travel vaccinations are also charged for and we charge for issuing a private prescription.", "hypothesis": "If you have had an x-ray, call the surgery no earlier than one week following the date of the x-ray for the result.", "label": "c"} +{"uid": "id_471", "premise": "Westley Central Surgery Information Opening Hours Monday to Friday 8.30 am 6.00 pm Saturday 9.00 am 10.00 am (emergencies only) Surgeries Ten-minute appointments are given, although longer periods can be allocated on request. Morning surgery is between 8.30 am and 11.00 am, and afternoon surgery between 3.00 pm and 5.30 pm. These times may change during holiday periods and for staff training. We will always see you the same day for an urgent problem, although we cannot guarantee that this will be with the doctor of your choice. An urgent appointment is intended for matters that cannot wait until the next available routine appointment. Giving our staff an outline of the nature of the problem may help them organize the most appropriate response. We will often ask the doctor to ring you back to help decide the most appropriate way to deal with your problem. If you are unable to attend an appointment, please let us know so that we can offer the appointment to someone else. Results of Tests If you are asked to phone for results, please ring between 11.30 and 12.30. Please allow at least three working days for the results to be available. X-ray results take two weeks to arrive back at the surgery. Prescriptions Please allow at least two full days notice of your prescription requirements. With every prescription issued a printed sheet is given showing details of all your medicines. Please retain this. When you require a further prescription, please use this sheet as a tick list to request the medicines you require or obtain a request slip from reception. You can come in to order your prescription or post or fax your request. If you would like us to post your prescription to you, please include a stamped, self-addressed envelope. We do not accept telephone requests for repeat prescriptions as this can result in errors. Home Visits If you require a doctor to visit you at home, please ring the surgery before 10.00 am if possible. The doctors usually visit patients between 12.00 pm and 3.00 pm. New Patients To register with the Practice, please attend reception with your medical card if you have it, as well as the details of your previous doctor. You will be encouraged to attend a New Patients Health Check with one of our practice nurses. Emergency calls To speak to the doctor urgently you can ring the main surgery telephone number or ring the emergency mobile phone. For the mobile, please allow 25 seconds for connection. 
If the mobile phone is in use, or the doctor is in an area of poor reception, your call will be transferred to an answer phone. The emergency doctor will be alerted and will call you back. Practice Area Unfortunately we can only accept registration from patients who live within our practice area. If you move outside this area, you will be asked to register with another doctor. If you are in any doubt as to whether you are in our area, please speak to the reception staff. Charges There is a charge for some medical services that fall outside those provided by the NHS. These services include private sick notes, passport forms, holiday cancellation forms, insurance reports and employment medicals. Some travel vaccinations are also charged for and we charge for issuing a private prescription.", "hypothesis": "One of the practice's four doctors will conduct a New Patients Health Check with any new patients to the practice.", "label": "c"} +{"uid": "id_472", "premise": "Whale Strandings When the last stranded whale of a group eventually dies, the story does not end there. A team of researchers begins to investigate, collecting skin samples for instance, recording anything that could help them answer the crucial question: why? Theories abound, some more convincing than others. In recent years, navy sonar has been accused of causing certain whales to strand. It is known that noise pollution from offshore industry, shipping and sonar can impair underwater communication, but can it really drive whales onto our beaches? In 1998, researchers at the Pelagos Cetacean Research Institute, a Greek non-profit scientific group, linked whale strandings with low-frequency sonar tests being carried out by the North Atlantic Treaty Organisation (NATO). They recorded the stranding of 12 Cuvier's beaked whales over 38.2 kilometres of coastline. NATO later admitted it had been testing new sonar technology in the same area at the time as the strandings had occurred. Mass whale strandings involve four or more animals. Typically they all wash ashore together, but in mass atypical strandings (such as the one in Greece), the whales don't strand as a group; they are scattered over a larger area. For humans, hearing a sudden loud noise might prove frightening, but it does not induce mass fatality. For whales, on the other hand, there is a theory on how sonar can kill. The noise can surprise the animal, causing it to swim too quickly to the surface. The result is decompression sickness, a hazard human divers know all too well. If a diver ascends too quickly from a high-pressure underwater environment to a lower-pressure one, gases dissolved in blood and tissue expand and form bubbles. The bubbles block the flow of blood to vital organs, and can ultimately lead to death. Plausible as this seems, it is still a theory and based on our more comprehensive knowledge of land-based animals. For this reason, some scientists are wary. Whale expert Karen Evans is one such scientist. Another is Rosemary Gales, a leading expert on whale strandings. She says sonar technology cannot always be blamed for mass strandings. It's a case-by-case situation. Whales have been stranding for a very long time pre-sonar. And when 80% of all Australian whale strandings occur around Tasmania, Gales and her team must continue in the search for answers. When animals beach next to each other at the same time, the most common cause has nothing to do with humans at all. They're highly social creatures, says Gales.
When they mass strand its complete panic and chaos. If one of the group strands and sounds the alarm, others will try to swim to its aid, and become stuck themselves. Activities such as sonar testing can hint at when a stranding may occur, but if conservationists are to reduce the number of strandings, or improve rescue operations, they need information on where strandings are likely to occur as well. With this in mind, Ralph James, physicist at the University of Western Australia in Perth, thinks he may have discovered why whales turn up only on some beaches. In 1986 he went to Augusta, Western Australia, where more than 100 false killer whales had beached. I found out from chatting to the locals that whales had been stranding there for decades. So I asked myself, what is it about this beach? From this question that James pondered over 20 years ago, grew the universitys Whale Stranding Analysis Project. Data has since revealed that all mass strandings around Australia occur on gently sloping sandy beaches, some with inclines of less than 0.5%. For whale species that depend on an echolocation system to navigate, this kind of beach spells disaster. Usually, as they swim, they make clicking noises, and the resulting sound waves are reflected in an echo and travel back to them. However, these just fade out on shallow beaches, so the whale doesnt hear an echo and it crashes onto the shore. But that is not all. Physics, it appears, can help with the when as well as the where. The ocean is full of bubbles. Larger ones rise quickly to the surface and disappear, whilst smaller ones called microbubbles can last for days. It is these that absorb whale clicks! Rough weather generates more bubbles than usual, James adds. So, during and after a storm, echolocating whales are essentially swimming blind. Last year was a bad one for strandings in Australia. Can we predict if this or any other year will be any better? Some scientists believe we can. They have found trends which could be used to forecast bad years for strandings in the future. In 2005, a survey by Klaus Vanselow and Klaus Ricklefs of sperm whale strandings in the North Sea even found a correlation between these and the sunspot cycle, and suggested that changes in the Earths magnetic field might be involved. But others are sceptical. Their study was interesting ... but the analyses they used were flawed on a number of levels, says Evans. In the same year, she co-authored a study on Australian strandings that uncovered a completely different trend. We analysed data from 1920 to 2002 ... and observed a clear periodicity in the number of whales stranded each year that coincides with a major climatic cycle. To put it more simply, she says, in the years when strong westerly and southerly winds bring cool water rich in nutrients closer to the Australia coast, there is an increase in the number of fish. The whales follow. So what causes mass strandings? Its probably many different components, says James. And he is probably right. But the point is we now know what many of those components are.", "hypothesis": "There is now agreement amongst scientists that changes in the Earths magnetic fields contribute to whale strandings.", "label": "c"} +{"uid": "id_473", "premise": "Whale Strandings When the last stranded whale of a group eventually dies, the story does not end there. A team of researchers begins to investigate, collecting skin samples for instance, recording anything that could help them answer the crucial question: why? 
Theories abound, some more convincing than others. In recent years, navy sonar has been accused of causing certain whales to strand. It is known that noise pollution from offshore industry, shipping and sonar can impair underwater communication, but can it really drive whales onto our beaches? In 1998, researchers at the Pelagos Cetacean Research Institute, a Greek non-profit scientific group, linked whale strandings with low- frequency sonar tests being carried out by the North Atlantic Treaty Organisation (NATO). They recorded the stranding of 12 Cuviers beaked whales over 38.2 kilometres of coastline. NATO later admitted it had been testing new sonar technology in the same area at the time as the strandings had occurred. Mass whale strandings involve four or more animals. Typically they all wash ashore together, but in mass atypical strandings (such as the one in Greece), the whales dont strand as a group; they are scattered over a larger area. For humans, hearing a sudden loud noise might prove frightening, but it does not induce mass fatality. For whales, on the other hand, there is a theory on how sonar can kill. The noise can surprise the animal, causing it to swim too quickly to the surface. The result is decompression sickness, a hazard human divers know all too well. If a diver ascends too quickly from a high-pressure underwater environment to a lower-pressure one, gases dissolved in blood and tissue expand and form bubbles. The bubbles block the flow of blood to vital organs, and can ultimately lead to death. Plausible as this seems, it is still a theory and based on our more comprehensive knowledge of land-based animals. For this reason, some scientists are wary. Whale expert Karen Evans is one such scientist. Another is Rosemary Gales, a leading expert on whale strandings. She says sonar technology cannot always be blamed for mass strandings. Its a case-by-case situation. Whales have been stranding for a very long time pre-sonar. And when 80% of all Australian whale strandings occur around Tasmania, Gales and her team must continue in the search for answers. When animals beach next to each other at the same time, the most common cause has nothing to do with humans at all. Theyre highly social creatures, says Gales. When they mass strand its complete panic and chaos. If one of the group strands and sounds the alarm, others will try to swim to its aid, and become stuck themselves. Activities such as sonar testing can hint at when a stranding may occur, but if conservationists are to reduce the number of strandings, or improve rescue operations, they need information on where strandings are likely to occur as well. With this in mind, Ralph James, physicist at the University of Western Australia in Perth, thinks he may have discovered why whales turn up only on some beaches. In 1986 he went to Augusta, Western Australia, where more than 100 false killer whales had beached. I found out from chatting to the locals that whales had been stranding there for decades. So I asked myself, what is it about this beach? From this question that James pondered over 20 years ago, grew the universitys Whale Stranding Analysis Project. Data has since revealed that all mass strandings around Australia occur on gently sloping sandy beaches, some with inclines of less than 0.5%. For whale species that depend on an echolocation system to navigate, this kind of beach spells disaster. 
Usually, as they swim, they make clicking noises, and the resulting sound waves are reflected in an echo and travel back to them. However, these just fade out on shallow beaches, so the whale doesnt hear an echo and it crashes onto the shore. But that is not all. Physics, it appears, can help with the when as well as the where. The ocean is full of bubbles. Larger ones rise quickly to the surface and disappear, whilst smaller ones called microbubbles can last for days. It is these that absorb whale clicks! Rough weather generates more bubbles than usual, James adds. So, during and after a storm, echolocating whales are essentially swimming blind. Last year was a bad one for strandings in Australia. Can we predict if this or any other year will be any better? Some scientists believe we can. They have found trends which could be used to forecast bad years for strandings in the future. In 2005, a survey by Klaus Vanselow and Klaus Ricklefs of sperm whale strandings in the North Sea even found a correlation between these and the sunspot cycle, and suggested that changes in the Earths magnetic field might be involved. But others are sceptical. Their study was interesting ... but the analyses they used were flawed on a number of levels, says Evans. In the same year, she co-authored a study on Australian strandings that uncovered a completely different trend. We analysed data from 1920 to 2002 ... and observed a clear periodicity in the number of whales stranded each year that coincides with a major climatic cycle. To put it more simply, she says, in the years when strong westerly and southerly winds bring cool water rich in nutrients closer to the Australia coast, there is an increase in the number of fish. The whales follow. So what causes mass strandings? Its probably many different components, says James. And he is probably right. But the point is we now know what many of those components are.", "hypothesis": "The whales stranded in Greece were found at different points along the coast.", "label": "e"} +{"uid": "id_474", "premise": "Whale Strandings When the last stranded whale of a group eventually dies, the story does not end there. A team of researchers begins to investigate, collecting skin samples for instance, recording anything that could help them answer the crucial question: why? Theories abound, some more convincing than others. In recent years, navy sonar has been accused of causing certain whales to strand. It is known that noise pollution from offshore industry, shipping and sonar can impair underwater communication, but can it really drive whales onto our beaches? In 1998, researchers at the Pelagos Cetacean Research Institute, a Greek non-profit scientific group, linked whale strandings with low- frequency sonar tests being carried out by the North Atlantic Treaty Organisation (NATO). They recorded the stranding of 12 Cuviers beaked whales over 38.2 kilometres of coastline. NATO later admitted it had been testing new sonar technology in the same area at the time as the strandings had occurred. Mass whale strandings involve four or more animals. Typically they all wash ashore together, but in mass atypical strandings (such as the one in Greece), the whales dont strand as a group; they are scattered over a larger area. For humans, hearing a sudden loud noise might prove frightening, but it does not induce mass fatality. For whales, on the other hand, there is a theory on how sonar can kill. The noise can surprise the animal, causing it to swim too quickly to the surface. 
The result is decompression sickness, a hazard human divers know all too well. If a diver ascends too quickly from a high-pressure underwater environment to a lower-pressure one, gases dissolved in blood and tissue expand and form bubbles. The bubbles block the flow of blood to vital organs, and can ultimately lead to death. Plausible as this seems, it is still a theory and based on our more comprehensive knowledge of land-based animals. For this reason, some scientists are wary. Whale expert Karen Evans is one such scientist. Another is Rosemary Gales, a leading expert on whale strandings. She says sonar technology cannot always be blamed for mass strandings. Its a case-by-case situation. Whales have been stranding for a very long time pre-sonar. And when 80% of all Australian whale strandings occur around Tasmania, Gales and her team must continue in the search for answers. When animals beach next to each other at the same time, the most common cause has nothing to do with humans at all. Theyre highly social creatures, says Gales. When they mass strand its complete panic and chaos. If one of the group strands and sounds the alarm, others will try to swim to its aid, and become stuck themselves. Activities such as sonar testing can hint at when a stranding may occur, but if conservationists are to reduce the number of strandings, or improve rescue operations, they need information on where strandings are likely to occur as well. With this in mind, Ralph James, physicist at the University of Western Australia in Perth, thinks he may have discovered why whales turn up only on some beaches. In 1986 he went to Augusta, Western Australia, where more than 100 false killer whales had beached. I found out from chatting to the locals that whales had been stranding there for decades. So I asked myself, what is it about this beach? From this question that James pondered over 20 years ago, grew the universitys Whale Stranding Analysis Project. Data has since revealed that all mass strandings around Australia occur on gently sloping sandy beaches, some with inclines of less than 0.5%. For whale species that depend on an echolocation system to navigate, this kind of beach spells disaster. Usually, as they swim, they make clicking noises, and the resulting sound waves are reflected in an echo and travel back to them. However, these just fade out on shallow beaches, so the whale doesnt hear an echo and it crashes onto the shore. But that is not all. Physics, it appears, can help with the when as well as the where. The ocean is full of bubbles. Larger ones rise quickly to the surface and disappear, whilst smaller ones called microbubbles can last for days. It is these that absorb whale clicks! Rough weather generates more bubbles than usual, James adds. So, during and after a storm, echolocating whales are essentially swimming blind. Last year was a bad one for strandings in Australia. Can we predict if this or any other year will be any better? Some scientists believe we can. They have found trends which could be used to forecast bad years for strandings in the future. In 2005, a survey by Klaus Vanselow and Klaus Ricklefs of sperm whale strandings in the North Sea even found a correlation between these and the sunspot cycle, and suggested that changes in the Earths magnetic field might be involved. But others are sceptical. Their study was interesting ... but the analyses they used were flawed on a number of levels, says Evans. 
In the same year, she co-authored a study on Australian strandings that uncovered a completely different trend. We analysed data from 1920 to 2002 ... and observed a clear periodicity in the number of whales stranded each year that coincides with a major climatic cycle. To put it more simply, she says, in the years when strong westerly and southerly winds bring cool water rich in nutrients closer to the Australia coast, there is an increase in the number of fish. The whales follow. So what causes mass strandings? Its probably many different components, says James. And he is probably right. But the point is we now know what many of those components are.", "hypothesis": "The aim of the research by the Pelagos Institute in 1998 was to prove that navy sonar was responsible for whale strandings.", "label": "n"} +{"uid": "id_475", "premise": "Whale Strandings When the last stranded whale of a group eventually dies, the story does not end there. A team of researchers begins to investigate, collecting skin samples for instance, recording anything that could help them answer the crucial question: why? Theories abound, some more convincing than others. In recent years, navy sonar has been accused of causing certain whales to strand. It is known that noise pollution from offshore industry, shipping and sonar can impair underwater communication, but can it really drive whales onto our beaches? In 1998, researchers at the Pelagos Cetacean Research Institute, a Greek non-profit scientific group, linked whale strandings with low- frequency sonar tests being carried out by the North Atlantic Treaty Organisation (NATO). They recorded the stranding of 12 Cuviers beaked whales over 38.2 kilometres of coastline. NATO later admitted it had been testing new sonar technology in the same area at the time as the strandings had occurred. Mass whale strandings involve four or more animals. Typically they all wash ashore together, but in mass atypical strandings (such as the one in Greece), the whales dont strand as a group; they are scattered over a larger area. For humans, hearing a sudden loud noise might prove frightening, but it does not induce mass fatality. For whales, on the other hand, there is a theory on how sonar can kill. The noise can surprise the animal, causing it to swim too quickly to the surface. The result is decompression sickness, a hazard human divers know all too well. If a diver ascends too quickly from a high-pressure underwater environment to a lower-pressure one, gases dissolved in blood and tissue expand and form bubbles. The bubbles block the flow of blood to vital organs, and can ultimately lead to death. Plausible as this seems, it is still a theory and based on our more comprehensive knowledge of land-based animals. For this reason, some scientists are wary. Whale expert Karen Evans is one such scientist. Another is Rosemary Gales, a leading expert on whale strandings. She says sonar technology cannot always be blamed for mass strandings. Its a case-by-case situation. Whales have been stranding for a very long time pre-sonar. And when 80% of all Australian whale strandings occur around Tasmania, Gales and her team must continue in the search for answers. When animals beach next to each other at the same time, the most common cause has nothing to do with humans at all. Theyre highly social creatures, says Gales. When they mass strand its complete panic and chaos. If one of the group strands and sounds the alarm, others will try to swim to its aid, and become stuck themselves. 
Activities such as sonar testing can hint at when a stranding may occur, but if conservationists are to reduce the number of strandings, or improve rescue operations, they need information on where strandings are likely to occur as well. With this in mind, Ralph James, physicist at the University of Western Australia in Perth, thinks he may have discovered why whales turn up only on some beaches. In 1986 he went to Augusta, Western Australia, where more than 100 false killer whales had beached. I found out from chatting to the locals that whales had been stranding there for decades. So I asked myself, what is it about this beach? From this question that James pondered over 20 years ago, grew the universitys Whale Stranding Analysis Project. Data has since revealed that all mass strandings around Australia occur on gently sloping sandy beaches, some with inclines of less than 0.5%. For whale species that depend on an echolocation system to navigate, this kind of beach spells disaster. Usually, as they swim, they make clicking noises, and the resulting sound waves are reflected in an echo and travel back to them. However, these just fade out on shallow beaches, so the whale doesnt hear an echo and it crashes onto the shore. But that is not all. Physics, it appears, can help with the when as well as the where. The ocean is full of bubbles. Larger ones rise quickly to the surface and disappear, whilst smaller ones called microbubbles can last for days. It is these that absorb whale clicks! Rough weather generates more bubbles than usual, James adds. So, during and after a storm, echolocating whales are essentially swimming blind. Last year was a bad one for strandings in Australia. Can we predict if this or any other year will be any better? Some scientists believe we can. They have found trends which could be used to forecast bad years for strandings in the future. In 2005, a survey by Klaus Vanselow and Klaus Ricklefs of sperm whale strandings in the North Sea even found a correlation between these and the sunspot cycle, and suggested that changes in the Earths magnetic field might be involved. But others are sceptical. Their study was interesting ... but the analyses they used were flawed on a number of levels, says Evans. In the same year, she co-authored a study on Australian strandings that uncovered a completely different trend. We analysed data from 1920 to 2002 ... and observed a clear periodicity in the number of whales stranded each year that coincides with a major climatic cycle. To put it more simply, she says, in the years when strong westerly and southerly winds bring cool water rich in nutrients closer to the Australia coast, there is an increase in the number of fish. The whales follow. So what causes mass strandings? Its probably many different components, says James. And he is probably right. But the point is we now know what many of those components are.", "hypothesis": "Rosemary Gales has questioned the research techniques used by the Greek scientists.", "label": "n"} +{"uid": "id_476", "premise": "Whale Strandings When the last stranded whale of a group eventually dies, the story does not end there. A team of researchers begins to investigate, collecting skin samples for instance, recording anything that could help them answer the crucial question: why? Theories abound, some more convincing than others. In recent years, navy sonar has been accused of causing certain whales to strand. 
It is known that noise pollution from offshore industry, shipping and sonar can impair underwater communication, but can it really drive whales onto our beaches? In 1998, researchers at the Pelagos Cetacean Research Institute, a Greek non-profit scientific group, linked whale strandings with low- frequency sonar tests being carried out by the North Atlantic Treaty Organisation (NATO). They recorded the stranding of 12 Cuviers beaked whales over 38.2 kilometres of coastline. NATO later admitted it had been testing new sonar technology in the same area at the time as the strandings had occurred. Mass whale strandings involve four or more animals. Typically they all wash ashore together, but in mass atypical strandings (such as the one in Greece), the whales dont strand as a group; they are scattered over a larger area. For humans, hearing a sudden loud noise might prove frightening, but it does not induce mass fatality. For whales, on the other hand, there is a theory on how sonar can kill. The noise can surprise the animal, causing it to swim too quickly to the surface. The result is decompression sickness, a hazard human divers know all too well. If a diver ascends too quickly from a high-pressure underwater environment to a lower-pressure one, gases dissolved in blood and tissue expand and form bubbles. The bubbles block the flow of blood to vital organs, and can ultimately lead to death. Plausible as this seems, it is still a theory and based on our more comprehensive knowledge of land-based animals. For this reason, some scientists are wary. Whale expert Karen Evans is one such scientist. Another is Rosemary Gales, a leading expert on whale strandings. She says sonar technology cannot always be blamed for mass strandings. Its a case-by-case situation. Whales have been stranding for a very long time pre-sonar. And when 80% of all Australian whale strandings occur around Tasmania, Gales and her team must continue in the search for answers. When animals beach next to each other at the same time, the most common cause has nothing to do with humans at all. Theyre highly social creatures, says Gales. When they mass strand its complete panic and chaos. If one of the group strands and sounds the alarm, others will try to swim to its aid, and become stuck themselves. Activities such as sonar testing can hint at when a stranding may occur, but if conservationists are to reduce the number of strandings, or improve rescue operations, they need information on where strandings are likely to occur as well. With this in mind, Ralph James, physicist at the University of Western Australia in Perth, thinks he may have discovered why whales turn up only on some beaches. In 1986 he went to Augusta, Western Australia, where more than 100 false killer whales had beached. I found out from chatting to the locals that whales had been stranding there for decades. So I asked myself, what is it about this beach? From this question that James pondered over 20 years ago, grew the universitys Whale Stranding Analysis Project. Data has since revealed that all mass strandings around Australia occur on gently sloping sandy beaches, some with inclines of less than 0.5%. For whale species that depend on an echolocation system to navigate, this kind of beach spells disaster. Usually, as they swim, they make clicking noises, and the resulting sound waves are reflected in an echo and travel back to them. However, these just fade out on shallow beaches, so the whale doesnt hear an echo and it crashes onto the shore. 
But that is not all. Physics, it appears, can help with the when as well as the where. The ocean is full of bubbles. Larger ones rise quickly to the surface and disappear, whilst smaller ones called microbubbles can last for days. It is these that absorb whale clicks! Rough weather generates more bubbles than usual, James adds. So, during and after a storm, echolocating whales are essentially swimming blind. Last year was a bad one for strandings in Australia. Can we predict if this or any other year will be any better? Some scientists believe we can. They have found trends which could be used to forecast bad years for strandings in the future. In 2005, a survey by Klaus Vanselow and Klaus Ricklefs of sperm whale strandings in the North Sea even found a correlation between these and the sunspot cycle, and suggested that changes in the Earths magnetic field might be involved. But others are sceptical. Their study was interesting ... but the analyses they used were flawed on a number of levels, says Evans. In the same year, she co-authored a study on Australian strandings that uncovered a completely different trend. We analysed data from 1920 to 2002 ... and observed a clear periodicity in the number of whales stranded each year that coincides with a major climatic cycle. To put it more simply, she says, in the years when strong westerly and southerly winds bring cool water rich in nutrients closer to the Australia coast, there is an increase in the number of fish. The whales follow. So what causes mass strandings? Its probably many different components, says James. And he is probably right. But the point is we now know what many of those components are.", "hypothesis": "According to Gales, whales are likely to try to help another whale in trouble.", "label": "e"} +{"uid": "id_477", "premise": "What are you laughing at? We like to think that laughing is the height of human sophistication. Our big brains let us see the humour in a strategically positioned pun, an unexpected plot twist or a clever piece of wordplay. But while joking and wit are uniquely human inventions, laughter certainly is not. Other creatures, including chimpanzees, gorillas, and even rats, chuckle. Obviously, they dont crack up at Homer Simpson or titter at the bosss dreadful jokes, but the fact that they laugh in the first place suggests that sniggers and chortles have been around for a lot longer than we have. It points the way to the origins of laughter, suggesting a much more practical purpose than you might think. There is no doubt that laughing typically involves groups of people. Laughter evolved as a signal to others it almost disappears when we are alone, says Robert Provine, a neuroscientist at the University of Maryland. Provine found that most laughter comes as a polite reaction to everyday remarks such as see you later, rather than anything particularly funny. And the way we laugh depends on the company were keeping. Men tend to laugh longer and harder when they are with other men, perhaps as a way of bonding. Women tend to laugh more and at a higher pitch when men are present, possibly indicating flirtation or even submission. To find the origins of laughter, Provine believes we need to look at the play. He points out that the masters of laughing are children, and nowhere is their talent more obvious than in the boisterous antics, and the original context plays, he says. Well-known primate watchers, including Dian Fossey and Jane Goodall, have long argued that chimps laugh while at play. 
The sound they produce is known as a panting laugh. It seems obvious when you watch their behaviour they even have the same ticklish spots as we do. But remove the context, and the parallel between human laughter and a chimps characteristic pant laugh is not so clear. When Provine played a tape of the pant laughs to 119 of his students, for example, only two guessed correctly what it was. These findings underline how chimp and human laughter vary. When we laugh the sound is usually produced by chopping up a single exhalation into a series of shorter with one sound produced on each inward and outward breath. The question is: does this pant laughter have the same source as our own laughter? New research lends weight to the idea that it does. The findings come from Elke Zimmerman, head of the Institute for Zoology in Germany, who compared the sounds made by babies and chimpanzees in response to tickling during the first year of their life. Using sound spectrographs to reveal the pitch and intensity of vocalizations, she discovered that chimp and human baby laughter follow broadly the same pattern. Zimmerman believes the closeness of baby laughter to chimp laughter supports the idea that laughter was around long before humans arrived on the scene. What started simply as a modification of breathing associated with enjoyable and playful interactions has acquired a symbolic meaning as an indicator of pleasure. Pinpointing when laughter developed is another matter. Humans and chimps share a common ancestor that lived perhaps 8 million years ago, but animals might have been laughing long before that. More distantly related primates, including gorillas, laugh, and anecdotal evidence suggests that other social mammals may do too. Scientists are currently testing such stories with a comparative analysis of just how common, laughter is, among animals. So far, though, the most compelling evidence for laughter beyond primates comes from research done by Jaak Panksepp from Bowling Green State University, Ohio, into the ultrasonic chirps produced by rats during play and in response to tickling. All this still doesnt answer the question of why we laugh at all. One idea is that if laughter and tickling originated as a way of sealing the relationship between mother and child. Another is that the reflex response to tickling is protective, alerting us to the presence of crawling creatures that might harm us or compelling us to defend the parts of our bodies that are most vulnerable in hand-to-hand combat. But the idea that has gained most popular in recent years is that laughter in response to tickling is a way for two individuals to signal and test their trust in one another. This hypothesis starts from the observation that although a little tickle can be enjoyable if it goes on too long it can be torture. By engaging in a bout of tickling, we put ourselves at the mercy of another individual, and laughing is a signal that our laughter is what makes it a reliable signal of trust according to Tom Flamson, a laughter researcher at the University of California, Los Angeles. Even in rats, laughter, tickle, play, and trust are linked. Rats chirp a lot when they play, says Flamson. These chirps can be aroused by tickling. And they get bonded to us as a result, which certainly seems like a show of trust. Well never know which animal laughed the first laugh, or why. But we can be sure it wasnt in response to a prehistoric joke. 
The funny thing is that while the origins of laughter are probably quite serious, we owe human laughter and our language-based humour to the same unique skill. While other animals pant, we alone can control our breath well enough to produce the sound of laughter. Without that control, there would also be no speech and no jokes to endure.", "hypothesis": "Primates lack sufficient breath control to be able to produce laughs the way humans do.", "label": "e"} +{"uid": "id_478", "premise": "What are you laughing at? We like to think that laughing is the height of human sophistication. Our big brains let us see the humour in a strategically positioned pun, an unexpected plot twist or a clever piece of wordplay. But while joking and wit are uniquely human inventions, laughter certainly is not. Other creatures, including chimpanzees, gorillas, and even rats, chuckle. Obviously, they don't crack up at Homer Simpson or titter at the boss's dreadful jokes, but the fact that they laugh in the first place suggests that sniggers and chortles have been around for a lot longer than we have. It points the way to the origins of laughter, suggesting a much more practical purpose than you might think. There is no doubt that laughing typically involves groups of people. Laughter evolved as a signal to others; it almost disappears when we are alone, says Robert Provine, a neuroscientist at the University of Maryland. Provine found that most laughter comes as a polite reaction to everyday remarks such as see you later, rather than anything particularly funny. And the way we laugh depends on the company we're keeping. Men tend to laugh longer and harder when they are with other men, perhaps as a way of bonding. Women tend to laugh more and at a higher pitch when men are present, possibly indicating flirtation or even submission. To find the origins of laughter, Provine believes we need to look at play. He points out that the masters of laughing are children, and nowhere is their talent more obvious than in the boisterous antics, and the original context is play, he says. Well-known primate watchers, including Dian Fossey and Jane Goodall, have long argued that chimps laugh while at play. The sound they produce is known as a panting laugh. It seems obvious when you watch their behaviour; they even have the same ticklish spots as we do. But remove the context, and the parallel between human laughter and a chimp's characteristic pant laugh is not so clear. When Provine played a tape of the pant laughs to 119 of his students, for example, only two guessed correctly what it was. These findings underline how chimp and human laughter vary. When we laugh the sound is usually produced by chopping up a single exhalation into a series of shorter sounds, rather than one sound produced on each inward and outward breath. The question is: does this pant laughter have the same source as our own laughter? New research lends weight to the idea that it does. The findings come from Elke Zimmerman, head of the Institute for Zoology in Germany, who compared the sounds made by babies and chimpanzees in response to tickling during the first year of their life. Using sound spectrographs to reveal the pitch and intensity of vocalizations, she discovered that chimp and human baby laughter follow broadly the same pattern. Zimmerman believes the closeness of baby laughter to chimp laughter supports the idea that laughter was around long before humans arrived on the scene.
What started simply as a modification of breathing associated with enjoyable and playful interactions has acquired a symbolic meaning as an indicator of pleasure. Pinpointing when laughter developed is another matter. Humans and chimps share a common ancestor that lived perhaps 8 million years ago, but animals might have been laughing long before that. More distantly related primates, including gorillas, laugh, and anecdotal evidence suggests that other social mammals may do too. Scientists are currently testing such stories with a comparative analysis of just how common, laughter is, among animals. So far, though, the most compelling evidence for laughter beyond primates comes from research done by Jaak Panksepp from Bowling Green State University, Ohio, into the ultrasonic chirps produced by rats during play and in response to tickling. All this still doesnt answer the question of why we laugh at all. One idea is that if laughter and tickling originated as a way of sealing the relationship between mother and child. Another is that the reflex response to tickling is protective, alerting us to the presence of crawling creatures that might harm us or compelling us to defend the parts of our bodies that are most vulnerable in hand-to-hand combat. But the idea that has gained most popular in recent years is that laughter in response to tickling is a way for two individuals to signal and test their trust in one another. This hypothesis starts from the observation that although a little tickle can be enjoyable if it goes on too long it can be torture. By engaging in a bout of tickling, we put ourselves at the mercy of another individual, and laughing is a signal that our laughter is what makes it a reliable signal of trust according to Tom Flamson, a laughter researcher at the University of California, Los Angeles. Even in rats, laughter, tickle, play, and trust are linked. Rats chirp a lot when they play, says Flamson. These chirps can be aroused by tickling. And they get bonded to us as a result, which certainly seems like a show of trust. Well never know which animal laughed the first laugh, or why. But we can be sure it wasnt in response to a prehistoric joke. The funny thing is that while the origins of laughter are probably quite serious, we owe human laughter and our language-based humour to the same unique skill. While other animals pant, we alone can control our breath well enough to produce the sound of laughter. Without that control, there would also be no speech and no jokes to endure.", "hypothesis": "Chimpanzees produce laughter in a wider range of situations than rats do", "label": "n"} +{"uid": "id_479", "premise": "What are you laughing at? We like to think that laughing is the height of human sophistication. Our big brains let us see the humour in a strategically positioned pun, an unexpected plot twist or a clever piece of wordplay. But while joking and wit are uniquely human inventions, laughter certainly is not. Other creatures, including chimpanzees, gorillas, and even rats, chuckle. Obviously, they dont crack up at Homer Simpson or titter at the bosss dreadful jokes, but the fact that they laugh in the first place suggests that sniggers and chortles have been around for a lot longer than we have. It points the way to the origins of laughter, suggesting a much more practical purpose than you might think. There is no doubt that laughing typically involves groups of people. 
Laughter evolved as a signal to others it almost disappears when we are alone, says Robert Provine, a neuroscientist at the University of Maryland. Provine found that most laughter comes as a polite reaction to everyday remarks such as see you later, rather than anything particularly funny. And the way we laugh depends on the company were keeping. Men tend to laugh longer and harder when they are with other men, perhaps as a way of bonding. Women tend to laugh more and at a higher pitch when men are present, possibly indicating flirtation or even submission. To find the origins of laughter, Provine believes we need to look at the play. He points out that the masters of laughing are children, and nowhere is their talent more obvious than in the boisterous antics, and the original context plays, he says. Well-known primate watchers, including Dian Fossey and Jane Goodall, have long argued that chimps laugh while at play. The sound they produce is known as a panting laugh. It seems obvious when you watch their behaviour they even have the same ticklish spots as we do. But remove the context, and the parallel between human laughter and a chimps characteristic pant laugh is not so clear. When Provine played a tape of the pant laughs to 119 of his students, for example, only two guessed correctly what it was. These findings underline how chimp and human laughter vary. When we laugh the sound is usually produced by chopping up a single exhalation into a series of shorter with one sound produced on each inward and outward breath. The question is: does this pant laughter have the same source as our own laughter? New research lends weight to the idea that it does. The findings come from Elke Zimmerman, head of the Institute for Zoology in Germany, who compared the sounds made by babies and chimpanzees in response to tickling during the first year of their life. Using sound spectrographs to reveal the pitch and intensity of vocalizations, she discovered that chimp and human baby laughter follow broadly the same pattern. Zimmerman believes the closeness of baby laughter to chimp laughter supports the idea that laughter was around long before humans arrived on the scene. What started simply as a modification of breathing associated with enjoyable and playful interactions has acquired a symbolic meaning as an indicator of pleasure. Pinpointing when laughter developed is another matter. Humans and chimps share a common ancestor that lived perhaps 8 million years ago, but animals might have been laughing long before that. More distantly related primates, including gorillas, laugh, and anecdotal evidence suggests that other social mammals may do too. Scientists are currently testing such stories with a comparative analysis of just how common, laughter is, among animals. So far, though, the most compelling evidence for laughter beyond primates comes from research done by Jaak Panksepp from Bowling Green State University, Ohio, into the ultrasonic chirps produced by rats during play and in response to tickling. All this still doesnt answer the question of why we laugh at all. One idea is that if laughter and tickling originated as a way of sealing the relationship between mother and child. Another is that the reflex response to tickling is protective, alerting us to the presence of crawling creatures that might harm us or compelling us to defend the parts of our bodies that are most vulnerable in hand-to-hand combat. 
But the idea that has gained the most popularity in recent years is that laughter in response to tickling is a way for two individuals to signal and test their trust in one another. This hypothesis starts from the observation that although a little tickle can be enjoyable, if it goes on too long it can be torture. By engaging in a bout of tickling, we put ourselves at the mercy of another individual, and our laughter is what makes it a reliable signal of trust, according to Tom Flamson, a laughter researcher at the University of California, Los Angeles. Even in rats, laughter, tickle, play, and trust are linked. Rats chirp a lot when they play, says Flamson. These chirps can be aroused by tickling. And they get bonded to us as a result, which certainly seems like a show of trust. We'll never know which animal laughed the first laugh, or why. But we can be sure it wasn't in response to a prehistoric joke. The funny thing is that while the origins of laughter are probably quite serious, we owe human laughter and our language-based humour to the same unique skill. While other animals pant, we alone can control our breath well enough to produce the sound of laughter. Without that control, there would also be no speech and no jokes to endure.", "hypothesis": "Both men and women laugh more when they are with members of the same sex.", "label": "n"} +{"uid": "id_480", "premise": "What determines whether a product will succeed or fail? In 1990, six out of ten new products lasted for less than three months in the marketplace. In 2007, two out of ten succeeded. These products are all promoted and great emphasis is placed on brand and logo. Still, most fail and manufacturers must go a bit further if they are to improve the prospects of their products' success. Your iPod (you almost certainly have one) does not have a logo on it but it is instantly recognizable from its shape and feel. What about your mobile phone? There is a good chance it is a Nokia, and when it rings you immediately recognize the tone, which is a part of the Nokia brand. An incredible 60 per cent of people recognize it.", "hypothesis": "Given that the world's population is around 8 billion people, the passage suggests that approaching 5 billion people will recognize the Nokia ring tone.", "label": "c"} +{"uid": "id_481", "premise": "What determines whether a product will succeed or fail? In 1990, six out of ten new products lasted for less than three months in the marketplace. In 2007, two out of ten succeeded. These products are all promoted and great emphasis is placed on brand and logo. Still, most fail and manufacturers must go a bit further if they are to improve the prospects of their products' success. Your iPod (you almost certainly have one) does not have a logo on it but it is instantly recognizable from its shape and feel. What about your mobile phone? There is a good chance it is a Nokia, and when it rings you immediately recognize the tone, which is a part of the Nokia brand. An incredible 60 per cent of people recognize it.", "hypothesis": "The author's strategy is to look at success stories.", "label": "e"} +{"uid": "id_482", "premise": "What determines whether a product will succeed or fail? In 1990, six out of ten new products lasted for less than three months in the marketplace. In 2007, two out of ten succeeded. These products are all promoted and great emphasis is placed on brand and logo.
Still, most fail and manufacturers must go a bit further if they are to improve the prospects of their products' success. Your iPod (you almost certainly have one) does not have a logo on it but it is instantly recognizable from its shape and feel. What about your mobile phone? There is a good chance it is a Nokia, and when it rings you immediately recognize the tone, which is a part of the Nokia brand. An incredible 60 per cent of people recognize it.", "hypothesis": "The passage is making the point that product success depends on more than a catchy brand name and a memorable logo.", "label": "e"} +{"uid": "id_483", "premise": "What do we mean by being talented or gifted? The most obvious way is to look at the work someone does and if they are capable of significant success, label them as talented. The purely quantitative route, the percentage definition, looks not at individuals, but at simple percentages, such as the top five per cent of the population, and labels them by definition as gifted. This definition has fallen from favour, eclipsed by the advent of IQ tests, favoured by luminaries such as Professor Hans Eysenck, where a series of written or verbal tests of general intelligence leads to a score of intelligence. The IQ test has been eclipsed in turn. Most people studying intelligence and creativity in the new millennium now prefer a broader definition, using a multifaceted approach where talents in many areas are recognised rather than purely concentrating on academic achievement. If we are therefore assuming that talented, creative or gifted individuals may need to be assessed across a range of abilities, does this mean intelligence can run in families as a genetic or inherited tendency? Mental dysfunction such as schizophrenia can, so is an efficient mental capacity passed on from parent to child? Animal experiments throw some light on this question, and on the whole area of whether it is genetics, the environment or a combination of the two that allows for intelligence and creative ability. Different strains of rats show great differences in intelligence or rat reasoning. If these are brought up in normal conditions and then run through a maze to reach a food goal, the bright strain make far fewer wrong turns than the dull ones. But if the environment is made dull and boring the number of errors becomes equal. Return the rats to an exciting maze and the discrepancy returns as before but is much smaller. In other words, a dull rat in a stimulating environment will almost do as well as a bright rat who is bored in a normal one. This principle applies to humans too: someone may be born with innate intelligence, but their environment probably has the final say over whether they become creative or even a genius. Evidence now exists that most young children, if given enough opportunities and encouragement, are able to achieve significant and sustainable levels of academic or sporting prowess. Bright or creative children are often physically very active at the same time, and so may receive more parental attention as a result almost by default in order to ensure their safety. They may also talk earlier, and this, in turn, breeds parental interest. This can sometimes cause problems with other siblings who may feel jealous even though they themselves may be bright. Their creative talents may be undervalued and so never come to fruition. Two themes seem to run through famously creative families as a result.
The first is that the parents were able to identify the talents of each child, and nurture and encourage these accordingly but in an even-handed manner. Individual differences were encouraged, and friendly sibling rivalry was not seen as a particular problem. If the father is, say, a famous actor, there is no undue pressure for his children to follow him onto the boards, but instead their chosen interests are encouraged. There need not even by any obvious talent in such a family since there always needs to be someone who sets the family career in motion, as in the case of the Sheen acting dynasty. Martin Sheen was the seventh of ten children born to a Spanish immigrant father and an Irish mother. Despite intense parental disapproval he turned his back on entrance exams to university and borrowed cash from a local priest to start a fledgling acting career. His acting successes in films such as Badlands and Apocalypse Now made him one of the most highly-regarded actors of the 1970s. Three sons Emilio Estevez, Ramon Estevez and Charlie Sheen have followed him into the profession as a consequence of being inspired by his motivation and enthusiasm. A stream seems to run through creative families. Such children are not necessarily smothered with love by their parents. They feel loved and wanted, and are secure in their home, but are often more surrounded by an atmosphere of work and where following a calling appears to be important. They may see from their parents that it takes time and dedication to be master of a craft, and so are in less of a hurry to achieve for themselves once they start to work. The generation of creativity is complex: it is a mixture of genetics, the environment, parental teaching and luck that determines how successful or talented family members are. This last point luck is often not mentioned where talent is concerned but plays an undoubted part. Mozart, considered by many to be the finest composer of all time, was lucky to be living in an age that encouraged the writing of music. He was brought up surrounded by it, his father was a musician who encouraged him to the point of giving up his job to promote his child genius, and he learnt musical composition with frightening speed the speed of a genius. Mozart himself simply wanted to create the finest music ever written but did not necessarily view himself as a genius he could write sublime music at will, and so often preferred to lead a hedonistic lifestyle that he found more exciting than writing music to order. Albert Einstein and Bill Gates are two more examples of people whose talents have blossomed by virtue of the times they were living in. Einstein was a solitary, somewhat slow child who had affection at home but whose phenomenal intelligence emerged without any obvious parental input. This may have been partly due to the fact that at the start of the 20th Century a lot of the Newtonian laws of physics were being questioned, leaving a fertile ground for ideas such as his to be developed. Bill Gates may have had the creative vision to develop Microsoft, but without the new computer age dawning at the same time he may never have achieved the position on the world stage he now occupies.", "hypothesis": "The importance of luck in the genius equation tends to be ignored.", "label": "e"} +{"uid": "id_484", "premise": "What do we mean by being talented or gifted? The most obvious way is to look at the work someone does and if they are capable of significant success, label them as talented. 
The purely quantitative route percentage definition looks not at individuals, but at simple percentages, such as the top five per cent of the population, and labels them by definition as gifted. This definition has fallen from favour, eclipsed by the advent of IQ tests, favoured by luminaries such as Professor Hans Eysenck, where a series of written or verbal tests of general intelligence leads to a score of intelligence. The IQ test has been eclipsed in turn. Most people studying intelligence and creativity in the new millennium now prefer a broader definition, using a multifaceted approach where talents in many areas are recognised rather than purely concentrating on academic achievement. If we are therefore assuming that talented, creative or gifted individuals may need to be assessed across a range of abilities, does this mean intelligence can run in families as a genetic or inherited tendency? Mental dysfunction such as schizophrenia can, so is an efficient mental capacity passed on from parent to child? Animal experiments throw some light on this question, and on the whole area of whether it is genetics, the environment or a combination of the two that allows for intelligence and creative ability. Different strains of rats show great differences in intelligence or rat reasoning. If these are brought up in normal conditions and then through a maze to reach a food goal, the bright strain make far fewer wrong turns that the dull ones. But if the environment is made dull and boring the number of errors becomes equal. Return the rats to an exciting maze and the discrepancy returns as before but is much smaller. In other words, a dull rat in a stimulating environment will almost do as well as a bright rat who is bored in a normal one. This principle applies to humans too someone may be born with innate intelligence, but their environment probably has the final say over whether they become creative or even a genius. Evidence now exists that most young children, if given enough opportunities and encouragement, are able to achieve significant and sustainable levels of academic or sporting prowess. Bright or creative children are often physically very active at the same time, and so may receive more parental attention as a result almost by default in order to ensure their safety. They may also talk earlier, and this, in turn, breeds parental interest. This can sometimes cause problems with other siblings who may feel jealous even though they themselves may be bright. Their creative talents may be undervalued and so never come to fruition. Two themes seem to run through famously creative families as a result. The first is that the parents were able to identify the talents of each child, and nurture and encourage these accordingly but in an even-handed manner. Individual differences were encouraged, and friendly sibling rivalry was not seen as a particular problem. If the father is, say, a famous actor, there is no undue pressure for his children to follow him onto the boards, but instead their chosen interests are encouraged. There need not even by any obvious talent in such a family since there always needs to be someone who sets the family career in motion, as in the case of the Sheen acting dynasty. Martin Sheen was the seventh of ten children born to a Spanish immigrant father and an Irish mother. Despite intense parental disapproval he turned his back on entrance exams to university and borrowed cash from a local priest to start a fledgling acting career. 
His acting successes in films such as Badlands and Apocalypse Now made him one of the most highly-regarded actors of the 1970s. Three sons Emilio Estevez, Ramon Estevez and Charlie Sheen have followed him into the profession as a consequence of being inspired by his motivation and enthusiasm. A stream seems to run through creative families. Such children are not necessarily smothered with love by their parents. They feel loved and wanted, and are secure in their home, but are often more surrounded by an atmosphere of work and where following a calling appears to be important. They may see from their parents that it takes time and dedication to be master of a craft, and so are in less of a hurry to achieve for themselves once they start to work. The generation of creativity is complex: it is a mixture of genetics, the environment, parental teaching and luck that determines how successful or talented family members are. This last point luck is often not mentioned where talent is concerned but plays an undoubted part. Mozart, considered by many to be the finest composer of all time, was lucky to be living in an age that encouraged the writing of music. He was brought up surrounded by it, his father was a musician who encouraged him to the point of giving up his job to promote his child genius, and he learnt musical composition with frightening speed the speed of a genius. Mozart himself simply wanted to create the finest music ever written but did not necessarily view himself as a genius he could write sublime music at will, and so often preferred to lead a hedonistic lifestyle that he found more exciting than writing music to order. Albert Einstein and Bill Gates are two more examples of people whose talents have blossomed by virtue of the times they were living in. Einstein was a solitary, somewhat slow child who had affection at home but whose phenomenal intelligence emerged without any obvious parental input. This may have been partly due to the fact that at the start of the 20th Century a lot of the Newtonian laws of physics were being questioned, leaving a fertile ground for ideas such as his to be developed. Bill Gates may have had the creative vision to develop Microsoft, but without the new computer age dawning at the same time he may never have achieved the position on the world stage he now occupies.", "hypothesis": "Einstein and Gates would have achieved success in any era.", "label": "c"} +{"uid": "id_485", "premise": "What do we mean by being talented or gifted? The most obvious way is to look at the work someone does and if they are capable of significant success, label them as talented. The purely quantitative route percentage definition looks not at individuals, but at simple percentages, such as the top five per cent of the population, and labels them by definition as gifted. This definition has fallen from favour, eclipsed by the advent of IQ tests, favoured by luminaries such as Professor Hans Eysenck, where a series of written or verbal tests of general intelligence leads to a score of intelligence. The IQ test has been eclipsed in turn. Most people studying intelligence and creativity in the new millennium now prefer a broader definition, using a multifaceted approach where talents in many areas are recognised rather than purely concentrating on academic achievement. 
If we are therefore assuming that talented, creative or gifted individuals may need to be assessed across a range of abilities, does this mean intelligence can run in families as a genetic or inherited tendency? Mental dysfunction such as schizophrenia can, so is an efficient mental capacity passed on from parent to child? Animal experiments throw some light on this question, and on the whole area of whether it is genetics, the environment or a combination of the two that allows for intelligence and creative ability. Different strains of rats show great differences in intelligence or rat reasoning. If these are brought up in normal conditions and then through a maze to reach a food goal, the bright strain make far fewer wrong turns that the dull ones. But if the environment is made dull and boring the number of errors becomes equal. Return the rats to an exciting maze and the discrepancy returns as before but is much smaller. In other words, a dull rat in a stimulating environment will almost do as well as a bright rat who is bored in a normal one. This principle applies to humans too someone may be born with innate intelligence, but their environment probably has the final say over whether they become creative or even a genius. Evidence now exists that most young children, if given enough opportunities and encouragement, are able to achieve significant and sustainable levels of academic or sporting prowess. Bright or creative children are often physically very active at the same time, and so may receive more parental attention as a result almost by default in order to ensure their safety. They may also talk earlier, and this, in turn, breeds parental interest. This can sometimes cause problems with other siblings who may feel jealous even though they themselves may be bright. Their creative talents may be undervalued and so never come to fruition. Two themes seem to run through famously creative families as a result. The first is that the parents were able to identify the talents of each child, and nurture and encourage these accordingly but in an even-handed manner. Individual differences were encouraged, and friendly sibling rivalry was not seen as a particular problem. If the father is, say, a famous actor, there is no undue pressure for his children to follow him onto the boards, but instead their chosen interests are encouraged. There need not even by any obvious talent in such a family since there always needs to be someone who sets the family career in motion, as in the case of the Sheen acting dynasty. Martin Sheen was the seventh of ten children born to a Spanish immigrant father and an Irish mother. Despite intense parental disapproval he turned his back on entrance exams to university and borrowed cash from a local priest to start a fledgling acting career. His acting successes in films such as Badlands and Apocalypse Now made him one of the most highly-regarded actors of the 1970s. Three sons Emilio Estevez, Ramon Estevez and Charlie Sheen have followed him into the profession as a consequence of being inspired by his motivation and enthusiasm. A stream seems to run through creative families. Such children are not necessarily smothered with love by their parents. They feel loved and wanted, and are secure in their home, but are often more surrounded by an atmosphere of work and where following a calling appears to be important. 
They may see from their parents that it takes time and dedication to be master of a craft, and so are in less of a hurry to achieve for themselves once they start to work. The generation of creativity is complex: it is a mixture of genetics, the environment, parental teaching and luck that determines how successful or talented family members are. This last point luck is often not mentioned where talent is concerned but plays an undoubted part. Mozart, considered by many to be the finest composer of all time, was lucky to be living in an age that encouraged the writing of music. He was brought up surrounded by it, his father was a musician who encouraged him to the point of giving up his job to promote his child genius, and he learnt musical composition with frightening speed the speed of a genius. Mozart himself simply wanted to create the finest music ever written but did not necessarily view himself as a genius he could write sublime music at will, and so often preferred to lead a hedonistic lifestyle that he found more exciting than writing music to order. Albert Einstein and Bill Gates are two more examples of people whose talents have blossomed by virtue of the times they were living in. Einstein was a solitary, somewhat slow child who had affection at home but whose phenomenal intelligence emerged without any obvious parental input. This may have been partly due to the fact that at the start of the 20th Century a lot of the Newtonian laws of physics were being questioned, leaving a fertile ground for ideas such as his to be developed. Bill Gates may have had the creative vision to develop Microsoft, but without the new computer age dawning at the same time he may never have achieved the position on the world stage he now occupies.", "hypothesis": "Intelligence tests have now been proved to be unreliable.", "label": "n"} +{"uid": "id_486", "premise": "What do we mean by being talented or gifted? The most obvious way is to look at the work someone does and if they are capable of significant success, label them as talented. The purely quantitative route percentage definition looks not at individuals, but at simple percentages, such as the top five per cent of the population, and labels them by definition as gifted. This definition has fallen from favour, eclipsed by the advent of IQ tests, favoured by luminaries such as Professor Hans Eysenck, where a series of written or verbal tests of general intelligence leads to a score of intelligence. The IQ test has been eclipsed in turn. Most people studying intelligence and creativity in the new millennium now prefer a broader definition, using a multifaceted approach where talents in many areas are recognised rather than purely concentrating on academic achievement. If we are therefore assuming that talented, creative or gifted individuals may need to be assessed across a range of abilities, does this mean intelligence can run in families as a genetic or inherited tendency? Mental dysfunction such as schizophrenia can, so is an efficient mental capacity passed on from parent to child? Animal experiments throw some light on this question, and on the whole area of whether it is genetics, the environment or a combination of the two that allows for intelligence and creative ability. Different strains of rats show great differences in intelligence or rat reasoning. If these are brought up in normal conditions and then through a maze to reach a food goal, the bright strain make far fewer wrong turns that the dull ones. 
But if the environment is made dull and boring the number of errors becomes equal. Return the rats to an exciting maze and the discrepancy returns as before but is much smaller. In other words, a dull rat in a stimulating environment will almost do as well as a bright rat who is bored in a normal one. This principle applies to humans too someone may be born with innate intelligence, but their environment probably has the final say over whether they become creative or even a genius. Evidence now exists that most young children, if given enough opportunities and encouragement, are able to achieve significant and sustainable levels of academic or sporting prowess. Bright or creative children are often physically very active at the same time, and so may receive more parental attention as a result almost by default in order to ensure their safety. They may also talk earlier, and this, in turn, breeds parental interest. This can sometimes cause problems with other siblings who may feel jealous even though they themselves may be bright. Their creative talents may be undervalued and so never come to fruition. Two themes seem to run through famously creative families as a result. The first is that the parents were able to identify the talents of each child, and nurture and encourage these accordingly but in an even-handed manner. Individual differences were encouraged, and friendly sibling rivalry was not seen as a particular problem. If the father is, say, a famous actor, there is no undue pressure for his children to follow him onto the boards, but instead their chosen interests are encouraged. There need not even by any obvious talent in such a family since there always needs to be someone who sets the family career in motion, as in the case of the Sheen acting dynasty. Martin Sheen was the seventh of ten children born to a Spanish immigrant father and an Irish mother. Despite intense parental disapproval he turned his back on entrance exams to university and borrowed cash from a local priest to start a fledgling acting career. His acting successes in films such as Badlands and Apocalypse Now made him one of the most highly-regarded actors of the 1970s. Three sons Emilio Estevez, Ramon Estevez and Charlie Sheen have followed him into the profession as a consequence of being inspired by his motivation and enthusiasm. A stream seems to run through creative families. Such children are not necessarily smothered with love by their parents. They feel loved and wanted, and are secure in their home, but are often more surrounded by an atmosphere of work and where following a calling appears to be important. They may see from their parents that it takes time and dedication to be master of a craft, and so are in less of a hurry to achieve for themselves once they start to work. The generation of creativity is complex: it is a mixture of genetics, the environment, parental teaching and luck that determines how successful or talented family members are. This last point luck is often not mentioned where talent is concerned but plays an undoubted part. Mozart, considered by many to be the finest composer of all time, was lucky to be living in an age that encouraged the writing of music. He was brought up surrounded by it, his father was a musician who encouraged him to the point of giving up his job to promote his child genius, and he learnt musical composition with frightening speed the speed of a genius. 
Mozart himself simply wanted to create the finest music ever written but did not necessarily view himself as a genius he could write sublime music at will, and so often preferred to lead a hedonistic lifestyle that he found more exciting than writing music to order. Albert Einstein and Bill Gates are two more examples of people whose talents have blossomed by virtue of the times they were living in. Einstein was a solitary, somewhat slow child who had affection at home but whose phenomenal intelligence emerged without any obvious parental input. This may have been partly due to the fact that at the start of the 20th Century a lot of the Newtonian laws of physics were being questioned, leaving a fertile ground for ideas such as his to be developed. Bill Gates may have had the creative vision to develop Microsoft, but without the new computer age dawning at the same time he may never have achieved the position on the world stage he now occupies.", "hypothesis": "The brother or sister of a gifted older child may fail to fulfil their own potential.", "label": "e"} +{"uid": "id_487", "premise": "What do we mean by being talented or gifted? The most obvious way is to look at the work someone does and if they are capable of significant success, label them as talented. The purely quantitative route percentage definition looks not at individuals, but at simple percentages, such as the top five per cent of the population, and labels them by definition as gifted. This definition has fallen from favour, eclipsed by the advent of IQ tests, favoured by luminaries such as Professor Hans Eysenck, where a series of written or verbal tests of general intelligence leads to a score of intelligence. The IQ test has been eclipsed in turn. Most people studying intelligence and creativity in the new millennium now prefer a broader definition, using a multifaceted approach where talents in many areas are recognised rather than purely concentrating on academic achievement. If we are therefore assuming that talented, creative or gifted individuals may need to be assessed across a range of abilities, does this mean intelligence can run in families as a genetic or inherited tendency? Mental dysfunction such as schizophrenia can, so is an efficient mental capacity passed on from parent to child? Animal experiments throw some light on this question, and on the whole area of whether it is genetics, the environment or a combination of the two that allows for intelligence and creative ability. Different strains of rats show great differences in intelligence or rat reasoning. If these are brought up in normal conditions and then through a maze to reach a food goal, the bright strain make far fewer wrong turns that the dull ones. But if the environment is made dull and boring the number of errors becomes equal. Return the rats to an exciting maze and the discrepancy returns as before but is much smaller. In other words, a dull rat in a stimulating environment will almost do as well as a bright rat who is bored in a normal one. This principle applies to humans too someone may be born with innate intelligence, but their environment probably has the final say over whether they become creative or even a genius. Evidence now exists that most young children, if given enough opportunities and encouragement, are able to achieve significant and sustainable levels of academic or sporting prowess. 
Bright or creative children are often physically very active at the same time, and so may receive more parental attention as a result almost by default in order to ensure their safety. They may also talk earlier, and this, in turn, breeds parental interest. This can sometimes cause problems with other siblings who may feel jealous even though they themselves may be bright. Their creative talents may be undervalued and so never come to fruition. Two themes seem to run through famously creative families as a result. The first is that the parents were able to identify the talents of each child, and nurture and encourage these accordingly but in an even-handed manner. Individual differences were encouraged, and friendly sibling rivalry was not seen as a particular problem. If the father is, say, a famous actor, there is no undue pressure for his children to follow him onto the boards, but instead their chosen interests are encouraged. There need not even by any obvious talent in such a family since there always needs to be someone who sets the family career in motion, as in the case of the Sheen acting dynasty. Martin Sheen was the seventh of ten children born to a Spanish immigrant father and an Irish mother. Despite intense parental disapproval he turned his back on entrance exams to university and borrowed cash from a local priest to start a fledgling acting career. His acting successes in films such as Badlands and Apocalypse Now made him one of the most highly-regarded actors of the 1970s. Three sons Emilio Estevez, Ramon Estevez and Charlie Sheen have followed him into the profession as a consequence of being inspired by his motivation and enthusiasm. A stream seems to run through creative families. Such children are not necessarily smothered with love by their parents. They feel loved and wanted, and are secure in their home, but are often more surrounded by an atmosphere of work and where following a calling appears to be important. They may see from their parents that it takes time and dedication to be master of a craft, and so are in less of a hurry to achieve for themselves once they start to work. The generation of creativity is complex: it is a mixture of genetics, the environment, parental teaching and luck that determines how successful or talented family members are. This last point luck is often not mentioned where talent is concerned but plays an undoubted part. Mozart, considered by many to be the finest composer of all time, was lucky to be living in an age that encouraged the writing of music. He was brought up surrounded by it, his father was a musician who encouraged him to the point of giving up his job to promote his child genius, and he learnt musical composition with frightening speed the speed of a genius. Mozart himself simply wanted to create the finest music ever written but did not necessarily view himself as a genius he could write sublime music at will, and so often preferred to lead a hedonistic lifestyle that he found more exciting than writing music to order. Albert Einstein and Bill Gates are two more examples of people whose talents have blossomed by virtue of the times they were living in. Einstein was a solitary, somewhat slow child who had affection at home but whose phenomenal intelligence emerged without any obvious parental input. This may have been partly due to the fact that at the start of the 20th Century a lot of the Newtonian laws of physics were being questioned, leaving a fertile ground for ideas such as his to be developed. 
Bill Gates may have had the creative vision to develop Microsoft, but without the new computer age dawning at the same time he may never have achieved the position on the world stage he now occupies.", "hypothesis": "Mozart was acutely aware of his own remarkable talent.", "label": "c"} +{"uid": "id_488", "premise": "What is Meaning The end, product of education, yours and mine and everybodys, is the total pattern of reactions and possible reactions we have inside ourselves. If you did not have within you at this moment the pattern of reactions that we call the ability to read. you would see here only meaningless black marks on paper. Because of the trained patterns of response, you are (or are not) stirred to patriotism by martial music, your feelings of reverence are aroused by symbols of your religion, you listen more respectfully to the health advice of someone who has MD after his name than to that of someone who hasnt. What I call here a pattern of reactions, then, is the sum total of the ways we act in response to events, to words, and to symbols. Our reaction patterns or our semantic habits, are the internal and most important residue of whatever years of education or miseducation we may have received from our parents conduct toward us in childhood as well as their teachings, from the formal education we may have had, from all the lectures we have listened to, from the radio programs and the movies and television shows we have experienced, from all the books and newspapers and comic strips we have read, from the conversations we have had with friends and associates, and from all our experiences. If, as the result of all these influences that make us what we are, our semantic habits are reasonably similar to those of most people around us, we are regarded as normal, or perhaps dull. If our semantic habits are noticeably different from those of others, we are regarded as individualistic or original. or, if the differences are disapproved of or viewed with alarm, as crazy. Semantics is sometimes defined in dictionaries as the science of the meaning of words which would not be a bad definition if people didnt assume that the search for the meanings of words begins and ends with looking them up in a dictionary. If one stops to think for a moment, it is clear that to define a word, as a dictionary does, is simply to explain the word with more words. To be thorough about defining, we should next have to define the words used in the definition, then define the words used in defining the words used in the definition and so on. Defining words with more words, in short, gets us at once into what mathematicians call an infinite regress. Alternatively, it can get us into the kind of run-around we sometimes encounter when we look up impertinence and find it defined as impudence, so we look up impudence and find it defined as impertinence. Yetand here we come to another common reaction patternpeople often act as if words can be explained fully with more words. To a person who asked for a definition of jazz, Louis Armstrong is said to have replied, Man. when you got to ask what it is, youll never get to know, proving himself to be an intuitive semanticist as well as a great trumpet player. Semantics, then, does not deal with the meaning of words as that expression is commonly understood. P. W. Bridgman, the Nobel Prize winner and physicist, once wrote, The true meaning of a term is to be found by observing what a man does with it, not by what he says about it. 
He made an enormous contribution to science by showing that the meaning of a scientific term lies in the operations, the things done, that establish its validity, rather than in verbal definitions. Here is a simple, everyday kind of example of operational definition. If you say, This table measures six feet in length, you could prove it by taking a foot rule, performing the operation of laying it end to end while counting, One... two... three... four... But if you sayand revolutionists have started uprisings with just this statement Man is born free, but everywhere he is in chains! what operations could you perform to demonstrate its accuracy or inaccuracy? But let us carry this suggestion of operationalism outside the physical sciences where Bridgman applied it, and observe what operations people perform as the result of both the language they use and the language other people use in communicating to them. Here is a personnel manager studying an application blank. He comes to the words Education: Harvard University, and drops the application blank in the wastebasket (thats the operation) because, as he would say if you asked him, I dont like Harvard men. This is an instance of meaning at workbut it is not a meaning that can be found in dictionaries. If I seem to be taking a long time to explain what semantics is about, it is because I am trying, in the course of explanation, to introduce the reader to a certain way of looking at human behavior. I say human responses because, so far as we know, human beings are the only creatures that have, over and above that biological equipment which we have in common with other creatures, the additional capacity for manufacturing symbols and systems of symbols. When we react to a flag, we are not reacting simply to a piece of cloth, but to the meaning with which it has been symbolically endowed. When we react to a word, we are not reacting to a set of sounds, but to the meaning with which that set of sounds has been symbolically endowed. A basic idea in general semantics, therefore, is that the meaning of words (or other symbols) is not in the words, but in our own semantic reactions. If I were to tell a shockingly obscene story in Arabic or Hindustani or Swahili before an audience that understood only English, no one would blush or be angry; the story would be neither shocking nor obscene-induced, it would not even be a story. Likewise, the value of a dollar bill is not in the bill, but in our social agreement to accept it as a symbol of value. If that agreement were to break down through the collapse of our government, the dollar bill would become only a scrap of paper. We do not understand a dollar bill by staring at it long and hard. We understand it by observing how people act with respect to it. We understand it by understanding the social mechanisms and the loyalties that keep it meaningful. Semantics is therefore a social study, basic to all other social studies.", "hypothesis": "Some statements are incapable of being proved or disproved.", "label": "e"} +{"uid": "id_489", "premise": "What is Meaning The end, product of education, yours and mine and everybodys, is the total pattern of reactions and possible reactions we have inside ourselves. If you did not have within you at this moment the pattern of reactions that we call the ability to read. you would see here only meaningless black marks on paper. 
Because of the trained patterns of response, you are (or are not) stirred to patriotism by martial music, your feelings of reverence are aroused by symbols of your religion, you listen more respectfully to the health advice of someone who has MD after his name than to that of someone who hasnt. What I call here a pattern of reactions, then, is the sum total of the ways we act in response to events, to words, and to symbols. Our reaction patterns or our semantic habits, are the internal and most important residue of whatever years of education or miseducation we may have received from our parents conduct toward us in childhood as well as their teachings, from the formal education we may have had, from all the lectures we have listened to, from the radio programs and the movies and television shows we have experienced, from all the books and newspapers and comic strips we have read, from the conversations we have had with friends and associates, and from all our experiences. If, as the result of all these influences that make us what we are, our semantic habits are reasonably similar to those of most people around us, we are regarded as normal, or perhaps dull. If our semantic habits are noticeably different from those of others, we are regarded as individualistic or original. or, if the differences are disapproved of or viewed with alarm, as crazy. Semantics is sometimes defined in dictionaries as the science of the meaning of words which would not be a bad definition if people didnt assume that the search for the meanings of words begins and ends with looking them up in a dictionary. If one stops to think for a moment, it is clear that to define a word, as a dictionary does, is simply to explain the word with more words. To be thorough about defining, we should next have to define the words used in the definition, then define the words used in defining the words used in the definition and so on. Defining words with more words, in short, gets us at once into what mathematicians call an infinite regress. Alternatively, it can get us into the kind of run-around we sometimes encounter when we look up impertinence and find it defined as impudence, so we look up impudence and find it defined as impertinence. Yetand here we come to another common reaction patternpeople often act as if words can be explained fully with more words. To a person who asked for a definition of jazz, Louis Armstrong is said to have replied, Man. when you got to ask what it is, youll never get to know, proving himself to be an intuitive semanticist as well as a great trumpet player. Semantics, then, does not deal with the meaning of words as that expression is commonly understood. P. W. Bridgman, the Nobel Prize winner and physicist, once wrote, The true meaning of a term is to be found by observing what a man does with it, not by what he says about it. He made an enormous contribution to science by showing that the meaning of a scientific term lies in the operations, the things done, that establish its validity, rather than in verbal definitions. Here is a simple, everyday kind of example of operational definition. If you say, This table measures six feet in length, you could prove it by taking a foot rule, performing the operation of laying it end to end while counting, One... two... three... four... But if you sayand revolutionists have started uprisings with just this statement Man is born free, but everywhere he is in chains! what operations could you perform to demonstrate its accuracy or inaccuracy? 
But let us carry this suggestion of operationalism outside the physical sciences where Bridgman applied it, and observe what operations people perform as the result of both the language they use and the language other people use in communicating to them. Here is a personnel manager studying an application blank. He comes to the words Education: Harvard University, and drops the application blank in the wastebasket (thats the operation) because, as he would say if you asked him, I dont like Harvard men. This is an instance of meaning at workbut it is not a meaning that can be found in dictionaries. If I seem to be taking a long time to explain what semantics is about, it is because I am trying, in the course of explanation, to introduce the reader to a certain way of looking at human behavior. I say human responses because, so far as we know, human beings are the only creatures that have, over and above that biological equipment which we have in common with other creatures, the additional capacity for manufacturing symbols and systems of symbols. When we react to a flag, we are not reacting simply to a piece of cloth, but to the meaning with which it has been symbolically endowed. When we react to a word, we are not reacting to a set of sounds, but to the meaning with which that set of sounds has been symbolically endowed. A basic idea in general semantics, therefore, is that the meaning of words (or other symbols) is not in the words, but in our own semantic reactions. If I were to tell a shockingly obscene story in Arabic or Hindustani or Swahili before an audience that understood only English, no one would blush or be angry; the story would be neither shocking nor obscene-induced, it would not even be a story. Likewise, the value of a dollar bill is not in the bill, but in our social agreement to accept it as a symbol of value. If that agreement were to break down through the collapse of our government, the dollar bill would become only a scrap of paper. We do not understand a dollar bill by staring at it long and hard. We understand it by observing how people act with respect to it. We understand it by understanding the social mechanisms and the loyalties that keep it meaningful. Semantics is therefore a social study, basic to all other social studies.", "hypothesis": "Flags and words are eliciting responses of the same reason.", "label": "e"} +{"uid": "id_490", "premise": "What is Meaning The end, product of education, yours and mine and everybodys, is the total pattern of reactions and possible reactions we have inside ourselves. If you did not have within you at this moment the pattern of reactions that we call the ability to read. you would see here only meaningless black marks on paper. Because of the trained patterns of response, you are (or are not) stirred to patriotism by martial music, your feelings of reverence are aroused by symbols of your religion, you listen more respectfully to the health advice of someone who has MD after his name than to that of someone who hasnt. What I call here a pattern of reactions, then, is the sum total of the ways we act in response to events, to words, and to symbols. 
Our reaction patterns or our semantic habits, are the internal and most important residue of whatever years of education or miseducation we may have received from our parents conduct toward us in childhood as well as their teachings, from the formal education we may have had, from all the lectures we have listened to, from the radio programs and the movies and television shows we have experienced, from all the books and newspapers and comic strips we have read, from the conversations we have had with friends and associates, and from all our experiences. If, as the result of all these influences that make us what we are, our semantic habits are reasonably similar to those of most people around us, we are regarded as normal, or perhaps dull. If our semantic habits are noticeably different from those of others, we are regarded as individualistic or original. or, if the differences are disapproved of or viewed with alarm, as crazy. Semantics is sometimes defined in dictionaries as the science of the meaning of words which would not be a bad definition if people didnt assume that the search for the meanings of words begins and ends with looking them up in a dictionary. If one stops to think for a moment, it is clear that to define a word, as a dictionary does, is simply to explain the word with more words. To be thorough about defining, we should next have to define the words used in the definition, then define the words used in defining the words used in the definition and so on. Defining words with more words, in short, gets us at once into what mathematicians call an infinite regress. Alternatively, it can get us into the kind of run-around we sometimes encounter when we look up impertinence and find it defined as impudence, so we look up impudence and find it defined as impertinence. Yetand here we come to another common reaction patternpeople often act as if words can be explained fully with more words. To a person who asked for a definition of jazz, Louis Armstrong is said to have replied, Man. when you got to ask what it is, youll never get to know, proving himself to be an intuitive semanticist as well as a great trumpet player. Semantics, then, does not deal with the meaning of words as that expression is commonly understood. P. W. Bridgman, the Nobel Prize winner and physicist, once wrote, The true meaning of a term is to be found by observing what a man does with it, not by what he says about it. He made an enormous contribution to science by showing that the meaning of a scientific term lies in the operations, the things done, that establish its validity, rather than in verbal definitions. Here is a simple, everyday kind of example of operational definition. If you say, This table measures six feet in length, you could prove it by taking a foot rule, performing the operation of laying it end to end while counting, One... two... three... four... But if you sayand revolutionists have started uprisings with just this statement Man is born free, but everywhere he is in chains! what operations could you perform to demonstrate its accuracy or inaccuracy? But let us carry this suggestion of operationalism outside the physical sciences where Bridgman applied it, and observe what operations people perform as the result of both the language they use and the language other people use in communicating to them. Here is a personnel manager studying an application blank. 
He comes to the words Education: Harvard University, and drops the application blank in the wastebasket (thats the operation) because, as he would say if you asked him, I dont like Harvard men. This is an instance of meaning at workbut it is not a meaning that can be found in dictionaries. If I seem to be taking a long time to explain what semantics is about, it is because I am trying, in the course of explanation, to introduce the reader to a certain way of looking at human behavior. I say human responses because, so far as we know, human beings are the only creatures that have, over and above that biological equipment which we have in common with other creatures, the additional capacity for manufacturing symbols and systems of symbols. When we react to a flag, we are not reacting simply to a piece of cloth, but to the meaning with which it has been symbolically endowed. When we react to a word, we are not reacting to a set of sounds, but to the meaning with which that set of sounds has been symbolically endowed. A basic idea in general semantics, therefore, is that the meaning of words (or other symbols) is not in the words, but in our own semantic reactions. If I were to tell a shockingly obscene story in Arabic or Hindustani or Swahili before an audience that understood only English, no one would blush or be angry; the story would be neither shocking nor obscene-induced, it would not even be a story. Likewise, the value of a dollar bill is not in the bill, but in our social agreement to accept it as a symbol of value. If that agreement were to break down through the collapse of our government, the dollar bill would become only a scrap of paper. We do not understand a dollar bill by staring at it long and hard. We understand it by observing how people act with respect to it. We understand it by understanding the social mechanisms and the loyalties that keep it meaningful. Semantics is therefore a social study, basic to all other social studies.", "hypothesis": "A story can be entertaining without being understood.", "label": "c"} +{"uid": "id_491", "premise": "What is Meaning The end, product of education, yours and mine and everybodys, is the total pattern of reactions and possible reactions we have inside ourselves. If you did not have within you at this moment the pattern of reactions that we call the ability to read. you would see here only meaningless black marks on paper. Because of the trained patterns of response, you are (or are not) stirred to patriotism by martial music, your feelings of reverence are aroused by symbols of your religion, you listen more respectfully to the health advice of someone who has MD after his name than to that of someone who hasnt. What I call here a pattern of reactions, then, is the sum total of the ways we act in response to events, to words, and to symbols. Our reaction patterns or our semantic habits, are the internal and most important residue of whatever years of education or miseducation we may have received from our parents conduct toward us in childhood as well as their teachings, from the formal education we may have had, from all the lectures we have listened to, from the radio programs and the movies and television shows we have experienced, from all the books and newspapers and comic strips we have read, from the conversations we have had with friends and associates, and from all our experiences. 
If, as the result of all these influences that make us what we are, our semantic habits are reasonably similar to those of most people around us, we are regarded as normal, or perhaps dull. If our semantic habits are noticeably different from those of others, we are regarded as individualistic or original. or, if the differences are disapproved of or viewed with alarm, as crazy. Semantics is sometimes defined in dictionaries as the science of the meaning of words which would not be a bad definition if people didnt assume that the search for the meanings of words begins and ends with looking them up in a dictionary. If one stops to think for a moment, it is clear that to define a word, as a dictionary does, is simply to explain the word with more words. To be thorough about defining, we should next have to define the words used in the definition, then define the words used in defining the words used in the definition and so on. Defining words with more words, in short, gets us at once into what mathematicians call an infinite regress. Alternatively, it can get us into the kind of run-around we sometimes encounter when we look up impertinence and find it defined as impudence, so we look up impudence and find it defined as impertinence. Yetand here we come to another common reaction patternpeople often act as if words can be explained fully with more words. To a person who asked for a definition of jazz, Louis Armstrong is said to have replied, Man. when you got to ask what it is, youll never get to know, proving himself to be an intuitive semanticist as well as a great trumpet player. Semantics, then, does not deal with the meaning of words as that expression is commonly understood. P. W. Bridgman, the Nobel Prize winner and physicist, once wrote, The true meaning of a term is to be found by observing what a man does with it, not by what he says about it. He made an enormous contribution to science by showing that the meaning of a scientific term lies in the operations, the things done, that establish its validity, rather than in verbal definitions. Here is a simple, everyday kind of example of operational definition. If you say, This table measures six feet in length, you could prove it by taking a foot rule, performing the operation of laying it end to end while counting, One... two... three... four... But if you sayand revolutionists have started uprisings with just this statement Man is born free, but everywhere he is in chains! what operations could you perform to demonstrate its accuracy or inaccuracy? But let us carry this suggestion of operationalism outside the physical sciences where Bridgman applied it, and observe what operations people perform as the result of both the language they use and the language other people use in communicating to them. Here is a personnel manager studying an application blank. He comes to the words Education: Harvard University, and drops the application blank in the wastebasket (thats the operation) because, as he would say if you asked him, I dont like Harvard men. This is an instance of meaning at workbut it is not a meaning that can be found in dictionaries. If I seem to be taking a long time to explain what semantics is about, it is because I am trying, in the course of explanation, to introduce the reader to a certain way of looking at human behavior. 
I say human responses because, so far as we know, human beings are the only creatures that have, over and above that biological equipment which we have in common with other creatures, the additional capacity for manufacturing symbols and systems of symbols. When we react to a flag, we are not reacting simply to a piece of cloth, but to the meaning with which it has been symbolically endowed. When we react to a word, we are not reacting to a set of sounds, but to the meaning with which that set of sounds has been symbolically endowed. A basic idea in general semantics, therefore, is that the meaning of words (or other symbols) is not in the words, but in our own semantic reactions. If I were to tell a shockingly obscene story in Arabic or Hindustani or Swahili before an audience that understood only English, no one would blush or be angry; the story would be neither shocking nor obscene-induced, it would not even be a story. Likewise, the value of a dollar bill is not in the bill, but in our social agreement to accept it as a symbol of value. If that agreement were to break down through the collapse of our government, the dollar bill would become only a scrap of paper. We do not understand a dollar bill by staring at it long and hard. We understand it by observing how people act with respect to it. We understand it by understanding the social mechanisms and the loyalties that keep it meaningful. Semantics is therefore a social study, basic to all other social studies.", "hypothesis": "Meaning that is personal to individuals is less worthy to study than shared meanings.", "label": "n"} +{"uid": "id_492", "premise": "What is a novel? A novel is a marketable commodity, of the class collectively termed luxuries, as not contributing directly to the support of life or the maintenance of health. The novel, therefore, is an intellectual artistic luxury in that it can be of no use to a man when he is at work, but may conduce to peace of mind and delectation during his hours of idleness. Probably, no one denies that the first object of the novel is to amuse and interest the reader. But it is often said that the novel should instruct as well as afford amusement, and the novel-with-a-purpose is the realisation of this idea. The purpose-novel, then, proposes to serve two masters, besides procuring a reasonable amount of bread and butter for its writer and publisher, it proposes to escape from my definition of the novel in general and make itself an intellectual moral lesson instead of an intellectual artistic luxury. It constitutes a violation of the unwritten contract tacitly existing between writer and reader. A man buys what purports to be a work of fiction, a romance, a novel, a story of adventure, pays his money, takes his book home, prepares to enjoy it at his ease, and discovers that he has paid a dollar for somebodys views on socialism, religion, or the divorce laws.", "hypothesis": "According to the author, a writer should write a novel with the sole purpose of amusement", "label": "e"} +{"uid": "id_493", "premise": "What is a novel? A novel is a marketable commodity, of the class collectively termed luxuries, as not contributing directly to the support of life or the maintenance of health. The novel, therefore, is an intellectual artistic luxury in that it can be of no use to a man when he is at work, but may conduce to peace of mind and delectation during his hours of idleness. Probably, no one denies that the first object of the novel is to amuse and interest the reader. 
But it is often said that the novel should instruct as well as afford amusement, and the novel-with-a-purpose is the realisation of this idea. The purpose-novel, then, proposes to serve two masters, besides procuring a reasonable amount of bread and butter for its writer and publisher, it proposes to escape from my definition of the novel in general and make itself an intellectual moral lesson instead of an intellectual artistic luxury. It constitutes a violation of the unwritten contract tacitly existing between writer and reader. A man buys what purports to be a work of fiction, a romance, a novel, a story of adventure, pays his money, takes his book home, prepares to enjoy it at his ease, and discovers that he has paid a dollar for somebodys views on socialism, religion, or the divorce laws.", "hypothesis": "The writer of the passage regards a novel which has a strong view on socialism as their biggest hate.", "label": "n"} +{"uid": "id_494", "premise": "What is it like to run a large supermarket? Jill Insley finds out You cant beat really good service. Ive been shopping in the Thamesmead branch of supermarket chain Morrisons, in south-east London, and Ive experienced at first hand, the stores latest maxim for improving the shopping experience help, offer, thank. This involves identifying customers who might need help, greeting them, asking what they need, providing it, thanking them and leaving them in peace. If they dont look like they want help, theyll be left alone. But if theyre standing looking lost and perplexed, a member of staff will approach them. Staff are expected to be friendly to everyone. My checkout assistant has certainly said something to amuse the woman in front of me, shes smiling as she leaves. Adrian Perriss, manager of the branch, has discussed the approach with each of his 387 staff. He says its about recognising that someone needs help, not being a nuisance to them. When hes in another store, hes irritated by someone saying, Can I help you? when hes only just walked in to have a quick look at the products. How anyone can be friendly and enthusiastic when they start work at dawn beats me. The store opens at 7 am, Monday to Saturday, meaning that some staff, including Perriss, have to be here at 6 am to make sure its clean, safe and stocked up for the morning rush. Sometimes he walks in at 6 am and thinks theyre never going to be ready on time but they always are. Theres so much going on overnight 20 people working on unloading three enormous trailers full of groceries. Perriss has worked in supermarkets since 1982, when he became a trolley boy on a weekly salary of 76. It was less money than my previous job, but I loved it. It was different and diverse. I was doing trolleys, portering, bread, cakes, dairy and general maintenance. After a period in the produce department, looking after the fruit and vegetables, he was made produce manager, then assistant store manager, before reaching the top job in 1998. This involved intensive training and assessment through the companys future store manager programme, learning how to analyse and prioritise sales. wastage, recruitment and many other issues. Perriss first stop as store manager was at a store which was closed soon afterwards though he was not to blame. 
Despite the disappointing start, his career went from strength to strength and he was put in charge of launching new stores and heading up a concept store, where the then new ideas of preparing and cooking pizzas in store, and having a proper florist, and fruit and vegetable markets were Mailed. All Morrisons managers from the whole country spent three days there to see the new concept. That was hard work, he says, long days, seven days a week, for about a year. Although he oversees a store with a large turnover, there is a strongly practical aspect to Perrisss job. As we walk around, he chats to all the staff while checking the layout of their counters and the quality of the produce. He examines the baking potato shelf and rejects three, one that has split virtually in half and two that are beginning to go green. He then pulls out a lemon that looks fine to me. When I ask why, he picks up a second lemon and says: Close your eyes and just feel and tell me which you would keep. I do and realise that while one is firm and hard, the other is going a bit squashy. Despite eagle-eyed Perriss pulling out fruit and veg that most of us would buy without a second thought, the wastage each week is tiny: produce worth 4,200 is marked down for a quick sale, and only 400-worth is scrapped. This, he explains, is down to Morrisons method of ordering, still done manually rather than by computer. Department heads know exactly how much theyve sold that day and how much theyre likely to sell the next, based on sales records and allowing for influences such as the weather. Perriss is in charge of 1,000 man-hours a week across the store. To help him, he has a key team of four, who each have direct responsibility for different departments. He is keen to hear what staff think. He recently held a talent day, inviting employees interested in moving to a new job within the store to come and talk to him about why they thought they should be promoted, and discuss how to go about it. We had twenty-three people come through the door, people wanting to talk about progression, he says. What do they need to do to become a supervisor? Twenty-three people will be better members of staff as a result of that talk. His favourite department is fish, which has a 4 m-long counter run by Debbie and Angela, who are busy having a discussion about how to cook a particular fish with a customer. But it is one of just 20 or so departments around the store and Perriss admits the pressure of making sure he knows whats happening on them all can be intense. You have to do so much and there could be something wrong with every single one, every day, he says. Youve got to minimise those things and shrink them into perspective. Youve got to love the job. And Perriss certainly does.", "hypothesis": "Perriss was surprised how many staff asked about promotion on the talent day.", "label": "n"} +{"uid": "id_495", "premise": "What is it like to run a large supermarket? Jill Insley finds out You cant beat really good service. Ive been shopping in the Thamesmead branch of supermarket chain Morrisons, in south-east London, and Ive experienced at first hand, the stores latest maxim for improving the shopping experience help, offer, thank. This involves identifying customers who might need help, greeting them, asking what they need, providing it, thanking them and leaving them in peace. If they dont look like they want help, theyll be left alone. But if theyre standing looking lost and perplexed, a member of staff will approach them. 
Staff are expected to be friendly to everyone. My checkout assistant has certainly said something to amuse the woman in front of me, shes smiling as she leaves. Adrian Perriss, manager of the branch, has discussed the approach with each of his 387 staff. He says its about recognising that someone needs help, not being a nuisance to them. When hes in another store, hes irritated by someone saying, Can I help you? when hes only just walked in to have a quick look at the products. How anyone can be friendly and enthusiastic when they start work at dawn beats me. The store opens at 7 am, Monday to Saturday, meaning that some staff, including Perriss, have to be here at 6 am to make sure its clean, safe and stocked up for the morning rush. Sometimes he walks in at 6 am and thinks theyre never going to be ready on time but they always are. Theres so much going on overnight 20 people working on unloading three enormous trailers full of groceries. Perriss has worked in supermarkets since 1982, when he became a trolley boy on a weekly salary of 76. It was less money than my previous job, but I loved it. It was different and diverse. I was doing trolleys, portering, bread, cakes, dairy and general maintenance. After a period in the produce department, looking after the fruit and vegetables, he was made produce manager, then assistant store manager, before reaching the top job in 1998. This involved intensive training and assessment through the companys future store manager programme, learning how to analyse and prioritise sales. wastage, recruitment and many other issues. Perriss first stop as store manager was at a store which was closed soon afterwards though he was not to blame. Despite the disappointing start, his career went from strength to strength and he was put in charge of launching new stores and heading up a concept store, where the then new ideas of preparing and cooking pizzas in store, and having a proper florist, and fruit and vegetable markets were Mailed. All Morrisons managers from the whole country spent three days there to see the new concept. That was hard work, he says, long days, seven days a week, for about a year. Although he oversees a store with a large turnover, there is a strongly practical aspect to Perrisss job. As we walk around, he chats to all the staff while checking the layout of their counters and the quality of the produce. He examines the baking potato shelf and rejects three, one that has split virtually in half and two that are beginning to go green. He then pulls out a lemon that looks fine to me. When I ask why, he picks up a second lemon and says: Close your eyes and just feel and tell me which you would keep. I do and realise that while one is firm and hard, the other is going a bit squashy. Despite eagle-eyed Perriss pulling out fruit and veg that most of us would buy without a second thought, the wastage each week is tiny: produce worth 4,200 is marked down for a quick sale, and only 400-worth is scrapped. This, he explains, is down to Morrisons method of ordering, still done manually rather than by computer. Department heads know exactly how much theyve sold that day and how much theyre likely to sell the next, based on sales records and allowing for influences such as the weather. Perriss is in charge of 1,000 man-hours a week across the store. To help him, he has a key team of four, who each have direct responsibility for different departments. He is keen to hear what staff think. 
He recently held a talent day, inviting employees interested in moving to a new job within the store to come and talk to him about why they thought they should be promoted, and discuss how to go about it. We had twenty-three people come through the door, people wanting to talk about progression, he says. What do they need to do to become a supervisor? Twenty-three people will be better members of staff as a result of that talk. His favourite department is fish, which has a 4 m-long counter run by Debbie and Angela, who are busy having a discussion about how to cook a particular fish with a customer. But it is one of just 20 or so departments around the store and Perriss admits the pressure of making sure he knows whats happening on them all can be intense. You have to do so much and there could be something wrong with every single one, every day, he says. Youve got to minimise those things and shrink them into perspective. Youve got to love the job. And Perriss certainly does.", "hypothesis": "Perriss encourages staff to offer help to all customers.", "label": "c"} +{"uid": "id_496", "premise": "What is it like to run a large supermarket? Jill Insley finds out You cant beat really good service. Ive been shopping in the Thamesmead branch of supermarket chain Morrisons, in south-east London, and Ive experienced at first hand, the stores latest maxim for improving the shopping experience help, offer, thank. This involves identifying customers who might need help, greeting them, asking what they need, providing it, thanking them and leaving them in peace. If they dont look like they want help, theyll be left alone. But if theyre standing looking lost and perplexed, a member of staff will approach them. Staff are expected to be friendly to everyone. My checkout assistant has certainly said something to amuse the woman in front of me, shes smiling as she leaves. Adrian Perriss, manager of the branch, has discussed the approach with each of his 387 staff. He says its about recognising that someone needs help, not being a nuisance to them. When hes in another store, hes irritated by someone saying, Can I help you? when hes only just walked in to have a quick look at the products. How anyone can be friendly and enthusiastic when they start work at dawn beats me. The store opens at 7 am, Monday to Saturday, meaning that some staff, including Perriss, have to be here at 6 am to make sure its clean, safe and stocked up for the morning rush. Sometimes he walks in at 6 am and thinks theyre never going to be ready on time but they always are. Theres so much going on overnight 20 people working on unloading three enormous trailers full of groceries. Perriss has worked in supermarkets since 1982, when he became a trolley boy on a weekly salary of 76. It was less money than my previous job, but I loved it. It was different and diverse. I was doing trolleys, portering, bread, cakes, dairy and general maintenance. After a period in the produce department, looking after the fruit and vegetables, he was made produce manager, then assistant store manager, before reaching the top job in 1998. This involved intensive training and assessment through the companys future store manager programme, learning how to analyse and prioritise sales. wastage, recruitment and many other issues. Perriss first stop as store manager was at a store which was closed soon afterwards though he was not to blame. 
Despite the disappointing start, his career went from strength to strength and he was put in charge of launching new stores and heading up a concept store, where the then new ideas of preparing and cooking pizzas in store, and having a proper florist, and fruit and vegetable markets were Mailed. All Morrisons managers from the whole country spent three days there to see the new concept. That was hard work, he says, long days, seven days a week, for about a year. Although he oversees a store with a large turnover, there is a strongly practical aspect to Perrisss job. As we walk around, he chats to all the staff while checking the layout of their counters and the quality of the produce. He examines the baking potato shelf and rejects three, one that has split virtually in half and two that are beginning to go green. He then pulls out a lemon that looks fine to me. When I ask why, he picks up a second lemon and says: Close your eyes and just feel and tell me which you would keep. I do and realise that while one is firm and hard, the other is going a bit squashy. Despite eagle-eyed Perriss pulling out fruit and veg that most of us would buy without a second thought, the wastage each week is tiny: produce worth 4,200 is marked down for a quick sale, and only 400-worth is scrapped. This, he explains, is down to Morrisons method of ordering, still done manually rather than by computer. Department heads know exactly how much theyve sold that day and how much theyre likely to sell the next, based on sales records and allowing for influences such as the weather. Perriss is in charge of 1,000 man-hours a week across the store. To help him, he has a key team of four, who each have direct responsibility for different departments. He is keen to hear what staff think. He recently held a talent day, inviting employees interested in moving to a new job within the store to come and talk to him about why they thought they should be promoted, and discuss how to go about it. We had twenty-three people come through the door, people wanting to talk about progression, he says. What do they need to do to become a supervisor? Twenty-three people will be better members of staff as a result of that talk. His favourite department is fish, which has a 4 m-long counter run by Debbie and Angela, who are busy having a discussion about how to cook a particular fish with a customer. But it is one of just 20 or so departments around the store and Perriss admits the pressure of making sure he knows whats happening on them all can be intense. You have to do so much and there could be something wrong with every single one, every day, he says. Youve got to minimise those things and shrink them into perspective. Youve got to love the job. And Perriss certainly does.", "hypothesis": "Perriss is sometimes worried that customers will arrive before the store is ready for them.", "label": "e"} +{"uid": "id_497", "premise": "What is it like to run a large supermarket? Jill Insley finds out You cant beat really good service. Ive been shopping in the Thamesmead branch of supermarket chain Morrisons, in south-east London, and Ive experienced at first hand, the stores latest maxim for improving the shopping experience help, offer, thank. This involves identifying customers who might need help, greeting them, asking what they need, providing it, thanking them and leaving them in peace. If they dont look like they want help, theyll be left alone. But if theyre standing looking lost and perplexed, a member of staff will approach them. 
Staff are expected to be friendly to everyone. My checkout assistant has certainly said something to amuse the woman in front of me, shes smiling as she leaves. Adrian Perriss, manager of the branch, has discussed the approach with each of his 387 staff. He says its about recognising that someone needs help, not being a nuisance to them. When hes in another store, hes irritated by someone saying, Can I help you? when hes only just walked in to have a quick look at the products. How anyone can be friendly and enthusiastic when they start work at dawn beats me. The store opens at 7 am, Monday to Saturday, meaning that some staff, including Perriss, have to be here at 6 am to make sure its clean, safe and stocked up for the morning rush. Sometimes he walks in at 6 am and thinks theyre never going to be ready on time but they always are. Theres so much going on overnight 20 people working on unloading three enormous trailers full of groceries. Perriss has worked in supermarkets since 1982, when he became a trolley boy on a weekly salary of 76. It was less money than my previous job, but I loved it. It was different and diverse. I was doing trolleys, portering, bread, cakes, dairy and general maintenance. After a period in the produce department, looking after the fruit and vegetables, he was made produce manager, then assistant store manager, before reaching the top job in 1998. This involved intensive training and assessment through the companys future store manager programme, learning how to analyse and prioritise sales. wastage, recruitment and many other issues. Perriss first stop as store manager was at a store which was closed soon afterwards though he was not to blame. Despite the disappointing start, his career went from strength to strength and he was put in charge of launching new stores and heading up a concept store, where the then new ideas of preparing and cooking pizzas in store, and having a proper florist, and fruit and vegetable markets were Mailed. All Morrisons managers from the whole country spent three days there to see the new concept. That was hard work, he says, long days, seven days a week, for about a year. Although he oversees a store with a large turnover, there is a strongly practical aspect to Perrisss job. As we walk around, he chats to all the staff while checking the layout of their counters and the quality of the produce. He examines the baking potato shelf and rejects three, one that has split virtually in half and two that are beginning to go green. He then pulls out a lemon that looks fine to me. When I ask why, he picks up a second lemon and says: Close your eyes and just feel and tell me which you would keep. I do and realise that while one is firm and hard, the other is going a bit squashy. Despite eagle-eyed Perriss pulling out fruit and veg that most of us would buy without a second thought, the wastage each week is tiny: produce worth 4,200 is marked down for a quick sale, and only 400-worth is scrapped. This, he explains, is down to Morrisons method of ordering, still done manually rather than by computer. Department heads know exactly how much theyve sold that day and how much theyre likely to sell the next, based on sales records and allowing for influences such as the weather. Perriss is in charge of 1,000 man-hours a week across the store. To help him, he has a key team of four, who each have direct responsibility for different departments. He is keen to hear what staff think. 
He recently held a talent day, inviting employees interested in moving to a new job within the store to come and talk to him about why they thought they should be promoted, and discuss how to go about it. We had twenty-three people come through the door, people wanting to talk about progression, he says. What do they need to do to become a supervisor? Twenty-three people will be better members of staff as a result of that talk. His favourite department is fish, which has a 4 m-long counter run by Debbie and Angela, who are busy having a discussion about how to cook a particular fish with a customer. But it is one of just 20 or so departments around the store and Perriss admits the pressure of making sure he knows whats happening on them all can be intense. You have to do so much and there could be something wrong with every single one, every day, he says. Youve got to minimise those things and shrink them into perspective. Youve got to love the job. And Perriss certainly does.", "hypothesis": "When Perriss first became a store manager, he knew the store was going to close.", "label": "n"} +{"uid": "id_498", "premise": "What is it like to run a large supermarket? Jill Insley finds out You cant beat really good service. Ive been shopping in the Thamesmead branch of supermarket chain Morrisons, in south-east London, and Ive experienced at first hand, the stores latest maxim for improving the shopping experience help, offer, thank. This involves identifying customers who might need help, greeting them, asking what they need, providing it, thanking them and leaving them in peace. If they dont look like they want help, theyll be left alone. But if theyre standing looking lost and perplexed, a member of staff will approach them. Staff are expected to be friendly to everyone. My checkout assistant has certainly said something to amuse the woman in front of me, shes smiling as she leaves. Adrian Perriss, manager of the branch, has discussed the approach with each of his 387 staff. He says its about recognising that someone needs help, not being a nuisance to them. When hes in another store, hes irritated by someone saying, Can I help you? when hes only just walked in to have a quick look at the products. How anyone can be friendly and enthusiastic when they start work at dawn beats me. The store opens at 7 am, Monday to Saturday, meaning that some staff, including Perriss, have to be here at 6 am to make sure its clean, safe and stocked up for the morning rush. Sometimes he walks in at 6 am and thinks theyre never going to be ready on time but they always are. Theres so much going on overnight 20 people working on unloading three enormous trailers full of groceries. Perriss has worked in supermarkets since 1982, when he became a trolley boy on a weekly salary of 76. It was less money than my previous job, but I loved it. It was different and diverse. I was doing trolleys, portering, bread, cakes, dairy and general maintenance. After a period in the produce department, looking after the fruit and vegetables, he was made produce manager, then assistant store manager, before reaching the top job in 1998. This involved intensive training and assessment through the companys future store manager programme, learning how to analyse and prioritise sales. wastage, recruitment and many other issues. Perriss first stop as store manager was at a store which was closed soon afterwards though he was not to blame. 
Despite the disappointing start, his career went from strength to strength and he was put in charge of launching new stores and heading up a concept store, where the then new ideas of preparing and cooking pizzas in store, and having a proper florist, and fruit and vegetable markets were Mailed. All Morrisons managers from the whole country spent three days there to see the new concept. That was hard work, he says, long days, seven days a week, for about a year. Although he oversees a store with a large turnover, there is a strongly practical aspect to Perrisss job. As we walk around, he chats to all the staff while checking the layout of their counters and the quality of the produce. He examines the baking potato shelf and rejects three, one that has split virtually in half and two that are beginning to go green. He then pulls out a lemon that looks fine to me. When I ask why, he picks up a second lemon and says: Close your eyes and just feel and tell me which you would keep. I do and realise that while one is firm and hard, the other is going a bit squashy. Despite eagle-eyed Perriss pulling out fruit and veg that most of us would buy without a second thought, the wastage each week is tiny: produce worth 4,200 is marked down for a quick sale, and only 400-worth is scrapped. This, he explains, is down to Morrisons method of ordering, still done manually rather than by computer. Department heads know exactly how much theyve sold that day and how much theyre likely to sell the next, based on sales records and allowing for influences such as the weather. Perriss is in charge of 1,000 man-hours a week across the store. To help him, he has a key team of four, who each have direct responsibility for different departments. He is keen to hear what staff think. He recently held a talent day, inviting employees interested in moving to a new job within the store to come and talk to him about why they thought they should be promoted, and discuss how to go about it. We had twenty-three people come through the door, people wanting to talk about progression, he says. What do they need to do to become a supervisor? Twenty-three people will be better members of staff as a result of that talk. His favourite department is fish, which has a 4 m-long counter run by Debbie and Angela, who are busy having a discussion about how to cook a particular fish with a customer. But it is one of just 20 or so departments around the store and Perriss admits the pressure of making sure he knows whats happening on them all can be intense. You have to do so much and there could be something wrong with every single one, every day, he says. Youve got to minimise those things and shrink them into perspective. Youve got to love the job. And Perriss certainly does.", "hypothesis": "On average, produce worth 4,200 is thrown away every week.", "label": "c"} +{"uid": "id_499", "premise": "What is it that draws us to these creatures? This inhuman place makes human monsters, wrote Stephen King in his novel The Shining. Many academics agree that monsters lurk in the deepest recesses, they prowl through our ancestral minds appearing in the half-light, under the bed or at the bottom of the sea. They dont really exist, but they play a huge role in our mindscapes, in our dreams, stories, nightmares, myths and so on, says Matthias Classen, assistant professor of literature and media at Aarhus University in Denmark, who studies monsters in literature. Monsters say something about human psychology, not the world. 
One Norse legend talks of the Kraken, a deep sea creature that was the curse of fishermen. If sailors found a place with many fish, most likely it was the monster that was driving them to the surface. If it saw the ship it would pluck the hapless sailors from the boat and drag them to a watery grave. This terrifying legend occupied the mind and pen of the poet Alfred Lord Tennyson too. In his short 1830 poem The Kraken he wrote: Below the thunders of the upper deep, / Far far beneath in the abysmal sea, / His ancient, dreamless, uninvaded sleep / The Kraken sleepeth. The deeper we travel into the ocean, the deeper we delve into our own psyche. And when we can go no further there lurks the Kraken. Most likely the Kraken is based on a real creature the giant squid. The huge mollusc takes pride of place as the personification of the terrors of the deep sea. Sailors would have encountered it at the surface, dying, and probably thrashing about. It would have made a weird sight, about the most alien thing you can imagine, says Edith Widder, CEO at the Ocean Research and Conservation Association. It has eight lashing arms and two slashing tentacles growing straight out of its head and its got serrated suckers that can latch on to the slimiest of prey and its got a parrot beak that can rip flesh. Its got an eye the size of your head, its got a jet propulsion system and three hearts that pump blue blood. The giant squid continued to dominate stories of sea monsters with the famous 1870 novel, Twenty Thousand Leagues Under the Sea, by Jules Verne. Vernes submarine fantasy is a classic story of puny man against a gigantic squid. The monster needed no embellishment this creature was scary enough, and Verne incorporated as much fact as possible into the story, says Emily Alder from Edinburgh Napier University. Twenty Thousand Leagues Under the Sea and another contemporaneous book, Victor Hugos Toilers of the Sea, both tried to represent the giant squid as they might have been actual zoological animals, much more taking the squid as a biological creature than a mythical creature. It was a given that the squid was vicious and would readily attack humans given the chance. That myth wasnt busted until 2012, when Edith Widder and her colleagues were the first people to successfully film giant squid under water and see first-hand the true character of the monster of the deep. They realised previous attempts to film squid had failed because the bright lights and noisy thrusters on submersibles had frightened them away. By quietening down the engines and using bioluminescence to attract it, they managed to see this most extraordinary animal in its natural habitat. It serenely glided into view, its body rippled with metallic colours of bronze and silver. Its huge, intelligent eye watched the submarine warily as it delicately picked at the bait with its beak. It was balletic and mesmeric. It could not have been further from the gnashing, human-destroying creature of myth and literature. In reality this is a gentle giant that is easily scared and pecks at its food. Another giant squid lies peacefully in the Natural History Museum in London, in the Spirit Room, where it is preserved in a huge glass case. In 2004 it was caught in a fishing net off the Falkland Islands and died at the surface. The crew immediately froze its body and it was sent to be preserved in the museum by the Curator of Molluscs, Jon Ablett. It is called Archie, an affectionate short version of its Latin name Architeuthis dux. 
It is the longest preserved specimen of a giant squid in the world. It really has brought science to life for many people, says Ablett. Sometimes I feel a bit overshadowed by Archie, most of my work is on slugs and snails but unfortunately most people dont want to talk about that! And so today we can watch Archies graceful relative on film and stare Archie herself (she is a female) eye-to-eye in a museum. But have we finally slain the monster of the deep? Now we know there is nothing to be afraid of, can the Kraken finally be laid to rest? Probably not says Classen. We humans are afraid of the strangest things. They dont need to be realistic. Theres no indication that enlightenment and scientific progress has banished the monsters from the shadows of our imaginations. We will continue to be afraid of very strange things, including probably sea monsters. Indeed we are. The Kraken made a fearsome appearance in the blockbuster series Pirates of the Caribbean. It forced Captain Jack Sparrow to face his demons in a terrifying face-to-face encounter. Pirates needed the monstrous Kraken, nothing else would do. Or, as the German film director Werner Herzog put it, What would an ocean be without a monster lurking in the dark? It would be like sleep without dreams.", "hypothesis": "Werner Herzog suggests that Kraken is essential to the ocean.", "label": "n"} +{"uid": "id_500", "premise": "What is it that draws us to these creatures? This inhuman place makes human monsters, wrote Stephen King in his novel The Shining. Many academics agree that monsters lurk in the deepest recesses, they prowl through our ancestral minds appearing in the half-light, under the bed or at the bottom of the sea. They dont really exist, but they play a huge role in our mindscapes, in our dreams, stories, nightmares, myths and so on, says Matthias Classen, assistant professor of literature and media at Aarhus University in Denmark, who studies monsters in literature. Monsters say something about human psychology, not the world. One Norse legend talks of the Kraken, a deep sea creature that was the curse of fishermen. If sailors found a place with many fish, most likely it was the monster that was driving them to the surface. If it saw the ship it would pluck the hapless sailors from the boat and drag them to a watery grave. This terrifying legend occupied the mind and pen of the poet Alfred Lord Tennyson too. In his short 1830 poem The Kraken he wrote: Below the thunders of the upper deep, / Far far beneath in the abysmal sea, / His ancient, dreamless, uninvaded sleep / The Kraken sleepeth. The deeper we travel into the ocean, the deeper we delve into our own psyche. And when we can go no further there lurks the Kraken. Most likely the Kraken is based on a real creature the giant squid. The huge mollusc takes pride of place as the personification of the terrors of the deep sea. Sailors would have encountered it at the surface, dying, and probably thrashing about. It would have made a weird sight, about the most alien thing you can imagine, says Edith Widder, CEO at the Ocean Research and Conservation Association. It has eight lashing arms and two slashing tentacles growing straight out of its head and its got serrated suckers that can latch on to the slimiest of prey and its got a parrot beak that can rip flesh. Its got an eye the size of your head, its got a jet propulsion system and three hearts that pump blue blood. 
The giant squid continued to dominate stories of sea monsters with the famous 1870 novel, Twenty Thousand Leagues Under the Sea, by Jules Verne. Vernes submarine fantasy is a classic story of puny man against a gigantic squid. The monster needed no embellishment this creature was scary enough, and Verne incorporated as much fact as possible into the story, says Emily Alder from Edinburgh Napier University. Twenty Thousand Leagues Under the Sea and another contemporaneous book, Victor Hugos Toilers of the Sea, both tried to represent the giant squid as they might have been actual zoological animals, much more taking the squid as a biological creature than a mythical creature. It was a given that the squid was vicious and would readily attack humans given the chance. That myth wasnt busted until 2012, when Edith Widder and her colleagues were the first people to successfully film giant squid under water and see first-hand the true character of the monster of the deep. They realised previous attempts to film squid had failed because the bright lights and noisy thrusters on submersibles had frightened them away. By quietening down the engines and using bioluminescence to attract it, they managed to see this most extraordinary animal in its natural habitat. It serenely glided into view, its body rippled with metallic colours of bronze and silver. Its huge, intelligent eye watched the submarine warily as it delicately picked at the bait with its beak. It was balletic and mesmeric. It could not have been further from the gnashing, human-destroying creature of myth and literature. In reality this is a gentle giant that is easily scared and pecks at its food. Another giant squid lies peacefully in the Natural History Museum in London, in the Spirit Room, where it is preserved in a huge glass case. In 2004 it was caught in a fishing net off the Falkland Islands and died at the surface. The crew immediately froze its body and it was sent to be preserved in the museum by the Curator of Molluscs, Jon Ablett. It is called Archie, an affectionate short version of its Latin name Architeuthis dux. It is the longest preserved specimen of a giant squid in the world. It really has brought science to life for many people, says Ablett. Sometimes I feel a bit overshadowed by Archie, most of my work is on slugs and snails but unfortunately most people dont want to talk about that! And so today we can watch Archies graceful relative on film and stare Archie herself (she is a female) eye-to-eye in a museum. But have we finally slain the monster of the deep? Now we know there is nothing to be afraid of, can the Kraken finally be laid to rest? Probably not says Classen. We humans are afraid of the strangest things. They dont need to be realistic. Theres no indication that enlightenment and scientific progress has banished the monsters from the shadows of our imaginations. We will continue to be afraid of very strange things, including probably sea monsters. Indeed we are. The Kraken made a fearsome appearance in the blockbuster series Pirates of the Caribbean. It forced Captain Jack Sparrow to face his demons in a terrifying face-to-face encounter. Pirates needed the monstrous Kraken, nothing else would do. Or, as the German film director Werner Herzog put it, What would an ocean be without a monster lurking in the dark? 
It would be like sleep without dreams.", "hypothesis": "According to Classen, people can be scared both by imaginary and real monsters.", "label": "e"} +{"uid": "id_501", "premise": "What is it that draws us to these creatures? This inhuman place makes human monsters, wrote Stephen King in his novel The Shining. Many academics agree that monsters lurk in the deepest recesses, they prowl through our ancestral minds appearing in the half-light, under the bed or at the bottom of the sea. They dont really exist, but they play a huge role in our mindscapes, in our dreams, stories, nightmares, myths and so on, says Matthias Classen, assistant professor of literature and media at Aarhus University in Denmark, who studies monsters in literature. Monsters say something about human psychology, not the world. One Norse legend talks of the Kraken, a deep sea creature that was the curse of fishermen. If sailors found a place with many fish, most likely it was the monster that was driving them to the surface. If it saw the ship it would pluck the hapless sailors from the boat and drag them to a watery grave. This terrifying legend occupied the mind and pen of the poet Alfred Lord Tennyson too. In his short 1830 poem The Kraken he wrote: Below the thunders of the upper deep, / Far far beneath in the abysmal sea, / His ancient, dreamless, uninvaded sleep / The Kraken sleepeth. The deeper we travel into the ocean, the deeper we delve into our own psyche. And when we can go no further there lurks the Kraken. Most likely the Kraken is based on a real creature the giant squid. The huge mollusc takes pride of place as the personification of the terrors of the deep sea. Sailors would have encountered it at the surface, dying, and probably thrashing about. It would have made a weird sight, about the most alien thing you can imagine, says Edith Widder, CEO at the Ocean Research and Conservation Association. It has eight lashing arms and two slashing tentacles growing straight out of its head and its got serrated suckers that can latch on to the slimiest of prey and its got a parrot beak that can rip flesh. Its got an eye the size of your head, its got a jet propulsion system and three hearts that pump blue blood. The giant squid continued to dominate stories of sea monsters with the famous 1870 novel, Twenty Thousand Leagues Under the Sea, by Jules Verne. Vernes submarine fantasy is a classic story of puny man against a gigantic squid. The monster needed no embellishment this creature was scary enough, and Verne incorporated as much fact as possible into the story, says Emily Alder from Edinburgh Napier University. Twenty Thousand Leagues Under the Sea and another contemporaneous book, Victor Hugos Toilers of the Sea, both tried to represent the giant squid as they might have been actual zoological animals, much more taking the squid as a biological creature than a mythical creature. It was a given that the squid was vicious and would readily attack humans given the chance. That myth wasnt busted until 2012, when Edith Widder and her colleagues were the first people to successfully film giant squid under water and see first-hand the true character of the monster of the deep. They realised previous attempts to film squid had failed because the bright lights and noisy thrusters on submersibles had frightened them away. By quietening down the engines and using bioluminescence to attract it, they managed to see this most extraordinary animal in its natural habitat. 
It serenely glided into view, its body rippled with metallic colours of bronze and silver. Its huge, intelligent eye watched the submarine warily as it delicately picked at the bait with its beak. It was balletic and mesmeric. It could not have been further from the gnashing, human-destroying creature of myth and literature. In reality this is a gentle giant that is easily scared and pecks at its food. Another giant squid lies peacefully in the Natural History Museum in London, in the Spirit Room, where it is preserved in a huge glass case. In 2004 it was caught in a fishing net off the Falkland Islands and died at the surface. The crew immediately froze its body and it was sent to be preserved in the museum by the Curator of Molluscs, Jon Ablett. It is called Archie, an affectionate short version of its Latin name Architeuthis dux. It is the longest preserved specimen of a giant squid in the world. It really has brought science to life for many people, says Ablett. Sometimes I feel a bit overshadowed by Archie, most of my work is on slugs and snails but unfortunately most people dont want to talk about that! And so today we can watch Archies graceful relative on film and stare Archie herself (she is a female) eye-to-eye in a museum. But have we finally slain the monster of the deep? Now we know there is nothing to be afraid of, can the Kraken finally be laid to rest? Probably not says Classen. We humans are afraid of the strangest things. They dont need to be realistic. Theres no indication that enlightenment and scientific progress has banished the monsters from the shadows of our imaginations. We will continue to be afraid of very strange things, including probably sea monsters. Indeed we are. The Kraken made a fearsome appearance in the blockbuster series Pirates of the Caribbean. It forced Captain Jack Sparrow to face his demons in a terrifying face-to-face encounter. Pirates needed the monstrous Kraken, nothing else would do. Or, as the German film director Werner Herzog put it, What would an ocean be without a monster lurking in the dark? It would be like sleep without dreams.", "hypothesis": "Giant squid was caught alive in 2004 and brought to the museum.", "label": "c"} +{"uid": "id_502", "premise": "What is it that draws us to these creatures? This inhuman place makes human monsters, wrote Stephen King in his novel The Shining. Many academics agree that monsters lurk in the deepest recesses, they prowl through our ancestral minds appearing in the half-light, under the bed or at the bottom of the sea. They dont really exist, but they play a huge role in our mindscapes, in our dreams, stories, nightmares, myths and so on, says Matthias Classen, assistant professor of literature and media at Aarhus University in Denmark, who studies monsters in literature. Monsters say something about human psychology, not the world. One Norse legend talks of the Kraken, a deep sea creature that was the curse of fishermen. If sailors found a place with many fish, most likely it was the monster that was driving them to the surface. If it saw the ship it would pluck the hapless sailors from the boat and drag them to a watery grave. This terrifying legend occupied the mind and pen of the poet Alfred Lord Tennyson too. In his short 1830 poem The Kraken he wrote: Below the thunders of the upper deep, / Far far beneath in the abysmal sea, / His ancient, dreamless, uninvaded sleep / The Kraken sleepeth. The deeper we travel into the ocean, the deeper we delve into our own psyche. 
And when we can go no further there lurks the Kraken. Most likely the Kraken is based on a real creature the giant squid. The huge mollusc takes pride of place as the personification of the terrors of the deep sea. Sailors would have encountered it at the surface, dying, and probably thrashing about. It would have made a weird sight, about the most alien thing you can imagine, says Edith Widder, CEO at the Ocean Research and Conservation Association. It has eight lashing arms and two slashing tentacles growing straight out of its head and its got serrated suckers that can latch on to the slimiest of prey and its got a parrot beak that can rip flesh. Its got an eye the size of your head, its got a jet propulsion system and three hearts that pump blue blood. The giant squid continued to dominate stories of sea monsters with the famous 1870 novel, Twenty Thousand Leagues Under the Sea, by Jules Verne. Vernes submarine fantasy is a classic story of puny man against a gigantic squid. The monster needed no embellishment this creature was scary enough, and Verne incorporated as much fact as possible into the story, says Emily Alder from Edinburgh Napier University. Twenty Thousand Leagues Under the Sea and another contemporaneous book, Victor Hugos Toilers of the Sea, both tried to represent the giant squid as they might have been actual zoological animals, much more taking the squid as a biological creature than a mythical creature. It was a given that the squid was vicious and would readily attack humans given the chance. That myth wasnt busted until 2012, when Edith Widder and her colleagues were the first people to successfully film giant squid under water and see first-hand the true character of the monster of the deep. They realised previous attempts to film squid had failed because the bright lights and noisy thrusters on submersibles had frightened them away. By quietening down the engines and using bioluminescence to attract it, they managed to see this most extraordinary animal in its natural habitat. It serenely glided into view, its body rippled with metallic colours of bronze and silver. Its huge, intelligent eye watched the submarine warily as it delicately picked at the bait with its beak. It was balletic and mesmeric. It could not have been further from the gnashing, human-destroying creature of myth and literature. In reality this is a gentle giant that is easily scared and pecks at its food. Another giant squid lies peacefully in the Natural History Museum in London, in the Spirit Room, where it is preserved in a huge glass case. In 2004 it was caught in a fishing net off the Falkland Islands and died at the surface. The crew immediately froze its body and it was sent to be preserved in the museum by the Curator of Molluscs, Jon Ablett. It is called Archie, an affectionate short version of its Latin name Architeuthis dux. It is the longest preserved specimen of a giant squid in the world. It really has brought science to life for many people, says Ablett. Sometimes I feel a bit overshadowed by Archie, most of my work is on slugs and snails but unfortunately most people dont want to talk about that! And so today we can watch Archies graceful relative on film and stare Archie herself (she is a female) eye-to-eye in a museum. But have we finally slain the monster of the deep? Now we know there is nothing to be afraid of, can the Kraken finally be laid to rest? Probably not says Classen. We humans are afraid of the strangest things. They dont need to be realistic. 
Theres no indication that enlightenment and scientific progress has banished the monsters from the shadows of our imaginations. We will continue to be afraid of very strange things, including probably sea monsters. Indeed we are. The Kraken made a fearsome appearance in the blockbuster series Pirates of the Caribbean. It forced Captain Jack Sparrow to face his demons in a terrifying face-to-face encounter. Pirates needed the monstrous Kraken, nothing else would do. Or, as the German film director Werner Herzog put it, What would an ocean be without a monster lurking in the dark? It would be like sleep without dreams.", "hypothesis": "Previous attempts on filming the squid had failed due to the fact that the creature was scared.", "label": "e"} +{"uid": "id_503", "premise": "What is it that draws us to these creatures? This inhuman place makes human monsters, wrote Stephen King in his novel The Shining. Many academics agree that monsters lurk in the deepest recesses, they prowl through our ancestral minds appearing in the half-light, under the bed or at the bottom of the sea. They dont really exist, but they play a huge role in our mindscapes, in our dreams, stories, nightmares, myths and so on, says Matthias Classen, assistant professor of literature and media at Aarhus University in Denmark, who studies monsters in literature. Monsters say something about human psychology, not the world. One Norse legend talks of the Kraken, a deep sea creature that was the curse of fishermen. If sailors found a place with many fish, most likely it was the monster that was driving them to the surface. If it saw the ship it would pluck the hapless sailors from the boat and drag them to a watery grave. This terrifying legend occupied the mind and pen of the poet Alfred Lord Tennyson too. In his short 1830 poem The Kraken he wrote: Below the thunders of the upper deep, / Far far beneath in the abysmal sea, / His ancient, dreamless, uninvaded sleep / The Kraken sleepeth. The deeper we travel into the ocean, the deeper we delve into our own psyche. And when we can go no further there lurks the Kraken. Most likely the Kraken is based on a real creature the giant squid. The huge mollusc takes pride of place as the personification of the terrors of the deep sea. Sailors would have encountered it at the surface, dying, and probably thrashing about. It would have made a weird sight, about the most alien thing you can imagine, says Edith Widder, CEO at the Ocean Research and Conservation Association. It has eight lashing arms and two slashing tentacles growing straight out of its head and its got serrated suckers that can latch on to the slimiest of prey and its got a parrot beak that can rip flesh. Its got an eye the size of your head, its got a jet propulsion system and three hearts that pump blue blood. The giant squid continued to dominate stories of sea monsters with the famous 1870 novel, Twenty Thousand Leagues Under the Sea, by Jules Verne. Vernes submarine fantasy is a classic story of puny man against a gigantic squid. The monster needed no embellishment this creature was scary enough, and Verne incorporated as much fact as possible into the story, says Emily Alder from Edinburgh Napier University. Twenty Thousand Leagues Under the Sea and another contemporaneous book, Victor Hugos Toilers of the Sea, both tried to represent the giant squid as they might have been actual zoological animals, much more taking the squid as a biological creature than a mythical creature. 
It was a given that the squid was vicious and would readily attack humans given the chance. That myth wasnt busted until 2012, when Edith Widder and her colleagues were the first people to successfully film giant squid under water and see first-hand the true character of the monster of the deep. They realised previous attempts to film squid had failed because the bright lights and noisy thrusters on submersibles had frightened them away. By quietening down the engines and using bioluminescence to attract it, they managed to see this most extraordinary animal in its natural habitat. It serenely glided into view, its body rippled with metallic colours of bronze and silver. Its huge, intelligent eye watched the submarine warily as it delicately picked at the bait with its beak. It was balletic and mesmeric. It could not have been further from the gnashing, human-destroying creature of myth and literature. In reality this is a gentle giant that is easily scared and pecks at its food. Another giant squid lies peacefully in the Natural History Museum in London, in the Spirit Room, where it is preserved in a huge glass case. In 2004 it was caught in a fishing net off the Falkland Islands and died at the surface. The crew immediately froze its body and it was sent to be preserved in the museum by the Curator of Molluscs, Jon Ablett. It is called Archie, an affectionate short version of its Latin name Architeuthis dux. It is the longest preserved specimen of a giant squid in the world. It really has brought science to life for many people, says Ablett. Sometimes I feel a bit overshadowed by Archie, most of my work is on slugs and snails but unfortunately most people dont want to talk about that! And so today we can watch Archies graceful relative on film and stare Archie herself (she is a female) eye-to-eye in a museum. But have we finally slain the monster of the deep? Now we know there is nothing to be afraid of, can the Kraken finally be laid to rest? Probably not says Classen. We humans are afraid of the strangest things. They dont need to be realistic. Theres no indication that enlightenment and scientific progress has banished the monsters from the shadows of our imaginations. We will continue to be afraid of very strange things, including probably sea monsters. Indeed we are. The Kraken made a fearsome appearance in the blockbuster series Pirates of the Caribbean. It forced Captain Jack Sparrow to face his demons in a terrifying face-to-face encounter. Pirates needed the monstrous Kraken, nothing else would do. Or, as the German film director Werner Herzog put it, What would an ocean be without a monster lurking in the dark? It would be like sleep without dreams.", "hypothesis": "Jon Ablett admits that he likes Archie.", "label": "n"} +{"uid": "id_504", "premise": "What is it that draws us to these creatures? This inhuman place makes human monsters, wrote Stephen King in his novel The Shining. Many academics agree that monsters lurk in the deepest recesses, they prowl through our ancestral minds appearing in the half-light, under the bed or at the bottom of the sea. They dont really exist, but they play a huge role in our mindscapes, in our dreams, stories, nightmares, myths and so on, says Matthias Classen, assistant professor of literature and media at Aarhus University in Denmark, who studies monsters in literature. Monsters say something about human psychology, not the world. One Norse legend talks of the Kraken, a deep sea creature that was the curse of fishermen. 
If sailors found a place with many fish, most likely it was the monster that was driving them to the surface. If it saw the ship it would pluck the hapless sailors from the boat and drag them to a watery grave. This terrifying legend occupied the mind and pen of the poet Alfred Lord Tennyson too. In his short 1830 poem The Kraken he wrote: Below the thunders of the upper deep, / Far far beneath in the abysmal sea, / His ancient, dreamless, uninvaded sleep / The Kraken sleepeth. The deeper we travel into the ocean, the deeper we delve into our own psyche. And when we can go no further there lurks the Kraken. Most likely the Kraken is based on a real creature the giant squid. The huge mollusc takes pride of place as the personification of the terrors of the deep sea. Sailors would have encountered it at the surface, dying, and probably thrashing about. It would have made a weird sight, about the most alien thing you can imagine, says Edith Widder, CEO at the Ocean Research and Conservation Association. It has eight lashing arms and two slashing tentacles growing straight out of its head and its got serrated suckers that can latch on to the slimiest of prey and its got a parrot beak that can rip flesh. Its got an eye the size of your head, its got a jet propulsion system and three hearts that pump blue blood. The giant squid continued to dominate stories of sea monsters with the famous 1870 novel, Twenty Thousand Leagues Under the Sea, by Jules Verne. Vernes submarine fantasy is a classic story of puny man against a gigantic squid. The monster needed no embellishment this creature was scary enough, and Verne incorporated as much fact as possible into the story, says Emily Alder from Edinburgh Napier University. Twenty Thousand Leagues Under the Sea and another contemporaneous book, Victor Hugos Toilers of the Sea, both tried to represent the giant squid as they might have been actual zoological animals, much more taking the squid as a biological creature than a mythical creature. It was a given that the squid was vicious and would readily attack humans given the chance. That myth wasnt busted until 2012, when Edith Widder and her colleagues were the first people to successfully film giant squid under water and see first-hand the true character of the monster of the deep. They realised previous attempts to film squid had failed because the bright lights and noisy thrusters on submersibles had frightened them away. By quietening down the engines and using bioluminescence to attract it, they managed to see this most extraordinary animal in its natural habitat. It serenely glided into view, its body rippled with metallic colours of bronze and silver. Its huge, intelligent eye watched the submarine warily as it delicately picked at the bait with its beak. It was balletic and mesmeric. It could not have been further from the gnashing, human-destroying creature of myth and literature. In reality this is a gentle giant that is easily scared and pecks at its food. Another giant squid lies peacefully in the Natural History Museum in London, in the Spirit Room, where it is preserved in a huge glass case. In 2004 it was caught in a fishing net off the Falkland Islands and died at the surface. The crew immediately froze its body and it was sent to be preserved in the museum by the Curator of Molluscs, Jon Ablett. It is called Archie, an affectionate short version of its Latin name Architeuthis dux. It is the longest preserved specimen of a giant squid in the world. 
It really has brought science to life for many people, says Ablett. Sometimes I feel a bit overshadowed by Archie, most of my work is on slugs and snails but unfortunately most people dont want to talk about that! And so today we can watch Archies graceful relative on film and stare Archie herself (she is a female) eye-to-eye in a museum. But have we finally slain the monster of the deep? Now we know there is nothing to be afraid of, can the Kraken finally be laid to rest? Probably not says Classen. We humans are afraid of the strangest things. They dont need to be realistic. Theres no indication that enlightenment and scientific progress has banished the monsters from the shadows of our imaginations. We will continue to be afraid of very strange things, including probably sea monsters. Indeed we are. The Kraken made a fearsome appearance in the blockbuster series Pirates of the Caribbean. It forced Captain Jack Sparrow to face his demons in a terrifying face-to-face encounter. Pirates needed the monstrous Kraken, nothing else would do. Or, as the German film director Werner Herzog put it, What would an ocean be without a monster lurking in the dark? It would be like sleep without dreams.", "hypothesis": "Matthias Classen is unsure about the possibility of monsters existence.", "label": "c"} +{"uid": "id_505", "premise": "What is it that draws us to these creatures? This inhuman place makes human monsters, wrote Stephen King in his novel The Shining. Many academics agree that monsters lurk in the deepest recesses, they prowl through our ancestral minds appearing in the half-light, under the bed or at the bottom of the sea. They dont really exist, but they play a huge role in our mindscapes, in our dreams, stories, nightmares, myths and so on, says Matthias Classen, assistant professor of literature and media at Aarhus University in Denmark, who studies monsters in literature. Monsters say something about human psychology, not the world. One Norse legend talks of the Kraken, a deep sea creature that was the curse of fishermen. If sailors found a place with many fish, most likely it was the monster that was driving them to the surface. If it saw the ship it would pluck the hapless sailors from the boat and drag them to a watery grave. This terrifying legend occupied the mind and pen of the poet Alfred Lord Tennyson too. In his short 1830 poem The Kraken he wrote: Below the thunders of the upper deep, / Far far beneath in the abysmal sea, / His ancient, dreamless, uninvaded sleep / The Kraken sleepeth. The deeper we travel into the ocean, the deeper we delve into our own psyche. And when we can go no further there lurks the Kraken. Most likely the Kraken is based on a real creature the giant squid. The huge mollusc takes pride of place as the personification of the terrors of the deep sea. Sailors would have encountered it at the surface, dying, and probably thrashing about. It would have made a weird sight, about the most alien thing you can imagine, says Edith Widder, CEO at the Ocean Research and Conservation Association. It has eight lashing arms and two slashing tentacles growing straight out of its head and its got serrated suckers that can latch on to the slimiest of prey and its got a parrot beak that can rip flesh. Its got an eye the size of your head, its got a jet propulsion system and three hearts that pump blue blood. The giant squid continued to dominate stories of sea monsters with the famous 1870 novel, Twenty Thousand Leagues Under the Sea, by Jules Verne. 
Vernes submarine fantasy is a classic story of puny man against a gigantic squid. The monster needed no embellishment this creature was scary enough, and Verne incorporated as much fact as possible into the story, says Emily Alder from Edinburgh Napier University. Twenty Thousand Leagues Under the Sea and another contemporaneous book, Victor Hugos Toilers of the Sea, both tried to represent the giant squid as they might have been actual zoological animals, much more taking the squid as a biological creature than a mythical creature. It was a given that the squid was vicious and would readily attack humans given the chance. That myth wasnt busted until 2012, when Edith Widder and her colleagues were the first people to successfully film giant squid under water and see first-hand the true character of the monster of the deep. They realised previous attempts to film squid had failed because the bright lights and noisy thrusters on submersibles had frightened them away. By quietening down the engines and using bioluminescence to attract it, they managed to see this most extraordinary animal in its natural habitat. It serenely glided into view, its body rippled with metallic colours of bronze and silver. Its huge, intelligent eye watched the submarine warily as it delicately picked at the bait with its beak. It was balletic and mesmeric. It could not have been further from the gnashing, human-destroying creature of myth and literature. In reality this is a gentle giant that is easily scared and pecks at its food. Another giant squid lies peacefully in the Natural History Museum in London, in the Spirit Room, where it is preserved in a huge glass case. In 2004 it was caught in a fishing net off the Falkland Islands and died at the surface. The crew immediately froze its body and it was sent to be preserved in the museum by the Curator of Molluscs, Jon Ablett. It is called Archie, an affectionate short version of its Latin name Architeuthis dux. It is the longest preserved specimen of a giant squid in the world. It really has brought science to life for many people, says Ablett. Sometimes I feel a bit overshadowed by Archie, most of my work is on slugs and snails but unfortunately most people dont want to talk about that! And so today we can watch Archies graceful relative on film and stare Archie herself (she is a female) eye-to-eye in a museum. But have we finally slain the monster of the deep? Now we know there is nothing to be afraid of, can the Kraken finally be laid to rest? Probably not says Classen. We humans are afraid of the strangest things. They dont need to be realistic. Theres no indication that enlightenment and scientific progress has banished the monsters from the shadows of our imaginations. We will continue to be afraid of very strange things, including probably sea monsters. Indeed we are. The Kraken made a fearsome appearance in the blockbuster series Pirates of the Caribbean. It forced Captain Jack Sparrow to face his demons in a terrifying face-to-face encounter. Pirates needed the monstrous Kraken, nothing else would do. Or, as the German film director Werner Herzog put it, What would an ocean be without a monster lurking in the dark? It would be like sleep without dreams.", "hypothesis": "Kraken is probably based on an imaginary animal.", "label": "c"} +{"uid": "id_506", "premise": "What is patriotism? Is it love of ones birthplace, the place of childhoods recollections and hopes, dreams and aspirations? 
Is it the place where, in child-like naivety, we would watch the fleeting clouds and wonder why we too could not run so swiftly? The place where we would count the milliard glittering stars, terror-stricken lest each one an eye should be, piercing the very depths of our little souls? Is it the place where we would listen to the music of the birds and long to have wings to fly, even as they, to distant lands? Or the place where we would sit at mothers knee, enraptured by wonderful tales of great deeds and conquests? In short, is it love for the spot, every inch representing dear and precious recollections of a happy, joyous, and playful childhood? If that were patriotism, few American men of today could be called upon to be patriotic, since the place of play has been turned into a factory, mill, and mine, while deafening sounds of machinery have replaced the music of the birds. Nor can we still hear the tales of great deeds, for the stories our mothers tell today are but those of sorrow, tears and grief. What, then, is patriotism? Patriotism, sir, is the last resort of scoundrels, said Dr. Johnson. Leo Tolstoy, the greatest anti-patriot of our times, defines patriotism as the principle that will justify the training of wholesale murderers; a trade that requires better equipment for the exercise of man-killing than the making of such necessities of life as shoes, clothing, and houses; a trade that guarantees better returns and greater glory than that of the average workingman.", "hypothesis": "The author believes patriotism as being the love for the spot where one grew up and had many happy memories.", "label": "n"} +{"uid": "id_507", "premise": "What is patriotism? Is it love of ones birthplace, the place of childhoods recollections and hopes, dreams and aspirations? Is it the place where, in child-like naivety, we would watch the fleeting clouds and wonder why we too could not run so swiftly? The place where we would count the milliard glittering stars, terror-stricken lest each one an eye should be, piercing the very depths of our little souls? Is it the place where we would listen to the music of the birds and long to have wings to fly, even as they, to distant lands? Or the place where we would sit at mothers knee, enraptured by wonderful tales of great deeds and conquests? In short, is it love for the spot, every inch representing dear and precious recollections of a happy, joyous, and playful childhood? If that were patriotism, few American men of today could be called upon to be patriotic, since the place of play has been turned into a factory, mill, and mine, while deafening sounds of machinery have replaced the music of the birds. Nor can we still hear the tales of great deeds, for the stories our mothers tell today are but those of sorrow, tears and grief. What, then, is patriotism? Patriotism, sir, is the last resort of scoundrels, said Dr. Johnson. Leo Tolstoy, the greatest anti-patriot of our times, defines patriotism as the principle that will justify the training of wholesale murderers; a trade that requires better equipment for the exercise of man-killing than the making of such necessities of life as shoes, clothing, and houses; a trade that guarantees better returns and greater glory than that of the average workingman.", "hypothesis": "Few Americans of today are patriotic", "label": "c"} +{"uid": "id_508", "premise": "What qualities and attributes make a political leader successful? 
A recent poll asked voters what they looked for in the ideal political leader; a good economic strategy perhaps, a willingness to admit the past mistakes of ones party or the ability to be likeable as a person. The popular answer seems to be credibility; voters want someone they can trust. How does this translate into a political strategy? To begin with, the poll suggests confusion surrounds the very basics of politics; what do the parties stand for anymore? The past few years has seen such a convergence in political ideals that the once clear blue conservative and red labour lines are now somewhat purple. To be credible, to be a successful political leader, you mustnt be afraid of hanging the banners and stating your policy. By this, and not doing a political 180 after the election, is the key to number 10.", "hypothesis": "Conservative and Labour parties have become more distinct.", "label": "c"} +{"uid": "id_509", "premise": "What qualities and attributes make a political leader successful? A recent poll asked voters what they looked for in the ideal political leader; a good economic strategy perhaps, a willingness to admit the past mistakes of ones party or the ability to be likeable as a person. The popular answer seems to be credibility; voters want someone they can trust. How does this translate into a political strategy? To begin with, the poll suggests confusion surrounds the very basics of politics; what do the parties stand for anymore? The past few years has seen such a convergence in political ideals that the once clear blue conservative and red labour lines are now somewhat purple. To be credible, to be a successful political leader, you mustnt be afraid of hanging the banners and stating your policy. By this, and not doing a political 180 after the election, is the key to number 10.", "hypothesis": "conservative and Labour parties have become similar", "label": "e"} +{"uid": "id_510", "premise": "What qualities and attributes make a political leader successful? A recent poll asked voters what they looked for in the ideal political leader; a good economic strategy perhaps, a willingness to admit the past mistakes of ones party or the ability to be likeable as a person. The popular answer seems to be credibility; voters want someone they can trust. How does this translate into a political strategy? To begin with, the poll suggests confusion surrounds the very basics of politics; what do the parties stand for anymore? The past few years has seen such a convergence in political ideals that the once clear blue conservative and red labour lines are now somewhat purple. To be credible, to be a successful political leader, you mustnt be afraid of hanging the banners and stating your policy. By this, and not doing a political 180 after the election, is the key to number 10.", "hypothesis": "Conservative and Labour party are currently merging", "label": "n"} +{"uid": "id_511", "premise": "What qualities and attributes make a political leader successful? A recent poll asked voters what they looked for in the ideal political leader; a good economic strategy perhaps, a willingness to admit the past mistakes of ones party or the ability to be likeable as a person. The popular answer seems to be credibility; voters want someone they can trust. How does this translate into a political strategy? To begin with, the poll suggests confusion surrounds the very basics of politics; what do the parties stand for anymore? 
The past few years has seen such a convergence in political ideals that the once clear blue conservative and red labour lines are now somewhat purple. To be credible, to be a successful political leader, you mustnt be afraid of hanging the banners and stating your policy. By this, and not doing a political 180 after the election, is the key to number 10.", "hypothesis": "Conservative and Labour MPs have become geographically closer.", "label": "n"} +{"uid": "id_512", "premise": "What the Managers Really Do? When students graduate and first enter the workforce, the most common choice is to find an entry-level position. This can be a job such as an unpaid internship, an assistant, a secretary, or a junior partner position. Traditionally, we start with simpler jobs and work our way up. Young professionals start out with a plan to become senior partners, associates, or even managers of a workplace. However, these promotions can be few and far between, leaving many young professionals unfamiliar with management experience. An important step is understanding the role and responsibilities of a person in a managing position. Managers are organisational members who are responsible for the work performance of other organisational members. Managers have formal authority to use organisational resources and to make decisions. Managers at different levels of the organisation engage in different amounts of time on the four managerial functions of planning, organising, leading, and controlling. However, as many professionals already know, managing styles can be very different depending on where you work. Some managing styles are strictly hierarchical. Other managing styles can be more casual and relaxed, where the manager may act more like a team member rather than a strict boss. Many researchers have created a more scientific approach in studying these different approaches to managing. In the 1960s, researcher Henry Mintzberg created a seminal organisational model using three categories. These categories represent three major functional approaches, which are designated as interpersonal, informational and decisional. Introduced Category 1: INTERPERSONAL ROLES. Interpersonal roles require managers to direct and supervise employees and the organisation. The figurehead is typically a top of middle manager. This manager may communicate future organisational goals or ethical guidelines to employees at company meetings. They also attend ribbon-cutting ceremonies, host receptions, presentations and other activities associated with the figurehead role. A leader acts as an example for other employees to follow, gives commands and directions to subordinates, makes decisions, and mobilises employee support. They are also responsible for the selection and training of employees. Managers must be leaders at all levels of the organisation; often lower-level managers look to top management for this leadership example. In the role of liaison, a manager must coordinate the work of others in different work units, establish alliances between others, and work to share resources. This role is particularly critical for middle managers, who must often compete with other managers for important resources, yet must maintain successful working relationships with them for long time periods. Introduced Category 2: INFORMATIONAL ROLES. Informational roles are those in which managers obtain and transmit information. These roles have changed dramatically as technology has improved. 
The monitor evaluates the performance of others and takes corrective action to improve that performance. Monitors also watch for changes in the environment and within the company that may affect individual and organisational performance. Monitoring occurs at all levels of management. The role of disseminator requires that managers inform employees of changes that affect them and the organisation. They also communicate the companys vision and purpose. Introduced Category 3: DECISIONAL ROLES. Decisional roles require managers to plan strategy and utilise resources. There are four specific roles that are decisional. The entrepreneur role requires the manager to assign resources to develop innovative goods and services, or to expand a business. The disturbance handler corrects unanticipated problems facing the organisation from the internal or external environment. The third decisional role, that of resource allocator, involves determining which work units will get which resources. Top managers are likely to make large, overall budget decisions, while middle managers may make more specific allocations. Finally, the negotiator works with others, such as suppliers, distributors, or labor unions, to reach agreements regarding products and services. Although Mintzbergs initial research in 1960s helped categorise manager approaches, Mintzberg was still concerned about research involving other roles in the workplace. Minstzberg considered expanding his research to other roles, such as the role of disseminator, figurehead, liaison and spokesperson. Each role would have different special characteristics, and a new categorisation system would have to be made for each role to understand it properly. While Mintzbergs initial research was helpful in starting the conversation, there has since been criticism of his methods from other researchers. Some criticisms of the work were that even though there were multiple categories, the role of manager is still more complex. There are still many manager roles that are not as traditional and are not captured in Mintzbergs original three categories. In addition, sometimes, Mintzbergs research was not always effective. The research, when applied to real-life situations, did not always improve the management process in real-life practice. These two criticisms against Mintzbergs research method raised some questions about whether or not the research was useful to how we understand managers in todays world. However, even if the criticisms against Mintzbergs work are true, it does not mean that the original research from the 1960s is completely useless. Those researchers did not say Mintzbergs research is invalid. His research has two positive functions to the further research. The first positive function is Mintzberg provided a useful functional approach to analyse management. And he used this approach to provide a clear concept of the role of manager to the researcher. When researching human behavior, it is important to be concise about the subject of the research. Mintzbergs research has helped other researchers clearly define what a manager is, because in real-life situations, the manager is not always the same position title. Mintzbergs definitions added clarity and precision to future research on the topic. The second positive function is Mintzbergs research could be regarded as a good beginning to give a new insight to further research on this field in the future. Scientific research is always a gradual process. 
Just because Mintzbergs initial research had certain flaws, does not mean it is useless to other researchers. Researchers who are interested in studying the workplace in a systematic way have older research to look back on. A researcher doesnt have to start from the very beginning older research like Mintzbergs have shown what methods work well and what methods are not as appropriate for workplace dynamics. As more young professionals enter the job market, this research will continue to study and change the way we think about the modern workplace.", "hypothesis": "Young professionals can easily know management experience in the workplace.", "label": "c"} +{"uid": "id_513", "premise": "What the Managers Really Do? When students graduate and first enter the workforce, the most common choice is to find an entry-level position. This can be a job such as an unpaid internship, an assistant, a secretary, or a junior partner position. Traditionally, we start with simpler jobs and work our way up. Young professionals start out with a plan to become senior partners, associates, or even managers of a workplace. However, these promotions can be few and far between, leaving many young professionals unfamiliar with management experience. An important step is understanding the role and responsibilities of a person in a managing position. Managers are organisational members who are responsible for the work performance of other organisational members. Managers have formal authority to use organisational resources and to make decisions. Managers at different levels of the organisation engage in different amounts of time on the four managerial functions of planning, organising, leading, and controlling. However, as many professionals already know, managing styles can be very different depending on where you work. Some managing styles are strictly hierarchical. Other managing styles can be more casual and relaxed, where the manager may act more like a team member rather than a strict boss. Many researchers have created a more scientific approach in studying these different approaches to managing. In the 1960s, researcher Henry Mintzberg created a seminal organisational model using three categories. These categories represent three major functional approaches, which are designated as interpersonal, informational and decisional. Introduced Category 1: INTERPERSONAL ROLES. Interpersonal roles require managers to direct and supervise employees and the organisation. The figurehead is typically a top of middle manager. This manager may communicate future organisational goals or ethical guidelines to employees at company meetings. They also attend ribbon-cutting ceremonies, host receptions, presentations and other activities associated with the figurehead role. A leader acts as an example for other employees to follow, gives commands and directions to subordinates, makes decisions, and mobilises employee support. They are also responsible for the selection and training of employees. Managers must be leaders at all levels of the organisation; often lower-level managers look to top management for this leadership example. In the role of liaison, a manager must coordinate the work of others in different work units, establish alliances between others, and work to share resources. This role is particularly critical for middle managers, who must often compete with other managers for important resources, yet must maintain successful working relationships with them for long time periods. 
Introduced Category 2: INFORMATIONAL ROLES. Informational roles are those in which managers obtain and transmit information. These roles have changed dramatically as technology has improved. The monitor evaluates the performance of others and takes corrective action to improve that performance. Monitors also watch for changes in the environment and within the company that may affect individual and organisational performance. Monitoring occurs at all levels of management. The role of disseminator requires that managers inform employees of changes that affect them and the organisation. They also communicate the companys vision and purpose. Introduced Category 3: DECISIONAL ROLES. Decisional roles require managers to plan strategy and utilise resources. There are four specific roles that are decisional. The entrepreneur role requires the manager to assign resources to develop innovative goods and services, or to expand a business. The disturbance handler corrects unanticipated problems facing the organisation from the internal or external environment. The third decisional role, that of resource allocator, involves determining which work units will get which resources. Top managers are likely to make large, overall budget decisions, while middle managers may make more specific allocations. Finally, the negotiator works with others, such as suppliers, distributors, or labor unions, to reach agreements regarding products and services. Although Mintzbergs initial research in 1960s helped categorise manager approaches, Mintzberg was still concerned about research involving other roles in the workplace. Minstzberg considered expanding his research to other roles, such as the role of disseminator, figurehead, liaison and spokesperson. Each role would have different special characteristics, and a new categorisation system would have to be made for each role to understand it properly. While Mintzbergs initial research was helpful in starting the conversation, there has since been criticism of his methods from other researchers. Some criticisms of the work were that even though there were multiple categories, the role of manager is still more complex. There are still many manager roles that are not as traditional and are not captured in Mintzbergs original three categories. In addition, sometimes, Mintzbergs research was not always effective. The research, when applied to real-life situations, did not always improve the management process in real-life practice. These two criticisms against Mintzbergs research method raised some questions about whether or not the research was useful to how we understand managers in todays world. However, even if the criticisms against Mintzbergs work are true, it does not mean that the original research from the 1960s is completely useless. Those researchers did not say Mintzbergs research is invalid. His research has two positive functions to the further research. The first positive function is Mintzberg provided a useful functional approach to analyse management. And he used this approach to provide a clear concept of the role of manager to the researcher. When researching human behavior, it is important to be concise about the subject of the research. Mintzbergs research has helped other researchers clearly define what a manager is, because in real-life situations, the manager is not always the same position title. Mintzbergs definitions added clarity and precision to future research on the topic. 
The second positive function is Mintzbergs research could be regarded as a good beginning to give a new insight to further research on this field in the future. Scientific research is always a gradual process. Just because Mintzbergs initial research had certain flaws, does not mean it is useless to other researchers. Researchers who are interested in studying the workplace in a systematic way have older research to look back on. A researcher doesnt have to start from the very beginning older research like Mintzbergs have shown what methods work well and what methods are not as appropriate for workplace dynamics. As more young professionals enter the job market, this research will continue to study and change the way we think about the modern workplace.", "hypothesis": "Mintzbergs theory is valuable for future studies.", "label": "e"} +{"uid": "id_514", "premise": "What the Managers Really Do? When students graduate and first enter the workforce, the most common choice is to find an entry-level position. This can be a job such as an unpaid internship, an assistant, a secretary, or a junior partner position. Traditionally, we start with simpler jobs and work our way up. Young professionals start out with a plan to become senior partners, associates, or even managers of a workplace. However, these promotions can be few and far between, leaving many young professionals unfamiliar with management experience. An important step is understanding the role and responsibilities of a person in a managing position. Managers are organisational members who are responsible for the work performance of other organisational members. Managers have formal authority to use organisational resources and to make decisions. Managers at different levels of the organisation engage in different amounts of time on the four managerial functions of planning, organising, leading, and controlling. However, as many professionals already know, managing styles can be very different depending on where you work. Some managing styles are strictly hierarchical. Other managing styles can be more casual and relaxed, where the manager may act more like a team member rather than a strict boss. Many researchers have created a more scientific approach in studying these different approaches to managing. In the 1960s, researcher Henry Mintzberg created a seminal organisational model using three categories. These categories represent three major functional approaches, which are designated as interpersonal, informational and decisional. Introduced Category 1: INTERPERSONAL ROLES. Interpersonal roles require managers to direct and supervise employees and the organisation. The figurehead is typically a top of middle manager. This manager may communicate future organisational goals or ethical guidelines to employees at company meetings. They also attend ribbon-cutting ceremonies, host receptions, presentations and other activities associated with the figurehead role. A leader acts as an example for other employees to follow, gives commands and directions to subordinates, makes decisions, and mobilises employee support. They are also responsible for the selection and training of employees. Managers must be leaders at all levels of the organisation; often lower-level managers look to top management for this leadership example. In the role of liaison, a manager must coordinate the work of others in different work units, establish alliances between others, and work to share resources. 
This role is particularly critical for middle managers, who must often compete with other managers for important resources, yet must maintain successful working relationships with them for long time periods. Introduced Category 2: INFORMATIONAL ROLES. Informational roles are those in which managers obtain and transmit information. These roles have changed dramatically as technology has improved. The monitor evaluates the performance of others and takes corrective action to improve that performance. Monitors also watch for changes in the environment and within the company that may affect individual and organisational performance. Monitoring occurs at all levels of management. The role of disseminator requires that managers inform employees of changes that affect them and the organisation. They also communicate the companys vision and purpose. Introduced Category 3: DECISIONAL ROLES. Decisional roles require managers to plan strategy and utilise resources. There are four specific roles that are decisional. The entrepreneur role requires the manager to assign resources to develop innovative goods and services, or to expand a business. The disturbance handler corrects unanticipated problems facing the organisation from the internal or external environment. The third decisional role, that of resource allocator, involves determining which work units will get which resources. Top managers are likely to make large, overall budget decisions, while middle managers may make more specific allocations. Finally, the negotiator works with others, such as suppliers, distributors, or labor unions, to reach agreements regarding products and services. Although Mintzbergs initial research in 1960s helped categorise manager approaches, Mintzberg was still concerned about research involving other roles in the workplace. Minstzberg considered expanding his research to other roles, such as the role of disseminator, figurehead, liaison and spokesperson. Each role would have different special characteristics, and a new categorisation system would have to be made for each role to understand it properly. While Mintzbergs initial research was helpful in starting the conversation, there has since been criticism of his methods from other researchers. Some criticisms of the work were that even though there were multiple categories, the role of manager is still more complex. There are still many manager roles that are not as traditional and are not captured in Mintzbergs original three categories. In addition, sometimes, Mintzbergs research was not always effective. The research, when applied to real-life situations, did not always improve the management process in real-life practice. These two criticisms against Mintzbergs research method raised some questions about whether or not the research was useful to how we understand managers in todays world. However, even if the criticisms against Mintzbergs work are true, it does not mean that the original research from the 1960s is completely useless. Those researchers did not say Mintzbergs research is invalid. His research has two positive functions to the further research. The first positive function is Mintzberg provided a useful functional approach to analyse management. And he used this approach to provide a clear concept of the role of manager to the researcher. When researching human behavior, it is important to be concise about the subject of the research. 
Mintzbergs research has helped other researchers clearly define what a manager is, because in real-life situations, the manager is not always the same position title. Mintzbergs definitions added clarity and precision to future research on the topic. The second positive function is Mintzbergs research could be regarded as a good beginning to give a new insight to further research on this field in the future. Scientific research is always a gradual process. Just because Mintzbergs initial research had certain flaws, does not mean it is useless to other researchers. Researchers who are interested in studying the workplace in a systematic way have older research to look back on. A researcher doesnt have to start from the very beginning older research like Mintzbergs have shown what methods work well and what methods are not as appropriate for workplace dynamics. As more young professionals enter the job market, this research will continue to study and change the way we think about the modern workplace.", "hypothesis": "All managers do the same work.", "label": "c"} +{"uid": "id_515", "premise": "What the Managers Really Do? When students graduate and first enter the workforce, the most common choice is to find an entry-level position. This can be a job such as an unpaid internship, an assistant, a secretary, or a junior partner position. Traditionally, we start with simpler jobs and work our way up. Young professionals start out with a plan to become senior partners, associates, or even managers of a workplace. However, these promotions can be few and far between, leaving many young professionals unfamiliar with management experience. An important step is understanding the role and responsibilities of a person in a managing position. Managers are organisational members who are responsible for the work performance of other organisational members. Managers have formal authority to use organisational resources and to make decisions. Managers at different levels of the organisation engage in different amounts of time on the four managerial functions of planning, organising, leading, and controlling. However, as many professionals already know, managing styles can be very different depending on where you work. Some managing styles are strictly hierarchical. Other managing styles can be more casual and relaxed, where the manager may act more like a team member rather than a strict boss. Many researchers have created a more scientific approach in studying these different approaches to managing. In the 1960s, researcher Henry Mintzberg created a seminal organisational model using three categories. These categories represent three major functional approaches, which are designated as interpersonal, informational and decisional. Introduced Category 1: INTERPERSONAL ROLES. Interpersonal roles require managers to direct and supervise employees and the organisation. The figurehead is typically a top of middle manager. This manager may communicate future organisational goals or ethical guidelines to employees at company meetings. They also attend ribbon-cutting ceremonies, host receptions, presentations and other activities associated with the figurehead role. A leader acts as an example for other employees to follow, gives commands and directions to subordinates, makes decisions, and mobilises employee support. They are also responsible for the selection and training of employees. 
Managers must be leaders at all levels of the organisation; often lower-level managers look to top management for this leadership example. In the role of liaison, a manager must coordinate the work of others in different work units, establish alliances between others, and work to share resources. This role is particularly critical for middle managers, who must often compete with other managers for important resources, yet must maintain successful working relationships with them for long time periods. Introduced Category 2: INFORMATIONAL ROLES. Informational roles are those in which managers obtain and transmit information. These roles have changed dramatically as technology has improved. The monitor evaluates the performance of others and takes corrective action to improve that performance. Monitors also watch for changes in the environment and within the company that may affect individual and organisational performance. Monitoring occurs at all levels of management. The role of disseminator requires that managers inform employees of changes that affect them and the organisation. They also communicate the companys vision and purpose. Introduced Category 3: DECISIONAL ROLES. Decisional roles require managers to plan strategy and utilise resources. There are four specific roles that are decisional. The entrepreneur role requires the manager to assign resources to develop innovative goods and services, or to expand a business. The disturbance handler corrects unanticipated problems facing the organisation from the internal or external environment. The third decisional role, that of resource allocator, involves determining which work units will get which resources. Top managers are likely to make large, overall budget decisions, while middle managers may make more specific allocations. Finally, the negotiator works with others, such as suppliers, distributors, or labor unions, to reach agreements regarding products and services. Although Mintzbergs initial research in 1960s helped categorise manager approaches, Mintzberg was still concerned about research involving other roles in the workplace. Minstzberg considered expanding his research to other roles, such as the role of disseminator, figurehead, liaison and spokesperson. Each role would have different special characteristics, and a new categorisation system would have to be made for each role to understand it properly. While Mintzbergs initial research was helpful in starting the conversation, there has since been criticism of his methods from other researchers. Some criticisms of the work were that even though there were multiple categories, the role of manager is still more complex. There are still many manager roles that are not as traditional and are not captured in Mintzbergs original three categories. In addition, sometimes, Mintzbergs research was not always effective. The research, when applied to real-life situations, did not always improve the management process in real-life practice. These two criticisms against Mintzbergs research method raised some questions about whether or not the research was useful to how we understand managers in todays world. However, even if the criticisms against Mintzbergs work are true, it does not mean that the original research from the 1960s is completely useless. Those researchers did not say Mintzbergs research is invalid. His research has two positive functions to the further research. The first positive function is Mintzberg provided a useful functional approach to analyse management. 
And he used this approach to provide a clear concept of the role of manager to the researcher. When researching human behavior, it is important to be concise about the subject of the research. Mintzbergs research has helped other researchers clearly define what a manager is, because in real-life situations, the manager is not always the same position title. Mintzbergs definitions added clarity and precision to future research on the topic. The second positive function is Mintzbergs research could be regarded as a good beginning to give a new insight to further research on this field in the future. Scientific research is always a gradual process. Just because Mintzbergs initial research had certain flaws, does not mean it is useless to other researchers. Researchers who are interested in studying the workplace in a systematic way have older research to look back on. A researcher doesnt have to start from the very beginning older research like Mintzbergs have shown what methods work well and what methods are not as appropriate for workplace dynamics. As more young professionals enter the job market, this research will continue to study and change the way we think about the modern workplace.", "hypothesis": "Mintzbergs theory broke well-established notions about managing styles.", "label": "n"} +{"uid": "id_516", "premise": "What the Managers Really Do? When students graduate and first enter the workforce, the most common choice is to find an entry-level position. This can be a job such as an unpaid internship, an assistant, a secretary, or a junior partner position. Traditionally, we start with simpler jobs and work our way up. Young professionals start out with a plan to become senior partners, associates, or even managers of a workplace. However, these promotions can be few and far between, leaving many young professionals unfamiliar with management experience. An important step is understanding the role and responsibilities of a person in a managing position. Managers are organisational members who are responsible for the work performance of other organisational members. Managers have formal authority to use organisational resources and to make decisions. Managers at different levels of the organisation engage in different amounts of time on the four managerial functions of planning, organising, leading, and controlling. However, as many professionals already know, managing styles can be very different depending on where you work. Some managing styles are strictly hierarchical. Other managing styles can be more casual and relaxed, where the manager may act more like a team member rather than a strict boss. Many researchers have created a more scientific approach in studying these different approaches to managing. In the 1960s, researcher Henry Mintzberg created a seminal organisational model using three categories. These categories represent three major functional approaches, which are designated as interpersonal, informational and decisional. Introduced Category 1: INTERPERSONAL ROLES. Interpersonal roles require managers to direct and supervise employees and the organisation. The figurehead is typically a top of middle manager. This manager may communicate future organisational goals or ethical guidelines to employees at company meetings. They also attend ribbon-cutting ceremonies, host receptions, presentations and other activities associated with the figurehead role. 
A leader acts as an example for other employees to follow, gives commands and directions to subordinates, makes decisions, and mobilises employee support. They are also responsible for the selection and training of employees. Managers must be leaders at all levels of the organisation; often lower-level managers look to top management for this leadership example. In the role of liaison, a manager must coordinate the work of others in different work units, establish alliances between others, and work to share resources. This role is particularly critical for middle managers, who must often compete with other managers for important resources, yet must maintain successful working relationships with them for long time periods. Introduced Category 2: INFORMATIONAL ROLES. Informational roles are those in which managers obtain and transmit information. These roles have changed dramatically as technology has improved. The monitor evaluates the performance of others and takes corrective action to improve that performance. Monitors also watch for changes in the environment and within the company that may affect individual and organisational performance. Monitoring occurs at all levels of management. The role of disseminator requires that managers inform employees of changes that affect them and the organisation. They also communicate the companys vision and purpose. Introduced Category 3: DECISIONAL ROLES. Decisional roles require managers to plan strategy and utilise resources. There are four specific roles that are decisional. The entrepreneur role requires the manager to assign resources to develop innovative goods and services, or to expand a business. The disturbance handler corrects unanticipated problems facing the organisation from the internal or external environment. The third decisional role, that of resource allocator, involves determining which work units will get which resources. Top managers are likely to make large, overall budget decisions, while middle managers may make more specific allocations. Finally, the negotiator works with others, such as suppliers, distributors, or labor unions, to reach agreements regarding products and services. Although Mintzbergs initial research in 1960s helped categorise manager approaches, Mintzberg was still concerned about research involving other roles in the workplace. Minstzberg considered expanding his research to other roles, such as the role of disseminator, figurehead, liaison and spokesperson. Each role would have different special characteristics, and a new categorisation system would have to be made for each role to understand it properly. While Mintzbergs initial research was helpful in starting the conversation, there has since been criticism of his methods from other researchers. Some criticisms of the work were that even though there were multiple categories, the role of manager is still more complex. There are still many manager roles that are not as traditional and are not captured in Mintzbergs original three categories. In addition, sometimes, Mintzbergs research was not always effective. The research, when applied to real-life situations, did not always improve the management process in real-life practice. These two criticisms against Mintzbergs research method raised some questions about whether or not the research was useful to how we understand managers in todays world. However, even if the criticisms against Mintzbergs work are true, it does not mean that the original research from the 1960s is completely useless. 
Those researchers did not say Mintzbergs research is invalid. His research has two positive functions to the further research. The first positive function is Mintzberg provided a useful functional approach to analyse management. And he used this approach to provide a clear concept of the role of manager to the researcher. When researching human behavior, it is important to be concise about the subject of the research. Mintzbergs research has helped other researchers clearly define what a manager is, because in real-life situations, the manager is not always the same position title. Mintzbergs definitions added clarity and precision to future research on the topic. The second positive function is Mintzbergs research could be regarded as a good beginning to give a new insight to further research on this field in the future. Scientific research is always a gradual process. Just because Mintzbergs initial research had certain flaws, does not mean it is useless to other researchers. Researchers who are interested in studying the workplace in a systematic way have older research to look back on. A researcher doesnt have to start from the very beginning older research like Mintzbergs have shown what methods work well and what methods are not as appropriate for workplace dynamics. As more young professionals enter the job market, this research will continue to study and change the way we think about the modern workplace.", "hypothesis": "Mintzberg got a large amount of research funds for his contribution.", "label": "n"} +{"uid": "id_517", "premise": "What to do in a fire? Fire drills are a big part of being safe in school: They prepare you for what you need to do in case of a fire. But what if there was a fire where you live? Would you know what to do? Talking about fires can be scary because no one likes to think about people getting hurt or their things getting burned. But you can feel less worried if you are prepared. Its a good idea for families to talk about what they would do to escape a fire. Different families will have different strategies. Some kids live in one-story houses and other kids live in tall buildings. Youll want to talk about escape plans and escape routes, so lets start there. Know Your Way Out An escape plan can help every member of a family get out of a burning house. The idea is to get outside quickly and safely. Smoke from a fire can make it hard to see where things are, so its important to learn and remember the different ways out of your home. How many exits are there? How do you get to them from your room? Its a good idea to have your family draw a map of the escape plan. Its possible one way out could be blocked by fire or smoke, so youll want to know where other ones are. And if you live in an apartment building, youll want to know the best way to the stairwell or other emergency exits. Safety Steps If youre in a room with the door closed when the fire breaks out, you need to take a few extra steps: Check to see if theres heat or smoke coming in the cracks around the door. (Youre checking to see if theres fire on the other side. ) If you see smoke coming under the door dont open the door! If you dont see smoke touch the door. If the door is hot or very warm dont open the door! If you dont see smoke and the door is not hot then use your fingers to lightly touch the doorknob. If the doorknob is hot or very warm dont open the door! If the doorknob feels cool, and you cant see any smoke around the door, you can open the door very carefully and slowly. 
When you open the door, if you feel a burst of heat or smoke pours into the room, quickly shut the door and make sure it is really closed. If theres no smoke or heat when you open the door, go toward your escape route exit.", "hypothesis": "If you open the door and everything seems fine, go straight to the exit.", "label": "e"} +{"uid": "id_518", "premise": "What to do in a fire? Fire drills are a big part of being safe in school: They prepare you for what you need to do in case of a fire. But what if there was a fire where you live? Would you know what to do? Talking about fires can be scary because no one likes to think about people getting hurt or their things getting burned. But you can feel less worried if you are prepared. Its a good idea for families to talk about what they would do to escape a fire. Different families will have different strategies. Some kids live in one-story houses and other kids live in tall buildings. Youll want to talk about escape plans and escape routes, so lets start there. Know Your Way Out An escape plan can help every member of a family get out of a burning house. The idea is to get outside quickly and safely. Smoke from a fire can make it hard to see where things are, so its important to learn and remember the different ways out of your home. How many exits are there? How do you get to them from your room? Its a good idea to have your family draw a map of the escape plan. Its possible one way out could be blocked by fire or smoke, so youll want to know where other ones are. And if you live in an apartment building, youll want to know the best way to the stairwell or other emergency exits. Safety Steps If youre in a room with the door closed when the fire breaks out, you need to take a few extra steps: Check to see if theres heat or smoke coming in the cracks around the door. (Youre checking to see if theres fire on the other side. ) If you see smoke coming under the door dont open the door! If you dont see smoke touch the door. If the door is hot or very warm dont open the door! If you dont see smoke and the door is not hot then use your fingers to lightly touch the doorknob. If the doorknob is hot or very warm dont open the door! If the doorknob feels cool, and you cant see any smoke around the door, you can open the door very carefully and slowly. When you open the door, if you feel a burst of heat or smoke pours into the room, quickly shut the door and make sure it is really closed. If theres no smoke or heat when you open the door, go toward your escape route exit.", "hypothesis": "Hot door means you shouldnt open it to escape.", "label": "e"} +{"uid": "id_519", "premise": "What to do in a fire? Fire drills are a big part of being safe in school: They prepare you for what you need to do in case of a fire. But what if there was a fire where you live? Would you know what to do? Talking about fires can be scary because no one likes to think about people getting hurt or their things getting burned. But you can feel less worried if you are prepared. Its a good idea for families to talk about what they would do to escape a fire. Different families will have different strategies. Some kids live in one-story houses and other kids live in tall buildings. Youll want to talk about escape plans and escape routes, so lets start there. Know Your Way Out An escape plan can help every member of a family get out of a burning house. The idea is to get outside quickly and safely. 
Smoke from a fire can make it hard to see where things are, so its important to learn and remember the different ways out of your home. How many exits are there? How do you get to them from your room? Its a good idea to have your family draw a map of the escape plan. Its possible one way out could be blocked by fire or smoke, so youll want to know where other ones are. And if you live in an apartment building, youll want to know the best way to the stairwell or other emergency exits. Safety Steps If youre in a room with the door closed when the fire breaks out, you need to take a few extra steps: Check to see if theres heat or smoke coming in the cracks around the door. (Youre checking to see if theres fire on the other side. ) If you see smoke coming under the door dont open the door! If you dont see smoke touch the door. If the door is hot or very warm dont open the door! If you dont see smoke and the door is not hot then use your fingers to lightly touch the doorknob. If the doorknob is hot or very warm dont open the door! If the doorknob feels cool, and you cant see any smoke around the door, you can open the door very carefully and slowly. When you open the door, if you feel a burst of heat or smoke pours into the room, quickly shut the door and make sure it is really closed. If theres no smoke or heat when you open the door, go toward your escape route exit.", "hypothesis": "If youre stuck in a room, and see smoke coming into your room, you should open the door and ran to the exit.", "label": "c"} +{"uid": "id_520", "premise": "What to do in a fire? Fire drills are a big part of being safe in school: They prepare you for what you need to do in case of a fire. But what if there was a fire where you live? Would you know what to do? Talking about fires can be scary because no one likes to think about people getting hurt or their things getting burned. But you can feel less worried if you are prepared. Its a good idea for families to talk about what they would do to escape a fire. Different families will have different strategies. Some kids live in one-story houses and other kids live in tall buildings. Youll want to talk about escape plans and escape routes, so lets start there. Know Your Way Out An escape plan can help every member of a family get out of a burning house. The idea is to get outside quickly and safely. Smoke from a fire can make it hard to see where things are, so its important to learn and remember the different ways out of your home. How many exits are there? How do you get to them from your room? Its a good idea to have your family draw a map of the escape plan. Its possible one way out could be blocked by fire or smoke, so youll want to know where other ones are. And if you live in an apartment building, youll want to know the best way to the stairwell or other emergency exits. Safety Steps If youre in a room with the door closed when the fire breaks out, you need to take a few extra steps: Check to see if theres heat or smoke coming in the cracks around the door. (Youre checking to see if theres fire on the other side. ) If you see smoke coming under the door dont open the door! If you dont see smoke touch the door. If the door is hot or very warm dont open the door! If you dont see smoke and the door is not hot then use your fingers to lightly touch the doorknob. If the doorknob is hot or very warm dont open the door! If the doorknob feels cool, and you cant see any smoke around the door, you can open the door very carefully and slowly. 
When you open the door, if you feel a burst of heat or smoke pours into the room, quickly shut the door and make sure it is really closed. If theres no smoke or heat when you open the door, go toward your escape route exit.", "hypothesis": "You should mark different ways out of your home on the map.", "label": "n"} +{"uid": "id_521", "premise": "What to do in a fire? Fire drills are a big part of being safe in school: They prepare you for what you need to do in case of a fire. But what if there was a fire where you live? Would you know what to do? Talking about fires can be scary because no one likes to think about people getting hurt or their things getting burned. But you can feel less worried if you are prepared. Its a good idea for families to talk about what they would do to escape a fire. Different families will have different strategies. Some kids live in one-story houses and other kids live in tall buildings. Youll want to talk about escape plans and escape routes, so lets start there. Know Your Way Out An escape plan can help every member of a family get out of a burning house. The idea is to get outside quickly and safely. Smoke from a fire can make it hard to see where things are, so its important to learn and remember the different ways out of your home. How many exits are there? How do you get to them from your room? Its a good idea to have your family draw a map of the escape plan. Its possible one way out could be blocked by fire or smoke, so youll want to know where other ones are. And if you live in an apartment building, youll want to know the best way to the stairwell or other emergency exits. Safety Steps If youre in a room with the door closed when the fire breaks out, you need to take a few extra steps: Check to see if theres heat or smoke coming in the cracks around the door. (Youre checking to see if theres fire on the other side. ) If you see smoke coming under the door dont open the door! If you dont see smoke touch the door. If the door is hot or very warm dont open the door! If you dont see smoke and the door is not hot then use your fingers to lightly touch the doorknob. If the doorknob is hot or very warm dont open the door! If the doorknob feels cool, and you cant see any smoke around the door, you can open the door very carefully and slowly. When you open the door, if you feel a burst of heat or smoke pours into the room, quickly shut the door and make sure it is really closed. If theres no smoke or heat when you open the door, go toward your escape route exit.", "hypothesis": "It is important to have a strategy before escaping the fire.", "label": "e"} +{"uid": "id_522", "premise": "What you need to know about Culture Shock Most people who move to a foreign country or culture may experience a period of time when they feel very homesick and have a lot of stress and difficulty functioning in the new culture. This feeling is often called culture shock and it is important to understand and learn how to cope with culture shock if you are to adapt successfully to your new homes culture. First of all, its important to know that culture shock is normal. Everyone in a new situation will go through some form of culture shock, and the extent of which they do is determined by factors such as the difference between cultures, the degree to which someone is anxious to adapt to a new culture and the familiarity that person has to the new culture. 
If you go, for example, to a culture that is far different from your own, youre likely to experience culture shock more sharply than those who move to a new culture knowing the language and the behavioural norms of the new culture. There are four general stages of cultural adjustment, and it is important that you are aware of these stages and can recognise which stage you are in and when so that you will understand why you feel the way you do and that any difficulties you are experience are temporary, a process you are going through rather than a constant situation. The first stage is usually referred to as the excitement stage or the honeymoon stage. Upon arriving in a new environment, youll be interested in the new culture, everything will seem exciting, everyone will seem friendly and helpful and youll be overwhelmed with impressions. During this stage you are merely soaking up the new landscape, taking in these impressions passively, and at this stage you have little meaningful experience of the culture. But it isnt long before the honeymoon stage dissolves into the second stage sometimes called the withdrawal stage. The excitement you felt before changes to frustration as you find it difficult to cope with the problems that arise. It seems that everything is difficult, the language is hard to learn, people are unusual and unpredictable, friends are hard to make, and simple things like shopping and going to the bank are challenges. It is at this stage that you are likely to feel anxious and homesick, and you will probably find yourself complaining about the new culture or country. This is the stage which is referred to as culture shock. Culture shock is only temporary, and at some point, if you are one of those who manage to stick it out, youll transition into the third stage of cultural adjustment, the recovery stage. At this point, youll have a routine, and youll feel more confident functioning in the new culture. Youll start to feel less isolated as you start to understand and accept the way things are done and the way people behave in your new environment. Customs and traditions are clearer and easier to understand. At this stage, youll deal with new challenges with humour rather than anxiety. The last stage is the home or stability stage this is the point when people start to feel at home in the new culture. At this stage, youll function well in the new culture, adopt certain features and behaviours from your new home, and prefer certain aspects of the new culture to your own culture. There is, in a sense, a fifth stage to this process. If you decide to return home after a long period in a new culture, you may experience what is called reverse culture shock. This means that you may find aspects of your own culture foreign because you are so used to the new culture that you have spent so long adjusting to. Reverse culture shock is usually pretty mild you may notice things about your home culture that you had never noticed before, and some of the ways people do things may seem odd. Reverse culture shock rarely lasts for very long.", "hypothesis": "Some people will find the process of adapting to a new country easier than others.", "label": "e"} +{"uid": "id_523", "premise": "What you need to know about Culture Shock Most people who move to a foreign country or culture may experience a period of time when they feel very homesick and have a lot of stress and difficulty functioning in the new culture. 
This feeling is often called culture shock and it is important to understand and learn how to cope with culture shock if you are to adapt successfully to your new homes culture. First of all, its important to know that culture shock is normal. Everyone in a new situation will go through some form of culture shock, and the extent of which they do is determined by factors such as the difference between cultures, the degree to which someone is anxious to adapt to a new culture and the familiarity that person has to the new culture. If you go, for example, to a culture that is far different from your own, youre likely to experience culture shock more sharply than those who move to a new culture knowing the language and the behavioural norms of the new culture. There are four general stages of cultural adjustment, and it is important that you are aware of these stages and can recognise which stage you are in and when so that you will understand why you feel the way you do and that any difficulties you are experience are temporary, a process you are going through rather than a constant situation. The first stage is usually referred to as the excitement stage or the honeymoon stage. Upon arriving in a new environment, youll be interested in the new culture, everything will seem exciting, everyone will seem friendly and helpful and youll be overwhelmed with impressions. During this stage you are merely soaking up the new landscape, taking in these impressions passively, and at this stage you have little meaningful experience of the culture. But it isnt long before the honeymoon stage dissolves into the second stage sometimes called the withdrawal stage. The excitement you felt before changes to frustration as you find it difficult to cope with the problems that arise. It seems that everything is difficult, the language is hard to learn, people are unusual and unpredictable, friends are hard to make, and simple things like shopping and going to the bank are challenges. It is at this stage that you are likely to feel anxious and homesick, and you will probably find yourself complaining about the new culture or country. This is the stage which is referred to as culture shock. Culture shock is only temporary, and at some point, if you are one of those who manage to stick it out, youll transition into the third stage of cultural adjustment, the recovery stage. At this point, youll have a routine, and youll feel more confident functioning in the new culture. Youll start to feel less isolated as you start to understand and accept the way things are done and the way people behave in your new environment. Customs and traditions are clearer and easier to understand. At this stage, youll deal with new challenges with humour rather than anxiety. The last stage is the home or stability stage this is the point when people start to feel at home in the new culture. At this stage, youll function well in the new culture, adopt certain features and behaviours from your new home, and prefer certain aspects of the new culture to your own culture. There is, in a sense, a fifth stage to this process. If you decide to return home after a long period in a new culture, you may experience what is called reverse culture shock. This means that you may find aspects of your own culture foreign because you are so used to the new culture that you have spent so long adjusting to. 
Reverse culture shock is usually pretty mild you may notice things about your home culture that you had never noticed before, and some of the ways people do things may seem odd. Reverse culture shock rarely lasts for very long.", "hypothesis": "By the third stage, people do not experience any more problems with the new culture.", "label": "c"} +{"uid": "id_524", "premise": "What you need to know about Culture Shock Most people who move to a foreign country or culture may experience a period of time when they feel very homesick and have a lot of stress and difficulty functioning in the new culture. This feeling is often called culture shock and it is important to understand and learn how to cope with culture shock if you are to adapt successfully to your new homes culture. First of all, its important to know that culture shock is normal. Everyone in a new situation will go through some form of culture shock, and the extent of which they do is determined by factors such as the difference between cultures, the degree to which someone is anxious to adapt to a new culture and the familiarity that person has to the new culture. If you go, for example, to a culture that is far different from your own, youre likely to experience culture shock more sharply than those who move to a new culture knowing the language and the behavioural norms of the new culture. There are four general stages of cultural adjustment, and it is important that you are aware of these stages and can recognise which stage you are in and when so that you will understand why you feel the way you do and that any difficulties you are experience are temporary, a process you are going through rather than a constant situation. The first stage is usually referred to as the excitement stage or the honeymoon stage. Upon arriving in a new environment, youll be interested in the new culture, everything will seem exciting, everyone will seem friendly and helpful and youll be overwhelmed with impressions. During this stage you are merely soaking up the new landscape, taking in these impressions passively, and at this stage you have little meaningful experience of the culture. But it isnt long before the honeymoon stage dissolves into the second stage sometimes called the withdrawal stage. The excitement you felt before changes to frustration as you find it difficult to cope with the problems that arise. It seems that everything is difficult, the language is hard to learn, people are unusual and unpredictable, friends are hard to make, and simple things like shopping and going to the bank are challenges. It is at this stage that you are likely to feel anxious and homesick, and you will probably find yourself complaining about the new culture or country. This is the stage which is referred to as culture shock. Culture shock is only temporary, and at some point, if you are one of those who manage to stick it out, youll transition into the third stage of cultural adjustment, the recovery stage. At this point, youll have a routine, and youll feel more confident functioning in the new culture. Youll start to feel less isolated as you start to understand and accept the way things are done and the way people behave in your new environment. Customs and traditions are clearer and easier to understand. At this stage, youll deal with new challenges with humour rather than anxiety. The last stage is the home or stability stage this is the point when people start to feel at home in the new culture. 
At this stage, youll function well in the new culture, adopt certain features and behaviours from your new home, and prefer certain aspects of the new culture to your own culture. There is, in a sense, a fifth stage to this process. If you decide to return home after a long period in a new culture, you may experience what is called reverse culture shock. This means that you may find aspects of your own culture foreign because you are so used to the new culture that you have spent so long adjusting to. Reverse culture shock is usually pretty mild you may notice things about your home culture that you had never noticed before, and some of the ways people do things may seem odd. Reverse culture shock rarely lasts for very long.", "hypothesis": "In the fourth stage, people speak new language fluently.", "label": "n"} +{"uid": "id_525", "premise": "What you need to know about Culture Shock Most people who move to a foreign country or culture may experience a period of time when they feel very homesick and have a lot of stress and difficulty functioning in the new culture. This feeling is often called culture shock and it is important to understand and learn how to cope with culture shock if you are to adapt successfully to your new homes culture. First of all, its important to know that culture shock is normal. Everyone in a new situation will go through some form of culture shock, and the extent of which they do is determined by factors such as the difference between cultures, the degree to which someone is anxious to adapt to a new culture and the familiarity that person has to the new culture. If you go, for example, to a culture that is far different from your own, youre likely to experience culture shock more sharply than those who move to a new culture knowing the language and the behavioural norms of the new culture. There are four general stages of cultural adjustment, and it is important that you are aware of these stages and can recognise which stage you are in and when so that you will understand why you feel the way you do and that any difficulties you are experience are temporary, a process you are going through rather than a constant situation. The first stage is usually referred to as the excitement stage or the honeymoon stage. Upon arriving in a new environment, youll be interested in the new culture, everything will seem exciting, everyone will seem friendly and helpful and youll be overwhelmed with impressions. During this stage you are merely soaking up the new landscape, taking in these impressions passively, and at this stage you have little meaningful experience of the culture. But it isnt long before the honeymoon stage dissolves into the second stage sometimes called the withdrawal stage. The excitement you felt before changes to frustration as you find it difficult to cope with the problems that arise. It seems that everything is difficult, the language is hard to learn, people are unusual and unpredictable, friends are hard to make, and simple things like shopping and going to the bank are challenges. It is at this stage that you are likely to feel anxious and homesick, and you will probably find yourself complaining about the new culture or country. This is the stage which is referred to as culture shock. Culture shock is only temporary, and at some point, if you are one of those who manage to stick it out, youll transition into the third stage of cultural adjustment, the recovery stage. 
At this point, youll have a routine, and youll feel more confident functioning in the new culture. Youll start to feel less isolated as you start to understand and accept the way things are done and the way people behave in your new environment. Customs and traditions are clearer and easier to understand. At this stage, youll deal with new challenges with humour rather than anxiety. The last stage is the home or stability stage this is the point when people start to feel at home in the new culture. At this stage, youll function well in the new culture, adopt certain features and behaviours from your new home, and prefer certain aspects of the new culture to your own culture. There is, in a sense, a fifth stage to this process. If you decide to return home after a long period in a new culture, you may experience what is called reverse culture shock. This means that you may find aspects of your own culture foreign because you are so used to the new culture that you have spent so long adjusting to. Reverse culture shock is usually pretty mild you may notice things about your home culture that you had never noticed before, and some of the ways people do things may seem odd. Reverse culture shock rarely lasts for very long.", "hypothesis": "People can ease culture shock by learning about the language and customs before they go to the new culture.", "label": "e"} +{"uid": "id_526", "premise": "What you need to know about Culture Shock Most people who move to a foreign country or culture may experience a period of time when they feel very homesick and have a lot of stress and difficulty functioning in the new culture. This feeling is often called culture shock and it is important to understand and learn how to cope with culture shock if you are to adapt successfully to your new homes culture. First of all, its important to know that culture shock is normal. Everyone in a new situation will go through some form of culture shock, and the extent of which they do is determined by factors such as the difference between cultures, the degree to which someone is anxious to adapt to a new culture and the familiarity that person has to the new culture. If you go, for example, to a culture that is far different from your own, youre likely to experience culture shock more sharply than those who move to a new culture knowing the language and the behavioural norms of the new culture. There are four general stages of cultural adjustment, and it is important that you are aware of these stages and can recognise which stage you are in and when so that you will understand why you feel the way you do and that any difficulties you are experience are temporary, a process you are going through rather than a constant situation. The first stage is usually referred to as the excitement stage or the honeymoon stage. Upon arriving in a new environment, youll be interested in the new culture, everything will seem exciting, everyone will seem friendly and helpful and youll be overwhelmed with impressions. During this stage you are merely soaking up the new landscape, taking in these impressions passively, and at this stage you have little meaningful experience of the culture. But it isnt long before the honeymoon stage dissolves into the second stage sometimes called the withdrawal stage. The excitement you felt before changes to frustration as you find it difficult to cope with the problems that arise. 
It seems that everything is difficult, the language is hard to learn, people are unusual and unpredictable, friends are hard to make, and simple things like shopping and going to the bank are challenges. It is at this stage that you are likely to feel anxious and homesick, and you will probably find yourself complaining about the new culture or country. This is the stage which is referred to as culture shock. Culture shock is only temporary, and at some point, if you are one of those who manage to stick it out, youll transition into the third stage of cultural adjustment, the recovery stage. At this point, youll have a routine, and youll feel more confident functioning in the new culture. Youll start to feel less isolated as you start to understand and accept the way things are done and the way people behave in your new environment. Customs and traditions are clearer and easier to understand. At this stage, youll deal with new challenges with humour rather than anxiety. The last stage is the home or stability stage this is the point when people start to feel at home in the new culture. At this stage, youll function well in the new culture, adopt certain features and behaviours from your new home, and prefer certain aspects of the new culture to your own culture. There is, in a sense, a fifth stage to this process. If you decide to return home after a long period in a new culture, you may experience what is called reverse culture shock. This means that you may find aspects of your own culture foreign because you are so used to the new culture that you have spent so long adjusting to. Reverse culture shock is usually pretty mild you may notice things about your home culture that you had never noticed before, and some of the ways people do things may seem odd. Reverse culture shock rarely lasts for very long.", "hypothesis": "In the first stage, people will have a very positive impression of the new culture.", "label": "e"} +{"uid": "id_527", "premise": "What you need to know about Culture Shock Most people who move to a foreign country or culture may experience a period of time when they feel very homesick and have a lot of stress and difficulty functioning in the new culture. This feeling is often called culture shock and it is important to understand and learn how to cope with culture shock if you are to adapt successfully to your new homes culture. First of all, its important to know that culture shock is normal. Everyone in a new situation will go through some form of culture shock, and the extent of which they do is determined by factors such as the difference between cultures, the degree to which someone is anxious to adapt to a new culture and the familiarity that person has to the new culture. If you go, for example, to a culture that is far different from your own, youre likely to experience culture shock more sharply than those who move to a new culture knowing the language and the behavioural norms of the new culture. There are four general stages of cultural adjustment, and it is important that you are aware of these stages and can recognise which stage you are in and when so that you will understand why you feel the way you do and that any difficulties you are experience are temporary, a process you are going through rather than a constant situation. The first stage is usually referred to as the excitement stage or the honeymoon stage. 
Upon arriving in a new environment, youll be interested in the new culture, everything will seem exciting, everyone will seem friendly and helpful and youll be overwhelmed with impressions. During this stage you are merely soaking up the new landscape, taking in these impressions passively, and at this stage you have little meaningful experience of the culture. But it isnt long before the honeymoon stage dissolves into the second stage sometimes called the withdrawal stage. The excitement you felt before changes to frustration as you find it difficult to cope with the problems that arise. It seems that everything is difficult, the language is hard to learn, people are unusual and unpredictable, friends are hard to make, and simple things like shopping and going to the bank are challenges. It is at this stage that you are likely to feel anxious and homesick, and you will probably find yourself complaining about the new culture or country. This is the stage which is referred to as culture shock. Culture shock is only temporary, and at some point, if you are one of those who manage to stick it out, youll transition into the third stage of cultural adjustment, the recovery stage. At this point, youll have a routine, and youll feel more confident functioning in the new culture. Youll start to feel less isolated as you start to understand and accept the way things are done and the way people behave in your new environment. Customs and traditions are clearer and easier to understand. At this stage, youll deal with new challenges with humour rather than anxiety. The last stage is the home or stability stage this is the point when people start to feel at home in the new culture. At this stage, youll function well in the new culture, adopt certain features and behaviours from your new home, and prefer certain aspects of the new culture to your own culture. There is, in a sense, a fifth stage to this process. If you decide to return home after a long period in a new culture, you may experience what is called reverse culture shock. This means that you may find aspects of your own culture foreign because you are so used to the new culture that you have spent so long adjusting to. Reverse culture shock is usually pretty mild you may notice things about your home culture that you had never noticed before, and some of the ways people do things may seem odd. Reverse culture shock rarely lasts for very long.", "hypothesis": "Many people will leave the new culture while they are in the second stage.", "label": "n"} +{"uid": "id_528", "premise": "What you need to know about Culture Shock Most people who move to a foreign country or culture may experience a period of time when they feel very homesick and have a lot of stress and difficulty functioning in the new culture. This feeling is often called culture shock and it is important to understand and learn how to cope with culture shock if you are to adapt successfully to your new homes culture. First of all, its important to know that culture shock is normal. Everyone in a new situation will go through some form of culture shock, and the extent of which they do is determined by factors such as the difference between cultures, the degree to which someone is anxious to adapt to a new culture and the familiarity that person has to the new culture. 
If you go, for example, to a culture that is far different from your own, youre likely to experience culture shock more sharply than those who move to a new culture knowing the language and the behavioural norms of the new culture. There are four general stages of cultural adjustment, and it is important that you are aware of these stages and can recognise which stage you are in and when so that you will understand why you feel the way you do and that any difficulties you are experience are temporary, a process you are going through rather than a constant situation. The first stage is usually referred to as the excitement stage or the honeymoon stage. Upon arriving in a new environment, youll be interested in the new culture, everything will seem exciting, everyone will seem friendly and helpful and youll be overwhelmed with impressions. During this stage you are merely soaking up the new landscape, taking in these impressions passively, and at this stage you have little meaningful experience of the culture. But it isnt long before the honeymoon stage dissolves into the second stage sometimes called the withdrawal stage. The excitement you felt before changes to frustration as you find it difficult to cope with the problems that arise. It seems that everything is difficult, the language is hard to learn, people are unusual and unpredictable, friends are hard to make, and simple things like shopping and going to the bank are challenges. It is at this stage that you are likely to feel anxious and homesick, and you will probably find yourself complaining about the new culture or country. This is the stage which is referred to as culture shock. Culture shock is only temporary, and at some point, if you are one of those who manage to stick it out, youll transition into the third stage of cultural adjustment, the recovery stage. At this point, youll have a routine, and youll feel more confident functioning in the new culture. Youll start to feel less isolated as you start to understand and accept the way things are done and the way people behave in your new environment. Customs and traditions are clearer and easier to understand. At this stage, youll deal with new challenges with humour rather than anxiety. The last stage is the home or stability stage this is the point when people start to feel at home in the new culture. At this stage, youll function well in the new culture, adopt certain features and behaviours from your new home, and prefer certain aspects of the new culture to your own culture. There is, in a sense, a fifth stage to this process. If you decide to return home after a long period in a new culture, you may experience what is called reverse culture shock. This means that you may find aspects of your own culture foreign because you are so used to the new culture that you have spent so long adjusting to. Reverse culture shock is usually pretty mild you may notice things about your home culture that you had never noticed before, and some of the ways people do things may seem odd. Reverse culture shock rarely lasts for very long.", "hypothesis": "Reverse culture shock is as difficult to deal with as culture shock.", "label": "c"} +{"uid": "id_529", "premise": "What you need to know about Culture Shock Most people who move to a foreign country or culture may experience a period of time when they feel very homesick and have a lot of stress and difficulty functioning in the new culture. 
This feeling is often called culture shock and it is important to understand and learn how to cope with culture shock if you are to adapt successfully to your new homes culture. First of all, its important to know that culture shock is normal. Everyone in a new situation will go through some form of culture shock, and the extent of which they do is determined by factors such as the difference between cultures, the degree to which someone is anxious to adapt to a new culture and the familiarity that person has to the new culture. If you go, for example, to a culture that is far different from your own, youre likely to experience culture shock more sharply than those who move to a new culture knowing the language and the behavioural norms of the new culture. There are four general stages of cultural adjustment, and it is important that you are aware of these stages and can recognise which stage you are in and when so that you will understand why you feel the way you do and that any difficulties you are experience are temporary, a process you are going through rather than a constant situation. The first stage is usually referred to as the excitement stage or the honeymoon stage. Upon arriving in a new environment, youll be interested in the new culture, everything will seem exciting, everyone will seem friendly and helpful and youll be overwhelmed with impressions. During this stage you are merely soaking up the new landscape, taking in these impressions passively, and at this stage you have little meaningful experience of the culture. But it isnt long before the honeymoon stage dissolves into the second stage sometimes called the withdrawal stage. The excitement you felt before changes to frustration as you find it difficult to cope with the problems that arise. It seems that everything is difficult, the language is hard to learn, people are unusual and unpredictable, friends are hard to make, and simple things like shopping and going to the bank are challenges. It is at this stage that you are likely to feel anxious and homesick, and you will probably find yourself complaining about the new culture or country. This is the stage which is referred to as culture shock. Culture shock is only temporary, and at some point, if you are one of those who manage to stick it out, youll transition into the third stage of cultural adjustment, the recovery stage. At this point, youll have a routine, and youll feel more confident functioning in the new culture. Youll start to feel less isolated as you start to understand and accept the way things are done and the way people behave in your new environment. Customs and traditions are clearer and easier to understand. At this stage, youll deal with new challenges with humour rather than anxiety. The last stage is the home or stability stage this is the point when people start to feel at home in the new culture. At this stage, youll function well in the new culture, adopt certain features and behaviours from your new home, and prefer certain aspects of the new culture to your own culture. There is, in a sense, a fifth stage to this process. If you decide to return home after a long period in a new culture, you may experience what is called reverse culture shock. This means that you may find aspects of your own culture foreign because you are so used to the new culture that you have spent so long adjusting to. 
Reverse culture shock is usually pretty mild you may notice things about your home culture that you had never noticed before, and some of the ways people do things may seem odd. Reverse culture shock rarely lasts for very long.", "hypothesis": "Knowing about these four stages will help people adjust to a new culture more quickly.", "label": "n"} +{"uid": "id_530", "premise": "What you need to know about Culture Shock Most people who move to a foreign country or culture may experience a period of time when they feel very homesick and have a lot of stress and difficulty functioning in the new culture. This feeling is often called culture shock and it is important to understand and learn how to cope with culture shock if you are to adapt successfully to your new homes culture. First of all, its important to know that culture shock is normal. Everyone in a new situation will go through some form of culture shock, and the extent of which they do is determined by factors such as the difference between cultures, the degree to which someone is anxious to adapt to a new culture and the familiarity that person has to the new culture. If you go, for example, to a culture that is far different from your own, youre likely to experience culture shock more sharply than those who move to a new culture knowing the language and the behavioural norms of the new culture. There are four general stages of cultural adjustment, and it is important that you are aware of these stages and can recognise which stage you are in and when so that you will understand why you feel the way you do and that any difficulties you are experience are temporary, a process you are going through rather than a constant situation. The first stage is usually referred to as the excitement stage or the honeymoon stage. Upon arriving in a new environment, youll be interested in the new culture, everything will seem exciting, everyone will seem friendly and helpful and youll be overwhelmed with impressions. During this stage you are merely soaking up the new landscape, taking in these impressions passively, and at this stage you have little meaningful experience of the culture. But it isnt long before the honeymoon stage dissolves into the second stage sometimes called the withdrawal stage. The excitement you felt before changes to frustration as you find it difficult to cope with the problems that arise. It seems that everything is difficult, the language is hard to learn, people are unusual and unpredictable, friends are hard to make, and simple things like shopping and going to the bank are challenges. It is at this stage that you are likely to feel anxious and homesick, and you will probably find yourself complaining about the new culture or country. This is the stage which is referred to as culture shock. Culture shock is only temporary, and at some point, if you are one of those who manage to stick it out, youll transition into the third stage of cultural adjustment, the recovery stage. At this point, youll have a routine, and youll feel more confident functioning in the new culture. Youll start to feel less isolated as you start to understand and accept the way things are done and the way people behave in your new environment. Customs and traditions are clearer and easier to understand. At this stage, youll deal with new challenges with humour rather than anxiety. The last stage is the home or stability stage this is the point when people start to feel at home in the new culture. 
At this stage, youll function well in the new culture, adopt certain features and behaviours from your new home, and prefer certain aspects of the new culture to your own culture. There is, in a sense, a fifth stage to this process. If you decide to return home after a long period in a new culture, you may experience what is called reverse culture shock. This means that you may find aspects of your own culture foreign because you are so used to the new culture that you have spent so long adjusting to. Reverse culture shock is usually pretty mild you may notice things about your home culture that you had never noticed before, and some of the ways people do things may seem odd. Reverse culture shock rarely lasts for very long.", "hypothesis": "Culture shock is another name for cultural adjustment.", "label": "c"} +{"uid": "id_531", "premise": "What you need to know about Culture Shock Most people who move to a foreign country or culture may experience a period of time when they feel very homesick and have a lot of stress and difficulty functioning in the new culture. This feeling is often called culture shock and it is important to understand and learn how to cope with culture shock if you are to adapt successfully to your new homes culture. First of all, its important to know that culture shock is normal. Everyone in a new situation will go through some form of culture shock, and the extent of which they do is determined by factors such as the difference between cultures, the degree to which someone is anxious to adapt to a new culture and the familiarity that person has to the new culture. If you go, for example, to a culture that is far different from your own, youre likely to experience culture shock more sharply than those who move to a new culture knowing the language and the behavioural norms of the new culture. There are four general stages of cultural adjustment, and it is important that you are aware of these stages and can recognise which stage you are in and when so that you will understand why you feel the way you do and that any difficulties you are experience are temporary, a process you are going through rather than a constant situation. The first stage is usually referred to as the excitement stage or the honeymoon stage. Upon arriving in a new environment, youll be interested in the new culture, everything will seem exciting, everyone will seem friendly and helpful and youll be overwhelmed with impressions. During this stage you are merely soaking up the new landscape, taking in these impressions passively, and at this stage you have little meaningful experience of the culture. But it isnt long before the honeymoon stage dissolves into the second stage sometimes called the withdrawal stage. The excitement you felt before changes to frustration as you find it difficult to cope with the problems that arise. It seems that everything is difficult, the language is hard to learn, people are unusual and unpredictable, friends are hard to make, and simple things like shopping and going to the bank are challenges. It is at this stage that you are likely to feel anxious and homesick, and you will probably find yourself complaining about the new culture or country. This is the stage which is referred to as culture shock. Culture shock is only temporary, and at some point, if you are one of those who manage to stick it out, youll transition into the third stage of cultural adjustment, the recovery stage. 
At this point, youll have a routine, and youll feel more confident functioning in the new culture. Youll start to feel less isolated as you start to understand and accept the way things are done and the way people behave in your new environment. Customs and traditions are clearer and easier to understand. At this stage, youll deal with new challenges with humour rather than anxiety. The last stage is the home or stability stage this is the point when people start to feel at home in the new culture. At this stage, youll function well in the new culture, adopt certain features and behaviours from your new home, and prefer certain aspects of the new culture to your own culture. There is, in a sense, a fifth stage to this process. If you decide to return home after a long period in a new culture, you may experience what is called reverse culture shock. This means that you may find aspects of your own culture foreign because you are so used to the new culture that you have spent so long adjusting to. Reverse culture shock is usually pretty mild you may notice things about your home culture that you had never noticed before, and some of the ways people do things may seem odd. Reverse culture shock rarely lasts for very long.", "hypothesis": "The first stage is usually the shortest.", "label": "n"} +{"uid": "id_532", "premise": "What's so funny? John McCrone reviews recent research on humor The joke comes over the headphones: 'Which side of a dog has the most hair? The left. ' No, not funny. Try again. 'Which side of a dog has the most hair? The outside. ' Hah! The punchline is silly yet fitting, tempting a smile, even a laugh. Laughter has always struck people as deeply mysterious, perhaps pointless. The writer Arthur Koestler dubbed it the luxury reflex: 'unique in that it serves no apparent biological purpose. ' Theories about humour have an ancient pedigree. Plato expressed the idea that humor is simply a delighted feeling of superiority over others. Kant and Freud felt that joke-telling relies on building up a psychic tension which is safely punctured by the ludicrousness of the punchline. But most modern humor theorists have settled on some version of Aristotle's belief that jokes are based on a reaction to or resolution of incongruity, when the punchline is either a nonsense or, though appearing silly, has a clever second meaning. Graeme Ritchie, a computational linguist in Edinburgh, studies the linguistic structure of jokes in order to understand not only humor but language understanding and reasoning in machines. He says that while there is no single format for jokes, many revolve around a sudden and surprising conceptual shift. A comedian will present a situation followed by an unexpected interpretation that is also apt. So even if a punchline sounds silly, the listener can see there is a clever semantic fit and that sudden mental 'Aha! ' is the buzz that makes us laugh. Viewed from this angle, humor is just a form of creative insight, a sudden leap to a new perspective. However, there is another type of laughter, the laughter of social appeasement and it is important to understand this too. Play is a crucial part of development in most young mammals. Rats produce ultrasonic squeaks to prevent their scuffles turning nasty. Chimpanzees have a 'play-face' a gaping expression accompanied by a panting 'ah ah' noise. In humans, these signals have mutated into smiles and laughs. 
Researchers believe social situations, rather than cognitive events such as jokes, trigger these instinctual markers of play or appeasement. People laugh on fairground rides or when tickled to flag a play situation, whether they feel amused or not. Both social and cognitive types of laughter tap into the same expressive machinery in our brains, the emotion and motor circuits that produce smiles and excited vocalisations. However, if cognitive laughter is the product of more general thought processes, it should result from more expansive brain activity. Psychologist Vinod Goel investigated humour using the new technique of 'single event' functional magnetic resonance imaging (fMRI). An MRI scanner uses magnetic fields and radio waves to track the changes in oxygenated blood that accompany mental activity. Until recently, MRI scanners needed several minutes of activity and so could not be used to track rapid thought processes such as comprehending a joke. New developments now allow half-second 'snapshots' of all sorts of reasoning and problem-solving activities. Although Goel felt being inside a brain scanner was hardly the ideal place for appreciating a joke, he found evidence that understanding a joke involves a widespread mental shift. His scans showed that at the beginning of a joke the listener's prefrontal cortex lit up, particularly the right prefrontal believed to be critical for problem solving. But there was also activity in the temporal lobes at the side of the head (consistent with attempts to rouse stored knowledge) and in many other brain areas. Then when the punchline arrived, a new area sprang to life the orbital prefrontal cortex. This patch of brain tucked behind the orbits of the eyes is associated with evaluating information. Making a rapid emotional assessment of the events of the moment is an extremely demanding job for the brain, animal or human. Energy and arousal levels may need to be retuned in the blink of an eye. These abrupt changes will produce either positive or negative feelings. The orbital cortex, the region that becomes active in Goel's experiment, seems the best candidate for the site that feeds such feelings into higher-level thought processes, with its close connections to the brain's sub-cortical arousal apparatus and centres of metabolic control. All warm-blooded animals make constant tiny adjustments in arousal in response to external events, but humans, who have developed a much more complicated internal life as a result of language, respond emotionally not only to their surroundings, but to their own thoughts. Whenever a sought-for answer snaps into place, there is a shudder of pleased recognition. Creative discovery being pleasurable, humans have learned to find ways of milking this natural response. The fact that jokes tap into our general evaluative machinery explains why the line between funny and disgusting, or funny and frightening, can be so fine. Whether a joke gives pleasure or pain depends on a person's outlook. Humor may be a luxury, but the mechanism behind it is no evolutionary accident. As Peter Derks, a psychologist at William and Mary College in Virginia, says: 'I like to think of humour as the distorted mirror of the mind. It's creative, perceptual, analytical and lingual. If we can figure out how the mind processes humor, then we'll have a pretty good handle on how it works in general. 
'", "hypothesis": "Current thinking on humour has largely ignored Aristotle's view on the subject.", "label": "c"} +{"uid": "id_533", "premise": "What's so funny? John McCrone reviews recent research on humor The joke comes over the headphones: 'Which side of a dog has the most hair? The left. ' No, not funny. Try again. 'Which side of a dog has the most hair? The outside. ' Hah! The punchline is silly yet fitting, tempting a smile, even a laugh. Laughter has always struck people as deeply mysterious, perhaps pointless. The writer Arthur Koestler dubbed it the luxury reflex: 'unique in that it serves no apparent biological purpose. ' Theories about humour have an ancient pedigree. Plato expressed the idea that humor is simply a delighted feeling of superiority over others. Kant and Freud felt that joke-telling relies on building up a psychic tension which is safely punctured by the ludicrousness of the punchline. But most modern humor theorists have settled on some version of Aristotle's belief that jokes are based on a reaction to or resolution of incongruity, when the punchline is either a nonsense or, though appearing silly, has a clever second meaning. Graeme Ritchie, a computational linguist in Edinburgh, studies the linguistic structure of jokes in order to understand not only humor but language understanding and reasoning in machines. He says that while there is no single format for jokes, many revolve around a sudden and surprising conceptual shift. A comedian will present a situation followed by an unexpected interpretation that is also apt. So even if a punchline sounds silly, the listener can see there is a clever semantic fit and that sudden mental 'Aha! ' is the buzz that makes us laugh. Viewed from this angle, humor is just a form of creative insight, a sudden leap to a new perspective. However, there is another type of laughter, the laughter of social appeasement and it is important to understand this too. Play is a crucial part of development in most young mammals. Rats produce ultrasonic squeaks to prevent their scuffles turning nasty. Chimpanzees have a 'play-face' a gaping expression accompanied by a panting 'ah ah' noise. In humans, these signals have mutated into smiles and laughs. Researchers believe social situations, rather than cognitive events such as jokes, trigger these instinctual markers of play or appeasement. People laugh on fairground rides or when tickled to flag a play situation, whether they feel amused or not. Both social and cognitive types of laughter tap into the same expressive machinery in our brains, the emotion and motor circuits that produce smiles and excited vocalisations. However, if cognitive laughter is the product of more general thought processes, it should result from more expansive brain activity. Psychologist Vinod Goel investigated humour using the new technique of 'single event' functional magnetic resonance imaging (fMRI). An MRI scanner uses magnetic fields and radio waves to track the changes in oxygenated blood that accompany mental activity. Until recently, MRI scanners needed several minutes of activity and so could not be used to track rapid thought processes such as comprehending a joke. New developments now allow half-second 'snapshots' of all sorts of reasoning and problem-solving activities. Although Goel felt being inside a brain scanner was hardly the ideal place for appreciating a joke, he found evidence that understanding a joke involves a widespread mental shift. 
His scans showed that at the beginning of a joke the listener's prefrontal cortex lit up, particularly the right prefrontal believed to be critical for problem solving. But there was also activity in the temporal lobes at the side of the head (consistent with attempts to rouse stored knowledge) and in many other brain areas. Then when the punchline arrived, a new area sprang to life the orbital prefrontal cortex. This patch of brain tucked behind the orbits of the eyes is associated with evaluating information. Making a rapid emotional assessment of the events of the moment is an extremely demanding job for the brain, animal or human. Energy and arousal levels may need to be retuned in the blink of an eye. These abrupt changes will produce either positive or negative feelings. The orbital cortex, the region that becomes active in Goel's experiment, seems the best candidate for the site that feeds such feelings into higher-level thought processes, with its close connections to the brain's sub-cortical arousal apparatus and centres of metabolic control. All warm-blooded animals make constant tiny adjustments in arousal in response to external events, but humans, who have developed a much more complicated internal life as a result of language, respond emotionally not only to their surroundings, but to their own thoughts. Whenever a sought-for answer snaps into place, there is a shudder of pleased recognition. Creative discovery being pleasurable, humans have learned to find ways of milking this natural response. The fact that jokes tap into our general evaluative machinery explains why the line between funny and disgusting, or funny and frightening, can be so fine. Whether a joke gives pleasure or pain depends on a person's outlook. Humor may be a luxury, but the mechanism behind it is no evolutionary accident. As Peter Derks, a psychologist at William and Mary College in Virginia, says: 'I like to think of humour as the distorted mirror of the mind. It's creative, perceptual, analytical and lingual. If we can figure out how the mind processes humor, then we'll have a pretty good handle on how it works in general. '", "hypothesis": "Graeme Ritchie's work links jokes to artificial intelligence.", "label": "e"} +{"uid": "id_534", "premise": "What's so funny? John McCrone reviews recent research on humor The joke comes over the headphones: 'Which side of a dog has the most hair? The left. ' No, not funny. Try again. 'Which side of a dog has the most hair? The outside. ' Hah! The punchline is silly yet fitting, tempting a smile, even a laugh. Laughter has always struck people as deeply mysterious, perhaps pointless. The writer Arthur Koestler dubbed it the luxury reflex: 'unique in that it serves no apparent biological purpose. ' Theories about humour have an ancient pedigree. Plato expressed the idea that humor is simply a delighted feeling of superiority over others. Kant and Freud felt that joke-telling relies on building up a psychic tension which is safely punctured by the ludicrousness of the punchline. But most modern humor theorists have settled on some version of Aristotle's belief that jokes are based on a reaction to or resolution of incongruity, when the punchline is either a nonsense or, though appearing silly, has a clever second meaning. Graeme Ritchie, a computational linguist in Edinburgh, studies the linguistic structure of jokes in order to understand not only humor but language understanding and reasoning in machines. 
He says that while there is no single format for jokes, many revolve around a sudden and surprising conceptual shift. A comedian will present a situation followed by an unexpected interpretation that is also apt. So even if a punchline sounds silly, the listener can see there is a clever semantic fit and that sudden mental 'Aha! ' is the buzz that makes us laugh. Viewed from this angle, humor is just a form of creative insight, a sudden leap to a new perspective. However, there is another type of laughter, the laughter of social appeasement and it is important to understand this too. Play is a crucial part of development in most young mammals. Rats produce ultrasonic squeaks to prevent their scuffles turning nasty. Chimpanzees have a 'play-face' a gaping expression accompanied by a panting 'ah ah' noise. In humans, these signals have mutated into smiles and laughs. Researchers believe social situations, rather than cognitive events such as jokes, trigger these instinctual markers of play or appeasement. People laugh on fairground rides or when tickled to flag a play situation, whether they feel amused or not. Both social and cognitive types of laughter tap into the same expressive machinery in our brains, the emotion and motor circuits that produce smiles and excited vocalisations. However, if cognitive laughter is the product of more general thought processes, it should result from more expansive brain activity. Psychologist Vinod Goel investigated humour using the new technique of 'single event' functional magnetic resonance imaging (fMRI). An MRI scanner uses magnetic fields and radio waves to track the changes in oxygenated blood that accompany mental activity. Until recently, MRI scanners needed several minutes of activity and so could not be used to track rapid thought processes such as comprehending a joke. New developments now allow half-second 'snapshots' of all sorts of reasoning and problem-solving activities. Although Goel felt being inside a brain scanner was hardly the ideal place for appreciating a joke, he found evidence that understanding a joke involves a widespread mental shift. His scans showed that at the beginning of a joke the listener's prefrontal cortex lit up, particularly the right prefrontal believed to be critical for problem solving. But there was also activity in the temporal lobes at the side of the head (consistent with attempts to rouse stored knowledge) and in many other brain areas. Then when the punchline arrived, a new area sprang to life the orbital prefrontal cortex. This patch of brain tucked behind the orbits of the eyes is associated with evaluating information. Making a rapid emotional assessment of the events of the moment is an extremely demanding job for the brain, animal or human. Energy and arousal levels may need to be retuned in the blink of an eye. These abrupt changes will produce either positive or negative feelings. The orbital cortex, the region that becomes active in Goel's experiment, seems the best candidate for the site that feeds such feelings into higher-level thought processes, with its close connections to the brain's sub-cortical arousal apparatus and centres of metabolic control. All warm-blooded animals make constant tiny adjustments in arousal in response to external events, but humans, who have developed a much more complicated internal life as a result of language, respond emotionally not only to their surroundings, but to their own thoughts. 
Whenever a sought-for answer snaps into place, there is a shudder of pleased recognition. Creative discovery being pleasurable, humans have learned to find ways of milking this natural response. The fact that jokes tap into our general evaluative machinery explains why the line between funny and disgusting, or funny and frightening, can be so fine. Whether a joke gives pleasure or pain depends on a person's outlook. Humor may be a luxury, but the mechanism behind it is no evolutionary accident. As Peter Derks, a psychologist at William and Mary College in Virginia, says: 'I like to think of humour as the distorted mirror of the mind. It's creative, perceptual, analytical and lingual. If we can figure out how the mind processes humor, then we'll have a pretty good handle on how it works in general. '", "hypothesis": "Plato believed humour to be a sign of above-average intelligence.", "label": "n"} +{"uid": "id_535", "premise": "What's so funny? John McCrone reviews recent research on humor The joke comes over the headphones: 'Which side of a dog has the most hair? The left. ' No, not funny. Try again. 'Which side of a dog has the most hair? The outside. ' Hah! The punchline is silly yet fitting, tempting a smile, even a laugh. Laughter has always struck people as deeply mysterious, perhaps pointless. The writer Arthur Koestler dubbed it the luxury reflex: 'unique in that it serves no apparent biological purpose. ' Theories about humour have an ancient pedigree. Plato expressed the idea that humor is simply a delighted feeling of superiority over others. Kant and Freud felt that joke-telling relies on building up a psychic tension which is safely punctured by the ludicrousness of the punchline. But most modern humor theorists have settled on some version of Aristotle's belief that jokes are based on a reaction to or resolution of incongruity, when the punchline is either a nonsense or, though appearing silly, has a clever second meaning. Graeme Ritchie, a computational linguist in Edinburgh, studies the linguistic structure of jokes in order to understand not only humor but language understanding and reasoning in machines. He says that while there is no single format for jokes, many revolve around a sudden and surprising conceptual shift. A comedian will present a situation followed by an unexpected interpretation that is also apt. So even if a punchline sounds silly, the listener can see there is a clever semantic fit and that sudden mental 'Aha! ' is the buzz that makes us laugh. Viewed from this angle, humor is just a form of creative insight, a sudden leap to a new perspective. However, there is another type of laughter, the laughter of social appeasement and it is important to understand this too. Play is a crucial part of development in most young mammals. Rats produce ultrasonic squeaks to prevent their scuffles turning nasty. Chimpanzees have a 'play-face' a gaping expression accompanied by a panting 'ah ah' noise. In humans, these signals have mutated into smiles and laughs. Researchers believe social situations, rather than cognitive events such as jokes, trigger these instinctual markers of play or appeasement. People laugh on fairground rides or when tickled to flag a play situation, whether they feel amused or not. Both social and cognitive types of laughter tap into the same expressive machinery in our brains, the emotion and motor circuits that produce smiles and excited vocalisations. 
However, if cognitive laughter is the product of more general thought processes, it should result from more expansive brain activity. Psychologist Vinod Goel investigated humour using the new technique of 'single event' functional magnetic resonance imaging (fMRI). An MRI scanner uses magnetic fields and radio waves to track the changes in oxygenated blood that accompany mental activity. Until recently, MRI scanners needed several minutes of activity and so could not be used to track rapid thought processes such as comprehending a joke. New developments now allow half-second 'snapshots' of all sorts of reasoning and problem-solving activities. Although Goel felt being inside a brain scanner was hardly the ideal place for appreciating a joke, he found evidence that understanding a joke involves a widespread mental shift. His scans showed that at the beginning of a joke the listener's prefrontal cortex lit up, particularly the right prefrontal believed to be critical for problem solving. But there was also activity in the temporal lobes at the side of the head (consistent with attempts to rouse stored knowledge) and in many other brain areas. Then when the punchline arrived, a new area sprang to life the orbital prefrontal cortex. This patch of brain tucked behind the orbits of the eyes is associated with evaluating information. Making a rapid emotional assessment of the events of the moment is an extremely demanding job for the brain, animal or human. Energy and arousal levels may need to be retuned in the blink of an eye. These abrupt changes will produce either positive or negative feelings. The orbital cortex, the region that becomes active in Goel's experiment, seems the best candidate for the site that feeds such feelings into higher-level thought processes, with its close connections to the brain's sub-cortical arousal apparatus and centres of metabolic control. All warm-blooded animals make constant tiny adjustments in arousal in response to external events, but humans, who have developed a much more complicated internal life as a result of language, respond emotionally not only to their surroundings, but to their own thoughts. Whenever a sought-for answer snaps into place, there is a shudder of pleased recognition. Creative discovery being pleasurable, humans have learned to find ways of milking this natural response. The fact that jokes tap into our general evaluative machinery explains why the line between funny and disgusting, or funny and frightening, can be so fine. Whether a joke gives pleasure or pain depends on a person's outlook. Humor may be a luxury, but the mechanism behind it is no evolutionary accident. As Peter Derks, a psychologist at William and Mary College in Virginia, says: 'I like to think of humour as the distorted mirror of the mind. It's creative, perceptual, analytical and lingual. If we can figure out how the mind processes humor, then we'll have a pretty good handle on how it works in general. '", "hypothesis": "Kant believed that a successful joke involves the controlled release of nervous energy.", "label": "e"} +{"uid": "id_536", "premise": "What's so funny? John McCrone reviews recent research on humor The joke comes over the headphones: 'Which side of a dog has the most hair? The left. ' No, not funny. Try again. 'Which side of a dog has the most hair? The outside. ' Hah! The punchline is silly yet fitting, tempting a smile, even a laugh. Laughter has always struck people as deeply mysterious, perhaps pointless. 
The writer Arthur Koestler dubbed it the luxury reflex: 'unique in that it serves no apparent biological purpose. ' Theories about humour have an ancient pedigree. Plato expressed the idea that humor is simply a delighted feeling of superiority over others. Kant and Freud felt that joke-telling relies on building up a psychic tension which is safely punctured by the ludicrousness of the punchline. But most modern humor theorists have settled on some version of Aristotle's belief that jokes are based on a reaction to or resolution of incongruity, when the punchline is either a nonsense or, though appearing silly, has a clever second meaning. Graeme Ritchie, a computational linguist in Edinburgh, studies the linguistic structure of jokes in order to understand not only humor but language understanding and reasoning in machines. He says that while there is no single format for jokes, many revolve around a sudden and surprising conceptual shift. A comedian will present a situation followed by an unexpected interpretation that is also apt. So even if a punchline sounds silly, the listener can see there is a clever semantic fit and that sudden mental 'Aha! ' is the buzz that makes us laugh. Viewed from this angle, humor is just a form of creative insight, a sudden leap to a new perspective. However, there is another type of laughter, the laughter of social appeasement and it is important to understand this too. Play is a crucial part of development in most young mammals. Rats produce ultrasonic squeaks to prevent their scuffles turning nasty. Chimpanzees have a 'play-face' a gaping expression accompanied by a panting 'ah ah' noise. In humans, these signals have mutated into smiles and laughs. Researchers believe social situations, rather than cognitive events such as jokes, trigger these instinctual markers of play or appeasement. People laugh on fairground rides or when tickled to flag a play situation, whether they feel amused or not. Both social and cognitive types of laughter tap into the same expressive machinery in our brains, the emotion and motor circuits that produce smiles and excited vocalisations. However, if cognitive laughter is the product of more general thought processes, it should result from more expansive brain activity. Psychologist Vinod Goel investigated humour using the new technique of 'single event' functional magnetic resonance imaging (fMRI). An MRI scanner uses magnetic fields and radio waves to track the changes in oxygenated blood that accompany mental activity. Until recently, MRI scanners needed several minutes of activity and so could not be used to track rapid thought processes such as comprehending a joke. New developments now allow half-second 'snapshots' of all sorts of reasoning and problem-solving activities. Although Goel felt being inside a brain scanner was hardly the ideal place for appreciating a joke, he found evidence that understanding a joke involves a widespread mental shift. His scans showed that at the beginning of a joke the listener's prefrontal cortex lit up, particularly the right prefrontal believed to be critical for problem solving. But there was also activity in the temporal lobes at the side of the head (consistent with attempts to rouse stored knowledge) and in many other brain areas. Then when the punchline arrived, a new area sprang to life the orbital prefrontal cortex. This patch of brain tucked behind the orbits of the eyes is associated with evaluating information. 
Making a rapid emotional assessment of the events of the moment is an extremely demanding job for the brain, animal or human. Energy and arousal levels may need to be retuned in the blink of an eye. These abrupt changes will produce either positive or negative feelings. The orbital cortex, the region that becomes active in Goel's experiment, seems the best candidate for the site that feeds such feelings into higher-level thought processes, with its close connections to the brain's sub-cortical arousal apparatus and centres of metabolic control. All warm-blooded animals make constant tiny adjustments in arousal in response to external events, but humans, who have developed a much more complicated internal life as a result of language, respond emotionally not only to their surroundings, but to their own thoughts. Whenever a sought-for answer snaps into place, there is a shudder of pleased recognition. Creative discovery being pleasurable, humans have learned to find ways of milking this natural response. The fact that jokes tap into our general evaluative machinery explains why the line between funny and disgusting, or funny and frightening, can be so fine. Whether a joke gives pleasure or pain depends on a person's outlook. Humor may be a luxury, but the mechanism behind it is no evolutionary accident. As Peter Derks, a psychologist at William and Mary College in Virginia, says: 'I like to think of humour as the distorted mirror of the mind. It's creative, perceptual, analytical and lingual. If we can figure out how the mind processes humor, then we'll have a pretty good handle on how it works in general. '", "hypothesis": "Arthur Koestler considered laughter biologically important in several ways.", "label": "c"} +{"uid": "id_537", "premise": "What's so funny? John McCrone reviews recent research on humor The joke comes over the headphones: 'Which side of a dog has the most hair? The left. ' No, not funny. Try again. 'Which side of a dog has the most hair? The outside. ' Hah! The punchline is silly yet fitting, tempting a smile, even a laugh. Laughter has always struck people as deeply mysterious, perhaps pointless. The writer Arthur Koestler dubbed it the luxury reflex: 'unique in that it serves no apparent biological purpose. ' Theories about humour have an ancient pedigree. Plato expressed the idea that humor is simply a delighted feeling of superiority over others. Kant and Freud felt that joke-telling relies on building up a psychic tension which is safely punctured by the ludicrousness of the punchline. But most modern humor theorists have settled on some version of Aristotle's belief that jokes are based on a reaction to or resolution of incongruity, when the punchline is either a nonsense or, though appearing silly, has a clever second meaning. Graeme Ritchie, a computational linguist in Edinburgh, studies the linguistic structure of jokes in order to understand not only humor but language understanding and reasoning in machines. He says that while there is no single format for jokes, many revolve around a sudden and surprising conceptual shift. A comedian will present a situation followed by an unexpected interpretation that is also apt. So even if a punchline sounds silly, the listener can see there is a clever semantic fit and that sudden mental 'Aha! ' is the buzz that makes us laugh. Viewed from this angle, humor is just a form of creative insight, a sudden leap to a new perspective. 
However, there is another type of laughter, the laughter of social appeasement and it is important to understand this too. Play is a crucial part of development in most young mammals. Rats produce ultrasonic squeaks to prevent their scuffles turning nasty. Chimpanzees have a 'play-face' a gaping expression accompanied by a panting 'ah ah' noise. In humans, these signals have mutated into smiles and laughs. Researchers believe social situations, rather than cognitive events such as jokes, trigger these instinctual markers of play or appeasement. People laugh on fairground rides or when tickled to flag a play situation, whether they feel amused or not. Both social and cognitive types of laughter tap into the same expressive machinery in our brains, the emotion and motor circuits that produce smiles and excited vocalisations. However, if cognitive laughter is the product of more general thought processes, it should result from more expansive brain activity. Psychologist Vinod Goel investigated humour using the new technique of 'single event' functional magnetic resonance imaging (fMRI). An MRI scanner uses magnetic fields and radio waves to track the changes in oxygenated blood that accompany mental activity. Until recently, MRI scanners needed several minutes of activity and so could not be used to track rapid thought processes such as comprehending a joke. New developments now allow half-second 'snapshots' of all sorts of reasoning and problem-solving activities. Although Goel felt being inside a brain scanner was hardly the ideal place for appreciating a joke, he found evidence that understanding a joke involves a widespread mental shift. His scans showed that at the beginning of a joke the listener's prefrontal cortex lit up, particularly the right prefrontal believed to be critical for problem solving. But there was also activity in the temporal lobes at the side of the head (consistent with attempts to rouse stored knowledge) and in many other brain areas. Then when the punchline arrived, a new area sprang to life the orbital prefrontal cortex. This patch of brain tucked behind the orbits of the eyes is associated with evaluating information. Making a rapid emotional assessment of the events of the moment is an extremely demanding job for the brain, animal or human. Energy and arousal levels may need to be retuned in the blink of an eye. These abrupt changes will produce either positive or negative feelings. The orbital cortex, the region that becomes active in Goel's experiment, seems the best candidate for the site that feeds such feelings into higher-level thought processes, with its close connections to the brain's sub-cortical arousal apparatus and centres of metabolic control. All warm-blooded animals make constant tiny adjustments in arousal in response to external events, but humans, who have developed a much more complicated internal life as a result of language, respond emotionally not only to their surroundings, but to their own thoughts. Whenever a sought-for answer snaps into place, there is a shudder of pleased recognition. Creative discovery being pleasurable, humans have learned to find ways of milking this natural response. The fact that jokes tap into our general evaluative machinery explains why the line between funny and disgusting, or funny and frightening, can be so fine. Whether a joke gives pleasure or pain depends on a person's outlook. Humor may be a luxury, but the mechanism behind it is no evolutionary accident. 
As Peter Derks, a psychologist at William and Mary College in Virginia, says: 'I like to think of humour as the distorted mirror of the mind. It's creative, perceptual, analytical and lingual. If we can figure out how the mind processes humor, then we'll have a pretty good handle on how it works in general. '", "hypothesis": "Most comedians use personal situations as a source of humour.", "label": "n"} +{"uid": "id_538", "premise": "What's so funny? John McCrone reviews recent research on humor The joke comes over the headphones: 'Which side of a dog has the most hair? The left. ' No, not funny. Try again. 'Which side of a dog has the most hair? The outside. ' Hah! The punchline is silly yet fitting, tempting a smile, even a laugh. Laughter has always struck people as deeply mysterious, perhaps pointless. The writer Arthur Koestler dubbed it the luxury reflex: 'unique in that it serves no apparent biological purpose. ' Theories about humour have an ancient pedigree. Plato expressed the idea that humor is simply a delighted feeling of superiority over others. Kant and Freud felt that joke-telling relies on building up a psychic tension which is safely punctured by the ludicrousness of the punchline. But most modern humor theorists have settled on some version of Aristotle's belief that jokes are based on a reaction to or resolution of incongruity, when the punchline is either a nonsense or, though appearing silly, has a clever second meaning. Graeme Ritchie, a computational linguist in Edinburgh, studies the linguistic structure of jokes in order to understand not only humor but language understanding and reasoning in machines. He says that while there is no single format for jokes, many revolve around a sudden and surprising conceptual shift. A comedian will present a situation followed by an unexpected interpretation that is also apt. So even if a punchline sounds silly, the listener can see there is a clever semantic fit and that sudden mental 'Aha! ' is the buzz that makes us laugh. Viewed from this angle, humor is just a form of creative insight, a sudden leap to a new perspective. However, there is another type of laughter, the laughter of social appeasement and it is important to understand this too. Play is a crucial part of development in most young mammals. Rats produce ultrasonic squeaks to prevent their scuffles turning nasty. Chimpanzees have a 'play-face' a gaping expression accompanied by a panting 'ah ah' noise. In humans, these signals have mutated into smiles and laughs. Researchers believe social situations, rather than cognitive events such as jokes, trigger these instinctual markers of play or appeasement. People laugh on fairground rides or when tickled to flag a play situation, whether they feel amused or not. Both social and cognitive types of laughter tap into the same expressive machinery in our brains, the emotion and motor circuits that produce smiles and excited vocalisations. However, if cognitive laughter is the product of more general thought processes, it should result from more expansive brain activity. Psychologist Vinod Goel investigated humour using the new technique of 'single event' functional magnetic resonance imaging (fMRI). An MRI scanner uses magnetic fields and radio waves to track the changes in oxygenated blood that accompany mental activity. Until recently, MRI scanners needed several minutes of activity and so could not be used to track rapid thought processes such as comprehending a joke. 
New developments now allow half-second 'snapshots' of all sorts of reasoning and problem-solving activities. Although Goel felt being inside a brain scanner was hardly the ideal place for appreciating a joke, he found evidence that understanding a joke involves a widespread mental shift. His scans showed that at the beginning of a joke the listener's prefrontal cortex lit up, particularly the right prefrontal believed to be critical for problem solving. But there was also activity in the temporal lobes at the side of the head (consistent with attempts to rouse stored knowledge) and in many other brain areas. Then when the punchline arrived, a new area sprang to life the orbital prefrontal cortex. This patch of brain tucked behind the orbits of the eyes is associated with evaluating information. Making a rapid emotional assessment of the events of the moment is an extremely demanding job for the brain, animal or human. Energy and arousal levels may need to be retuned in the blink of an eye. These abrupt changes will produce either positive or negative feelings. The orbital cortex, the region that becomes active in Goel's experiment, seems the best candidate for the site that feeds such feelings into higher-level thought processes, with its close connections to the brain's sub-cortical arousal apparatus and centres of metabolic control. All warm-blooded animals make constant tiny adjustments in arousal in response to external events, but humans, who have developed a much more complicated internal life as a result of language, respond emotionally not only to their surroundings, but to their own thoughts. Whenever a sought-for answer snaps into place, there is a shudder of pleased recognition. Creative discovery being pleasurable, humans have learned to find ways of milking this natural response. The fact that jokes tap into our general evaluative machinery explains why the line between funny and disgusting, or funny and frightening, can be so fine. Whether a joke gives pleasure or pain depends on a person's outlook. Humor may be a luxury, but the mechanism behind it is no evolutionary accident. As Peter Derks, a psychologist at William and Mary College in Virginia, says: 'I like to think of humour as the distorted mirror of the mind. It's creative, perceptual, analytical and lingual. If we can figure out how the mind processes humor, then we'll have a pretty good handle on how it works in general. '", "hypothesis": "Chimpanzees make particular noises when they are playing.", "label": "e"} +{"uid": "id_539", "premise": "What's the purpose of gaining knowledge? 'I would found an institution where any person can find instruction in any subject. ' That was the founder's motto for Cornell University, and it seems an apt characterization of the different university, also in the USA, where I currently teach philosophy. A student can prepare for a career in resort management, engineering, interior design, accounting, music, law enforcement, you name it. But what would the founders of these two institutions have thought of a course called Arson for Profit ? I kid you not: we have it on the books. Any undergraduates who have met the academic requirements can sign up for the course in our program in 'fire science'. Naturally, the course is intended for prospective arson investigators, who can learn all the tricks of the trade for detecting whether a fire was deliberately set, discovering who did it, and establishing a chain of evidence for effective prosecution in a court of law. 
But wouldn't this also be the perfect course for prospective arsonists to sign up for? My point is not to criticize academic programs in fire science: they are highly welcome as part of the increasing professionalization of this and many other occupations. However, it's not unknown for a firefighter to torch a building. This example suggests how dishonest and illegal behavior, with the help of higher education, can creep into every aspect of public and business life. I realized this anew when I was invited to speak before a class in marketing, which is another of our degree programs. The regular instructor is a colleague who appreciates the kind of ethical perspective I can bring as a philosopher. There are endless ways I could have approached this assignment, but I took my cue from the title of the course: 'Principles of Marketing'. It made me think to ask the students, 'Is marketing principled? ' After all, a subject matter can have principles in the sense of being codified, having rules, as with football or chess, without being principled in the sense of being ethical. Many of the students immediately assumed that the answer to my question about marketing principles was obvious: no. Just look at the ways in which everything under the sun has been marketed; obviously it need not be done in a principled (=ethical) fashion. Is that obvious? I made the suggestion, which may sound downright crazy in light of the evidence, that perhaps marketing is by definition principled. My inspiration for this judgement is the philosopher Immanuel Kant, who argued that any body of knowledge consists of an end (or purpose) and a means. Let us apply both the terms 'means' and 'end' to marketing. The students have signed up for a course in order to learn how to market effectively. But to what end? There seem to be two main attitudes toward that question. One is that the answer is obvious: the purpose of marketing is to sell things and to make money. The other attitude is that the purpose of marketing is irrelevant: Each person comes to the program and course with his or her own plans, and these need not even concern the acquisition of marketing expertise as such. My proposal, which I believe would also be Kant's, is that neither of these attitudes captures the significance of the end to the means for marketing. A field of knowledge or a professional endeavor is defined by both the means and the end; hence both deserve scrutiny. Students need to study both how to achieve X, and also what X is. It is at this point that 'Arson for Profit' becomes supremely relevant. That course is presumably all about means: how to detect and prosecute criminal activity. It is therefore assumed that the end is good in an ethical sense. When I ask fire science students to articulate the end, or purpose, of their field, they eventually generalize to something like, 'The safety and welfare of society, ' which seems right. As we have seen, someone could use the very same knowledge of means to achieve a much less noble end, such as personal profit via destructive, dangerous, reckless activity. But we would not call that firefighting. We have a separate word for it: arson. Similarly, if you employed the 'principles of marketing' in an unprincipled way, you would not be doing marketing. We have another term for it: fraud. Kant gives the example of a doctor and a poisoner, who use the identical knowledge to achieve their divergent ends. 
We would say that one is practicing medicine, the other, murder.", "hypothesis": "The 'Arson for Profit' course would be useful for people intending to set fire to buildings.", "label": "e"} +{"uid": "id_540", "premise": "What's the purpose of gaining knowledge? 'I would found an institution where any person can find instruction in any subject. ' That was the founder's motto for Cornell University, and it seems an apt characterization of the different university, also in the USA, where I currently teach philosophy. A student can prepare for a career in resort management, engineering, interior design, accounting, music, law enforcement, you name it. But what would the founders of these two institutions have thought of a course called Arson for Profit ? I kid you not: we have it on the books. Any undergraduates who have met the academic requirements can sign up for the course in our program in 'fire science'. Naturally, the course is intended for prospective arson investigators, who can learn all the tricks of the trade for detecting whether a fire was deliberately set, discovering who did it, and establishing a chain of evidence for effective prosecution in a court of law. But wouldn't this also be the perfect course for prospective arsonists to sign up for? My point is not to criticize academic programs in fire science: they are highly welcome as part of the increasing professionalization of this and many other occupations. However, it's not unknown for a firefighter to torch a building. This example suggests how dishonest and illegal behavior, with the help of higher education, can creep into every aspect of public and business life. I realized this anew when I was invited to speak before a class in marketing, which is another of our degree programs. The regular instructor is a colleague who appreciates the kind of ethical perspective I can bring as a philosopher. There are endless ways I could have approached this assignment, but I took my cue from the title of the course: 'Principles of Marketing'. It made me think to ask the students, 'Is marketing principled? ' After all, a subject matter can have principles in the sense of being codified, having rules, as with football or chess, without being principled in the sense of being ethical. Many of the students immediately assumed that the answer to my question about marketing principles was obvious: no. Just look at the ways in which everything under the sun has been marketed; obviously it need not be done in a principled (=ethical) fashion. Is that obvious? I made the suggestion, which may sound downright crazy in light of the evidence, that perhaps marketing is by definition principled. My inspiration for this judgement is the philosopher Immanuel Kant, who argued that any body of knowledge consists of an end (or purpose) and a means. Let us apply both the terms 'means' and 'end' to marketing. The students have signed up for a course in order to learn how to market effectively. But to what end? There seem to be two main attitudes toward that question. One is that the answer is obvious: the purpose of marketing is to sell things and to make money. The other attitude is that the purpose of marketing is irrelevant: Each person comes to the program and course with his or her own plans, and these need not even concern the acquisition of marketing expertise as such. My proposal, which I believe would also be Kant's, is that neither of these attitudes captures the significance of the end to the means for marketing. 
A field of knowledge or a professional endeavor is defined by both the means and the end; hence both deserve scrutiny. Students need to study both how to achieve X, and also what X is. It is at this point that 'Arson for Profit' becomes supremely relevant. That course is presumably all about means: how to detect and prosecute criminal activity. It is therefore assumed that the end is good in an ethical sense. When I ask fire science students to articulate the end, or purpose, of their field, they eventually generalize to something like, 'The safety and welfare of society, ' which seems right. As we have seen, someone could use the very same knowledge of means to achieve a much less noble end, such as personal profit via destructive, dangerous, reckless activity. But we would not call that firefighting. We have a separate word for it: arson. Similarly, if you employed the 'principles of marketing' in an unprincipled way, you would not be doing marketing. We have another term for it: fraud. Kant gives the example of a doctor and a poisoner, who use the identical knowledge to achieve their divergent ends. We would say that one is practicing medicine, the other, murder.", "hypothesis": "It is difficult to attract students onto courses that do not focus on a career.", "label": "n"} +{"uid": "id_541", "premise": "What's the purpose of gaining knowledge? 'I would found an institution where any person can find instruction in any subject. ' That was the founder's motto for Cornell University, and it seems an apt characterization of the different university, also in the USA, where I currently teach philosophy. A student can prepare for a career in resort management, engineering, interior design, accounting, music, law enforcement, you name it. But what would the founders of these two institutions have thought of a course called Arson for Profit ? I kid you not: we have it on the books. Any undergraduates who have met the academic requirements can sign up for the course in our program in 'fire science'. Naturally, the course is intended for prospective arson investigators, who can learn all the tricks of the trade for detecting whether a fire was deliberately set, discovering who did it, and establishing a chain of evidence for effective prosecution in a court of law. But wouldn't this also be the perfect course for prospective arsonists to sign up for? My point is not to criticize academic programs in fire science: they are highly welcome as part of the increasing professionalization of this and many other occupations. However, it's not unknown for a firefighter to torch a building. This example suggests how dishonest and illegal behavior, with the help of higher education, can creep into every aspect of public and business life. I realized this anew when I was invited to speak before a class in marketing, which is another of our degree programs. The regular instructor is a colleague who appreciates the kind of ethical perspective I can bring as a philosopher. There are endless ways I could have approached this assignment, but I took my cue from the title of the course: 'Principles of Marketing'. It made me think to ask the students, 'Is marketing principled? ' After all, a subject matter can have principles in the sense of being codified, having rules, as with football or chess, without being principled in the sense of being ethical. Many of the students immediately assumed that the answer to my question about marketing principles was obvious: no. 
Just look at the ways in which everything under the sun has been marketed; obviously it need not be done in a principled (=ethical) fashion. Is that obvious? I made the suggestion, which may sound downright crazy in light of the evidence, that perhaps marketing is by definition principled. My inspiration for this judgement is the philosopher Immanuel Kant, who argued that any body of knowledge consists of an end (or purpose) and a means. Let us apply both the terms 'means' and 'end' to marketing. The students have signed up for a course in order to learn how to market effectively. But to what end? There seem to be two main attitudes toward that question. One is that the answer is obvious: the purpose of marketing is to sell things and to make money. The other attitude is that the purpose of marketing is irrelevant: Each person comes to the program and course with his or her own plans, and these need not even concern the acquisition of marketing expertise as such. My proposal, which I believe would also be Kant's, is that neither of these attitudes captures the significance of the end to the means for marketing. A field of knowledge or a professional endeavor is defined by both the means and the end; hence both deserve scrutiny. Students need to study both how to achieve X, and also what X is. It is at this point that 'Arson for Profit' becomes supremely relevant. That course is presumably all about means: how to detect and prosecute criminal activity. It is therefore assumed that the end is good in an ethical sense. When I ask fire science students to articulate the end, or purpose, of their field, they eventually generalize to something like, 'The safety and welfare of society, ' which seems right. As we have seen, someone could use the very same knowledge of means to achieve a much less noble end, such as personal profit via destructive, dangerous, reckless activity. But we would not call that firefighting. We have a separate word for it: arson. Similarly, if you employed the 'principles of marketing' in an unprincipled way, you would not be doing marketing. We have another term for it: fraud. Kant gives the example of a doctor and a poisoner, who use the identical knowledge to achieve their divergent ends. We would say that one is practicing medicine, the other, murder.", "hypothesis": "Fire science courses are too academic to help people to be good at the job of firefighting.", "label": "c"} +{"uid": "id_542", "premise": "What's the purpose of gaining knowledge? 'I would found an institution where any person can find instruction in any subject. ' That was the founder's motto for Cornell University, and it seems an apt characterization of the different university, also in the USA, where I currently teach philosophy. A student can prepare for a career in resort management, engineering, interior design, accounting, music, law enforcement, you name it. But what would the founders of these two institutions have thought of a course called Arson for Profit ? I kid you not: we have it on the books. Any undergraduates who have met the academic requirements can sign up for the course in our program in 'fire science'. Naturally, the course is intended for prospective arson investigators, who can learn all the tricks of the trade for detecting whether a fire was deliberately set, discovering who did it, and establishing a chain of evidence for effective prosecution in a court of law. But wouldn't this also be the perfect course for prospective arsonists to sign up for? 
My point is not to criticize academic programs in fire science: they are highly welcome as part of the increasing professionalization of this and many other occupations. However, it's not unknown for a firefighter to torch a building. This example suggests how dishonest and illegal behavior, with the help of higher education, can creep into every aspect of public and business life. I realized this anew when I was invited to speak before a class in marketing, which is another of our degree programs. The regular instructor is a colleague who appreciates the kind of ethical perspective I can bring as a philosopher. There are endless ways I could have approached this assignment, but I took my cue from the title of the course: 'Principles of Marketing'. It made me think to ask the students, 'Is marketing principled? ' After all, a subject matter can have principles in the sense of being codified, having rules, as with football or chess, without being principled in the sense of being ethical. Many of the students immediately assumed that the answer to my question about marketing principles was obvious: no. Just look at the ways in which everything under the sun has been marketed; obviously it need not be done in a principled (=ethical) fashion. Is that obvious? I made the suggestion, which may sound downright crazy in light of the evidence, that perhaps marketing is by definition principled. My inspiration for this judgement is the philosopher Immanuel Kant, who argued that any body of knowledge consists of an end (or purpose) and a means. Let us apply both the terms 'means' and 'end' to marketing. The students have signed up for a course in order to learn how to market effectively. But to what end? There seem to be two main attitudes toward that question. One is that the answer is obvious: the purpose of marketing is to sell things and to make money. The other attitude is that the purpose of marketing is irrelevant: Each person comes to the program and course with his or her own plans, and these need not even concern the acquisition of marketing expertise as such. My proposal, which I believe would also be Kant's, is that neither of these attitudes captures the significance of the end to the means for marketing. A field of knowledge or a professional endeavor is defined by both the means and the end; hence both deserve scrutiny. Students need to study both how to achieve X, and also what X is. It is at this point that 'Arson for Profit' becomes supremely relevant. That course is presumably all about means: how to detect and prosecute criminal activity. It is therefore assumed that the end is good in an ethical sense. When I ask fire science students to articulate the end, or purpose, of their field, they eventually generalize to something like, 'The safety and welfare of society, ' which seems right. As we have seen, someone could use the very same knowledge of means to achieve a much less noble end, such as personal profit via destructive, dangerous, reckless activity. But we would not call that firefighting. We have a separate word for it: arson. Similarly, if you employed the 'principles of marketing' in an unprincipled way, you would not be doing marketing. We have another term for it: fraud. Kant gives the example of a doctor and a poisoner, who use the identical knowledge to achieve their divergent ends. 
We would say that one is practicing medicine, the other, murder.", "hypothesis": "The writer's fire science students provided a detailed definition of the purpose of their studies.", "label": "c"} +{"uid": "id_543", "premise": "Whats in Blood? Blood is the most specialised fluid within living animals, playing an absolutely critical role. It symbolises life (new blood), health (get your blood running), personality (good or bad blood), and family (your bloodline). This red fluid itself is something which most people would rather not see, yet it contains such a complex soup of proteins, sugars, ions, hormones, gases, and basic cellular components that it is certainly worth considering in some detail. By volume, half of blood is the liquid part, called plasma. The rest comprises specialised components, the main one being red blood cells (technically known as erythrocytes). These transport oxygen molecules throughout the body, and also give blood its colour (from the hemoglobin protein within, which turns red when combined with oxygen). Red blood cells, as with all cells in the human body, have a limited operating life. They are produced within the marrow of bones, principally the larger ones, and live for about four months before they fall inactive, to be then reabsorbed by the spleen and liver, with waste products absorbed into the urine. This contrasts with the other main cells of human blood: the white blood cells, technically known as leukocytes. Similarly produced in the bone marrow, they are active only for three or four days, yet they are essential in defending the body against infections. White blood cells come in many different types, each designed to deal with a different sort of invader bacteria, virus, fungus, or parasite. When one of these enters the body, the white blood cells quickly determine its nature, then, after mustering sufficient numbers of a specific type (the period in which you are sick), they launch themselves into the fight, enveloping each individual invasive cell, and breaking it down (leading to recovery). That leaves the last main component of blood: platelets. Their technical name is thrombocytes, and they are much smaller than red and white blood cells. Also circulating freely, they are responsible for clotting the blood, and this is necessary to heal both external and internal injuries. Again, they are produced in the bone marrow, and have the interesting ability to change shape. There are several diseases related to the breakdown in the regulation of their numbers. If too low, excessive bleeding can occur, yet if too high, internal clotting may result, causing potentially catastrophic blockages in parts of the body and medical ailments we know as strokes, heart attacks, and embolisms. Bloods complexity presents particular difficulties in the advent of emergency transfusions. These are avoided whenever possible in order to lower the risk of reactions due to blood incompatibility. Unexpected antigens can trigger antibodies to attack blood components, with potentially lethal results. Thus, if transfusions are to take place, a thorough knowledge and classification of blood is essential, yet with 30 recognised blood-group systems, containing hundreds of antigens, this presents quite a challenge. The ABO system is the most important. On top of this is the Rhesus factor, which is not as simple as positive or negative (as most people think), but comprises scores of antigens. 
These can, however, be clustered together into groups which cause similar responses, creating some order. Of course, the simplest system to avoid adverse transfusion reactions is for patients to receive their own blood for example, in a series of blood donations in anticipation of an operation scheduled some months in advance. The second best system is to undertake cross-matching, which involves simply mixing samples of the patients blood with the donors, then checking microscopically for clumping a key sign of incompatibility. Both of these systems are obviously impractical in an emergency situation, which is why meticulous testing, documentation, and labeling of blood are necessary. In a true emergency, a blood bank is needed, with an array of various types of blood on hand. Hence, blood donations must be a regular occurrence among a significant segment of the population. In the developed world, unpaid volunteers provide most of the blood for the community, whereas in less developed nations, families or friends are mostly involved. In the era of HIV and other insidious blood-borne diseases, potential donors are carefully screened and tested, and a period of about two months is recommended before successive whole blood donations. Given the vital role which blood plays, it is strange to think that for almost 2000 years bloodletting was a widespread medical practice. It was based on the belief that blood carried humours, whose imbalances resulted in medical illnesses. Bleeding a patient was supposed to remove an undesirable excess of one of these. Furthermore, the fact that blood circulated around the body was unknown. It was instead assumed to be quickly created, and equally quickly exhausted of its value, after which it could stagnant unhealthily in the bodily extremities. Although the logic was there, it goes without saying that very few patients responded positively to such treatment.", "hypothesis": "Bleeding people was a painful process.", "label": "n"} +{"uid": "id_544", "premise": "Whats in Blood? Blood is the most specialised fluid within living animals, playing an absolutely critical role. It symbolises life (new blood), health (get your blood running), personality (good or bad blood), and family (your bloodline). This red fluid itself is something which most people would rather not see, yet it contains such a complex soup of proteins, sugars, ions, hormones, gases, and basic cellular components that it is certainly worth considering in some detail. By volume, half of blood is the liquid part, called plasma. The rest comprises specialised components, the main one being red blood cells (technically known as erythrocytes). These transport oxygen molecules throughout the body, and also give blood its colour (from the hemoglobin protein within, which turns red when combined with oxygen). Red blood cells, as with all cells in the human body, have a limited operating life. They are produced within the marrow of bones, principally the larger ones, and live for about four months before they fall inactive, to be then reabsorbed by the spleen and liver, with waste products absorbed into the urine. This contrasts with the other main cells of human blood: the white blood cells, technically known as leukocytes. Similarly produced in the bone marrow, they are active only for three or four days, yet they are essential in defending the body against infections. 
White blood cells come in many different types, each designed to deal with a different sort of invader bacteria, virus, fungus, or parasite. When one of these enters the body, the white blood cells quickly determine its nature, then, after mustering sufficient numbers of a specific type (the period in which you are sick), they launch themselves into the fight, enveloping each individual invasive cell, and breaking it down (leading to recovery). That leaves the last main component of blood: platelets. Their technical name is thrombocytes, and they are much smaller than red and white blood cells. Also circulating freely, they are responsible for clotting the blood, and this is necessary to heal both external and internal injuries. Again, they are produced in the bone marrow, and have the interesting ability to change shape. There are several diseases related to the breakdown in the regulation of their numbers. If too low, excessive bleeding can occur, yet if too high, internal clotting may result, causing potentially catastrophic blockages in parts of the body and medical ailments we know as strokes, heart attacks, and embolisms. Bloods complexity presents particular difficulties in the advent of emergency transfusions. These are avoided whenever possible in order to lower the risk of reactions due to blood incompatibility. Unexpected antigens can trigger antibodies to attack blood components, with potentially lethal results. Thus, if transfusions are to take place, a thorough knowledge and classification of blood is essential, yet with 30 recognised blood-group systems, containing hundreds of antigens, this presents quite a challenge. The ABO system is the most important. On top of this is the Rhesus factor, which is not as simple as positive or negative (as most people think), but comprises scores of antigens. These can, however, be clustered together into groups which cause similar responses, creating some order. Of course, the simplest system to avoid adverse transfusion reactions is for patients to receive their own blood for example, in a series of blood donations in anticipation of an operation scheduled some months in advance. The second best system is to undertake cross-matching, which involves simply mixing samples of the patients blood with the donors, then checking microscopically for clumping a key sign of incompatibility. Both of these systems are obviously impractical in an emergency situation, which is why meticulous testing, documentation, and labeling of blood are necessary. In a true emergency, a blood bank is needed, with an array of various types of blood on hand. Hence, blood donations must be a regular occurrence among a significant segment of the population. In the developed world, unpaid volunteers provide most of the blood for the community, whereas in less developed nations, families or friends are mostly involved. In the era of HIV and other insidious blood-borne diseases, potential donors are carefully screened and tested, and a period of about two months is recommended before successive whole blood donations. Given the vital role which blood plays, it is strange to think that for almost 2000 years bloodletting was a widespread medical practice. It was based on the belief that blood carried humours, whose imbalances resulted in medical illnesses. Bleeding a patient was supposed to remove an undesirable excess of one of these. Furthermore, the fact that blood circulated around the body was unknown. 
It was instead assumed to be quickly created, and equally quickly exhausted of its value, after which it could stagnant unhealthily in the bodily extremities. Although the logic was there, it goes without saying that very few patients responded positively to such treatment.", "hypothesis": "In poorer countries, family members often donate blood.", "label": "e"} +{"uid": "id_545", "premise": "Whats in Blood? Blood is the most specialised fluid within living animals, playing an absolutely critical role. It symbolises life (new blood), health (get your blood running), personality (good or bad blood), and family (your bloodline). This red fluid itself is something which most people would rather not see, yet it contains such a complex soup of proteins, sugars, ions, hormones, gases, and basic cellular components that it is certainly worth considering in some detail. By volume, half of blood is the liquid part, called plasma. The rest comprises specialised components, the main one being red blood cells (technically known as erythrocytes). These transport oxygen molecules throughout the body, and also give blood its colour (from the hemoglobin protein within, which turns red when combined with oxygen). Red blood cells, as with all cells in the human body, have a limited operating life. They are produced within the marrow of bones, principally the larger ones, and live for about four months before they fall inactive, to be then reabsorbed by the spleen and liver, with waste products absorbed into the urine. This contrasts with the other main cells of human blood: the white blood cells, technically known as leukocytes. Similarly produced in the bone marrow, they are active only for three or four days, yet they are essential in defending the body against infections. White blood cells come in many different types, each designed to deal with a different sort of invader bacteria, virus, fungus, or parasite. When one of these enters the body, the white blood cells quickly determine its nature, then, after mustering sufficient numbers of a specific type (the period in which you are sick), they launch themselves into the fight, enveloping each individual invasive cell, and breaking it down (leading to recovery). That leaves the last main component of blood: platelets. Their technical name is thrombocytes, and they are much smaller than red and white blood cells. Also circulating freely, they are responsible for clotting the blood, and this is necessary to heal both external and internal injuries. Again, they are produced in the bone marrow, and have the interesting ability to change shape. There are several diseases related to the breakdown in the regulation of their numbers. If too low, excessive bleeding can occur, yet if too high, internal clotting may result, causing potentially catastrophic blockages in parts of the body and medical ailments we know as strokes, heart attacks, and embolisms. Bloods complexity presents particular difficulties in the advent of emergency transfusions. These are avoided whenever possible in order to lower the risk of reactions due to blood incompatibility. Unexpected antigens can trigger antibodies to attack blood components, with potentially lethal results. Thus, if transfusions are to take place, a thorough knowledge and classification of blood is essential, yet with 30 recognised blood-group systems, containing hundreds of antigens, this presents quite a challenge. The ABO system is the most important. 
On top of this is the Rhesus factor, which is not as simple as positive or negative (as most people think), but comprises scores of antigens. These can, however, be clustered together into groups which cause similar responses, creating some order. Of course, the simplest system to avoid adverse transfusion reactions is for patients to receive their own blood for example, in a series of blood donations in anticipation of an operation scheduled some months in advance. The second best system is to undertake cross-matching, which involves simply mixing samples of the patients blood with the donors, then checking microscopically for clumping a key sign of incompatibility. Both of these systems are obviously impractical in an emergency situation, which is why meticulous testing, documentation, and labeling of blood are necessary. In a true emergency, a blood bank is needed, with an array of various types of blood on hand. Hence, blood donations must be a regular occurrence among a significant segment of the population. In the developed world, unpaid volunteers provide most of the blood for the community, whereas in less developed nations, families or friends are mostly involved. In the era of HIV and other insidious blood-borne diseases, potential donors are carefully screened and tested, and a period of about two months is recommended before successive whole blood donations. Given the vital role which blood plays, it is strange to think that for almost 2000 years bloodletting was a widespread medical practice. It was based on the belief that blood carried humours, whose imbalances resulted in medical illnesses. Bleeding a patient was supposed to remove an undesirable excess of one of these. Furthermore, the fact that blood circulated around the body was unknown. It was instead assumed to be quickly created, and equally quickly exhausted of its value, after which it could stagnant unhealthily in the bodily extremities. Although the logic was there, it goes without saying that very few patients responded positively to such treatment.", "hypothesis": "Blood cross-matching can be done without special equipment.", "label": "c"} +{"uid": "id_546", "premise": "Whats the purpose of gaining knowledge? I would found an institution where any person can find instruction in any subject That was the founders motto for Cornell University, and it seems an apt characterization of the different university, also in the USA, where I currently teach philosophy. A student can prepare for a career in resort management, engineering, interior design, accounting, music, law enforcement, you name it. But what would the founders of these two institutions have thought of a course called Arson for Profit? I kid you not: we have it on the books. Any undergraduates who have met the academic requirements can sign up for the course in our program in fire science. Naturally, the course is intended for prospective arson investigators, who can learn all the tricks of the trade for detecting whether a fire was deliberately set, discovering who did it, and establishing a chain of evidence for effective prosecution in a court of law. But wouldnt this also be the perfect course for prospective arsonists to sign up for? My point is not to criticize academic programs in fire science: they are highly welcome as part of the increasing professionalization of this and many other occupations. However, its not unknown for a firefighter to torch a building. 
This example suggests how dishonest and illegal behavior, with the help of higher education, can creep into every aspect of public and business life. I realized this anew when I was invited to speak before a class in marketing, which is another of our degree programs. The regular instructor is a colleague who appreciates the kind of ethical perspective I can bring as a philosopher. There are endless ways I could have approached this assignment, but I took my cue from the title of the course: Principles of Marketing. It made me think to ask the students, Is marketing principled? After all, a subject matter can have principles in the sense of being codified, having rules, as with football or chess, without being principled in the sense of being ethical. Many of the students immediately assumed that the answer to my question about marketing principles was obvious: no. Just look at the ways in which everything under the sun has been marketed; obviously it need not be done in a principled (=ethical) fashion. Is that obvious? I made the suggestion, which may sound downright crazy in light of the evidence, that perhaps marketing is by definition principled. My inspiration for this judgement is the philosopher Immanuel Kant, who argued that any body of knowledge consists of an end (or purpose) and a means. Let us apply both the terms means and end to marketing. The students have signed up for a course in order to learn how to market effectively. But to what end? There seem to be two main attitudes toward that question. One is that the answer is obvious: the purpose of marketing is to sell things and to make money. The other attitude is that the purpose of marketing is irrelevant: Each person comes to the program and course with his or her own plans, and these need not even concern the acquisition of marketing expertise as such. My proposal, which I believe would also be Kants, is that neither of these attitudes captures the significance of the end to the means for marketing. A field of knowledge or a professional endeavor is defined by both the means and the end; hence both deserve scrutiny. Students need to study both how to achieve X, and also what X is. It is at this point that Arson for Profit becomes supremely relevant. That course is presumably all about means: how to detect and prosecute criminal activity. It is therefore assumed that the end is good in an ethical sense. When I ask fire science students to articulate the end, or purpose, of their field, they eventually generalize to something like, The safety and welfare of society, which seems right. As we have seen, someone could use the very same knowledge of means to achieve a much less noble end, such as personal profit via destructive, dangerous, reckless activity. But we would not call that firefighting. We have a separate word for it: arson. Similarly, if you employed the principles of marketing in an unprincipled way, you would not be doing marketing. We have another term for it: fraud. Kant gives the example of a doctor and a poisoner, who use the identical knowledge to achieve their divergent ends. We would say that one is practicing medicine, the other, murder.", "hypothesis": "The writers fire science students provided a detailed definition of the purpose of their studies.", "label": "c"} +{"uid": "id_547", "premise": "Whats the purpose of gaining knowledge? 
I would found an institution where any person can find instruction in any subject That was the founders motto for Cornell University, and it seems an apt characterization of the different university, also in the USA, where I currently teach philosophy. A student can prepare for a career in resort management, engineering, interior design, accounting, music, law enforcement, you name it. But what would the founders of these two institutions have thought of a course called Arson for Profit? I kid you not: we have it on the books. Any undergraduates who have met the academic requirements can sign up for the course in our program in fire science. Naturally, the course is intended for prospective arson investigators, who can learn all the tricks of the trade for detecting whether a fire was deliberately set, discovering who did it, and establishing a chain of evidence for effective prosecution in a court of law. But wouldnt this also be the perfect course for prospective arsonists to sign up for? My point is not to criticize academic programs in fire science: they are highly welcome as part of the increasing professionalization of this and many other occupations. However, its not unknown for a firefighter to torch a building. This example suggests how dishonest and illegal behavior, with the help of higher education, can creep into every aspect of public and business life. I realized this anew when I was invited to speak before a class in marketing, which is another of our degree programs. The regular instructor is a colleague who appreciates the kind of ethical perspective I can bring as a philosopher. There are endless ways I could have approached this assignment, but I took my cue from the title of the course: Principles of Marketing. It made me think to ask the students, Is marketing principled? After all, a subject matter can have principles in the sense of being codified, having rules, as with football or chess, without being principled in the sense of being ethical. Many of the students immediately assumed that the answer to my question about marketing principles was obvious: no. Just look at the ways in which everything under the sun has been marketed; obviously it need not be done in a principled (=ethical) fashion. Is that obvious? I made the suggestion, which may sound downright crazy in light of the evidence, that perhaps marketing is by definition principled. My inspiration for this judgement is the philosopher Immanuel Kant, who argued that any body of knowledge consists of an end (or purpose) and a means. Let us apply both the terms means and end to marketing. The students have signed up for a course in order to learn how to market effectively. But to what end? There seem to be two main attitudes toward that question. One is that the answer is obvious: the purpose of marketing is to sell things and to make money. The other attitude is that the purpose of marketing is irrelevant: Each person comes to the program and course with his or her own plans, and these need not even concern the acquisition of marketing expertise as such. My proposal, which I believe would also be Kants, is that neither of these attitudes captures the significance of the end to the means for marketing. A field of knowledge or a professional endeavor is defined by both the means and the end; hence both deserve scrutiny. Students need to study both how to achieve X, and also what X is. It is at this point that Arson for Profit becomes supremely relevant. 
That course is presumably all about means: how to detect and prosecute criminal activity. It is therefore assumed that the end is good in an ethical sense. When I ask fire science students to articulate the end, or purpose, of their field, they eventually generalize to something like, The safety and welfare of society, which seems right. As we have seen, someone could use the very same knowledge of means to achieve a much less noble end, such as personal profit via destructive, dangerous, reckless activity. But we would not call that firefighting. We have a separate word for it: arson. Similarly, if you employed the principles of marketing in an unprincipled way, you would not be doing marketing. We have another term for it: fraud. Kant gives the example of a doctor and a poisoner, who use the identical knowledge to achieve their divergent ends. We would say that one is practicing medicine, the other, murder.", "hypothesis": "Fire science are too academic to help people to be good at the job of firefighting.", "label": "n"} +{"uid": "id_548", "premise": "Whats the purpose of gaining knowledge? I would found an institution where any person can find instruction in any subject That was the founders motto for Cornell University, and it seems an apt characterization of the different university, also in the USA, where I currently teach philosophy. A student can prepare for a career in resort management, engineering, interior design, accounting, music, law enforcement, you name it. But what would the founders of these two institutions have thought of a course called Arson for Profit? I kid you not: we have it on the books. Any undergraduates who have met the academic requirements can sign up for the course in our program in fire science. Naturally, the course is intended for prospective arson investigators, who can learn all the tricks of the trade for detecting whether a fire was deliberately set, discovering who did it, and establishing a chain of evidence for effective prosecution in a court of law. But wouldnt this also be the perfect course for prospective arsonists to sign up for? My point is not to criticize academic programs in fire science: they are highly welcome as part of the increasing professionalization of this and many other occupations. However, its not unknown for a firefighter to torch a building. This example suggests how dishonest and illegal behavior, with the help of higher education, can creep into every aspect of public and business life. I realized this anew when I was invited to speak before a class in marketing, which is another of our degree programs. The regular instructor is a colleague who appreciates the kind of ethical perspective I can bring as a philosopher. There are endless ways I could have approached this assignment, but I took my cue from the title of the course: Principles of Marketing. It made me think to ask the students, Is marketing principled? After all, a subject matter can have principles in the sense of being codified, having rules, as with football or chess, without being principled in the sense of being ethical. Many of the students immediately assumed that the answer to my question about marketing principles was obvious: no. Just look at the ways in which everything under the sun has been marketed; obviously it need not be done in a principled (=ethical) fashion. Is that obvious? I made the suggestion, which may sound downright crazy in light of the evidence, that perhaps marketing is by definition principled. 
My inspiration for this judgement is the philosopher Immanuel Kant, who argued that any body of knowledge consists of an end (or purpose) and a means. Let us apply both the terms means and end to marketing. The students have signed up for a course in order to learn how to market effectively. But to what end? There seem to be two main attitudes toward that question. One is that the answer is obvious: the purpose of marketing is to sell things and to make money. The other attitude is that the purpose of marketing is irrelevant: Each person comes to the program and course with his or her own plans, and these need not even concern the acquisition of marketing expertise as such. My proposal, which I believe would also be Kants, is that neither of these attitudes captures the significance of the end to the means for marketing. A field of knowledge or a professional endeavor is defined by both the means and the end; hence both deserve scrutiny. Students need to study both how to achieve X, and also what X is. It is at this point that Arson for Profit becomes supremely relevant. That course is presumably all about means: how to detect and prosecute criminal activity. It is therefore assumed that the end is good in an ethical sense. When I ask fire science students to articulate the end, or purpose, of their field, they eventually generalize to something like, The safety and welfare of society, which seems right. As we have seen, someone could use the very same knowledge of means to achieve a much less noble end, such as personal profit via destructive, dangerous, reckless activity. But we would not call that firefighting. We have a separate word for it: arson. Similarly, if you employed the principles of marketing in an unprincipled way, you would not be doing marketing. We have another term for it: fraud. Kant gives the example of a doctor and a poisoner, who use the identical knowledge to achieve their divergent ends. We would say that one is practicing medicine, the other, murder.", "hypothesis": "The Arson for Profit course would be useful for people intending to set fire to buildings.", "label": "e"} +{"uid": "id_549", "premise": "Whats the purpose of gaining knowledge? I would found an institution where any person can find instruction in any subject That was the founders motto for Cornell University, and it seems an apt characterization of the different university, also in the USA, where I currently teach philosophy. A student can prepare for a career in resort management, engineering, interior design, accounting, music, law enforcement, you name it. But what would the founders of these two institutions have thought of a course called Arson for Profit? I kid you not: we have it on the books. Any undergraduates who have met the academic requirements can sign up for the course in our program in fire science. Naturally, the course is intended for prospective arson investigators, who can learn all the tricks of the trade for detecting whether a fire was deliberately set, discovering who did it, and establishing a chain of evidence for effective prosecution in a court of law. But wouldnt this also be the perfect course for prospective arsonists to sign up for? My point is not to criticize academic programs in fire science: they are highly welcome as part of the increasing professionalization of this and many other occupations. However, its not unknown for a firefighter to torch a building. 
This example suggests how dishonest and illegal behavior, with the help of higher education, can creep into every aspect of public and business life. I realized this anew when I was invited to speak before a class in marketing, which is another of our degree programs. The regular instructor is a colleague who appreciates the kind of ethical perspective I can bring as a philosopher. There are endless ways I could have approached this assignment, but I took my cue from the title of the course: Principles of Marketing. It made me think to ask the students, Is marketing principled? After all, a subject matter can have principles in the sense of being codified, having rules, as with football or chess, without being principled in the sense of being ethical. Many of the students immediately assumed that the answer to my question about marketing principles was obvious: no. Just look at the ways in which everything under the sun has been marketed; obviously it need not be done in a principled (=ethical) fashion. Is that obvious? I made the suggestion, which may sound downright crazy in light of the evidence, that perhaps marketing is by definition principled. My inspiration for this judgement is the philosopher Immanuel Kant, who argued that any body of knowledge consists of an end (or purpose) and a means. Let us apply both the terms means and end to marketing. The students have signed up for a course in order to learn how to market effectively. But to what end? There seem to be two main attitudes toward that question. One is that the answer is obvious: the purpose of marketing is to sell things and to make money. The other attitude is that the purpose of marketing is irrelevant: Each person comes to the program and course with his or her own plans, and these need not even concern the acquisition of marketing expertise as such. My proposal, which I believe would also be Kants, is that neither of these attitudes captures the significance of the end to the means for marketing. A field of knowledge or a professional endeavor is defined by both the means and the end; hence both deserve scrutiny. Students need to study both how to achieve X, and also what X is. It is at this point that Arson for Profit becomes supremely relevant. That course is presumably all about means: how to detect and prosecute criminal activity. It is therefore assumed that the end is good in an ethical sense. When I ask fire science students to articulate the end, or purpose, of their field, they eventually generalize to something like, The safety and welfare of society, which seems right. As we have seen, someone could use the very same knowledge of means to achieve a much less noble end, such as personal profit via destructive, dangerous, reckless activity. But we would not call that firefighting. We have a separate word for it: arson. Similarly, if you employed the principles of marketing in an unprincipled way, you would not be doing marketing. We have another term for it: fraud. Kant gives the example of a doctor and a poisoner, who use the identical knowledge to achieve their divergent ends. We would say that one is practicing medicine, the other, murder.", "hypothesis": "It is difficult to attract students onto courses that do no focus on a career.", "label": "n"} +{"uid": "id_550", "premise": "When Christianity was first established by law, a corrupt form of Latin had become the common language of all the western parts of Europe. 
The service of the Church accordingly, and the translation of the Bible which was read in churches, were both in that corrupted Latin which was the common language of the country. After the fall of the Roman Empire, Latin gradually ceased to be the language of any part of Europe. However, although Latin was no longer understood anywhere by the great body of the people, Church services still continued to be performed in that language. Two different languages were thus established in Europe: a language of the priests and a language of the people.", "hypothesis": "Latin continued to be used in church services because of the continuing influence of Rome.", "label": "c"} +{"uid": "id_551", "premise": "When Christianity was first established by law, a corrupt form of Latin had become the common language of all the western parts of Europe. The service of the Church accordingly, and the translation of the Bible which was read in churches, were both in that corrupted Latin which was the common language of the country. After the fall of the Roman Empire, Latin gradually ceased to be the language of any part of Europe. However, although Latin was no longer understood anywhere by the great body of the people, Church services still continued to be performed in that language. Two different languages were thus established in Europe: a language of the priests and a language of the people.", "hypothesis": "Priests spoke a different language from the common people.", "label": "e"} +{"uid": "id_552", "premise": "When Christianity was first established by law, a corrupt form of Latin had become the common language of all the western parts of Europe. The service of the Church accordingly, and the translation of the Bible which was read in churches, were both in that corrupted Latin which was the common language of the country. After the fall of the Roman Empire, Latin gradually ceased to be the language of any part of Europe. However, although Latin was no longer understood anywhere by the great body of the people, Church services still continued to be performed in that language. Two different languages were thus established in Europe: a language of the priests and a language of the people.", "hypothesis": "After the fall of the Roman Empire, people who had previously spoken Latin returned to their original languages.", "label": "n"} +{"uid": "id_553", "premise": "When Christianity was first established by law, a corrupt form of Latin had become the common language of all the western parts of Europe. The service of the Church accordingly, and the translation of the Bible which was read in churches, were both in that corrupted Latin which was the common language of the country. After the fall of the Roman Empire, Latin gradually ceased to be the language of any part of Europe. However, although Latin was no longer understood anywhere by the great body of the people, Church services still continued to be performed in that language. Two different languages were thus established in Europe: a language of the priests and a language of the people.", "hypothesis": "Prior to the fall of the Roman Empire, Latin had been established by law as the language of the Church in Western Europe.", "label": "n"} +{"uid": "id_554", "premise": "When any company moves from a sales to a marketing approach, it is not just a case of re-titling the Sales Director as Marketing Director and doubling the advertising budget. 
It requires a complete reorientation in thinking and a revolution in how a company organises and practises its business activities. whereas selling focuses on the needs of the seller, marketing focuses on the needs of the buyer. Whereas selling is preoccupied with the seller s need to convert his or her product into cash, marketing is preoccupied with the idea of identifying and hence satisfying the needs of the customer. However, subscribing to a philosophy of marketing, even though an important first step, is not the same as putting that philosophy into practice.", "hypothesis": "Advertising budgets are normally doubled when a company moves over to a marketing approach.", "label": "n"} +{"uid": "id_555", "premise": "When any company moves from a sales to a marketing approach, it is not just a case of re-titling the Sales Director as Marketing Director and doubling the advertising budget. It requires a complete reorientation in thinking and a revolution in how a company organizes and practices its business activities. Whereas selling focuses on the needs of the seller, marketing focuses on the needs of the buyer. Whereas selling is preoccupied with the sellers need to convert his or her product into cash, marketing is preoccupied with the idea of identifying and hence satisfying the needs of the customer. However, subscribing to a philosophy of marketing, even though an important first step, is not the same as putting that philosophy into practice.", "hypothesis": "Advertising budgets are normally doubled when a company moves over to a marketing approach.", "label": "n"} +{"uid": "id_556", "premise": "When conversations flow We spend a large part of our daily life talking with other people and, consequently, we are very accustomed to the art of conversing. But why do we feel comfortable in conversations that have flow, but get nervous and distressed when a conversation is interrupted by unexpected silences? To answer this question we will first look at some of the effects of conversational flow. Then we will explain how flow can serve different social needs. The positive consequences of conversational flow show some similarities with the effects of processing fluency. Research has shown that processing fluency the ease with which people process information influences peoples judgments across a broad range of social dimensions. For instance, people feel that when something is easily processed, it is more true or accurate. Moreover, they have more confidence in their judgments regarding information that came to them fluently, and they like things that are easy to process more than things that are difficult to process. Research indicates that a speaker is judged to be more knowledgeable when they answer questions instantly; responding with disfluent speech markers such as uh or urn or simply remaining silent for a moment too long can destroy that positive image. One of the social needs addressed by conversational flow is the human need for synchrony to be in sync or in harmony with one another. Many studies have shown how people attempt to synchronize with their partners, by coordinating their behavior. This interpersonal coordination underlies a wide array of human activities, ranging from more complicated ones like ballroom dancing to simply walking or talking with friends. 
In conversations, interpersonal coordination is found when people adjust the duration of their utterances and their speech rate to one another so that they can enable turn-taking to occur, without talking over each other or experiencing awkward silences. Since people are very well-trained in having conversations, they are often able to take turns within milliseconds, resulting in a conversational flow of smoothly meshed behaviors. A lack of flow is characterized by interruptions, simultaneous speech or mutual silences. Avoiding these features is important for defining and maintaining interpersonal relationships. The need to belong has been identified as one of the most basic of human motivations and plays a role in many human behaviors. That conversational flow is related to belonging may be most easily illustrated by the consequences of flow disruptions. What happens when the positive experience of flow is disrupted by, for instance, a brief silence? We all know that silences can be pretty awkward, and research shows that even short disruptions in conversational flow can lead to a sharp rise in distress levels. In movies, silences are often used to signal non-compliance or confrontation (Piazza, 2006). Some researchers even argue that silencing someone is one of the most serious forms of exclusion. Group membership is of elementary importance to our wellbeing and because humans are very sensitive to signals of exclusion, a silence is generally taken as a sign of rejection. In this way, a lack of flow in a conversation may signal that our relationship is not as solid as we thought it was. Another aspect of synchrony is that people often try to validate their opinions to those of others. That is, people like to see others as having similar ideas or worldviews as they have themselves, because this informs people that they are correct and their worldviews are justified. One way in which people can justify their worldviews is by assuming that, as long as their conversations run smoothly, their interaction partners probably agree with them. This idea was tested by researchers using video observations. Participants imagined being one out of three people in a video clip who had either a fluent conversation or a conversation in which flow was disrupted by a brief silence. Except for the silence, the videos were identical. After watching the video, participants were asked to what extent the people in the video agreed with each other. Participants who watched the fluent conversation rated agreement to be higher than participants watching the conversation that was disrupted by a silence, even though participants were not consciously aware of the disruption. It appears that the subjective feeling of being out of sync informs people of possible disagreements, regardless of the content of the conversation. Because people are generally so well- trained in having smooth conversations, any disruption of this flow indicates that something is wrong, either interpersonally or within the group as a whole. Consequently, people who do not talk very easily may be incorrectly understood as being less agreeable than those who have no difficulty keeping up a conversation. On a societal level, one could even imagine that a lack of conversational flow may hamper the integration of immigrants who have not completely mastered the language of their new country yet. 
In a similar sense, the ever- increasing number of online conversations may be disrupted by misinterpretations and anxiety that are produced by insuperable delays in the Internet connection. Keeping in mind the effects of conversational flow for feelings of belonging and validation may help one to be prepared to avoid such misunderstandings in future conversations.", "hypothesis": "People assess information according to how readily they can understand it.", "label": "e"} +{"uid": "id_557", "premise": "When conversations flow We spend a large part of our daily life talking with other people and, consequently, we are very accustomed to the art of conversing. But why do we feel comfortable in conversations that have flow, but get nervous and distressed when a conversation is interrupted by unexpected silences? To answer this question we will first look at some of the effects of conversational flow. Then we will explain how flow can serve different social needs. The positive consequences of conversational flow show some similarities with the effects of processing fluency. Research has shown that processing fluency the ease with which people process information influences peoples judgments across a broad range of social dimensions. For instance, people feel that when something is easily processed, it is more true or accurate. Moreover, they have more confidence in their judgments regarding information that came to them fluently, and they like things that are easy to process more than things that are difficult to process. Research indicates that a speaker is judged to be more knowledgeable when they answer questions instantly; responding with disfluent speech markers such as uh or urn or simply remaining silent for a moment too long can destroy that positive image. One of the social needs addressed by conversational flow is the human need for synchrony to be in sync or in harmony with one another. Many studies have shown how people attempt to synchronize with their partners, by coordinating their behavior. This interpersonal coordination underlies a wide array of human activities, ranging from more complicated ones like ballroom dancing to simply walking or talking with friends. In conversations, interpersonal coordination is found when people adjust the duration of their utterances and their speech rate to one another so that they can enable turn-taking to occur, without talking over each other or experiencing awkward silences. Since people are very well-trained in having conversations, they are often able to take turns within milliseconds, resulting in a conversational flow of smoothly meshed behaviors. A lack of flow is characterized by interruptions, simultaneous speech or mutual silences. Avoiding these features is important for defining and maintaining interpersonal relationships. The need to belong has been identified as one of the most basic of human motivations and plays a role in many human behaviors. That conversational flow is related to belonging may be most easily illustrated by the consequences of flow disruptions. What happens when the positive experience of flow is disrupted by, for instance, a brief silence? We all know that silences can be pretty awkward, and research shows that even short disruptions in conversational flow can lead to a sharp rise in distress levels. In movies, silences are often used to signal non-compliance or confrontation (Piazza, 2006). Some researchers even argue that silencing someone is one of the most serious forms of exclusion. 
Group membership is of elementary importance to our wellbeing and because humans are very sensitive to signals of exclusion, a silence is generally taken as a sign of rejection. In this way, a lack of flow in a conversation may signal that our relationship is not as solid as we thought it was. Another aspect of synchrony is that people often try to validate their opinions to those of others. That is, people like to see others as having similar ideas or worldviews as they have themselves, because this informs people that they are correct and their worldviews are justified. One way in which people can justify their worldviews is by assuming that, as long as their conversations run smoothly, their interaction partners probably agree with them. This idea was tested by researchers using video observations. Participants imagined being one out of three people in a video clip who had either a fluent conversation or a conversation in which flow was disrupted by a brief silence. Except for the silence, the videos were identical. After watching the video, participants were asked to what extent the people in the video agreed with each other. Participants who watched the fluent conversation rated agreement to be higher than participants watching the conversation that was disrupted by a silence, even though participants were not consciously aware of the disruption. It appears that the subjective feeling of being out of sync informs people of possible disagreements, regardless of the content of the conversation. Because people are generally so well- trained in having smooth conversations, any disruption of this flow indicates that something is wrong, either interpersonally or within the group as a whole. Consequently, people who do not talk very easily may be incorrectly understood as being less agreeable than those who have no difficulty keeping up a conversation. On a societal level, one could even imagine that a lack of conversational flow may hamper the integration of immigrants who have not completely mastered the language of their new country yet. In a similar sense, the ever- increasing number of online conversations may be disrupted by misinterpretations and anxiety that are produced by insuperable delays in the Internet connection. Keeping in mind the effects of conversational flow for feelings of belonging and validation may help one to be prepared to avoid such misunderstandings in future conversations.", "hypothesis": "Delays in online chat fail to have the same negative effect as disruptions that occur in natural conversation.", "label": "c"} +{"uid": "id_558", "premise": "When conversations flow We spend a large part of our daily life talking with other people and, consequently, we are very accustomed to the art of conversing. But why do we feel comfortable in conversations that have flow, but get nervous and distressed when a conversation is interrupted by unexpected silences? To answer this question we will first look at some of the effects of conversational flow. Then we will explain how flow can serve different social needs. The positive consequences of conversational flow show some similarities with the effects of processing fluency. Research has shown that processing fluency the ease with which people process information influences peoples judgments across a broad range of social dimensions. For instance, people feel that when something is easily processed, it is more true or accurate. 
Moreover, they have more confidence in their judgments regarding information that came to them fluently, and they like things that are easy to process more than things that are difficult to process. Research indicates that a speaker is judged to be more knowledgeable when they answer questions instantly; responding with disfluent speech markers such as uh or urn or simply remaining silent for a moment too long can destroy that positive image. One of the social needs addressed by conversational flow is the human need for synchrony to be in sync or in harmony with one another. Many studies have shown how people attempt to synchronize with their partners, by coordinating their behavior. This interpersonal coordination underlies a wide array of human activities, ranging from more complicated ones like ballroom dancing to simply walking or talking with friends. In conversations, interpersonal coordination is found when people adjust the duration of their utterances and their speech rate to one another so that they can enable turn-taking to occur, without talking over each other or experiencing awkward silences. Since people are very well-trained in having conversations, they are often able to take turns within milliseconds, resulting in a conversational flow of smoothly meshed behaviors. A lack of flow is characterized by interruptions, simultaneous speech or mutual silences. Avoiding these features is important for defining and maintaining interpersonal relationships. The need to belong has been identified as one of the most basic of human motivations and plays a role in many human behaviors. That conversational flow is related to belonging may be most easily illustrated by the consequences of flow disruptions. What happens when the positive experience of flow is disrupted by, for instance, a brief silence? We all know that silences can be pretty awkward, and research shows that even short disruptions in conversational flow can lead to a sharp rise in distress levels. In movies, silences are often used to signal non-compliance or confrontation (Piazza, 2006). Some researchers even argue that silencing someone is one of the most serious forms of exclusion. Group membership is of elementary importance to our wellbeing and because humans are very sensitive to signals of exclusion, a silence is generally taken as a sign of rejection. In this way, a lack of flow in a conversation may signal that our relationship is not as solid as we thought it was. Another aspect of synchrony is that people often try to validate their opinions to those of others. That is, people like to see others as having similar ideas or worldviews as they have themselves, because this informs people that they are correct and their worldviews are justified. One way in which people can justify their worldviews is by assuming that, as long as their conversations run smoothly, their interaction partners probably agree with them. This idea was tested by researchers using video observations. Participants imagined being one out of three people in a video clip who had either a fluent conversation or a conversation in which flow was disrupted by a brief silence. Except for the silence, the videos were identical. After watching the video, participants were asked to what extent the people in the video agreed with each other. 
Participants who watched the fluent conversation rated agreement to be higher than participants watching the conversation that was disrupted by a silence, even though participants were not consciously aware of the disruption. It appears that the subjective feeling of being out of sync informs people of possible disagreements, regardless of the content of the conversation. Because people are generally so well- trained in having smooth conversations, any disruption of this flow indicates that something is wrong, either interpersonally or within the group as a whole. Consequently, people who do not talk very easily may be incorrectly understood as being less agreeable than those who have no difficulty keeping up a conversation. On a societal level, one could even imagine that a lack of conversational flow may hamper the integration of immigrants who have not completely mastered the language of their new country yet. In a similar sense, the ever- increasing number of online conversations may be disrupted by misinterpretations and anxiety that are produced by insuperable delays in the Internet connection. Keeping in mind the effects of conversational flow for feelings of belonging and validation may help one to be prepared to avoid such misunderstandings in future conversations.", "hypothesis": "People who talk less often have clearer ideas than those who talk a lot.", "label": "n"} +{"uid": "id_559", "premise": "When conversations flow We spend a large part of our daily life talking with other people and, consequently, we are very accustomed to the art of conversing. But why do we feel comfortable in conversations that have flow, but get nervous and distressed when a conversation is interrupted by unexpected silences? To answer this question we will first look at some of the effects of conversational flow. Then we will explain how flow can serve different social needs. The positive consequences of conversational flow show some similarities with the effects of processing fluency. Research has shown that processing fluency the ease with which people process information influences peoples judgments across a broad range of social dimensions. For instance, people feel that when something is easily processed, it is more true or accurate. Moreover, they have more confidence in their judgments regarding information that came to them fluently, and they like things that are easy to process more than things that are difficult to process. Research indicates that a speaker is judged to be more knowledgeable when they answer questions instantly; responding with disfluent speech markers such as uh or urn or simply remaining silent for a moment too long can destroy that positive image. One of the social needs addressed by conversational flow is the human need for synchrony to be in sync or in harmony with one another. Many studies have shown how people attempt to synchronize with their partners, by coordinating their behavior. This interpersonal coordination underlies a wide array of human activities, ranging from more complicated ones like ballroom dancing to simply walking or talking with friends. In conversations, interpersonal coordination is found when people adjust the duration of their utterances and their speech rate to one another so that they can enable turn-taking to occur, without talking over each other or experiencing awkward silences. 
Since people are very well-trained in having conversations, they are often able to take turns within milliseconds, resulting in a conversational flow of smoothly meshed behaviors. A lack of flow is characterized by interruptions, simultaneous speech or mutual silences. Avoiding these features is important for defining and maintaining interpersonal relationships. The need to belong has been identified as one of the most basic of human motivations and plays a role in many human behaviors. That conversational flow is related to belonging may be most easily illustrated by the consequences of flow disruptions. What happens when the positive experience of flow is disrupted by, for instance, a brief silence? We all know that silences can be pretty awkward, and research shows that even short disruptions in conversational flow can lead to a sharp rise in distress levels. In movies, silences are often used to signal non-compliance or confrontation (Piazza, 2006). Some researchers even argue that silencing someone is one of the most serious forms of exclusion. Group membership is of elementary importance to our wellbeing and because humans are very sensitive to signals of exclusion, a silence is generally taken as a sign of rejection. In this way, a lack of flow in a conversation may signal that our relationship is not as solid as we thought it was. Another aspect of synchrony is that people often try to validate their opinions to those of others. That is, people like to see others as having similar ideas or worldviews as they have themselves, because this informs people that they are correct and their worldviews are justified. One way in which people can justify their worldviews is by assuming that, as long as their conversations run smoothly, their interaction partners probably agree with them. This idea was tested by researchers using video observations. Participants imagined being one out of three people in a video clip who had either a fluent conversation or a conversation in which flow was disrupted by a brief silence. Except for the silence, the videos were identical. After watching the video, participants were asked to what extent the people in the video agreed with each other. Participants who watched the fluent conversation rated agreement to be higher than participants watching the conversation that was disrupted by a silence, even though participants were not consciously aware of the disruption. It appears that the subjective feeling of being out of sync informs people of possible disagreements, regardless of the content of the conversation. Because people are generally so well- trained in having smooth conversations, any disruption of this flow indicates that something is wrong, either interpersonally or within the group as a whole. Consequently, people who do not talk very easily may be incorrectly understood as being less agreeable than those who have no difficulty keeping up a conversation. On a societal level, one could even imagine that a lack of conversational flow may hamper the integration of immigrants who have not completely mastered the language of their new country yet. In a similar sense, the ever- increasing number of online conversations may be disrupted by misinterpretations and anxiety that are produced by insuperable delays in the Internet connection. 
Keeping in mind the effects of conversational flow for feelings of belonging and validation may help one to be prepared to avoid such misunderstandings in future conversations.", "hypothesis": "Video observations have often been used to assess conversational flow.", "label": "n"} +{"uid": "id_560", "premise": "When conversations flow We spend a large part of our daily life talking with other people and, consequently, we are very accustomed to the art of conversing. But why do we feel comfortable in conversations that have flow, but get nervous and distressed when a conversation is interrupted by unexpected silences? To answer this question we will first look at some of the effects of conversational flow. Then we will explain how flow can serve different social needs. The positive consequences of conversational flow show some similarities with the effects of processing fluency. Research has shown that processing fluency the ease with which people process information influences peoples judgments across a broad range of social dimensions. For instance, people feel that when something is easily processed, it is more true or accurate. Moreover, they have more confidence in their judgments regarding information that came to them fluently, and they like things that are easy to process more than things that are difficult to process. Research indicates that a speaker is judged to be more knowledgeable when they answer questions instantly; responding with disfluent speech markers such as uh or urn or simply remaining silent for a moment too long can destroy that positive image. One of the social needs addressed by conversational flow is the human need for synchrony to be in sync or in harmony with one another. Many studies have shown how people attempt to synchronize with their partners, by coordinating their behavior. This interpersonal coordination underlies a wide array of human activities, ranging from more complicated ones like ballroom dancing to simply walking or talking with friends. In conversations, interpersonal coordination is found when people adjust the duration of their utterances and their speech rate to one another so that they can enable turn-taking to occur, without talking over each other or experiencing awkward silences. Since people are very well-trained in having conversations, they are often able to take turns within milliseconds, resulting in a conversational flow of smoothly meshed behaviors. A lack of flow is characterized by interruptions, simultaneous speech or mutual silences. Avoiding these features is important for defining and maintaining interpersonal relationships. The need to belong has been identified as one of the most basic of human motivations and plays a role in many human behaviors. That conversational flow is related to belonging may be most easily illustrated by the consequences of flow disruptions. What happens when the positive experience of flow is disrupted by, for instance, a brief silence? We all know that silences can be pretty awkward, and research shows that even short disruptions in conversational flow can lead to a sharp rise in distress levels. In movies, silences are often used to signal non-compliance or confrontation (Piazza, 2006). Some researchers even argue that silencing someone is one of the most serious forms of exclusion. Group membership is of elementary importance to our wellbeing and because humans are very sensitive to signals of exclusion, a silence is generally taken as a sign of rejection. 
In this way, a lack of flow in a conversation may signal that our relationship is not as solid as we thought it was. Another aspect of synchrony is that people often try to validate their opinions to those of others. That is, people like to see others as having similar ideas or worldviews as they have themselves, because this informs people that they are correct and their worldviews are justified. One way in which people can justify their worldviews is by assuming that, as long as their conversations run smoothly, their interaction partners probably agree with them. This idea was tested by researchers using video observations. Participants imagined being one out of three people in a video clip who had either a fluent conversation or a conversation in which flow was disrupted by a brief silence. Except for the silence, the videos were identical. After watching the video, participants were asked to what extent the people in the video agreed with each other. Participants who watched the fluent conversation rated agreement to be higher than participants watching the conversation that was disrupted by a silence, even though participants were not consciously aware of the disruption. It appears that the subjective feeling of being out of sync informs people of possible disagreements, regardless of the content of the conversation. Because people are generally so well- trained in having smooth conversations, any disruption of this flow indicates that something is wrong, either interpersonally or within the group as a whole. Consequently, people who do not talk very easily may be incorrectly understood as being less agreeable than those who have no difficulty keeping up a conversation. On a societal level, one could even imagine that a lack of conversational flow may hamper the integration of immigrants who have not completely mastered the language of their new country yet. In a similar sense, the ever- increasing number of online conversations may be disrupted by misinterpretations and anxiety that are produced by insuperable delays in the Internet connection. Keeping in mind the effects of conversational flow for feelings of belonging and validation may help one to be prepared to avoid such misunderstandings in future conversations.", "hypothesis": "A quick response to a question is thought to show a lack of knowledge.", "label": "c"} +{"uid": "id_561", "premise": "When conversations flow We spend a large part of our daily life talking with other people and, consequently, we are very accustomed to the art of conversing. But why do we feel comfortable in conversations that have flow, but get nervous and distressed when a conversation is interrupted by unexpected silences? To answer this question we will first look at some of the effects of conversational flow. Then we will explain how flow can serve different social needs. The positive consequences of conversational flow show some similarities with the effects of processing fluency. Research has shown that processing fluency the ease with which people process information influences peoples judgments across a broad range of social dimensions. For instance, people feel that when something is easily processed, it is more true or accurate. Moreover, they have more confidence in their judgments regarding information that came to them fluently, and they like things that are easy to process more than things that are difficult to process. 
Research indicates that a speaker is judged to be more knowledgeable when they answer questions instantly; responding with disfluent speech markers such as uh or urn or simply remaining silent for a moment too long can destroy that positive image. One of the social needs addressed by conversational flow is the human need for synchrony to be in sync or in harmony with one another. Many studies have shown how people attempt to synchronize with their partners, by coordinating their behavior. This interpersonal coordination underlies a wide array of human activities, ranging from more complicated ones like ballroom dancing to simply walking or talking with friends. In conversations, interpersonal coordination is found when people adjust the duration of their utterances and their speech rate to one another so that they can enable turn-taking to occur, without talking over each other or experiencing awkward silences. Since people are very well-trained in having conversations, they are often able to take turns within milliseconds, resulting in a conversational flow of smoothly meshed behaviors. A lack of flow is characterized by interruptions, simultaneous speech or mutual silences. Avoiding these features is important for defining and maintaining interpersonal relationships. The need to belong has been identified as one of the most basic of human motivations and plays a role in many human behaviors. That conversational flow is related to belonging may be most easily illustrated by the consequences of flow disruptions. What happens when the positive experience of flow is disrupted by, for instance, a brief silence? We all know that silences can be pretty awkward, and research shows that even short disruptions in conversational flow can lead to a sharp rise in distress levels. In movies, silences are often used to signal non-compliance or confrontation (Piazza, 2006). Some researchers even argue that silencing someone is one of the most serious forms of exclusion. Group membership is of elementary importance to our wellbeing and because humans are very sensitive to signals of exclusion, a silence is generally taken as a sign of rejection. In this way, a lack of flow in a conversation may signal that our relationship is not as solid as we thought it was. Another aspect of synchrony is that people often try to validate their opinions to those of others. That is, people like to see others as having similar ideas or worldviews as they have themselves, because this informs people that they are correct and their worldviews are justified. One way in which people can justify their worldviews is by assuming that, as long as their conversations run smoothly, their interaction partners probably agree with them. This idea was tested by researchers using video observations. Participants imagined being one out of three people in a video clip who had either a fluent conversation or a conversation in which flow was disrupted by a brief silence. Except for the silence, the videos were identical. After watching the video, participants were asked to what extent the people in the video agreed with each other. Participants who watched the fluent conversation rated agreement to be higher than participants watching the conversation that was disrupted by a silence, even though participants were not consciously aware of the disruption. It appears that the subjective feeling of being out of sync informs people of possible disagreements, regardless of the content of the conversation. 
Because people are generally so well- trained in having smooth conversations, any disruption of this flow indicates that something is wrong, either interpersonally or within the group as a whole. Consequently, people who do not talk very easily may be incorrectly understood as being less agreeable than those who have no difficulty keeping up a conversation. On a societal level, one could even imagine that a lack of conversational flow may hamper the integration of immigrants who have not completely mastered the language of their new country yet. In a similar sense, the ever- increasing number of online conversations may be disrupted by misinterpretations and anxiety that are produced by insuperable delays in the Internet connection. Keeping in mind the effects of conversational flow for feelings of belonging and validation may help one to be prepared to avoid such misunderstandings in future conversations.", "hypothesis": "Conversation occupies much of our time.", "label": "e"} +{"uid": "id_562", "premise": "When discussing his famous character Rorschach, the antihero of Watchmen, Moore explains, I originally intended Rorschach to be a warning about the possible outcome of vigilante thinking. But an awful lot of comic readers felt his remorseless, frightening, psychotic toughness was his most appealing characteristic not quite what I was going for. Moore misunderstands his own heros appeal within this quotation: it is not that Rorschach is willing to break little fingers to extract information, or that he is happy to use violence, that makes him laudable. The Comedian, another superhero within the alternative world of Watchmen, is a thug who has won no great fan base; his remorselessness (killing a pregnant Vietnamese woman), frightening (attempt at rape), psychotic toughness (one only has to look at the panels of him shooting out into a crowd to witness this) is repulsive, not winning. This is because The Comedian has no purpose: he is a nihilist, and as a nihilist, denies any potential meaning to his fellow man, and so to the comics reader. Everything to him is a joke, including his self, and consequently his own death could be seen as just another gag. Rorschach, on the other hand, does believe in something: he questions if his fight for justice is futile? then instantly corrects himself, stating there is good and evil, and evil must be punished. Even in the face of Armageddon I shall not compromise in this. Jacob Held, in his essay comparing Rorschachs motivation with Kantian ethics, put forward the postulation perhaps our dignity is found in acting as if the world were just, even when it is clearly not. Rorschach then causes pain in others not because he is a sadist, but because he feels the need to punish wrong and to uphold the good, and though he cannot make the world just, he can act according to his sense of justice - through the use of violence.", "hypothesis": "The Comedian is a misnomer - the character that goes by this title should not, logically, be called this.", "label": "c"} +{"uid": "id_563", "premise": "When each year an average of 500,000 immigrants entered the country the Home Office calculated that the fiscal benefit of this level of inward migration was 2.5 billion a year. This calculation was used extensively by the government of the day to support their immigration policies. The findings of the Home Office stood out against the findings of other western nations which found the benefits of large-scale inward migration to be so small as to be close to zero. 
The difference in the findings arose because the Home Office figure was based only on the effect of inward migration on the country's total Gross Domestic Product (GDP), while the other studies measured the effect on GDP per head. However, the Home Office calculation was obviously flawed, and they have since stopped using it, because immigration manifestly increases both the total GDP and the population. While the overall effect of inward migration may be negligible nationally in fiscal terms, the indigenous low paid and low skilled stand to lose out because as a consequence of the inward migration they face greater competition for work. Some employers have much to gain from the improved supply of labour and savings made from not having to train young people.", "hypothesis": "The authors intended meaning when he wrote However, the Home Office calculation was obviously flawed, and they have since stopped using it, because immigration manifestly increases both the total GDP and the population would be better served if instead of immigration he wrote inward migration.", "label": "e"} +{"uid": "id_564", "premise": "When each year an average of 500,000 immigrants entered the country the Home Office calculated that the fiscal benefit of this level of inward migration was 2.5 billion a year. This calculation was used extensively by the government of the day to support their immigration policies. The findings of the Home Office stood out against the findings of other western nations which found the benefits of large-scale inward migration to be so small as to be close to zero. The difference in the findings arose because the Home Office figure was based only on the effect of inward migration on the country's total Gross Domestic Product (GDP), while the other studies measured the effect on GDP per head. However, the Home Office calculation was obviously flawed, and they have since stopped using it, because immigration manifestly increases both the total GDP and the population. While the overall effect of inward migration may be negligible nationally in fiscal terms, the indigenous low paid and low skilled stand to lose out because as a consequence of the inward migration they face greater competition for work. Some employers have much to gain from the improved supply of labour and savings made from not having to train young people.", "hypothesis": "There are no clear winners in an economy experiencing large-scale inward migration.", "label": "c"} +{"uid": "id_565", "premise": "When each year an average of 500,000 immigrants entered the country the Home Office calculated that the fiscal benefit of this level of inward migration was 2.5 billion a year. This calculation was used extensively by the government of the day to support their immigration policies. The findings of the Home Office stood out against the findings of other western nations which found the benefits of large-scale inward migration to be so small as to be close to zero. The difference in the findings arose because the Home Office figure was based only on the effect of inward migration on the country's total Gross Domestic Product (GDP), while the other studies measured the effect on GDP per head. However, the Home Office calculation was obviously flawed, and they have since stopped using it, because immigration manifestly increases both the total GDP and the population. 
While the overall effect of inward migration may be negligible nationally in fiscal terms, the indigenous low paid and low skilled stand to lose out because as a consequence of the inward migration they face greater competition for work. Some employers have much to gain from the improved supply of labour and savings made from not having to train young people.", "hypothesis": "It is no longer the case that half a million immigrants enter the country.", "label": "e"} +{"uid": "id_566", "premise": "When evolution runs backwards Evolution isnt supposed to run backwards - yet an increasing number of examples show that it does and that it can sometimes represent the future of a species. The description of any animal as an evolutionary throwback is controversial. For the better part of a century, most biologists have been reluctant to use those words, mindful of a principle of evolution that says evolution cannot run backwards. But as more and more examples come to light and modern genetics enters the scene, that principle is having to be rewritten. Not only are evolutionary throwbacks possible, they sometimes play an important role in the forward march of evolution. The technical term for an evolutionary throwback is an atavism, from the Latin atavus, meaning forefather. The word has ugly connotations thanks largely to Cesare Lombroso, a 19th-century Italian medic who argued that criminals were born not made and could be identified by certain physical features that were throwbacks to a primitive, sub-human state. While Lombroso was measuring criminals, a Belgian palaeontologist called Louis Dollo was studying fossil records and coming to the opposite conclusion. In 1890 he proposed that evolution was irreversible: that an organism is unable to return, even partially, to a previous stage already realised in the ranks of its ancestors. Early 20th-century biologists came to a similar conclusion, though they qualified it in terms of probability, stating that there is no reason why evolution cannot run backwards -it is just very unlikely. And so the idea of irreversibility in evolution stuck and came to be known as Dollos law. If Dollos law is right, atavisms should occur only very rarely, if at all. Yet almost since the idea took root, exceptions have been cropping up. In 1919, for example, a humpback whale with a pair of leglike appendages over a metre long, complete with a full set of limb bones, was caught off Vancouver Island in Canada. Explorer Roy Chapman Andrews argued at the time that the whale must be a throwback to a land-living ancestor. I can see no other explanation, he wrote in 1921. Since then, so many other examples have been discovered that it no longer makes sense to say that evolution is as good as irreversible. And this poses a puzzle: how can characteristics that disappeared millions of years ago suddenly reappear? In 1994, Rudolf Raff and colleagues at Indiana University in the USA decided to use genetics to put a number on the probability of evolution going into reverse. They reasoned that while some evolutionary changes involve the loss of genes and are therefore irreversible, others may be the result of genes being switched off. If these silent genes are somehow switched back on, they argued, longlost traits could reappear. Raffs team went on to calculate the likelihood of it happening. Silent genes accumulate random mutations, they reasoned, eventually rendering them useless. So how long can a gene survive in a species if it is no longer used? 
The team calculated that there is a good chance of silent genes surviving for up to 6 million years in at least a few individuals in a population, and that some might survive as long as 10 million years. In other words, throwbacks are possible, but only to the relatively recent evolutionary past. As a possible example, the team pointed to the mole salamanders of Mexico and California. Like most amphibians these begin life in a juvenile tadpole state, then metamorphose into the adult form except for one species, the axolotl, which famously lives its entire life as a juvenile. The simplest explanation for this is that the axolotl lineage alone lost the ability to metamorphose, while others retained it. From a detailed analysis of the salamanders family tree, however, it is clear that the other lineages evolved from an ancestor that itself had lost the ability to metamorphose. In other words, metamorphosis in mole salamanders is an atavism. The salamander example fits with Raffs 10million-year time frame. 82More recently, however, examples have been reported that break the time limit, suggesting that silent genes may not be the whole story. In a paper published last year, biologist Gunter Wagner of Yale University reported some work on the evolutionary history of a group of South American lizards called Bachia. Many of these have minuscule limbs; some look more like snakes than lizards and a few have completely lost the toes on their hind limbs. Other species, however, sport up to four toes on their hind legs. The simplest explanation is that the toed lineages never lost their toes, but Wagner begs to differ. According to his analysis of the Bachia family tree, the toed species re-evolved toes from toeless ancestors and, what is more, digit loss and gain has occurred on more than one occasion over tens of millions of years. So whats going on? One possibility is that these traits are lost and then simply reappear, in much the same way that similar structures can independently arise in unrelated species, such as the dorsal fins of sharks and killer whales. Another more intriguing possibility is that the genetic information needed to make toes somehow survived for tens or perhaps hundreds of millions of years in the lizards and was reactivated. These atavistic traits provided an advantage and spread through the population, effectively reversing evolution. But if silent genes degrade within 6 to million years, how can long-lost traits be reactivated over longer timescales? The answer may lie in the womb. Early embryos of many species develop ancestral features. Snake embryos, for example, sprout hind limb buds. Later in development these features disappear thanks to developmental programs that say lose the leg. If for any reason this does not happen, the ancestral feature may not disappear, leading to an atavism.", "hypothesis": "Evolutionary throwbacks might be caused by developmental problems in the womb.", "label": "e"} +{"uid": "id_567", "premise": "When evolution runs backwards Evolution isnt supposed to run backwards - yet an increasing number of examples show that it does and that it can sometimes represent the future of a species. The description of any animal as an evolutionary throwback is controversial. For the better part of a century, most biologists have been reluctant to use those words, mindful of a principle of evolution that says evolution cannot run backwards. 
But as more and more examples come to light and modern genetics enters the scene, that principle is having to be rewritten. Not only are evolutionary throwbacks possible, they sometimes play an important role in the forward march of evolution. The technical term for an evolutionary throwback is an atavism, from the Latin atavus, meaning forefather. The word has ugly connotations thanks largely to Cesare Lombroso, a 19th-century Italian medic who argued that criminals were born not made and could be identified by certain physical features that were throwbacks to a primitive, sub-human state. While Lombroso was measuring criminals, a Belgian palaeontologist called Louis Dollo was studying fossil records and coming to the opposite conclusion. In 1890 he proposed that evolution was irreversible: that an organism is unable to return, even partially, to a previous stage already realised in the ranks of its ancestors. Early 20th-century biologists came to a similar conclusion, though they qualified it in terms of probability, stating that there is no reason why evolution cannot run backwards -it is just very unlikely. And so the idea of irreversibility in evolution stuck and came to be known as Dollos law. If Dollos law is right, atavisms should occur only very rarely, if at all. Yet almost since the idea took root, exceptions have been cropping up. In 1919, for example, a humpback whale with a pair of leglike appendages over a metre long, complete with a full set of limb bones, was caught off Vancouver Island in Canada. Explorer Roy Chapman Andrews argued at the time that the whale must be a throwback to a land-living ancestor. I can see no other explanation, he wrote in 1921. Since then, so many other examples have been discovered that it no longer makes sense to say that evolution is as good as irreversible. And this poses a puzzle: how can characteristics that disappeared millions of years ago suddenly reappear? In 1994, Rudolf Raff and colleagues at Indiana University in the USA decided to use genetics to put a number on the probability of evolution going into reverse. They reasoned that while some evolutionary changes involve the loss of genes and are therefore irreversible, others may be the result of genes being switched off. If these silent genes are somehow switched back on, they argued, longlost traits could reappear. Raffs team went on to calculate the likelihood of it happening. Silent genes accumulate random mutations, they reasoned, eventually rendering them useless. So how long can a gene survive in a species if it is no longer used? The team calculated that there is a good chance of silent genes surviving for up to 6 million years in at least a few individuals in a population, and that some might survive as long as 10 million years. In other words, throwbacks are possible, but only to the relatively recent evolutionary past. As a possible example, the team pointed to the mole salamanders of Mexico and California. Like most amphibians these begin life in a juvenile tadpole state, then metamorphose into the adult form except for one species, the axolotl, which famously lives its entire life as a juvenile. The simplest explanation for this is that the axolotl lineage alone lost the ability to metamorphose, while others retained it. From a detailed analysis of the salamanders family tree, however, it is clear that the other lineages evolved from an ancestor that itself had lost the ability to metamorphose. In other words, metamorphosis in mole salamanders is an atavism. 
The salamander example fits with Raffs 10million-year time frame. 82More recently, however, examples have been reported that break the time limit, suggesting that silent genes may not be the whole story. In a paper published last year, biologist Gunter Wagner of Yale University reported some work on the evolutionary history of a group of South American lizards called Bachia. Many of these have minuscule limbs; some look more like snakes than lizards and a few have completely lost the toes on their hind limbs. Other species, however, sport up to four toes on their hind legs. The simplest explanation is that the toed lineages never lost their toes, but Wagner begs to differ. According to his analysis of the Bachia family tree, the toed species re-evolved toes from toeless ancestors and, what is more, digit loss and gain has occurred on more than one occasion over tens of millions of years. So whats going on? One possibility is that these traits are lost and then simply reappear, in much the same way that similar structures can independently arise in unrelated species, such as the dorsal fins of sharks and killer whales. Another more intriguing possibility is that the genetic information needed to make toes somehow survived for tens or perhaps hundreds of millions of years in the lizards and was reactivated. These atavistic traits provided an advantage and spread through the population, effectively reversing evolution. But if silent genes degrade within 6 to million years, how can long-lost traits be reactivated over longer timescales? The answer may lie in the womb. Early embryos of many species develop ancestral features. Snake embryos, for example, sprout hind limb buds. Later in development these features disappear thanks to developmental programs that say lose the leg. If for any reason this does not happen, the ancestral feature may not disappear, leading to an atavism.", "hypothesis": "The temporary occurence of longlost traits in embryos is rare.", "label": "c"} +{"uid": "id_568", "premise": "When evolution runs backwards Evolution isnt supposed to run backwards - yet an increasing number of examples show that it does and that it can sometimes represent the future of a species. The description of any animal as an evolutionary throwback is controversial. For the better part of a century, most biologists have been reluctant to use those words, mindful of a principle of evolution that says evolution cannot run backwards. But as more and more examples come to light and modern genetics enters the scene, that principle is having to be rewritten. Not only are evolutionary throwbacks possible, they sometimes play an important role in the forward march of evolution. The technical term for an evolutionary throwback is an atavism, from the Latin atavus, meaning forefather. The word has ugly connotations thanks largely to Cesare Lombroso, a 19th-century Italian medic who argued that criminals were born not made and could be identified by certain physical features that were throwbacks to a primitive, sub-human state. While Lombroso was measuring criminals, a Belgian palaeontologist called Louis Dollo was studying fossil records and coming to the opposite conclusion. In 1890 he proposed that evolution was irreversible: that an organism is unable to return, even partially, to a previous stage already realised in the ranks of its ancestors. 
Early 20th-century biologists came to a similar conclusion, though they qualified it in terms of probability, stating that there is no reason why evolution cannot run backwards -it is just very unlikely. And so the idea of irreversibility in evolution stuck and came to be known as Dollos law. If Dollos law is right, atavisms should occur only very rarely, if at all. Yet almost since the idea took root, exceptions have been cropping up. In 1919, for example, a humpback whale with a pair of leglike appendages over a metre long, complete with a full set of limb bones, was caught off Vancouver Island in Canada. Explorer Roy Chapman Andrews argued at the time that the whale must be a throwback to a land-living ancestor. I can see no other explanation, he wrote in 1921. Since then, so many other examples have been discovered that it no longer makes sense to say that evolution is as good as irreversible. And this poses a puzzle: how can characteristics that disappeared millions of years ago suddenly reappear? In 1994, Rudolf Raff and colleagues at Indiana University in the USA decided to use genetics to put a number on the probability of evolution going into reverse. They reasoned that while some evolutionary changes involve the loss of genes and are therefore irreversible, others may be the result of genes being switched off. If these silent genes are somehow switched back on, they argued, longlost traits could reappear. Raffs team went on to calculate the likelihood of it happening. Silent genes accumulate random mutations, they reasoned, eventually rendering them useless. So how long can a gene survive in a species if it is no longer used? The team calculated that there is a good chance of silent genes surviving for up to 6 million years in at least a few individuals in a population, and that some might survive as long as 10 million years. In other words, throwbacks are possible, but only to the relatively recent evolutionary past. As a possible example, the team pointed to the mole salamanders of Mexico and California. Like most amphibians these begin life in a juvenile tadpole state, then metamorphose into the adult form except for one species, the axolotl, which famously lives its entire life as a juvenile. The simplest explanation for this is that the axolotl lineage alone lost the ability to metamorphose, while others retained it. From a detailed analysis of the salamanders family tree, however, it is clear that the other lineages evolved from an ancestor that itself had lost the ability to metamorphose. In other words, metamorphosis in mole salamanders is an atavism. The salamander example fits with Raffs 10million-year time frame. 82More recently, however, examples have been reported that break the time limit, suggesting that silent genes may not be the whole story. In a paper published last year, biologist Gunter Wagner of Yale University reported some work on the evolutionary history of a group of South American lizards called Bachia. Many of these have minuscule limbs; some look more like snakes than lizards and a few have completely lost the toes on their hind limbs. Other species, however, sport up to four toes on their hind legs. The simplest explanation is that the toed lineages never lost their toes, but Wagner begs to differ. According to his analysis of the Bachia family tree, the toed species re-evolved toes from toeless ancestors and, what is more, digit loss and gain has occurred on more than one occasion over tens of millions of years. So whats going on? 
One possibility is that these traits are lost and then simply reappear, in much the same way that similar structures can independently arise in unrelated species, such as the dorsal fins of sharks and killer whales. Another more intriguing possibility is that the genetic information needed to make toes somehow survived for tens or perhaps hundreds of millions of years in the lizards and was reactivated. These atavistic traits provided an advantage and spread through the population, effectively reversing evolution. But if silent genes degrade within 6 to million years, how can long-lost traits be reactivated over longer timescales? The answer may lie in the womb. Early embryos of many species develop ancestral features. Snake embryos, for example, sprout hind limb buds. Later in development these features disappear thanks to developmental programs that say lose the leg. If for any reason this does not happen, the ancestral feature may not disappear, leading to an atavism.", "hypothesis": "Wagner was the first person to do research on South American lizards.", "label": "n"} +{"uid": "id_569", "premise": "When evolution runs backwards Evolution isnt supposed to run backwards - yet an increasing number of examples show that it does and that it can sometimes represent the future of a species. The description of any animal as an evolutionary throwback is controversial. For the better part of a century, most biologists have been reluctant to use those words, mindful of a principle of evolution that says evolution cannot run backwards. But as more and more examples come to light and modern genetics enters the scene, that principle is having to be rewritten. Not only are evolutionary throwbacks possible, they sometimes play an important role in the forward march of evolution. The technical term for an evolutionary throwback is an atavism, from the Latin atavus, meaning forefather. The word has ugly connotations thanks largely to Cesare Lombroso, a 19th-century Italian medic who argued that criminals were born not made and could be identified by certain physical features that were throwbacks to a primitive, sub-human state. While Lombroso was measuring criminals, a Belgian palaeontologist called Louis Dollo was studying fossil records and coming to the opposite conclusion. In 1890 he proposed that evolution was irreversible: that an organism is unable to return, even partially, to a previous stage already realised in the ranks of its ancestors. Early 20th-century biologists came to a similar conclusion, though they qualified it in terms of probability, stating that there is no reason why evolution cannot run backwards -it is just very unlikely. And so the idea of irreversibility in evolution stuck and came to be known as Dollos law. If Dollos law is right, atavisms should occur only very rarely, if at all. Yet almost since the idea took root, exceptions have been cropping up. In 1919, for example, a humpback whale with a pair of leglike appendages over a metre long, complete with a full set of limb bones, was caught off Vancouver Island in Canada. Explorer Roy Chapman Andrews argued at the time that the whale must be a throwback to a land-living ancestor. I can see no other explanation, he wrote in 1921. Since then, so many other examples have been discovered that it no longer makes sense to say that evolution is as good as irreversible. And this poses a puzzle: how can characteristics that disappeared millions of years ago suddenly reappear? 
In 1994, Rudolf Raff and colleagues at Indiana University in the USA decided to use genetics to put a number on the probability of evolution going into reverse. They reasoned that while some evolutionary changes involve the loss of genes and are therefore irreversible, others may be the result of genes being switched off. If these silent genes are somehow switched back on, they argued, longlost traits could reappear. Raffs team went on to calculate the likelihood of it happening. Silent genes accumulate random mutations, they reasoned, eventually rendering them useless. So how long can a gene survive in a species if it is no longer used? The team calculated that there is a good chance of silent genes surviving for up to 6 million years in at least a few individuals in a population, and that some might survive as long as 10 million years. In other words, throwbacks are possible, but only to the relatively recent evolutionary past. As a possible example, the team pointed to the mole salamanders of Mexico and California. Like most amphibians these begin life in a juvenile tadpole state, then metamorphose into the adult form except for one species, the axolotl, which famously lives its entire life as a juvenile. The simplest explanation for this is that the axolotl lineage alone lost the ability to metamorphose, while others retained it. From a detailed analysis of the salamanders family tree, however, it is clear that the other lineages evolved from an ancestor that itself had lost the ability to metamorphose. In other words, metamorphosis in mole salamanders is an atavism. The salamander example fits with Raffs 10million-year time frame. 82More recently, however, examples have been reported that break the time limit, suggesting that silent genes may not be the whole story. In a paper published last year, biologist Gunter Wagner of Yale University reported some work on the evolutionary history of a group of South American lizards called Bachia. Many of these have minuscule limbs; some look more like snakes than lizards and a few have completely lost the toes on their hind limbs. Other species, however, sport up to four toes on their hind legs. The simplest explanation is that the toed lineages never lost their toes, but Wagner begs to differ. According to his analysis of the Bachia family tree, the toed species re-evolved toes from toeless ancestors and, what is more, digit loss and gain has occurred on more than one occasion over tens of millions of years. So whats going on? One possibility is that these traits are lost and then simply reappear, in much the same way that similar structures can independently arise in unrelated species, such as the dorsal fins of sharks and killer whales. Another more intriguing possibility is that the genetic information needed to make toes somehow survived for tens or perhaps hundreds of millions of years in the lizards and was reactivated. These atavistic traits provided an advantage and spread through the population, effectively reversing evolution. But if silent genes degrade within 6 to million years, how can long-lost traits be reactivated over longer timescales? The answer may lie in the womb. Early embryos of many species develop ancestral features. Snake embryos, for example, sprout hind limb buds. Later in development these features disappear thanks to developmental programs that say lose the leg. 
If for any reason this does not happen, the ancestral feature may not disappear, leading to an atavism.", "hypothesis": "Wagner believes that Bachia lizards with toes had toeless ancestors.", "label": "e"} +{"uid": "id_570", "premise": "When evolution runs backwards. Evolution isnt supposed to run backwards yet an increasing number of examples show that it does and that it can sometimes represent the future of a species The description of any animal as an evolutionary throwback is controversial. For the better part of a century, most biologists have been reluctant to use those words, mindful of a principle of evolution that says evolution cannot run backwards. But as more and more examples come to light and modern genetics enters the scene, that principle is having to be rewritten. Not only are evolutionary throwbacks possible, they sometimes play an important role in the forward march of evolution. The technical term for an evolutionary throwback is an atavism, from the Latin atavus, meaning forefather. The word has ugly connotations thanks largely to Cesare Lombroso, a 19th-century Italian medic who argued that criminals were born not made and could be identified by certain physical features that were throwbacks to a primitive, sub-human state. While Lombroso was measuring criminals, a Belgian palaeontologist called Louis Dollo was studying fossil records and coming to the opposite conclusion. In 1890 he proposed that evolution was irreversible: that an organism is unable to return, even partially, to a previous stage already realised in the ranks of its ancestors. Early 20th-century biologists came to a similar conclusion, though they qualified it in terms of probability, stating that there is no reason why evolution cannot run backwards it is just very unlikely. And so the idea of irreversibility in evolution stuck and came to be known as Dollos law. If Dollos law is right, atavisms should occur only very rarely, if at all. Yet almost since the idea took root, exceptions have been cropping up. In 1919, for example, a humpback whale with a pair of leg-like appendages over a metre long, complete with a full set of limb bones, was caught off Vancouver Island in Canada. Explorer Roy Chapman Andrews argued at the time that the whale must be a throwback to a land-living ancestor. I can see no other explanation, he wrote in 1921. Since then, so many other examples have been discovered that it no longer makes sense to say that evolution is as good as irreversible. And this poses a puzzle: how can characteristics that disappeared millions of years ago suddenly reappear? In 1994, Rudolf Raff and colleagues at Indiana University in the USA decided to use genetics to put a number on the probability of evolution going into reverse. They reasoned that while some evolutionary changes involve the loss of genes and are therefore irreversible, others may be the result of genes being switched off. If these silent genes are somehow switched back on, they argued, long-lost traits could reappear. Raffs team went on to calculate the likelihood of it happening. Silent genes accumulate random mutations, they reasoned, eventually rendering them useless. So how long can a gene survive in a species if it is no longer used? The team calculated that there is a good chance of silent genes surviving for up to 6 million years in at least a few individuals in a population, and that some might survive as long as 10 million years. 
In other words, throwbacks are possible, but only to the relatively recent evolutionary past. As a possible example, the team pointed to the mole salamanders of Mexico and California. Like most amphibians these begin life in a juvenile tadpole state, then metamorphose into the adult form except for one species, the axolotl, which famously lives its entire life as a juvenile. The simplest explanation for this is that the axolotl lineage alone lost the ability to metamorphose, while others retained it. From a detailed analysis of the salamanders family tree, however, it is clear that the other lineages evolved from an ancestor that itself had lost the ability to metamorphose. In other words, metamorphosis in mole salamanders is an atavism. The salamander example fits with Raffs 10-million-year time frame. More recently, however, examples have been reported that break the time limit, suggesting that silent genes may not be the whole story. In a paper published last year, biologist Gunter Wagner of Yale University reported some work on the evolutionary history of a group of South American lizards called Bachia. Many of these have minuscule limbs; some look more like snakes than lizards and a few have completely lost the toes on their hind limbs. Other species, however, sport up to four toes on their hind legs. The simplest explanation is that the toed lineages never lost their toes, but Wagner begs to differ. According to his analysis of the Bachia family tree, the toed species re-evolved toes from toeless ancestors and, what is more, digit loss and gain has occurred on more than one occasion over tens of millions of years. So whats going on? One possibility is that these traits are lost and then simply reappear, in much the same way that similar structures can independently arise in unrelated species, such as the dorsal fins of sharks and killer whales. Another more intriguing possibility is that the genetic information needed to make toes somehow survived for tens or perhaps hundreds of millions of years in the lizards and was reactivated. These atavistic traits provided an advantage and spread through the population, effectively reversing evolution. But if silent genes degrade within 6 to 10 million years, how can long-lost traits be reactivated over longer timescales? The answer may lie in the womb. Early embryos of many species develop ancestral features. Snake embryos, for example, sprout hind limb buds. Later in development these features disappear thanks to developmental programs that say lose the leg. If for any reason this does not happen, the ancestral feature may not disappear, leading to an atavism.", "hypothesis": "Evolutionary throwbacks might be caused by developmental problems in the womb.", "label": "e"} +{"uid": "id_571", "premise": "When evolution runs backwards. Evolution isnt supposed to run backwards yet an increasing number of examples show that it does and that it can sometimes represent the future of a species The description of any animal as an evolutionary throwback is controversial. For the better part of a century, most biologists have been reluctant to use those words, mindful of a principle of evolution that says evolution cannot run backwards. But as more and more examples come to light and modern genetics enters the scene, that principle is having to be rewritten. Not only are evolutionary throwbacks possible, they sometimes play an important role in the forward march of evolution. 
The technical term for an evolutionary throwback is an atavism, from the Latin atavus, meaning forefather. The word has ugly connotations thanks largely to Cesare Lombroso, a 19th-century Italian medic who argued that criminals were born not made and could be identified by certain physical features that were throwbacks to a primitive, sub-human state. While Lombroso was measuring criminals, a Belgian palaeontologist called Louis Dollo was studying fossil records and coming to the opposite conclusion. In 1890 he proposed that evolution was irreversible: that an organism is unable to return, even partially, to a previous stage already realised in the ranks of its ancestors. Early 20th-century biologists came to a similar conclusion, though they qualified it in terms of probability, stating that there is no reason why evolution cannot run backwards it is just very unlikely. And so the idea of irreversibility in evolution stuck and came to be known as Dollos law. If Dollos law is right, atavisms should occur only very rarely, if at all. Yet almost since the idea took root, exceptions have been cropping up. In 1919, for example, a humpback whale with a pair of leg-like appendages over a metre long, complete with a full set of limb bones, was caught off Vancouver Island in Canada. Explorer Roy Chapman Andrews argued at the time that the whale must be a throwback to a land-living ancestor. I can see no other explanation, he wrote in 1921. Since then, so many other examples have been discovered that it no longer makes sense to say that evolution is as good as irreversible. And this poses a puzzle: how can characteristics that disappeared millions of years ago suddenly reappear? In 1994, Rudolf Raff and colleagues at Indiana University in the USA decided to use genetics to put a number on the probability of evolution going into reverse. They reasoned that while some evolutionary changes involve the loss of genes and are therefore irreversible, others may be the result of genes being switched off. If these silent genes are somehow switched back on, they argued, long-lost traits could reappear. Raffs team went on to calculate the likelihood of it happening. Silent genes accumulate random mutations, they reasoned, eventually rendering them useless. So how long can a gene survive in a species if it is no longer used? The team calculated that there is a good chance of silent genes surviving for up to 6 million years in at least a few individuals in a population, and that some might survive as long as 10 million years. In other words, throwbacks are possible, but only to the relatively recent evolutionary past. As a possible example, the team pointed to the mole salamanders of Mexico and California. Like most amphibians these begin life in a juvenile tadpole state, then metamorphose into the adult form except for one species, the axolotl, which famously lives its entire life as a juvenile. The simplest explanation for this is that the axolotl lineage alone lost the ability to metamorphose, while others retained it. From a detailed analysis of the salamanders family tree, however, it is clear that the other lineages evolved from an ancestor that itself had lost the ability to metamorphose. In other words, metamorphosis in mole salamanders is an atavism. The salamander example fits with Raffs 10-million-year time frame. More recently, however, examples have been reported that break the time limit, suggesting that silent genes may not be the whole story. 
In a paper published last year, biologist Gunter Wagner of Yale University reported some work on the evolutionary history of a group of South American lizards called Bachia. Many of these have minuscule limbs; some look more like snakes than lizards and a few have completely lost the toes on their hind limbs. Other species, however, sport up to four toes on their hind legs. The simplest explanation is that the toed lineages never lost their toes, but Wagner begs to differ. According to his analysis of the Bachia family tree, the toed species re-evolved toes from toeless ancestors and, what is more, digit loss and gain has occurred on more than one occasion over tens of millions of years. So whats going on? One possibility is that these traits are lost and then simply reappear, in much the same way that similar structures can independently arise in unrelated species, such as the dorsal fins of sharks and killer whales. Another more intriguing possibility is that the genetic information needed to make toes somehow survived for tens or perhaps hundreds of millions of years in the lizards and was reactivated. These atavistic traits provided an advantage and spread through the population, effectively reversing evolution. But if silent genes degrade within 6 to 10 million years, how can long-lost traits be reactivated over longer timescales? The answer may lie in the womb. Early embryos of many species develop ancestral features. Snake embryos, for example, sprout hind limb buds. Later in development these features disappear thanks to developmental programs that say lose the leg. If for any reason this does not happen, the ancestral feature may not disappear, leading to an atavism.", "hypothesis": "The temporary occurrence of long-lost traits in embryos is rare.", "label": "c"} +{"uid": "id_572", "premise": "When evolution runs backwards. Evolution isnt supposed to run backwards yet an increasing number of examples show that it does and that it can sometimes represent the future of a species The description of any animal as an evolutionary throwback is controversial. For the better part of a century, most biologists have been reluctant to use those words, mindful of a principle of evolution that says evolution cannot run backwards. But as more and more examples come to light and modern genetics enters the scene, that principle is having to be rewritten. Not only are evolutionary throwbacks possible, they sometimes play an important role in the forward march of evolution. The technical term for an evolutionary throwback is an atavism, from the Latin atavus, meaning forefather. The word has ugly connotations thanks largely to Cesare Lombroso, a 19th-century Italian medic who argued that criminals were born not made and could be identified by certain physical features that were throwbacks to a primitive, sub-human state. While Lombroso was measuring criminals, a Belgian palaeontologist called Louis Dollo was studying fossil records and coming to the opposite conclusion. In 1890 he proposed that evolution was irreversible: that an organism is unable to return, even partially, to a previous stage already realised in the ranks of its ancestors. Early 20th-century biologists came to a similar conclusion, though they qualified it in terms of probability, stating that there is no reason why evolution cannot run backwards it is just very unlikely. And so the idea of irreversibility in evolution stuck and came to be known as Dollos law. 
If Dollos law is right, atavisms should occur only very rarely, if at all. Yet almost since the idea took root, exceptions have been cropping up. In 1919, for example, a humpback whale with a pair of leg-like appendages over a metre long, complete with a full set of limb bones, was caught off Vancouver Island in Canada. Explorer Roy Chapman Andrews argued at the time that the whale must be a throwback to a land-living ancestor. I can see no other explanation, he wrote in 1921. Since then, so many other examples have been discovered that it no longer makes sense to say that evolution is as good as irreversible. And this poses a puzzle: how can characteristics that disappeared millions of years ago suddenly reappear? In 1994, Rudolf Raff and colleagues at Indiana University in the USA decided to use genetics to put a number on the probability of evolution going into reverse. They reasoned that while some evolutionary changes involve the loss of genes and are therefore irreversible, others may be the result of genes being switched off. If these silent genes are somehow switched back on, they argued, long-lost traits could reappear. Raffs team went on to calculate the likelihood of it happening. Silent genes accumulate random mutations, they reasoned, eventually rendering them useless. So how long can a gene survive in a species if it is no longer used? The team calculated that there is a good chance of silent genes surviving for up to 6 million years in at least a few individuals in a population, and that some might survive as long as 10 million years. In other words, throwbacks are possible, but only to the relatively recent evolutionary past. As a possible example, the team pointed to the mole salamanders of Mexico and California. Like most amphibians these begin life in a juvenile tadpole state, then metamorphose into the adult form except for one species, the axolotl, which famously lives its entire life as a juvenile. The simplest explanation for this is that the axolotl lineage alone lost the ability to metamorphose, while others retained it. From a detailed analysis of the salamanders family tree, however, it is clear that the other lineages evolved from an ancestor that itself had lost the ability to metamorphose. In other words, metamorphosis in mole salamanders is an atavism. The salamander example fits with Raffs 10-million-year time frame. More recently, however, examples have been reported that break the time limit, suggesting that silent genes may not be the whole story. In a paper published last year, biologist Gunter Wagner of Yale University reported some work on the evolutionary history of a group of South American lizards called Bachia. Many of these have minuscule limbs; some look more like snakes than lizards and a few have completely lost the toes on their hind limbs. Other species, however, sport up to four toes on their hind legs. The simplest explanation is that the toed lineages never lost their toes, but Wagner begs to differ. According to his analysis of the Bachia family tree, the toed species re-evolved toes from toeless ancestors and, what is more, digit loss and gain has occurred on more than one occasion over tens of millions of years. So whats going on? One possibility is that these traits are lost and then simply reappear, in much the same way that similar structures can independently arise in unrelated species, such as the dorsal fins of sharks and killer whales. 
Another more intriguing possibility is that the genetic information needed to make toes somehow survived for tens or perhaps hundreds of millions of years in the lizards and was reactivated. These atavistic traits provided an advantage and spread through the population, effectively reversing evolution. But if silent genes degrade within 6 to 10 million years, how can long-lost traits be reactivated over longer timescales? The answer may lie in the womb. Early embryos of many species develop ancestral features. Snake embryos, for example, sprout hind limb buds. Later in development these features disappear thanks to developmental programs that say lose the leg. If for any reason this does not happen, the ancestral feature may not disappear, leading to an atavism.", "hypothesis": "Wagner believes that Bachia lizards with toes had toeless ancestors.", "label": "e"} +{"uid": "id_573", "premise": "When evolution runs backwards. Evolution isnt supposed to run backwards yet an increasing number of examples show that it does and that it can sometimes represent the future of a species The description of any animal as an evolutionary throwback is controversial. For the better part of a century, most biologists have been reluctant to use those words, mindful of a principle of evolution that says evolution cannot run backwards. But as more and more examples come to light and modern genetics enters the scene, that principle is having to be rewritten. Not only are evolutionary throwbacks possible, they sometimes play an important role in the forward march of evolution. The technical term for an evolutionary throwback is an atavism, from the Latin atavus, meaning forefather. The word has ugly connotations thanks largely to Cesare Lombroso, a 19th-century Italian medic who argued that criminals were born not made and could be identified by certain physical features that were throwbacks to a primitive, sub-human state. While Lombroso was measuring criminals, a Belgian palaeontologist called Louis Dollo was studying fossil records and coming to the opposite conclusion. In 1890 he proposed that evolution was irreversible: that an organism is unable to return, even partially, to a previous stage already realised in the ranks of its ancestors. Early 20th-century biologists came to a similar conclusion, though they qualified it in terms of probability, stating that there is no reason why evolution cannot run backwards it is just very unlikely. And so the idea of irreversibility in evolution stuck and came to be known as Dollos law. If Dollos law is right, atavisms should occur only very rarely, if at all. Yet almost since the idea took root, exceptions have been cropping up. In 1919, for example, a humpback whale with a pair of leg-like appendages over a metre long, complete with a full set of limb bones, was caught off Vancouver Island in Canada. Explorer Roy Chapman Andrews argued at the time that the whale must be a throwback to a land-living ancestor. I can see no other explanation, he wrote in 1921. Since then, so many other examples have been discovered that it no longer makes sense to say that evolution is as good as irreversible. And this poses a puzzle: how can characteristics that disappeared millions of years ago suddenly reappear? In 1994, Rudolf Raff and colleagues at Indiana University in the USA decided to use genetics to put a number on the probability of evolution going into reverse. 
They reasoned that while some evolutionary changes involve the loss of genes and are therefore irreversible, others may be the result of genes being switched off. If these silent genes are somehow switched back on, they argued, long-lost traits could reappear. Raffs team went on to calculate the likelihood of it happening. Silent genes accumulate random mutations, they reasoned, eventually rendering them useless. So how long can a gene survive in a species if it is no longer used? The team calculated that there is a good chance of silent genes surviving for up to 6 million years in at least a few individuals in a population, and that some might survive as long as 10 million years. In other words, throwbacks are possible, but only to the relatively recent evolutionary past. As a possible example, the team pointed to the mole salamanders of Mexico and California. Like most amphibians these begin life in a juvenile tadpole state, then metamorphose into the adult form except for one species, the axolotl, which famously lives its entire life as a juvenile. The simplest explanation for this is that the axolotl lineage alone lost the ability to metamorphose, while others retained it. From a detailed analysis of the salamanders family tree, however, it is clear that the other lineages evolved from an ancestor that itself had lost the ability to metamorphose. In other words, metamorphosis in mole salamanders is an atavism. The salamander example fits with Raffs 10-million-year time frame. More recently, however, examples have been reported that break the time limit, suggesting that silent genes may not be the whole story. In a paper published last year, biologist Gunter Wagner of Yale University reported some work on the evolutionary history of a group of South American lizards called Bachia. Many of these have minuscule limbs; some look more like snakes than lizards and a few have completely lost the toes on their hind limbs. Other species, however, sport up to four toes on their hind legs. The simplest explanation is that the toed lineages never lost their toes, but Wagner begs to differ. According to his analysis of the Bachia family tree, the toed species re-evolved toes from toeless ancestors and, what is more, digit loss and gain has occurred on more than one occasion over tens of millions of years. So whats going on? One possibility is that these traits are lost and then simply reappear, in much the same way that similar structures can independently arise in unrelated species, such as the dorsal fins of sharks and killer whales. Another more intriguing possibility is that the genetic information needed to make toes somehow survived for tens or perhaps hundreds of millions of years in the lizards and was reactivated. These atavistic traits provided an advantage and spread through the population, effectively reversing evolution. But if silent genes degrade within 6 to 10 million years, how can long-lost traits be reactivated over longer timescales? The answer may lie in the womb. Early embryos of many species develop ancestral features. Snake embryos, for example, sprout hind limb buds. Later in development these features disappear thanks to developmental programs that say lose the leg. 
If for any reason this does not happen, the ancestral feature may not disappear, leading to an atavism.", "hypothesis": "Wagner was the first person to do research on South American lizards.", "label": "n"} +{"uid": "id_574", "premise": "When the American War of Independence started, the Americans had no regular army. But one was soon formed under the command of George Washington. However, this army was badly equipped and lacked proper training. The war lasted for six years, from 1775 to 1781, and the Americans drew up the formal Declaration of Independence on 4 July 1776. This stated that the United States would be an independent republic.", "hypothesis": "The war lasted for six years and the Declaration of Independence was made shortly after the end of the war.", "label": "c"} +{"uid": "id_575", "premise": "When the American War of Independence started, the Americans had no regular army. But one was soon formed under the command of George Washington. However, this army was badly equipped and lacked proper training. The war lasted for six years, from 1775 to 1781, and the Americans drew up the formal Declaration of Independence on 4 July 1776. This stated that the United States would be an independent republic.", "hypothesis": "The first regular American army was commanded by Washington.", "label": "e"} +{"uid": "id_576", "premise": "When the American War of Independence started, the Americans had no regular army. But one was soon formed under the command of George Washington. However, this army was badly equipped and lacked proper training. The war lasted for six years, from 1775 to 1781, and the Americans drew up the formal Declaration of Independence on 4 July 1776. This stated that the United States would be an independent republic.", "hypothesis": "The highly trained American army quickly won the war.", "label": "c"} +{"uid": "id_577", "premise": "When the Tulip Bubble Burst Tulips are spring-blooming perennials that grow from bulbs. Depending on the species, tulip plants can grow as short as 4 inches (10 cm) or as high as 28 inches (71 cm). The tulip's large flowers usually bloom on scapes or sub-scapose stems that lack bracts. Most tulips produce only one flower per stem, but a few species bear multiple flowers on their scapes (e. g. Tulipa turkestanica). The showy, generally cup or star-shaped tulip flower has three petals and three sepals, which are often termed tepals because they are nearly identical. These six tepals are often marked on the interior surface near the bases with darker colorings. Tulip flowers come in a wide variety of colors, except pure blue (several tulips with \"blue\" in the name have a faint violet hue) A. Long before anyone ever heard of Qualcomm, CMGI, Cisco Systems, or the other high-tech stocks that have soared during the current bull market, there was Semper Augustus. Both more prosaic and more sublime than any stock or bond, it was a tulip of extraordinary beauty, its midnight-blue petals topped by a band of pure white and accented with crimson flares. To denizens of 17th century Holland, little was as desirable. B. Around 1624, the Amsterdam man who owned the only dozen specimens was offered 3,000 guilders for one bulb. While there's no accurate way to render that in today's greenbacks, the sum was roughly equal to the annual income of a wealthy merchant. (A few years later, Rembrandt received about half that amount for painting The Night Watch. ) Yet the bulb's owner, whose name is now lost to history, nixed the offer. C. 
Who was crazier, the tulip lover who refused to sell for a small fortune or the one who was willing to splurge. That's a question that springs to mind after reading Tulip mania: The Story of the World's Most Coveted Flower and the Extraordinary Passions It Aroused by British journalist Mike Dash. In recent years, as investors have intentionally forgotten everything they learned in Investing 101 in order to load up on unproved, unprofitable dot- com issues, tulip mania has been invoked frequently. In this concise, artfully written account, Dash tells the real history behind the buzzword and in doing so, offers a cautionary tale for our times. D. The Dutch were not the first to go gaga over the tulip. Long before the first tulip bloomed in Europe-in Bavaria, it turns out, in 1559-the flower had enchanted the Persians and bewitched the rulers of the Ottoman Empire. It was in Holland, however, that the passion for tulips found its most fertile ground, forreasons that had little to do with horticulture. E. Holland in the early 17th century was embarking on its Golden Age. Resources that had just a few years earlier gone toward fighting for independence from Spain now flowed into commerce. Amsterdam merchants were at the center of the lucrative East Indies trade, where a single voyage could yield profits of 400%. They displayed their success by erecting grand estates surrounded by flower gardens. The Dutch population seemed tom by two contradictory impulses: a horror of living beyond one's means and the love of a long shot. F. Enter the tulip. \"It is impossible to comprehend the tulip mania without understanding just how different tulips were from every other flower known to horticulturists in the 17th century, \" says Dash. \"The colors they exhibited were more intense and more concentrated than those of ordinary plants. \" Despite the outlandish prices commanded by rare bulbs, ordinary tulips were sold by the pound. Around 1630, however, a new type of tulip fancier appeared, lured by tales of fat profits. These \"florists, \" or professional tulip traders, sought out flower lovers and speculators alike. But if the supply of tulip buyers grew quickly, the supply of bulbs did not. The tulip was a conspirator in the supply squeeze: It takes seven years to grow one from seed. And while bulbs can produce two or three clones, or \"offsets, \" annually, the mother bulb only lasts a few years. G. Bulb prices rose steadily throughout the 1630s, as ever more speculators into the market. Weavers and farmers mortgaged whatever they could to raise cash to begin trading. In 1633, a farmhouse in Hoorn changed hands for three rare bulbs. By 1636 any tulip-even bulbs recently considered garbage-could be sold off, often for hundreds of guilders. A futures market for bulbs existed, and tulip traders could be found conducting their business in hundreds of Dutch taverns. Tulip mania reached its peak during the winter of 1636-37, when some bulbs were changing hands ten times in a day. The zenith came early that winter, at an auction to benefit seven orphans whose only asset was 70 fine tulips left by then father. One, a rare Violetten Admirael van Enkhuizen bulb that was about to split in two, sold for 5,200 guilders, the all-time record. All told, the flowers brought in nearly 53,000 guilders. H. Soon after, the tulip market crashed utterly, spectacularly. It began inHaarlem, at a routine bulb auction when, for the first time, the greater fool refused to show up and pay. 
Within days, the panic had spread across the country. Despite the efforts of traders to prop up demand, the market for tulips evaporated. Flowers that had commanded 5,000 guilders a few weeks before now fetched one-hundredth that amount. Tulip mania is not without flaws. Dash dwells too long on the tulip's migration from Asia to Holland. But he does a service with this illuminating, accessible account of incredible financial folly. I. Tulip mania differed in one crucial aspect from the dot-com craze that grips our attention today: Even at its height, the Amsterdam Stock Exchange, well- established in 1630, wouldn't touch tulips. \"The speculation in tulip bulbs always existed at the margins of Dutch economic life, \" Dash writes. After the market crashed, a compromise was brokered that let most traders settle then debts for a fraction of then liability. The overall fallout on the Dutch economy was negligible. Will we say the same when Wall Street's current obsession finally runs its course?", "hypothesis": "In 1624, all the tulip collection belonged to a man in Amsterdam.", "label": "e"} +{"uid": "id_578", "premise": "When the Tulip Bubble Burst Tulips are spring-blooming perennials that grow from bulbs. Depending on the species, tulip plants can grow as short as 4 inches (10 cm) or as high as 28 inches (71 cm). The tulip's large flowers usually bloom on scapes or sub-scapose stems that lack bracts. Most tulips produce only one flower per stem, but a few species bear multiple flowers on their scapes (e. g. Tulipa turkestanica). The showy, generally cup or star-shaped tulip flower has three petals and three sepals, which are often termed tepals because they are nearly identical. These six tepals are often marked on the interior surface near the bases with darker colorings. Tulip flowers come in a wide variety of colors, except pure blue (several tulips with \"blue\" in the name have a faint violet hue) A. Long before anyone ever heard of Qualcomm, CMGI, Cisco Systems, or the other high-tech stocks that have soared during the current bull market, there was Semper Augustus. Both more prosaic and more sublime than any stock or bond, it was a tulip of extraordinary beauty, its midnight-blue petals topped by a band of pure white and accented with crimson flares. To denizens of 17th century Holland, little was as desirable. B. Around 1624, the Amsterdam man who owned the only dozen specimens was offered 3,000 guilders for one bulb. While there's no accurate way to render that in today's greenbacks, the sum was roughly equal to the annual income of a wealthy merchant. (A few years later, Rembrandt received about half that amount for painting The Night Watch. ) Yet the bulb's owner, whose name is now lost to history, nixed the offer. C. Who was crazier, the tulip lover who refused to sell for a small fortune or the one who was willing to splurge. That's a question that springs to mind after reading Tulip mania: The Story of the World's Most Coveted Flower and the Extraordinary Passions It Aroused by British journalist Mike Dash. In recent years, as investors have intentionally forgotten everything they learned in Investing 101 in order to load up on unproved, unprofitable dot- com issues, tulip mania has been invoked frequently. In this concise, artfully written account, Dash tells the real history behind the buzzword and in doing so, offers a cautionary tale for our times. D. The Dutch were not the first to go gaga over the tulip. 
Long before the first tulip bloomed in Europe-in Bavaria, it turns out, in 1559-the flower had enchanted the Persians and bewitched the rulers of the Ottoman Empire. It was in Holland, however, that the passion for tulips found its most fertile ground, forreasons that had little to do with horticulture. E. Holland in the early 17th century was embarking on its Golden Age. Resources that had just a few years earlier gone toward fighting for independence from Spain now flowed into commerce. Amsterdam merchants were at the center of the lucrative East Indies trade, where a single voyage could yield profits of 400%. They displayed their success by erecting grand estates surrounded by flower gardens. The Dutch population seemed tom by two contradictory impulses: a horror of living beyond one's means and the love of a long shot. F. Enter the tulip. \"It is impossible to comprehend the tulip mania without understanding just how different tulips were from every other flower known to horticulturists in the 17th century, \" says Dash. \"The colors they exhibited were more intense and more concentrated than those of ordinary plants. \" Despite the outlandish prices commanded by rare bulbs, ordinary tulips were sold by the pound. Around 1630, however, a new type of tulip fancier appeared, lured by tales of fat profits. These \"florists, \" or professional tulip traders, sought out flower lovers and speculators alike. But if the supply of tulip buyers grew quickly, the supply of bulbs did not. The tulip was a conspirator in the supply squeeze: It takes seven years to grow one from seed. And while bulbs can produce two or three clones, or \"offsets, \" annually, the mother bulb only lasts a few years. G. Bulb prices rose steadily throughout the 1630s, as ever more speculators into the market. Weavers and farmers mortgaged whatever they could to raise cash to begin trading. In 1633, a farmhouse in Hoorn changed hands for three rare bulbs. By 1636 any tulip-even bulbs recently considered garbage-could be sold off, often for hundreds of guilders. A futures market for bulbs existed, and tulip traders could be found conducting their business in hundreds of Dutch taverns. Tulip mania reached its peak during the winter of 1636-37, when some bulbs were changing hands ten times in a day. The zenith came early that winter, at an auction to benefit seven orphans whose only asset was 70 fine tulips left by then father. One, a rare Violetten Admirael van Enkhuizen bulb that was about to split in two, sold for 5,200 guilders, the all-time record. All told, the flowers brought in nearly 53,000 guilders. H. Soon after, the tulip market crashed utterly, spectacularly. It began inHaarlem, at a routine bulb auction when, for the first time, the greater fool refused to show up and pay. Within days, the panic had spread across the country. Despite the efforts of traders to prop up demand, the market for tulips evaporated. Flowers that had commanded 5,000 guilders a few weeks before now fetched one-hundredth that amount. Tulip mania is not without flaws. Dash dwells too long on the tulip's migration from Asia to Holland. But he does a service with this illuminating, accessible account of incredible financial folly. I. Tulip mania differed in one crucial aspect from the dot-com craze that grips our attention today: Even at its height, the Amsterdam Stock Exchange, well- established in 1630, wouldn't touch tulips. \"The speculation in tulip bulbs always existed at the margins of Dutch economic life, \" Dash writes. 
After the market crashed, a compromise was brokered that let most traders settle then debts for a fraction of then liability. The overall fallout on the Dutch economy was negligible. Will we say the same when Wall Street's current obsession finally runs its course?", "hypothesis": "From 1630, Amsterdam Stock Exchange started to regulate Tulips exchange market.", "label": "c"} +{"uid": "id_579", "premise": "When the Tulip Bubble Burst Tulips are spring-blooming perennials that grow from bulbs. Depending on the species, tulip plants can grow as short as 4 inches (10 cm) or as high as 28 inches (71 cm). The tulip's large flowers usually bloom on scapes or sub-scapose stems that lack bracts. Most tulips produce only one flower per stem, but a few species bear multiple flowers on their scapes (e. g. Tulipa turkestanica). The showy, generally cup or star-shaped tulip flower has three petals and three sepals, which are often termed tepals because they are nearly identical. These six tepals are often marked on the interior surface near the bases with darker colorings. Tulip flowers come in a wide variety of colors, except pure blue (several tulips with \"blue\" in the name have a faint violet hue) A. Long before anyone ever heard of Qualcomm, CMGI, Cisco Systems, or the other high-tech stocks that have soared during the current bull market, there was Semper Augustus. Both more prosaic and more sublime than any stock or bond, it was a tulip of extraordinary beauty, its midnight-blue petals topped by a band of pure white and accented with crimson flares. To denizens of 17th century Holland, little was as desirable. B. Around 1624, the Amsterdam man who owned the only dozen specimens was offered 3,000 guilders for one bulb. While there's no accurate way to render that in today's greenbacks, the sum was roughly equal to the annual income of a wealthy merchant. (A few years later, Rembrandt received about half that amount for painting The Night Watch. ) Yet the bulb's owner, whose name is now lost to history, nixed the offer. C. Who was crazier, the tulip lover who refused to sell for a small fortune or the one who was willing to splurge. That's a question that springs to mind after reading Tulip mania: The Story of the World's Most Coveted Flower and the Extraordinary Passions It Aroused by British journalist Mike Dash. In recent years, as investors have intentionally forgotten everything they learned in Investing 101 in order to load up on unproved, unprofitable dot- com issues, tulip mania has been invoked frequently. In this concise, artfully written account, Dash tells the real history behind the buzzword and in doing so, offers a cautionary tale for our times. D. The Dutch were not the first to go gaga over the tulip. Long before the first tulip bloomed in Europe-in Bavaria, it turns out, in 1559-the flower had enchanted the Persians and bewitched the rulers of the Ottoman Empire. It was in Holland, however, that the passion for tulips found its most fertile ground, forreasons that had little to do with horticulture. E. Holland in the early 17th century was embarking on its Golden Age. Resources that had just a few years earlier gone toward fighting for independence from Spain now flowed into commerce. Amsterdam merchants were at the center of the lucrative East Indies trade, where a single voyage could yield profits of 400%. They displayed their success by erecting grand estates surrounded by flower gardens. 
The Dutch population seemed torn by two contradictory impulses: a horror of living beyond one's means and the love of a long shot. F. Enter the tulip. \"It is impossible to comprehend the tulip mania without understanding just how different tulips were from every other flower known to horticulturists in the 17th century, \" says Dash. \"The colors they exhibited were more intense and more concentrated than those of ordinary plants. \" Despite the outlandish prices commanded by rare bulbs, ordinary tulips were sold by the pound. Around 1630, however, a new type of tulip fancier appeared, lured by tales of fat profits. These \"florists, \" or professional tulip traders, sought out flower lovers and speculators alike. But if the supply of tulip buyers grew quickly, the supply of bulbs did not. The tulip was a conspirator in the supply squeeze: It takes seven years to grow one from seed. And while bulbs can produce two or three clones, or \"offsets, \" annually, the mother bulb only lasts a few years. G. Bulb prices rose steadily throughout the 1630s, as ever more speculators entered the market. Weavers and farmers mortgaged whatever they could to raise cash to begin trading. In 1633, a farmhouse in Hoorn changed hands for three rare bulbs. By 1636 any tulip-even bulbs recently considered garbage-could be sold off, often for hundreds of guilders. A futures market for bulbs existed, and tulip traders could be found conducting their business in hundreds of Dutch taverns. Tulip mania reached its peak during the winter of 1636-37, when some bulbs were changing hands ten times in a day. The zenith came early that winter, at an auction to benefit seven orphans whose only asset was 70 fine tulips left by their father. One, a rare Violetten Admirael van Enkhuizen bulb that was about to split in two, sold for 5,200 guilders, the all-time record. All told, the flowers brought in nearly 53,000 guilders. H. Soon after, the tulip market crashed utterly, spectacularly. It began in Haarlem, at a routine bulb auction when, for the first time, the greater fool refused to show up and pay. Within days, the panic had spread across the country. Despite the efforts of traders to prop up demand, the market for tulips evaporated. Flowers that had commanded 5,000 guilders a few weeks before now fetched one-hundredth that amount. Tulip mania is not without flaws. Dash dwells too long on the tulip's migration from Asia to Holland. But he does a service with this illuminating, accessible account of incredible financial folly. I. Tulip mania differed in one crucial aspect from the dot-com craze that grips our attention today: Even at its height, the Amsterdam Stock Exchange, well-established in 1630, wouldn't touch tulips. \"The speculation in tulip bulbs always existed at the margins of Dutch economic life, \" Dash writes. After the market crashed, a compromise was brokered that let most traders settle their debts for a fraction of their liability. The overall fallout on the Dutch economy was negligible. Will we say the same when Wall Street's current obsession finally runs its course?", "hypothesis": "Holland was the most wealthy country in the world in 17th century.", "label": "n"} +{"uid": "id_580", "premise": "When the Tulip Bubble Burst Tulips are spring-blooming perennials that grow from bulbs. Depending on the species, tulip plants can grow as short as 4 inches (10 cm) or as high as 28 inches (71 cm). The tulip's large flowers usually bloom on scapes or sub-scapose stems that lack bracts. 
Most tulips produce only one flower per stem, but a few species bear multiple flowers on their scapes (e. g. Tulipa turkestanica). The showy, generally cup or star-shaped tulip flower has three petals and three sepals, which are often termed tepals because they are nearly identical. These six tepals are often marked on the interior surface near the bases with darker colorings. Tulip flowers come in a wide variety of colors, except pure blue (several tulips with \"blue\" in the name have a faint violet hue) A. Long before anyone ever heard of Qualcomm, CMGI, Cisco Systems, or the other high-tech stocks that have soared during the current bull market, there was Semper Augustus. Both more prosaic and more sublime than any stock or bond, it was a tulip of extraordinary beauty, its midnight-blue petals topped by a band of pure white and accented with crimson flares. To denizens of 17th century Holland, little was as desirable. B. Around 1624, the Amsterdam man who owned the only dozen specimens was offered 3,000 guilders for one bulb. While there's no accurate way to render that in today's greenbacks, the sum was roughly equal to the annual income of a wealthy merchant. (A few years later, Rembrandt received about half that amount for painting The Night Watch. ) Yet the bulb's owner, whose name is now lost to history, nixed the offer. C. Who was crazier, the tulip lover who refused to sell for a small fortune or the one who was willing to splurge. That's a question that springs to mind after reading Tulip mania: The Story of the World's Most Coveted Flower and the Extraordinary Passions It Aroused by British journalist Mike Dash. In recent years, as investors have intentionally forgotten everything they learned in Investing 101 in order to load up on unproved, unprofitable dot- com issues, tulip mania has been invoked frequently. In this concise, artfully written account, Dash tells the real history behind the buzzword and in doing so, offers a cautionary tale for our times. D. The Dutch were not the first to go gaga over the tulip. Long before the first tulip bloomed in Europe-in Bavaria, it turns out, in 1559-the flower had enchanted the Persians and bewitched the rulers of the Ottoman Empire. It was in Holland, however, that the passion for tulips found its most fertile ground, forreasons that had little to do with horticulture. E. Holland in the early 17th century was embarking on its Golden Age. Resources that had just a few years earlier gone toward fighting for independence from Spain now flowed into commerce. Amsterdam merchants were at the center of the lucrative East Indies trade, where a single voyage could yield profits of 400%. They displayed their success by erecting grand estates surrounded by flower gardens. The Dutch population seemed tom by two contradictory impulses: a horror of living beyond one's means and the love of a long shot. F. Enter the tulip. \"It is impossible to comprehend the tulip mania without understanding just how different tulips were from every other flower known to horticulturists in the 17th century, \" says Dash. \"The colors they exhibited were more intense and more concentrated than those of ordinary plants. \" Despite the outlandish prices commanded by rare bulbs, ordinary tulips were sold by the pound. Around 1630, however, a new type of tulip fancier appeared, lured by tales of fat profits. These \"florists, \" or professional tulip traders, sought out flower lovers and speculators alike. 
But if the supply of tulip buyers grew quickly, the supply of bulbs did not. The tulip was a conspirator in the supply squeeze: It takes seven years to grow one from seed. And while bulbs can produce two or three clones, or \"offsets, \" annually, the mother bulb only lasts a few years. G. Bulb prices rose steadily throughout the 1630s, as ever more speculators into the market. Weavers and farmers mortgaged whatever they could to raise cash to begin trading. In 1633, a farmhouse in Hoorn changed hands for three rare bulbs. By 1636 any tulip-even bulbs recently considered garbage-could be sold off, often for hundreds of guilders. A futures market for bulbs existed, and tulip traders could be found conducting their business in hundreds of Dutch taverns. Tulip mania reached its peak during the winter of 1636-37, when some bulbs were changing hands ten times in a day. The zenith came early that winter, at an auction to benefit seven orphans whose only asset was 70 fine tulips left by then father. One, a rare Violetten Admirael van Enkhuizen bulb that was about to split in two, sold for 5,200 guilders, the all-time record. All told, the flowers brought in nearly 53,000 guilders. H. Soon after, the tulip market crashed utterly, spectacularly. It began inHaarlem, at a routine bulb auction when, for the first time, the greater fool refused to show up and pay. Within days, the panic had spread across the country. Despite the efforts of traders to prop up demand, the market for tulips evaporated. Flowers that had commanded 5,000 guilders a few weeks before now fetched one-hundredth that amount. Tulip mania is not without flaws. Dash dwells too long on the tulip's migration from Asia to Holland. But he does a service with this illuminating, accessible account of incredible financial folly. I. Tulip mania differed in one crucial aspect from the dot-com craze that grips our attention today: Even at its height, the Amsterdam Stock Exchange, well- established in 1630, wouldn't touch tulips. \"The speculation in tulip bulbs always existed at the margins of Dutch economic life, \" Dash writes. After the market crashed, a compromise was brokered that let most traders settle then debts for a fraction of then liability. The overall fallout on the Dutch economy was negligible. Will we say the same when Wall Street's current obsession finally runs its course?", "hypothesis": "Popularity of Tulip in Holland was much higher than any other countries in17th century.", "label": "e"} +{"uid": "id_581", "premise": "When the Tulip Bubble Burst Tulips are spring-blooming perennials that grow from bulbs. Depending on the species, tulip plants can grow as short as 4 inches (10 cm) or as high as 28 inches (71 cm). The tulip's large flowers usually bloom on scapes or sub-scapose stems that lack bracts. Most tulips produce only one flower per stem, but a few species bear multiple flowers on their scapes (e. g. Tulipa turkestanica). The showy, generally cup or star-shaped tulip flower has three petals and three sepals, which are often termed tepals because they are nearly identical. These six tepals are often marked on the interior surface near the bases with darker colorings. Tulip flowers come in a wide variety of colors, except pure blue (several tulips with \"blue\" in the name have a faint violet hue) A. Long before anyone ever heard of Qualcomm, CMGI, Cisco Systems, or the other high-tech stocks that have soared during the current bull market, there was Semper Augustus. 
Both more prosaic and more sublime than any stock or bond, it was a tulip of extraordinary beauty, its midnight-blue petals topped by a band of pure white and accented with crimson flares. To denizens of 17th century Holland, little was as desirable. B. Around 1624, the Amsterdam man who owned the only dozen specimens was offered 3,000 guilders for one bulb. While there's no accurate way to render that in today's greenbacks, the sum was roughly equal to the annual income of a wealthy merchant. (A few years later, Rembrandt received about half that amount for painting The Night Watch. ) Yet the bulb's owner, whose name is now lost to history, nixed the offer. C. Who was crazier, the tulip lover who refused to sell for a small fortune or the one who was willing to splurge. That's a question that springs to mind after reading Tulip mania: The Story of the World's Most Coveted Flower and the Extraordinary Passions It Aroused by British journalist Mike Dash. In recent years, as investors have intentionally forgotten everything they learned in Investing 101 in order to load up on unproved, unprofitable dot- com issues, tulip mania has been invoked frequently. In this concise, artfully written account, Dash tells the real history behind the buzzword and in doing so, offers a cautionary tale for our times. D. The Dutch were not the first to go gaga over the tulip. Long before the first tulip bloomed in Europe-in Bavaria, it turns out, in 1559-the flower had enchanted the Persians and bewitched the rulers of the Ottoman Empire. It was in Holland, however, that the passion for tulips found its most fertile ground, forreasons that had little to do with horticulture. E. Holland in the early 17th century was embarking on its Golden Age. Resources that had just a few years earlier gone toward fighting for independence from Spain now flowed into commerce. Amsterdam merchants were at the center of the lucrative East Indies trade, where a single voyage could yield profits of 400%. They displayed their success by erecting grand estates surrounded by flower gardens. The Dutch population seemed tom by two contradictory impulses: a horror of living beyond one's means and the love of a long shot. F. Enter the tulip. \"It is impossible to comprehend the tulip mania without understanding just how different tulips were from every other flower known to horticulturists in the 17th century, \" says Dash. \"The colors they exhibited were more intense and more concentrated than those of ordinary plants. \" Despite the outlandish prices commanded by rare bulbs, ordinary tulips were sold by the pound. Around 1630, however, a new type of tulip fancier appeared, lured by tales of fat profits. These \"florists, \" or professional tulip traders, sought out flower lovers and speculators alike. But if the supply of tulip buyers grew quickly, the supply of bulbs did not. The tulip was a conspirator in the supply squeeze: It takes seven years to grow one from seed. And while bulbs can produce two or three clones, or \"offsets, \" annually, the mother bulb only lasts a few years. G. Bulb prices rose steadily throughout the 1630s, as ever more speculators into the market. Weavers and farmers mortgaged whatever they could to raise cash to begin trading. In 1633, a farmhouse in Hoorn changed hands for three rare bulbs. By 1636 any tulip-even bulbs recently considered garbage-could be sold off, often for hundreds of guilders. 
A futures market for bulbs existed, and tulip traders could be found conducting their business in hundreds of Dutch taverns. Tulip mania reached its peak during the winter of 1636-37, when some bulbs were changing hands ten times in a day. The zenith came early that winter, at an auction to benefit seven orphans whose only asset was 70 fine tulips left by then father. One, a rare Violetten Admirael van Enkhuizen bulb that was about to split in two, sold for 5,200 guilders, the all-time record. All told, the flowers brought in nearly 53,000 guilders. H. Soon after, the tulip market crashed utterly, spectacularly. It began inHaarlem, at a routine bulb auction when, for the first time, the greater fool refused to show up and pay. Within days, the panic had spread across the country. Despite the efforts of traders to prop up demand, the market for tulips evaporated. Flowers that had commanded 5,000 guilders a few weeks before now fetched one-hundredth that amount. Tulip mania is not without flaws. Dash dwells too long on the tulip's migration from Asia to Holland. But he does a service with this illuminating, accessible account of incredible financial folly. I. Tulip mania differed in one crucial aspect from the dot-com craze that grips our attention today: Even at its height, the Amsterdam Stock Exchange, well- established in 1630, wouldn't touch tulips. \"The speculation in tulip bulbs always existed at the margins of Dutch economic life, \" Dash writes. After the market crashed, a compromise was brokered that let most traders settle then debts for a fraction of then liability. The overall fallout on the Dutch economy was negligible. Will we say the same when Wall Street's current obsession finally runs its course?", "hypothesis": "Tulip was first planted in Holland according to this passage.", "label": "c"} +{"uid": "id_582", "premise": "When they saw that it was snowing, Sheila and Bob Crandall decided to take the train to visit Sheila's Aunt Janet. Aunt Janet lives 218 miles from Sheila and Bob. The roundtrip train tickets cost $32.50 each. On all their other trips to visit Aunt Janet, Sheila and Bob had driven their car.", "hypothesis": "For Sheila and Bob, taking the train is cheaper than driving the car.", "label": "n"} +{"uid": "id_583", "premise": "When they saw that it was snowing, Sheila and Bob Crandall decided to take the train to visit Sheila's Aunt Janet. Aunt Janet lives 218 miles from Sheila and Bob. The roundtrip train tickets cost $32.50 each. On all their other trips to visit Aunt Janet, Sheila and Bob had driven their car.", "hypothesis": "Sheila and Bob will have to buy four different train tickets.", "label": "c"} +{"uid": "id_584", "premise": "When they saw that it was snowing, Sheila and Bob Crandall decided to take the train to visit Sheila's Aunt Janet. Aunt Janet lives 218 miles from Sheila and Bob. The roundtrip train tickets cost $32.50 each. On all their other trips to visit Aunt Janet, Sheila and Bob had driven their car.", "hypothesis": "Based on the weather, Sheila and Bob made a decision to take the train.", "label": "e"} +{"uid": "id_585", "premise": "When they saw that it was snowing, Sheila and Bob Crandall decided to take the train to visit Sheila's Aunt Janet. Aunt Janet lives 218 miles from Sheila and Bob. The roundtrip train tickets cost $32.50 each. 
On all their other trips to visit Aunt Janet, Sheila and Bob had driven their car.", "hypothesis": "Aunt Janet persuaded Sheila and Bob to take the train.", "label": "n"} +{"uid": "id_586", "premise": "When was the last time you saw a frog? Chances are, if you live in a city, you have not seen one for some time. Even in wet areas once teeming with frogs and toads, it is becoming less and less easy to find those slimy, hopping and sometimes poisonous members of the animal kingdom. All over the world, and even in remote parts of Australia, frogs are losing the ecological battle for survival, and biologists are at a loss to explain their demise. Are amphibians simply oversensitive to changes in the ecosystem? Could it be that their rapid decline in numbers is signaling some coming environmental disaster for us all? This frightening scenario is in part the consequence of a dramatic increase over the last quarter century in the development of once natural areas of wet marshland; home not only to frogs but to all manner of wildlife. However, as yet, there are no obvious reasons why certain frog species are disappearing from rainforests in Australia that have barely been touched by human hand. The mystery is unsettling to say the least, for it is known that amphibian species are extremely sensitive to environmental variations in temperature and moisture levels. The danger is that planet Earth might not only lose a vital link in the ecological food chain (frogs keep populations of otherwise pestilent insects at manageable levels), but we might be increasing our output of air pollutants to levels that may have already become irreversible. Frogs could be inadvertently warning us of a catastrophe. An example of a species of frog that, at far as is known, has become extinct, is the platypus frog. Like the well-known Australian mammal it was named after, it exhibited some very strange behaviour; instead of giving birth to tadpoles in the water, it raised its young within its stomach. The baby frogs were actually born from out of their mother's mouth. Discovered in 1981, less than ten years later the frog had completely vanished from the crystal clear waters of Booloumba Creek near Queensland's Sunshine Coast. Unfortunately, this freak of nature is not the only frog species to have been lost in Australia. Since the 1970s, no less than eight others have suffered the same fate. One theory that seems to fit the facts concerns the depletion of the ozone layer, a well documented phenomenon which has led to a sharp increase in ultraviolet radiation levels. The ozone layer is meant to shield the Earth from UV rays, but increased radiation may be having a greater effect upon frog populations than previously believed. Another theory is that worldwide temperature increases are upsetting the breeding cycles of frogs.", "hypothesis": "Frogs and toads are usually poisonous.", "label": "c"} +{"uid": "id_587", "premise": "When was the last time you saw a frog? Chances are, if you live in a city, you have not seen one for some time. Even in wet areas once teeming with frogs and toads, it is becoming less and less easy to find those slimy, hopping and sometimes poisonous members of the animal kingdom. All over the world, and even in remote parts of Australia, frogs are losing the ecological battle for survival, and biologists are at a loss to explain their demise. Are amphibians simply oversensitive to changes in the ecosystem? 
Could it be that their rapid decline in numbers is signaling some coming environmental disaster for us all? This frightening scenario is in part the consequence of a dramatic increase over the last quarter century in the development of once natural areas of wet marshland; home not only to frogs but to all manner of wildlife. However, as yet, there are no obvious reasons why certain frog species are disappearing from rainforests in Australia that have barely been touched by human hand. The mystery is unsettling to say the least, for it is known that amphibian species are extremely sensitive to environmental variations in temperature and moisture levels. The danger is that planet Earth might not only lose a vital link in the ecological food chain (frogs keep populations of otherwise pestilent insects at manageable levels), but we might be increasing our output of air pollutants to levels that may have already become irreversible. Frogs could be inadvertently warning us of a catastrophe. An example of a species of frog that, at far as is known, has become extinct, is the platypus frog. Like the well-known Australian mammal it was named after, it exhibited some very strange behaviour; instead of giving birth to tadpoles in the water, it raised its young within its stomach. The baby frogs were actually born from out of their mother's mouth. Discovered in 1981, less than ten years later the frog had completely vanished from the crystal clear waters of Booloumba Creek near Queensland's Sunshine Coast. Unfortunately, this freak of nature is not the only frog species to have been lost in Australia. Since the 1970s, no less than eight others have suffered the same fate. One theory that seems to fit the facts concerns the depletion of the ozone layer, a well documented phenomenon which has led to a sharp increase in ultraviolet radiation levels. The ozone layer is meant to shield the Earth from UV rays, but increased radiation may be having a greater effect upon frog populations than previously believed. Another theory is that worldwide temperature increases are upsetting the breeding cycles of frogs.", "hypothesis": "The frogs' natural habitat is becoming more and more developed.", "label": "e"} +{"uid": "id_588", "premise": "When was the last time you saw a frog? Chances are, if you live in a city, you have not seen one for some time. Even in wet areas once teeming with frogs and toads, it is becoming less and less easy to find those slimy, hopping and sometimes poisonous members of the animal kingdom. All over the world, and even in remote parts of Australia, frogs are losing the ecological battle for survival, and biologists are at a loss to explain their demise. Are amphibians simply oversensitive to changes in the ecosystem? Could it be that their rapid decline in numbers is signaling some coming environmental disaster for us all? This frightening scenario is in part the consequence of a dramatic increase over the last quarter century in the development of once natural areas of wet marshland; home not only to frogs but to all manner of wildlife. However, as yet, there are no obvious reasons why certain frog species are disappearing from rainforests in Australia that have barely been touched by human hand. The mystery is unsettling to say the least, for it is known that amphibian species are extremely sensitive to environmental variations in temperature and moisture levels. 
The danger is that planet Earth might not only lose a vital link in the ecological food chain (frogs keep populations of otherwise pestilent insects at manageable levels), but we might be increasing our output of air pollutants to levels that may have already become irreversible. Frogs could be inadvertently warning us of a catastrophe. An example of a species of frog that, at far as is known, has become extinct, is the platypus frog. Like the well-known Australian mammal it was named after, it exhibited some very strange behaviour; instead of giving birth to tadpoles in the water, it raised its young within its stomach. The baby frogs were actually born from out of their mother's mouth. Discovered in 1981, less than ten years later the frog had completely vanished from the crystal clear waters of Booloumba Creek near Queensland's Sunshine Coast. Unfortunately, this freak of nature is not the only frog species to have been lost in Australia. Since the 1970s, no less than eight others have suffered the same fate. One theory that seems to fit the facts concerns the depletion of the ozone layer, a well documented phenomenon which has led to a sharp increase in ultraviolet radiation levels. The ozone layer is meant to shield the Earth from UV rays, but increased radiation may be having a greater effect upon frog populations than previously believed. Another theory is that worldwide temperature increases are upsetting the breeding cycles of frogs.", "hypothesis": "Frogs are disappearing only from city areas.", "label": "c"} +{"uid": "id_589", "premise": "When was the last time you saw a frog? Chances are, if you live in a city, you have not seen one for some time. Even in wet areas once teeming with frogs and toads, it is becoming less and less easy to find those slimy, hopping and sometimes poisonous members of the animal kingdom. All over the world, and even in remote parts of Australia, frogs are losing the ecological battle for survival, and biologists are at a loss to explain their demise. Are amphibians simply oversensitive to changes in the ecosystem? Could it be that their rapid decline in numbers is signaling some coming environmental disaster for us all? This frightening scenario is in part the consequence of a dramatic increase over the last quarter century in the development of once natural areas of wet marshland; home not only to frogs but to all manner of wildlife. However, as yet, there are no obvious reasons why certain frog species are disappearing from rainforests in Australia that have barely been touched by human hand. The mystery is unsettling to say the least, for it is known that amphibian species are extremely sensitive to environmental variations in temperature and moisture levels. The danger is that planet Earth might not only lose a vital link in the ecological food chain (frogs keep populations of otherwise pestilent insects at manageable levels), but we might be increasing our output of air pollutants to levels that may have already become irreversible. Frogs could be inadvertently warning us of a catastrophe. An example of a species of frog that, at far as is known, has become extinct, is the platypus frog. Like the well-known Australian mammal it was named after, it exhibited some very strange behaviour; instead of giving birth to tadpoles in the water, it raised its young within its stomach. The baby frogs were actually born from out of their mother's mouth. 
Discovered in 1981, less than ten years later the frog had completely vanished from the crystal clear waters of Booloumba Creek near Queensland's Sunshine Coast. Unfortunately, this freak of nature is not the only frog species to have been lost in Australia. Since the 1970s, no less than eight others have suffered the same fate. One theory that seems to fit the facts concerns the depletion of the ozone layer, a well documented phenomenon which has led to a sharp increase in ultraviolet radiation levels. The ozone layer is meant to shield the Earth from UV rays, but increased radiation may be having a greater effect upon frog populations than previously believed. Another theory is that worldwide temperature increases are upsetting the breeding cycles of frogs.", "hypothesis": "Biologists are unable to explain why frogs are dying.", "label": "e"} +{"uid": "id_590", "premise": "When was the last time you saw a frog? Chances are, if you live in a city, you have not seen one for some time. Even in wet areas once teeming with frogs and toads, it is becoming less and less easy to find those slimy, hopping and sometimes poisonous members of the animal kingdom. All over the world, and even in remote parts of Australia, frogs are losing the ecological battle for survival, and biologists are at a loss to explain their demise. Are amphibians simply oversensitive to changes in the ecosystem? Could it be that their rapid decline in numbers is signaling some coming environmental disaster for us all? This frightening scenario is in part the consequence of a dramatic increase over the last quarter century in the development of once natural areas of wet marshland; home not only to frogs but to all manner of wildlife. However, as yet, there are no obvious reasons why certain frog species are disappearing from rainforests in Australia that have barely been touched by human hand. The mystery is unsettling to say the least, for it is known that amphibian species are extremely sensitive to environmental variations in temperature and moisture levels. The danger is that planet Earth might not only lose a vital link in the ecological food chain (frogs keep populations of otherwise pestilent insects at manageable levels), but we might be increasing our output of air pollutants to levels that may have already become irreversible. Frogs could be inadvertently warning us of a catastrophe. An example of a species of frog that, at far as is known, has become extinct, is the platypus frog. Like the well-known Australian mammal it was named after, it exhibited some very strange behaviour; instead of giving birth to tadpoles in the water, it raised its young within its stomach. The baby frogs were actually born from out of their mother's mouth. Discovered in 1981, less than ten years later the frog had completely vanished from the crystal clear waters of Booloumba Creek near Queensland's Sunshine Coast. Unfortunately, this freak of nature is not the only frog species to have been lost in Australia. Since the 1970s, no less than eight others have suffered the same fate. One theory that seems to fit the facts concerns the depletion of the ozone layer, a well documented phenomenon which has led to a sharp increase in ultraviolet radiation levels. The ozone layer is meant to shield the Earth from UV rays, but increased radiation may be having a greater effect upon frog populations than previously believed. 
Another theory is that worldwide temperature increases are upsetting the breeding cycles of frogs.", "hypothesis": "Attempts are being made to halt the development of wet marshland.", "label": "n"} +{"uid": "id_591", "premise": "When was the last time you saw a frog? Chances are, if you live in a city, you have not seen one for some time. Even in wet areas once teeming with frogs and toads, it is becoming less and less easy to find those slimy, hopping and sometimes poisonous members of the animal kingdom. All over the world, and even in remote parts of Australia, frogs are losing the ecological battle for survival, and biologists are at a loss to explain their demise. Are amphibians simply oversensitive to changes in the ecosystem? Could it be that their rapid decline in numbers is signaling some coming environmental disaster for us all? This frightening scenario is in part the consequence of a dramatic increase over the last quarter century in the development of once natural areas of wet marshland; home not only to frogs but to all manner of wildlife. However, as yet, there are no obvious reasons why certain frog species are disappearing from rainforests in Australia that have barely been touched by human hand. The mystery is unsettling to say the least, for it is known that amphibian species are extremely sensitive to environmental variations in temperature and moisture levels. The danger is that planet Earth might not only lose a vital link in the ecological food chain (frogs keep populations of otherwise pestilent insects at manageable levels), but we might be increasing our output of air pollutants to levels that may have already become irreversible. Frogs could be inadvertently warning us of a catastrophe. An example of a species of frog that, at far as is known, has become extinct, is the platypus frog. Like the well-known Australian mammal it was named after, it exhibited some very strange behaviour; instead of giving birth to tadpoles in the water, it raised its young within its stomach. The baby frogs were actually born from out of their mother's mouth. Discovered in 1981, less than ten years later the frog had completely vanished from the crystal clear waters of Booloumba Creek near Queensland's Sunshine Coast. Unfortunately, this freak of nature is not the only frog species to have been lost in Australia. Since the 1970s, no less than eight others have suffered the same fate. One theory that seems to fit the facts concerns the depletion of the ozone layer, a well documented phenomenon which has led to a sharp increase in ultraviolet radiation levels. The ozone layer is meant to shield the Earth from UV rays, but increased radiation may be having a greater effect upon frog populations than previously believed. Another theory is that worldwide temperature increases are upsetting the breeding cycles of frogs.", "hypothesis": "Frogs are important in the ecosystem because they control pests.", "label": "e"} +{"uid": "id_592", "premise": "When was the last time you saw a frog? Chances are, if you live in a city, you have not seen one for some time. Even in wet areas once teeming with frogs and toads, it is becoming less and less easy to find those slimy, hopping and sometimes poisonous members of the animal kingdom. All over the world, and even in remote parts of Australia, frogs are losing the ecological battle for survival, and biologists are at a loss to explain their demise. Are amphibians simply oversensitive to changes in the ecosystem? 
Could it be that their rapid decline in numbers is signaling some coming environmental disaster for us all? This frightening scenario is in part the consequence of a dramatic increase over the last quarter century in the development of once natural areas of wet marshland; home not only to frogs but to all manner of wildlife. However, as yet, there are no obvious reasons why certain frog species are disappearing from rainforests in Australia that have barely been touched by human hand. The mystery is unsettling to say the least, for it is known that amphibian species are extremely sensitive to environmental variations in temperature and moisture levels. The danger is that planet Earth might not only lose a vital link in the ecological food chain (frogs keep populations of otherwise pestilent insects at manageable levels), but we might be increasing our output of air pollutants to levels that may have already become irreversible. Frogs could be inadvertently warning us of a catastrophe. An example of a species of frog that, at far as is known, has become extinct, is the platypus frog. Like the well-known Australian mammal it was named after, it exhibited some very strange behaviour; instead of giving birth to tadpoles in the water, it raised its young within its stomach. The baby frogs were actually born from out of their mother's mouth. Discovered in 1981, less than ten years later the frog had completely vanished from the crystal clear waters of Booloumba Creek near Queensland's Sunshine Coast. Unfortunately, this freak of nature is not the only frog species to have been lost in Australia. Since the 1970s, no less than eight others have suffered the same fate. One theory that seems to fit the facts concerns the depletion of the ozone layer, a well documented phenomenon which has led to a sharp increase in ultraviolet radiation levels. The ozone layer is meant to shield the Earth from UV rays, but increased radiation may be having a greater effect upon frog populations than previously believed. Another theory is that worldwide temperature increases are upsetting the breeding cycles of frogs.", "hypothesis": "The platypus frog became extinct by 1991.", "label": "e"} +{"uid": "id_593", "premise": "When was the last time you saw a frog? Chances are, if you live in a city, you have not seen one for some time. Even in wet areas once teeming with frogs and toads, it is becoming less and less easy to find those slimy, hopping and sometimes poisonous members of the animal kingdom. All over the world, and even in remote parts of Australia, frogs are losing the ecological battle for survival, and biologists are at a loss to explain their demise. Are amphibians simply oversensitive to changes in the ecosystem? Could it be that their rapid decline in numbers is signaling some coming environmental disaster for us all? This frightening scenario is in part the consequence of a dramatic increase over the last quarter century in the development of once natural areas of wet marshland; home not only to frogs but to all manner of wildlife. However, as yet, there are no obvious reasons why certain frog species are disappearing from rainforests in Australia that have barely been touched by human hand. The mystery is unsettling to say the least, for it is known that amphibian species are extremely sensitive to environmental variations in temperature and moisture levels. 
The danger is that planet Earth might not only lose a vital link in the ecological food chain (frogs keep populations of otherwise pestilent insects at manageable levels), but we might be increasing our output of air pollutants to levels that may have already become irreversible. Frogs could be inadvertently warning us of a catastrophe. An example of a species of frog that, at far as is known, has become extinct, is the platypus frog. Like the well-known Australian mammal it was named after, it exhibited some very strange behaviour; instead of giving birth to tadpoles in the water, it raised its young within its stomach. The baby frogs were actually born from out of their mother's mouth. Discovered in 1981, less than ten years later the frog had completely vanished from the crystal clear waters of Booloumba Creek near Queensland's Sunshine Coast. Unfortunately, this freak of nature is not the only frog species to have been lost in Australia. Since the 1970s, no less than eight others have suffered the same fate. One theory that seems to fit the facts concerns the depletion of the ozone layer, a well documented phenomenon which has led to a sharp increase in ultraviolet radiation levels. The ozone layer is meant to shield the Earth from UV rays, but increased radiation may be having a greater effect upon frog populations than previously believed. Another theory is that worldwide temperature increases are upsetting the breeding cycles of frogs.", "hypothesis": "Frogs usually give birth to their young in an underwater nest.", "label": "n"} +{"uid": "id_594", "premise": "When was the last time you saw a frog? Chances are, if you live in a city, you have not seen one for some time. Even in wet areas once teeming with frogs and toads, it is becoming less and less easy to find those slimy, hopping and sometimes poisonous members of the animal kingdom. All over the world, and even in remote parts of Australia, frogs are losing the ecological battle for survival, and biologists are at a loss to explain their demise. Are amphibians simply oversensitive to changes in the ecosystem? Could it be that their rapid decline in numbers is signaling some coming environmental disaster for us all? This frightening scenario is in part the consequence of a dramatic increase over the last quarter century in the development of once natural areas of wet marshland; home not only to frogs but to all manner of wildlife. However, as yet, there are no obvious reasons why certain frog species are disappearing from rainforests in Australia that have barely been touched by human hand. The mystery is unsettling to say the least, for it is known that amphibian species are extremely sensitive to environmental variations in temperature and moisture levels. The danger is that planet Earth might not only lose a vital link in the ecological food chain (frogs keep populations of otherwise pestilent insects at manageable levels), but we might be increasing our output of air pollutants to levels that may have already become irreversible. Frogs could be inadvertently warning us of a catastrophe. An example of a species of frog that, at far as is known, has become extinct, is the platypus frog. Like the well-known Australian mammal it was named after, it exhibited some very strange behaviour; instead of giving birth to tadpoles in the water, it raised its young within its stomach. The baby frogs were actually born from out of their mother's mouth. 
Discovered in 1981, less than ten years later the frog had completely vanished from the crystal clear waters of Booloumba Creek near Queensland's Sunshine Coast. Unfortunately, this freak of nature is not the only frog species to have been lost in Australia. Since the 1970s, no less than eight others have suffered the same fate. One theory that seems to fit the facts concerns the depletion of the ozone layer, a well documented phenomenon which has led to a sharp increase in ultraviolet radiation levels. The ozone layer is meant to shield the Earth from UV rays, but increased radiation may be having a greater effect upon frog populations than previously believed. Another theory is that worldwide temperature increases are upsetting the breeding cycles of frogs.", "hypothesis": "Eight frog species have become extinct so far in Australia.", "label": "c"} +{"uid": "id_595", "premise": "When was the last time you saw a frog? Chances are, if you live in a city, you have not seen one for some time. Even in wet areas once teeming with frogs and toads, it is becoming less and less easy to find those slimy, hopping and sometimes poisonous members of the animal kingdom. All over the world, and even in remote parts of Australia, frogs are losing the ecological battle for survival, and biologists are at a loss to explain their demise. Are amphibians simply oversensitive to changes in the ecosystem? Could it be that their rapid decline in numbers is signaling some coming environmental disaster for us all? This frightening scenario is in part the consequence of a dramatic increase over the last quarter century in the development of once natural areas of wet marshland; home not only to frogs but to all manner of wildlife. However, as yet, there are no obvious reasons why certain frog species are disappearing from rainforests in Australia that have barely been touched by human hand. The mystery is unsettling to say the least, for it is known that amphibian species are extremely sensitive to environmental variations in temperature and moisture levels. The danger is that planet Earth might not only lose a vital link in the ecological food chain (frogs keep populations of otherwise pestilent insects at manageable levels), but we might be increasing our output of air pollutants to levels that may have already become irreversible. Frogs could be inadvertently warning us of a catastrophe. An example of a species of frog that, at far as is known, has become extinct, is the platypus frog. Like the well-known Australian mammal it was named after, it exhibited some very strange behaviour; instead of giving birth to tadpoles in the water, it raised its young within its stomach. The baby frogs were actually born from out of their mother's mouth. Discovered in 1981, less than ten years later the frog had completely vanished from the crystal clear waters of Booloumba Creek near Queensland's Sunshine Coast. Unfortunately, this freak of nature is not the only frog species to have been lost in Australia. Since the 1970s, no less than eight others have suffered the same fate. One theory that seems to fit the facts concerns the depletion of the ozone layer, a well documented phenomenon which has led to a sharp increase in ultraviolet radiation levels. The ozone layer is meant to shield the Earth from UV rays, but increased radiation may be having a greater effect upon frog populations than previously believed. 
Another theory is that worldwide temperature increases are upsetting the breeding cycles of frogs.", "hypothesis": "There is convincing evidence that the ozone layer is being depleted.", "label": "e"} +{"uid": "id_596", "premise": "When was the last time you saw a frog? Chances are, if you live in a city, you have not seen one for some time. Even in wet areas once teeming with frogs and toads, it is becoming less and less easy to find those slimy, hopping and sometimes poisonous members of the animal kingdom. All over the world, and even in remote parts of Australia, frogs are losing the ecological battle for survival, and biologists are at a loss to explain their demise. Are amphibians simply oversensitive to changes in the ecosystem? Could it be that their rapid decline in numbers is signaling some coming environmental disaster for us all? This frightening scenario is in part the consequence of a dramatic increase over the last quarter century in the development of once natural areas of wet marshland; home not only to frogs but to all manner of wildlife. However, as yet, there are no obvious reasons why certain frog species are disappearing from rainforests in Australia that have barely been touched by human hand. The mystery is unsettling to say the least, for it is known that amphibian species are extremely sensitive to environmental variations in temperature and moisture levels. The danger is that planet Earth might not only lose a vital link in the ecological food chain (frogs keep populations of otherwise pestilent insects at manageable levels), but we might be increasing our output of air pollutants to levels that may have already become irreversible. Frogs could be inadvertently warning us of a catastrophe. An example of a species of frog that, at far as is known, has become extinct, is the platypus frog. Like the well-known Australian mammal it was named after, it exhibited some very strange behaviour; instead of giving birth to tadpoles in the water, it raised its young within its stomach. The baby frogs were actually born from out of their mother's mouth. Discovered in 1981, less than ten years later the frog had completely vanished from the crystal clear waters of Booloumba Creek near Queensland's Sunshine Coast. Unfortunately, this freak of nature is not the only frog species to have been lost in Australia. Since the 1970s, no less than eight others have suffered the same fate. One theory that seems to fit the facts concerns the depletion of the ozone layer, a well documented phenomenon which has led to a sharp increase in ultraviolet radiation levels. The ozone layer is meant to shield the Earth from UV rays, but increased radiation may be having a greater effect upon frog populations than previously believed. Another theory is that worldwide temperature increases are upsetting the breeding cycles of frogs.", "hypothesis": "It is a fact that frogs' breeding cycles are upset by worldwide in creases in temperature.", "label": "c"} +{"uid": "id_597", "premise": "When you think about it, kissing is strange and a bit icky. You share saliva with someone, sometimes for a prolonged period of time. One kiss could pass on 80 million bacteria, not all of them good. Yet everyone surely remembers their first kiss, in all its embarrassing or delightful detail, and kissing continues to play a big role in new romances. At least, it does in some societies. 
People in western societies may assume that romantic kissing is a universal human behaviour, but a new analysis suggests that less than half of all cultures actually do it. Kissing is also extremely rare in the animal kingdom. So whats really behind this odd behaviour? If it is useful, why dont all animals do it and all humans too? It turns out that the very fact that most animals dont kiss helps explain why some do. According to a new study of kissing preferences, which looked at 168 cultures from around the world, only 46% of cultures kiss in the romantic sense. Previous estimates had put the figure at 90%. The new study excluded parents kissing their children, and focused solely on romantic lip-on-lip action between couples. Many hunter-gatherer groups showed no evidence of kissing or desire to do so. Some even considered it revolting. The Mehinaku tribe in Brazil reportedly said it was gross. Given that hunter-gatherer groups are the closest modern humans get to living our ancestral lifestyle, our ancestors may not have been kissing either. The study overturns the belief that romantic kissing is a near-universal human behaviour, says lead author William Jankowiak of the University of Nevada in Las Vegas. Instead it seems to be a product of western societies, passed on from one generation to the next, he says. There is some historical evidence to back that up. Kissing as we do it today seems to be a fairly recent invention, says Rafael Wlodarski of the University of Oxford in the UK. He has trawled through records to find evidence of how kissing has changed. The oldest evidence of a kissing-type behaviour comes from Hindu Vedic Sanskrit texts from over 3,500 years ago. Kissing was described as inhaling each others soul. In contrast, Egyptian hieroglyphics picture people close to each other rather than pressing their lips together. So what is going on? Is kissing something we do naturally, but that some cultures have suppressed? Or is it something modern humans have invented? We can find some insight by looking at animals. Our closest relatives, chimpanzees and bonobos, do kiss. Primatologist Frans de Waal of Emory University in Atlanta, Georgia, has seen many instances of chimps kissing and hugging after conflict. For chimpanzees, kissing is a form of reconciliation. It is more common among males than females. In other words, it is not a romantic behaviour. Their cousins the bonobos kiss more often, and they often use tongues while doing so. Thats perhaps not surprising, because bonobos are highly sexual beings. When two humans meet, we might shake hands. Bonobos have sex: the so-called bonobo handshake. They also use sex for many other kinds of bonding. So their kisses are not particularly romantic, either. These two apes are exceptions. As far as we know, other animals do not kiss at all. They may nuzzle or touch their faces together, but even those that have lips dont share saliva or purse and smack their lips together. They dont need to. Take wild boars. Males produce a pungent smell that females find extremely attractive. The key chemical is a pheromone called androstenone that triggers the females desire to mate. From a females point of view this is a good thing, because males with the most androstonene are also the most fertile. Her sense of smell is so acute, she doesnt need to get close enough to kiss the male. The same is true of many other mammals. For example, female hamsters emit a pheromone that gets males very excited. 
Mice follow similar chemical traces to help them find partners that are genetically different, minimising the risk of accidental incest. Animals often release these pheromones in their urine. Their urine is much more pungent, says Wlodarski. If theres urine present in the environment they can assess compatibility through that. Its not just mammals that have a great sense of smell. A male black widow spider can smell pheromones produced by a female that tell him if she has recently eaten. To minimise the risk of being eaten, he will only mate with her if she is not hungry. The point is, animals do not need to get close to each other to smell out a good potential mate. On the other hand, humans have an atrocious sense of smell, so we benefit from getting close. Smell isnt the only cue we use to assess each others fitness, but studies have shown that it plays an important role in mate choice. A study published in 1995 showed that women, just like mice, prefer the smell of men who are genetically different from them. This makes sense, as mating with someone with different genes is likely to produce healthy offspring. Kissing is a great way to get close enough to sniff out your partners genes. In 2013, Wlodarski examined kissing preferences in detail. He asked several hundred people what was most important when kissing someone. How they smelled featured highly, and the importance of smell increased when women were most fertile. It turns out that men also make a version of the pheromone that female boars find attractive. It is present in male sweat, and when women are exposed to it their arousal levels increase slightly. Pheromones are a big part of how mammals chose a mate, says Wlodarski, and we share some of them. Weve inherited all of our biology from mammals, weve just added extra things through evolutionary time. On that view, kissing is just a culturally acceptable way to get close enough to another person to detect their pheromones. In some cultures, this sniffing behaviour turned into physical lip contact. Its hard to pinpoint when this happened, but both serve the same purpose, says Wlodarski. So if you want to find a perfect match, you could forego kissing and start smelling people instead. Youll find just as good a partner, and you wont get half as many germs. Be prepared for some funny looks, though.", "hypothesis": "Wlodarski surveyed several men to figure out the importance of kissing.", "label": "n"} +{"uid": "id_598", "premise": "When you think about it, kissing is strange and a bit icky. You share saliva with someone, sometimes for a prolonged period of time. One kiss could pass on 80 million bacteria, not all of them good. Yet everyone surely remembers their first kiss, in all its embarrassing or delightful detail, and kissing continues to play a big role in new romances. At least, it does in some societies. People in western societies may assume that romantic kissing is a universal human behaviour, but a new analysis suggests that less than half of all cultures actually do it. Kissing is also extremely rare in the animal kingdom. So whats really behind this odd behaviour? If it is useful, why dont all animals do it and all humans too? It turns out that the very fact that most animals dont kiss helps explain why some do. According to a new study of kissing preferences, which looked at 168 cultures from around the world, only 46% of cultures kiss in the romantic sense. Previous estimates had put the figure at 90%. 
The new study excluded parents kissing their children, and focused solely on romantic lip-on-lip action between couples. Many hunter-gatherer groups showed no evidence of kissing or desire to do so. Some even considered it revolting. The Mehinaku tribe in Brazil reportedly said it was gross. Given that hunter-gatherer groups are the closest modern humans get to living our ancestral lifestyle, our ancestors may not have been kissing either. The study overturns the belief that romantic kissing is a near-universal human behaviour, says lead author William Jankowiak of the University of Nevada in Las Vegas. Instead it seems to be a product of western societies, passed on from one generation to the next, he says. There is some historical evidence to back that up. Kissing as we do it today seems to be a fairly recent invention, says Rafael Wlodarski of the University of Oxford in the UK. He has trawled through records to find evidence of how kissing has changed. The oldest evidence of a kissing-type behaviour comes from Hindu Vedic Sanskrit texts from over 3,500 years ago. Kissing was described as inhaling each others soul. In contrast, Egyptian hieroglyphics picture people close to each other rather than pressing their lips together. So what is going on? Is kissing something we do naturally, but that some cultures have suppressed? Or is it something modern humans have invented? We can find some insight by looking at animals. Our closest relatives, chimpanzees and bonobos, do kiss. Primatologist Frans de Waal of Emory University in Atlanta, Georgia, has seen many instances of chimps kissing and hugging after conflict. For chimpanzees, kissing is a form of reconciliation. It is more common among males than females. In other words, it is not a romantic behaviour. Their cousins the bonobos kiss more often, and they often use tongues while doing so. Thats perhaps not surprising, because bonobos are highly sexual beings. When two humans meet, we might shake hands. Bonobos have sex: the so-called bonobo handshake. They also use sex for many other kinds of bonding. So their kisses are not particularly romantic, either. These two apes are exceptions. As far as we know, other animals do not kiss at all. They may nuzzle or touch their faces together, but even those that have lips dont share saliva or purse and smack their lips together. They dont need to. Take wild boars. Males produce a pungent smell that females find extremely attractive. The key chemical is a pheromone called androstenone that triggers the females desire to mate. From a females point of view this is a good thing, because males with the most androstonene are also the most fertile. Her sense of smell is so acute, she doesnt need to get close enough to kiss the male. The same is true of many other mammals. For example, female hamsters emit a pheromone that gets males very excited. Mice follow similar chemical traces to help them find partners that are genetically different, minimising the risk of accidental incest. Animals often release these pheromones in their urine. Their urine is much more pungent, says Wlodarski. If theres urine present in the environment they can assess compatibility through that. Its not just mammals that have a great sense of smell. A male black widow spider can smell pheromones produced by a female that tell him if she has recently eaten. To minimise the risk of being eaten, he will only mate with her if she is not hungry. 
The point is, animals do not need to get close to each other to smell out a good potential mate. On the other hand, humans have an atrocious sense of smell, so we benefit from getting close. Smell isnt the only cue we use to assess each others fitness, but studies have shown that it plays an important role in mate choice. A study published in 1995 showed that women, just like mice, prefer the smell of men who are genetically different from them. This makes sense, as mating with someone with different genes is likely to produce healthy offspring. Kissing is a great way to get close enough to sniff out your partners genes. In 2013, Wlodarski examined kissing preferences in detail. He asked several hundred people what was most important when kissing someone. How they smelled featured highly, and the importance of smell increased when women were most fertile. It turns out that men also make a version of the pheromone that female boars find attractive. It is present in male sweat, and when women are exposed to it their arousal levels increase slightly. Pheromones are a big part of how mammals chose a mate, says Wlodarski, and we share some of them. Weve inherited all of our biology from mammals, weve just added extra things through evolutionary time. On that view, kissing is just a culturally acceptable way to get close enough to another person to detect their pheromones. In some cultures, this sniffing behaviour turned into physical lip contact. Its hard to pinpoint when this happened, but both serve the same purpose, says Wlodarski. So if you want to find a perfect match, you could forego kissing and start smelling people instead. Youll find just as good a partner, and you wont get half as many germs. Be prepared for some funny looks, though.", "hypothesis": "Scent might be important in choosing your partner.", "label": "e"} +{"uid": "id_599", "premise": "When you think about it, kissing is strange and a bit icky. You share saliva with someone, sometimes for a prolonged period of time. One kiss could pass on 80 million bacteria, not all of them good. Yet everyone surely remembers their first kiss, in all its embarrassing or delightful detail, and kissing continues to play a big role in new romances. At least, it does in some societies. People in western societies may assume that romantic kissing is a universal human behaviour, but a new analysis suggests that less than half of all cultures actually do it. Kissing is also extremely rare in the animal kingdom. So whats really behind this odd behaviour? If it is useful, why dont all animals do it and all humans too? It turns out that the very fact that most animals dont kiss helps explain why some do. According to a new study of kissing preferences, which looked at 168 cultures from around the world, only 46% of cultures kiss in the romantic sense. Previous estimates had put the figure at 90%. The new study excluded parents kissing their children, and focused solely on romantic lip-on-lip action between couples. Many hunter-gatherer groups showed no evidence of kissing or desire to do so. Some even considered it revolting. The Mehinaku tribe in Brazil reportedly said it was gross. Given that hunter-gatherer groups are the closest modern humans get to living our ancestral lifestyle, our ancestors may not have been kissing either. The study overturns the belief that romantic kissing is a near-universal human behaviour, says lead author William Jankowiak of the University of Nevada in Las Vegas. 
Instead it seems to be a product of western societies, passed on from one generation to the next, he says. There is some historical evidence to back that up. Kissing as we do it today seems to be a fairly recent invention, says Rafael Wlodarski of the University of Oxford in the UK. He has trawled through records to find evidence of how kissing has changed. The oldest evidence of a kissing-type behaviour comes from Hindu Vedic Sanskrit texts from over 3,500 years ago. Kissing was described as inhaling each others soul. In contrast, Egyptian hieroglyphics picture people close to each other rather than pressing their lips together. So what is going on? Is kissing something we do naturally, but that some cultures have suppressed? Or is it something modern humans have invented? We can find some insight by looking at animals. Our closest relatives, chimpanzees and bonobos, do kiss. Primatologist Frans de Waal of Emory University in Atlanta, Georgia, has seen many instances of chimps kissing and hugging after conflict. For chimpanzees, kissing is a form of reconciliation. It is more common among males than females. In other words, it is not a romantic behaviour. Their cousins the bonobos kiss more often, and they often use tongues while doing so. Thats perhaps not surprising, because bonobos are highly sexual beings. When two humans meet, we might shake hands. Bonobos have sex: the so-called bonobo handshake. They also use sex for many other kinds of bonding. So their kisses are not particularly romantic, either. These two apes are exceptions. As far as we know, other animals do not kiss at all. They may nuzzle or touch their faces together, but even those that have lips dont share saliva or purse and smack their lips together. They dont need to. Take wild boars. Males produce a pungent smell that females find extremely attractive. The key chemical is a pheromone called androstenone that triggers the females desire to mate. From a females point of view this is a good thing, because males with the most androstonene are also the most fertile. Her sense of smell is so acute, she doesnt need to get close enough to kiss the male. The same is true of many other mammals. For example, female hamsters emit a pheromone that gets males very excited. Mice follow similar chemical traces to help them find partners that are genetically different, minimising the risk of accidental incest. Animals often release these pheromones in their urine. Their urine is much more pungent, says Wlodarski. If theres urine present in the environment they can assess compatibility through that. Its not just mammals that have a great sense of smell. A male black widow spider can smell pheromones produced by a female that tell him if she has recently eaten. To minimise the risk of being eaten, he will only mate with her if she is not hungry. The point is, animals do not need to get close to each other to smell out a good potential mate. On the other hand, humans have an atrocious sense of smell, so we benefit from getting close. Smell isnt the only cue we use to assess each others fitness, but studies have shown that it plays an important role in mate choice. A study published in 1995 showed that women, just like mice, prefer the smell of men who are genetically different from them. This makes sense, as mating with someone with different genes is likely to produce healthy offspring. Kissing is a great way to get close enough to sniff out your partners genes. In 2013, Wlodarski examined kissing preferences in detail. 
He asked several hundred people what was most important when kissing someone. How they smelled featured highly, and the importance of smell increased when women were most fertile. It turns out that men also make a version of the pheromone that female boars find attractive. It is present in male sweat, and when women are exposed to it their arousal levels increase slightly. Pheromones are a big part of how mammals chose a mate, says Wlodarski, and we share some of them. Weve inherited all of our biology from mammals, weve just added extra things through evolutionary time. On that view, kissing is just a culturally acceptable way to get close enough to another person to detect their pheromones. In some cultures, this sniffing behaviour turned into physical lip contact. Its hard to pinpoint when this happened, but both serve the same purpose, says Wlodarski. So if you want to find a perfect match, you could forego kissing and start smelling people instead. Youll find just as good a partner, and you wont get half as many germs. Be prepared for some funny looks, though.", "hypothesis": "Both Easter and Wester societies presume that kissing is essential for any part of the world.", "label": "c"} +{"uid": "id_600", "premise": "When you think about it, kissing is strange and a bit icky. You share saliva with someone, sometimes for a prolonged period of time. One kiss could pass on 80 million bacteria, not all of them good. Yet everyone surely remembers their first kiss, in all its embarrassing or delightful detail, and kissing continues to play a big role in new romances. At least, it does in some societies. People in western societies may assume that romantic kissing is a universal human behaviour, but a new analysis suggests that less than half of all cultures actually do it. Kissing is also extremely rare in the animal kingdom. So whats really behind this odd behaviour? If it is useful, why dont all animals do it and all humans too? It turns out that the very fact that most animals dont kiss helps explain why some do. According to a new study of kissing preferences, which looked at 168 cultures from around the world, only 46% of cultures kiss in the romantic sense. Previous estimates had put the figure at 90%. The new study excluded parents kissing their children, and focused solely on romantic lip-on-lip action between couples. Many hunter-gatherer groups showed no evidence of kissing or desire to do so. Some even considered it revolting. The Mehinaku tribe in Brazil reportedly said it was gross. Given that hunter-gatherer groups are the closest modern humans get to living our ancestral lifestyle, our ancestors may not have been kissing either. The study overturns the belief that romantic kissing is a near-universal human behaviour, says lead author William Jankowiak of the University of Nevada in Las Vegas. Instead it seems to be a product of western societies, passed on from one generation to the next, he says. There is some historical evidence to back that up. Kissing as we do it today seems to be a fairly recent invention, says Rafael Wlodarski of the University of Oxford in the UK. He has trawled through records to find evidence of how kissing has changed. The oldest evidence of a kissing-type behaviour comes from Hindu Vedic Sanskrit texts from over 3,500 years ago. Kissing was described as inhaling each others soul. In contrast, Egyptian hieroglyphics picture people close to each other rather than pressing their lips together. So what is going on? 
Is kissing something we do naturally, but that some cultures have suppressed? Or is it something modern humans have invented? We can find some insight by looking at animals. Our closest relatives, chimpanzees and bonobos, do kiss. Primatologist Frans de Waal of Emory University in Atlanta, Georgia, has seen many instances of chimps kissing and hugging after conflict. For chimpanzees, kissing is a form of reconciliation. It is more common among males than females. In other words, it is not a romantic behaviour. Their cousins the bonobos kiss more often, and they often use tongues while doing so. Thats perhaps not surprising, because bonobos are highly sexual beings. When two humans meet, we might shake hands. Bonobos have sex: the so-called bonobo handshake. They also use sex for many other kinds of bonding. So their kisses are not particularly romantic, either. These two apes are exceptions. As far as we know, other animals do not kiss at all. They may nuzzle or touch their faces together, but even those that have lips dont share saliva or purse and smack their lips together. They dont need to. Take wild boars. Males produce a pungent smell that females find extremely attractive. The key chemical is a pheromone called androstenone that triggers the females desire to mate. From a females point of view this is a good thing, because males with the most androstonene are also the most fertile. Her sense of smell is so acute, she doesnt need to get close enough to kiss the male. The same is true of many other mammals. For example, female hamsters emit a pheromone that gets males very excited. Mice follow similar chemical traces to help them find partners that are genetically different, minimising the risk of accidental incest. Animals often release these pheromones in their urine. Their urine is much more pungent, says Wlodarski. If theres urine present in the environment they can assess compatibility through that. Its not just mammals that have a great sense of smell. A male black widow spider can smell pheromones produced by a female that tell him if she has recently eaten. To minimise the risk of being eaten, he will only mate with her if she is not hungry. The point is, animals do not need to get close to each other to smell out a good potential mate. On the other hand, humans have an atrocious sense of smell, so we benefit from getting close. Smell isnt the only cue we use to assess each others fitness, but studies have shown that it plays an important role in mate choice. A study published in 1995 showed that women, just like mice, prefer the smell of men who are genetically different from them. This makes sense, as mating with someone with different genes is likely to produce healthy offspring. Kissing is a great way to get close enough to sniff out your partners genes. In 2013, Wlodarski examined kissing preferences in detail. He asked several hundred people what was most important when kissing someone. How they smelled featured highly, and the importance of smell increased when women were most fertile. It turns out that men also make a version of the pheromone that female boars find attractive. It is present in male sweat, and when women are exposed to it their arousal levels increase slightly. Pheromones are a big part of how mammals chose a mate, says Wlodarski, and we share some of them. Weve inherited all of our biology from mammals, weve just added extra things through evolutionary time. 
On that view, kissing is just a culturally acceptable way to get close enough to another person to detect their pheromones. In some cultures, this sniffing behaviour turned into physical lip contact. Its hard to pinpoint when this happened, but both serve the same purpose, says Wlodarski. So if you want to find a perfect match, you could forego kissing and start smelling people instead. Youll find just as good a partner, and you wont get half as many germs. Be prepared for some funny looks, though.", "hypothesis": "Our ancestors were not likely to kiss.", "label": "e"} +{"uid": "id_601", "premise": "When you think about it, kissing is strange and a bit icky. You share saliva with someone, sometimes for a prolonged period of time. One kiss could pass on 80 million bacteria, not all of them good. Yet everyone surely remembers their first kiss, in all its embarrassing or delightful detail, and kissing continues to play a big role in new romances. At least, it does in some societies. People in western societies may assume that romantic kissing is a universal human behaviour, but a new analysis suggests that less than half of all cultures actually do it. Kissing is also extremely rare in the animal kingdom. So whats really behind this odd behaviour? If it is useful, why dont all animals do it and all humans too? It turns out that the very fact that most animals dont kiss helps explain why some do. According to a new study of kissing preferences, which looked at 168 cultures from around the world, only 46% of cultures kiss in the romantic sense. Previous estimates had put the figure at 90%. The new study excluded parents kissing their children, and focused solely on romantic lip-on-lip action between couples. Many hunter-gatherer groups showed no evidence of kissing or desire to do so. Some even considered it revolting. The Mehinaku tribe in Brazil reportedly said it was gross. Given that hunter-gatherer groups are the closest modern humans get to living our ancestral lifestyle, our ancestors may not have been kissing either. The study overturns the belief that romantic kissing is a near-universal human behaviour, says lead author William Jankowiak of the University of Nevada in Las Vegas. Instead it seems to be a product of western societies, passed on from one generation to the next, he says. There is some historical evidence to back that up. Kissing as we do it today seems to be a fairly recent invention, says Rafael Wlodarski of the University of Oxford in the UK. He has trawled through records to find evidence of how kissing has changed. The oldest evidence of a kissing-type behaviour comes from Hindu Vedic Sanskrit texts from over 3,500 years ago. Kissing was described as inhaling each others soul. In contrast, Egyptian hieroglyphics picture people close to each other rather than pressing their lips together. So what is going on? Is kissing something we do naturally, but that some cultures have suppressed? Or is it something modern humans have invented? We can find some insight by looking at animals. Our closest relatives, chimpanzees and bonobos, do kiss. Primatologist Frans de Waal of Emory University in Atlanta, Georgia, has seen many instances of chimps kissing and hugging after conflict. For chimpanzees, kissing is a form of reconciliation. It is more common among males than females. In other words, it is not a romantic behaviour. Their cousins the bonobos kiss more often, and they often use tongues while doing so. 
Thats perhaps not surprising, because bonobos are highly sexual beings. When two humans meet, we might shake hands. Bonobos have sex: the so-called bonobo handshake. They also use sex for many other kinds of bonding. So their kisses are not particularly romantic, either. These two apes are exceptions. As far as we know, other animals do not kiss at all. They may nuzzle or touch their faces together, but even those that have lips dont share saliva or purse and smack their lips together. They dont need to. Take wild boars. Males produce a pungent smell that females find extremely attractive. The key chemical is a pheromone called androstenone that triggers the females desire to mate. From a females point of view this is a good thing, because males with the most androstonene are also the most fertile. Her sense of smell is so acute, she doesnt need to get close enough to kiss the male. The same is true of many other mammals. For example, female hamsters emit a pheromone that gets males very excited. Mice follow similar chemical traces to help them find partners that are genetically different, minimising the risk of accidental incest. Animals often release these pheromones in their urine. Their urine is much more pungent, says Wlodarski. If theres urine present in the environment they can assess compatibility through that. Its not just mammals that have a great sense of smell. A male black widow spider can smell pheromones produced by a female that tell him if she has recently eaten. To minimise the risk of being eaten, he will only mate with her if she is not hungry. The point is, animals do not need to get close to each other to smell out a good potential mate. On the other hand, humans have an atrocious sense of smell, so we benefit from getting close. Smell isnt the only cue we use to assess each others fitness, but studies have shown that it plays an important role in mate choice. A study published in 1995 showed that women, just like mice, prefer the smell of men who are genetically different from them. This makes sense, as mating with someone with different genes is likely to produce healthy offspring. Kissing is a great way to get close enough to sniff out your partners genes. In 2013, Wlodarski examined kissing preferences in detail. He asked several hundred people what was most important when kissing someone. How they smelled featured highly, and the importance of smell increased when women were most fertile. It turns out that men also make a version of the pheromone that female boars find attractive. It is present in male sweat, and when women are exposed to it their arousal levels increase slightly. Pheromones are a big part of how mammals chose a mate, says Wlodarski, and we share some of them. Weve inherited all of our biology from mammals, weve just added extra things through evolutionary time. On that view, kissing is just a culturally acceptable way to get close enough to another person to detect their pheromones. In some cultures, this sniffing behaviour turned into physical lip contact. Its hard to pinpoint when this happened, but both serve the same purpose, says Wlodarski. So if you want to find a perfect match, you could forego kissing and start smelling people instead. Youll find just as good a partner, and you wont get half as many germs. Be prepared for some funny looks, though.", "hypothesis": "Chimpanzees and bonbons kiss not for the romance.", "label": "e"} +{"uid": "id_602", "premise": "When you think about it, kissing is strange and a bit icky. 
You share saliva with someone, sometimes for a prolonged period of time. One kiss could pass on 80 million bacteria, not all of them good. Yet everyone surely remembers their first kiss, in all its embarrassing or delightful detail, and kissing continues to play a big role in new romances. At least, it does in some societies. People in western societies may assume that romantic kissing is a universal human behaviour, but a new analysis suggests that less than half of all cultures actually do it. Kissing is also extremely rare in the animal kingdom. So whats really behind this odd behaviour? If it is useful, why dont all animals do it and all humans too? It turns out that the very fact that most animals dont kiss helps explain why some do. According to a new study of kissing preferences, which looked at 168 cultures from around the world, only 46% of cultures kiss in the romantic sense. Previous estimates had put the figure at 90%. The new study excluded parents kissing their children, and focused solely on romantic lip-on-lip action between couples. Many hunter-gatherer groups showed no evidence of kissing or desire to do so. Some even considered it revolting. The Mehinaku tribe in Brazil reportedly said it was gross. Given that hunter-gatherer groups are the closest modern humans get to living our ancestral lifestyle, our ancestors may not have been kissing either. The study overturns the belief that romantic kissing is a near-universal human behaviour, says lead author William Jankowiak of the University of Nevada in Las Vegas. Instead it seems to be a product of western societies, passed on from one generation to the next, he says. There is some historical evidence to back that up. Kissing as we do it today seems to be a fairly recent invention, says Rafael Wlodarski of the University of Oxford in the UK. He has trawled through records to find evidence of how kissing has changed. The oldest evidence of a kissing-type behaviour comes from Hindu Vedic Sanskrit texts from over 3,500 years ago. Kissing was described as inhaling each others soul. In contrast, Egyptian hieroglyphics picture people close to each other rather than pressing their lips together. So what is going on? Is kissing something we do naturally, but that some cultures have suppressed? Or is it something modern humans have invented? We can find some insight by looking at animals. Our closest relatives, chimpanzees and bonobos, do kiss. Primatologist Frans de Waal of Emory University in Atlanta, Georgia, has seen many instances of chimps kissing and hugging after conflict. For chimpanzees, kissing is a form of reconciliation. It is more common among males than females. In other words, it is not a romantic behaviour. Their cousins the bonobos kiss more often, and they often use tongues while doing so. Thats perhaps not surprising, because bonobos are highly sexual beings. When two humans meet, we might shake hands. Bonobos have sex: the so-called bonobo handshake. They also use sex for many other kinds of bonding. So their kisses are not particularly romantic, either. These two apes are exceptions. As far as we know, other animals do not kiss at all. They may nuzzle or touch their faces together, but even those that have lips dont share saliva or purse and smack their lips together. They dont need to. Take wild boars. Males produce a pungent smell that females find extremely attractive. The key chemical is a pheromone called androstenone that triggers the females desire to mate. 
From a females point of view this is a good thing, because males with the most androstonene are also the most fertile. Her sense of smell is so acute, she doesnt need to get close enough to kiss the male. The same is true of many other mammals. For example, female hamsters emit a pheromone that gets males very excited. Mice follow similar chemical traces to help them find partners that are genetically different, minimising the risk of accidental incest. Animals often release these pheromones in their urine. Their urine is much more pungent, says Wlodarski. If theres urine present in the environment they can assess compatibility through that. Its not just mammals that have a great sense of smell. A male black widow spider can smell pheromones produced by a female that tell him if she has recently eaten. To minimise the risk of being eaten, he will only mate with her if she is not hungry. The point is, animals do not need to get close to each other to smell out a good potential mate. On the other hand, humans have an atrocious sense of smell, so we benefit from getting close. Smell isnt the only cue we use to assess each others fitness, but studies have shown that it plays an important role in mate choice. A study published in 1995 showed that women, just like mice, prefer the smell of men who are genetically different from them. This makes sense, as mating with someone with different genes is likely to produce healthy offspring. Kissing is a great way to get close enough to sniff out your partners genes. In 2013, Wlodarski examined kissing preferences in detail. He asked several hundred people what was most important when kissing someone. How they smelled featured highly, and the importance of smell increased when women were most fertile. It turns out that men also make a version of the pheromone that female boars find attractive. It is present in male sweat, and when women are exposed to it their arousal levels increase slightly. Pheromones are a big part of how mammals chose a mate, says Wlodarski, and we share some of them. Weve inherited all of our biology from mammals, weve just added extra things through evolutionary time. On that view, kissing is just a culturally acceptable way to get close enough to another person to detect their pheromones. In some cultures, this sniffing behaviour turned into physical lip contact. Its hard to pinpoint when this happened, but both serve the same purpose, says Wlodarski. So if you want to find a perfect match, you could forego kissing and start smelling people instead. Youll find just as good a partner, and you wont get half as many germs. Be prepared for some funny looks, though.", "hypothesis": "There are other animal, rather than apes, that kiss.", "label": "c"} +{"uid": "id_603", "premise": "When you think about it, kissing is strange and a bit icky. You share saliva with someone, sometimes for a prolonged period of time. One kiss could pass on 80 million bacteria, not all of them good. Yet everyone surely remembers their first kiss, in all its embarrassing or delightful detail, and kissing continues to play a big role in new romances. At least, it does in some societies. People in western societies may assume that romantic kissing is a universal human behaviour, but a new analysis suggests that less than half of all cultures actually do it. Kissing is also extremely rare in the animal kingdom. So whats really behind this odd behaviour? If it is useful, why dont all animals do it and all humans too? 
It turns out that the very fact that most animals dont kiss helps explain why some do. According to a new study of kissing preferences, which looked at 168 cultures from around the world, only 46% of cultures kiss in the romantic sense. Previous estimates had put the figure at 90%. The new study excluded parents kissing their children, and focused solely on romantic lip-on-lip action between couples. Many hunter-gatherer groups showed no evidence of kissing or desire to do so. Some even considered it revolting. The Mehinaku tribe in Brazil reportedly said it was gross. Given that hunter-gatherer groups are the closest modern humans get to living our ancestral lifestyle, our ancestors may not have been kissing either. The study overturns the belief that romantic kissing is a near-universal human behaviour, says lead author William Jankowiak of the University of Nevada in Las Vegas. Instead it seems to be a product of western societies, passed on from one generation to the next, he says. There is some historical evidence to back that up. Kissing as we do it today seems to be a fairly recent invention, says Rafael Wlodarski of the University of Oxford in the UK. He has trawled through records to find evidence of how kissing has changed. The oldest evidence of a kissing-type behaviour comes from Hindu Vedic Sanskrit texts from over 3,500 years ago. Kissing was described as inhaling each others soul. In contrast, Egyptian hieroglyphics picture people close to each other rather than pressing their lips together. So what is going on? Is kissing something we do naturally, but that some cultures have suppressed? Or is it something modern humans have invented? We can find some insight by looking at animals. Our closest relatives, chimpanzees and bonobos, do kiss. Primatologist Frans de Waal of Emory University in Atlanta, Georgia, has seen many instances of chimps kissing and hugging after conflict. For chimpanzees, kissing is a form of reconciliation. It is more common among males than females. In other words, it is not a romantic behaviour. Their cousins the bonobos kiss more often, and they often use tongues while doing so. Thats perhaps not surprising, because bonobos are highly sexual beings. When two humans meet, we might shake hands. Bonobos have sex: the so-called bonobo handshake. They also use sex for many other kinds of bonding. So their kisses are not particularly romantic, either. These two apes are exceptions. As far as we know, other animals do not kiss at all. They may nuzzle or touch their faces together, but even those that have lips dont share saliva or purse and smack their lips together. They dont need to. Take wild boars. Males produce a pungent smell that females find extremely attractive. The key chemical is a pheromone called androstenone that triggers the females desire to mate. From a females point of view this is a good thing, because males with the most androstonene are also the most fertile. Her sense of smell is so acute, she doesnt need to get close enough to kiss the male. The same is true of many other mammals. For example, female hamsters emit a pheromone that gets males very excited. Mice follow similar chemical traces to help them find partners that are genetically different, minimising the risk of accidental incest. Animals often release these pheromones in their urine. Their urine is much more pungent, says Wlodarski. If theres urine present in the environment they can assess compatibility through that. Its not just mammals that have a great sense of smell. 
A male black widow spider can smell pheromones produced by a female that tell him if she has recently eaten. To minimise the risk of being eaten, he will only mate with her if she is not hungry. The point is, animals do not need to get close to each other to smell out a good potential mate. On the other hand, humans have an atrocious sense of smell, so we benefit from getting close. Smell isnt the only cue we use to assess each others fitness, but studies have shown that it plays an important role in mate choice. A study published in 1995 showed that women, just like mice, prefer the smell of men who are genetically different from them. This makes sense, as mating with someone with different genes is likely to produce healthy offspring. Kissing is a great way to get close enough to sniff out your partners genes. In 2013, Wlodarski examined kissing preferences in detail. He asked several hundred people what was most important when kissing someone. How they smelled featured highly, and the importance of smell increased when women were most fertile. It turns out that men also make a version of the pheromone that female boars find attractive. It is present in male sweat, and when women are exposed to it their arousal levels increase slightly. Pheromones are a big part of how mammals chose a mate, says Wlodarski, and we share some of them. Weve inherited all of our biology from mammals, weve just added extra things through evolutionary time. On that view, kissing is just a culturally acceptable way to get close enough to another person to detect their pheromones. In some cultures, this sniffing behaviour turned into physical lip contact. Its hard to pinpoint when this happened, but both serve the same purpose, says Wlodarski. So if you want to find a perfect match, you could forego kissing and start smelling people instead. Youll find just as good a partner, and you wont get half as many germs. Be prepared for some funny looks, though.", "hypothesis": "According to a Hindu text, kissing is a means to exchange souls.", "label": "e"} +{"uid": "id_604", "premise": "When you think about it, kissing is strange and a bit icky. You share saliva with someone, sometimes for a prolonged period of time. One kiss could pass on 80 million bacteria, not all of them good. Yet everyone surely remembers their first kiss, in all its embarrassing or delightful detail, and kissing continues to play a big role in new romances. At least, it does in some societies. People in western societies may assume that romantic kissing is a universal human behaviour, but a new analysis suggests that less than half of all cultures actually do it. Kissing is also extremely rare in the animal kingdom. So whats really behind this odd behaviour? If it is useful, why dont all animals do it and all humans too? It turns out that the very fact that most animals dont kiss helps explain why some do. According to a new study of kissing preferences, which looked at 168 cultures from around the world, only 46% of cultures kiss in the romantic sense. Previous estimates had put the figure at 90%. The new study excluded parents kissing their children, and focused solely on romantic lip-on-lip action between couples. Many hunter-gatherer groups showed no evidence of kissing or desire to do so. Some even considered it revolting. The Mehinaku tribe in Brazil reportedly said it was gross. Given that hunter-gatherer groups are the closest modern humans get to living our ancestral lifestyle, our ancestors may not have been kissing either. 
The study overturns the belief that romantic kissing is a near-universal human behaviour, says lead author William Jankowiak of the University of Nevada in Las Vegas. Instead it seems to be a product of western societies, passed on from one generation to the next, he says. There is some historical evidence to back that up. Kissing as we do it today seems to be a fairly recent invention, says Rafael Wlodarski of the University of Oxford in the UK. He has trawled through records to find evidence of how kissing has changed. The oldest evidence of a kissing-type behaviour comes from Hindu Vedic Sanskrit texts from over 3,500 years ago. Kissing was described as inhaling each others soul. In contrast, Egyptian hieroglyphics picture people close to each other rather than pressing their lips together. So what is going on? Is kissing something we do naturally, but that some cultures have suppressed? Or is it something modern humans have invented? We can find some insight by looking at animals. Our closest relatives, chimpanzees and bonobos, do kiss. Primatologist Frans de Waal of Emory University in Atlanta, Georgia, has seen many instances of chimps kissing and hugging after conflict. For chimpanzees, kissing is a form of reconciliation. It is more common among males than females. In other words, it is not a romantic behaviour. Their cousins the bonobos kiss more often, and they often use tongues while doing so. Thats perhaps not surprising, because bonobos are highly sexual beings. When two humans meet, we might shake hands. Bonobos have sex: the so-called bonobo handshake. They also use sex for many other kinds of bonding. So their kisses are not particularly romantic, either. These two apes are exceptions. As far as we know, other animals do not kiss at all. They may nuzzle or touch their faces together, but even those that have lips dont share saliva or purse and smack their lips together. They dont need to. Take wild boars. Males produce a pungent smell that females find extremely attractive. The key chemical is a pheromone called androstenone that triggers the females desire to mate. From a females point of view this is a good thing, because males with the most androstonene are also the most fertile. Her sense of smell is so acute, she doesnt need to get close enough to kiss the male. The same is true of many other mammals. For example, female hamsters emit a pheromone that gets males very excited. Mice follow similar chemical traces to help them find partners that are genetically different, minimising the risk of accidental incest. Animals often release these pheromones in their urine. Their urine is much more pungent, says Wlodarski. If theres urine present in the environment they can assess compatibility through that. Its not just mammals that have a great sense of smell. A male black widow spider can smell pheromones produced by a female that tell him if she has recently eaten. To minimise the risk of being eaten, he will only mate with her if she is not hungry. The point is, animals do not need to get close to each other to smell out a good potential mate. On the other hand, humans have an atrocious sense of smell, so we benefit from getting close. Smell isnt the only cue we use to assess each others fitness, but studies have shown that it plays an important role in mate choice. A study published in 1995 showed that women, just like mice, prefer the smell of men who are genetically different from them. 
This makes sense, as mating with someone with different genes is likely to produce healthy offspring. Kissing is a great way to get close enough to sniff out your partner's genes. In 2013, Wlodarski examined kissing preferences in detail. He asked several hundred people what was most important when kissing someone. How they smelled featured highly, and the importance of smell increased when women were most fertile. It turns out that men also make a version of the pheromone that female boars find attractive. It is present in male sweat, and when women are exposed to it their arousal levels increase slightly. Pheromones are a big part of how mammals choose a mate, says Wlodarski, and we share some of them. We've inherited all of our biology from mammals, we've just added extra things through evolutionary time. On that view, kissing is just a culturally acceptable way to get close enough to another person to detect their pheromones. In some cultures, this sniffing behaviour turned into physical lip contact. It's hard to pinpoint when this happened, but both serve the same purpose, says Wlodarski. So if you want to find a perfect match, you could forego kissing and start smelling people instead. You'll find just as good a partner, and you won't get half as many germs. Be prepared for some funny looks, though.", "hypothesis": "The majority of the microorganisms passed by kissing are beneficial for the body.", "label": "n"} +{"uid": "id_605", "premise": "Whereas invertebrates have an external exoskeleton, humans and other vertebrates have an internal endoskeleton. The human endoskeleton is comprised of cartilage and the body's 206 bones, which are connected to each other by ligaments. As well as protecting and supporting the body's internal organs, the human endoskeleton also works in conjunction with muscles, joints and the nervous system to enable movement. Joints occur between bones, making the skeleton flexible by acting as hinges or pivots. Tendons attach muscles to bones and contract in response to a stimulus from the body's nervous system. Those muscles that are under conscious control, the skeletal muscles, act by pulling against the bones of the skeleton.", "hypothesis": "Bones contract skeletal muscles in response to signals from the nervous system.", "label": "c"} +{"uid": "id_606", "premise": "Whereas invertebrates have an external exoskeleton, humans and other vertebrates have an internal endoskeleton. The human endoskeleton is comprised of cartilage and the body's 206 bones, which are connected to each other by ligaments. As well as protecting and supporting the body's internal organs, the human endoskeleton also works in conjunction with muscles, joints and the nervous system to enable movement. Joints occur between bones, making the skeleton flexible by acting as hinges or pivots. Tendons attach muscles to bones and contract in response to a stimulus from the body's nervous system. Those muscles that are under conscious control, the skeletal muscles, act by pulling against the bones of the skeleton.", "hypothesis": "Unlike invertebrates, humans have an internal exoskeleton.", "label": "c"} +{"uid": "id_607", "premise": "Whereas invertebrates have an external exoskeleton, humans and other vertebrates have an internal endoskeleton. The human endoskeleton is comprised of cartilage and the body's 206 bones, which are connected to each other by ligaments. 
As well as protecting and supporting the body's internal organs, the human endoskeleton also works in conjunction with muscles, joints and the nervous system to enable movement. Joints occur between bones, making the skeleton flexible by acting as hinges or pivots. Tendons attach muscles to bones and contract in response to a stimulus from the body's nervous system. Those muscles that are under conscious control, the skeletal muscles, act by pulling against the bones of the skeleton.", "hypothesis": "The human skeleton is comprised mainly of bone.", "label": "n"} +{"uid": "id_608", "premise": "Whereas invertebrates have an external exoskeleton, humans and other vertebrates have an internal endoskeleton. The human endoskeleton is comprised of cartilage and the body's 206 bones, which are connected to each other by ligaments. As well as protecting and supporting the body's internal organs, the human endoskeleton also works in conjunction with muscles, joints and the nervous system to enable movement. Joints occur between bones, making the skeleton flexible by acting as hinges or pivots. Tendons attach muscles to bones and contract in response to a stimulus from the body's nervous system. Those muscles that are under conscious control, the skeletal muscles, act by pulling against the bones of the skeleton.", "hypothesis": "The human endoskeleton provides connection points for the body's muscles.", "label": "e"} +{"uid": "id_609", "premise": "Whereas invertebrates have an external exoskeleton, humans and other vertebrates have an internal endoskeleton. The human endoskeleton is comprised of cartilage and the body's 206 bones, which are connected to each other by ligaments. As well as protecting and supporting the body's internal organs, the human endoskeleton also works in conjunction with muscles, joints and the nervous system to enable movement. Joints occur between bones, making the skeleton flexible by acting as hinges or pivots. Tendons attach muscles to bones and contract in response to a stimulus from the body's nervous system. Those muscles that are under conscious control, the skeletal muscles, act by pulling against the bones of the skeleton.", "hypothesis": "Physical activity requires the muscles and bones to synchronise.", "label": "e"} +{"uid": "id_610", "premise": "Which of the following can be concluded from the above statement?", "hypothesis": "Indoor air is 10 to 30 times more polluted than outdoor air, and pollutants include dust mites, bacteria, fungi, viruses and pollen.", "label": "c"} +{"uid": "id_611", "premise": "Which of the following can be concluded from the above statement?", "hypothesis": "The highest demand for air purifiers is from Delhi.", "label": "e"} +{"uid": "id_612", "premise": "Which of the following can be concluded from the above statement?", "hypothesis": "The sales of air purifiers increase due to a rise in air pollution.", "label": "e"} +{"uid": "id_613", "premise": "Which of the following can be concluded from the above statement?", "hypothesis": "Delhi has been marked as the most polluted city in the country by several reputed bodies.", "label": "c"} +{"uid": "id_614", "premise": "While most forms of discrimination in the workplace have been outlawed, discrimination or bias against some employees seeking career advancement still happens. This discrimination is both unwritten and unacknowledged. 
A glass ceiling is the term used to describe this type of discrimination and refers to the invisible barrier that people hit when they try to progress beyond a certain level in some businesses and organisations. Originally coined to illustrate the hidden use of sexual discrimination against women in professional environments, it is now used to describe any form of discrimination, such as racism or ageism, which prevents qualified or experienced employees reaching even basic levels within their organisation. Some reports and studies now suggest that change is happening and that cracks are beginning to appear in the glass. The studies also claim however that change is happening slowly and that the cracks are small.", "hypothesis": "A glass ceiling can prevent qualified people from getting to the top of their field.", "label": "e"} +{"uid": "id_615", "premise": "While most forms of discrimination in the workplace have been outlawed, discrimination or bias against some employees seeking career advancement still happens. This discrimination is both unwritten and unacknowledged. A glass ceiling is the term used to describe this type of discrimination and refers to the invisible barrier that people hit when they try to progress beyond a certain level in some businesses and organisations. Originally coined to illustrate the hidden use of sexual discrimination against women in professional environments, it is now used to describe any form of discrimination, such as racism or ageism, which prevents qualified or experienced employees reaching even basic levels within their organisation. Some reports and studies now suggest that change is happening and that cracks are beginning to appear in the glass. The studies also claim however that change is happening slowly and that the cracks are small.", "hypothesis": "Males are less likely to experience the glass ceiling effect than females.", "label": "n"} +{"uid": "id_616", "premise": "While most forms of discrimination in the workplace have been outlawed, discrimination or bias against some employees seeking career advancement still happens. This discrimination is both unwritten and unacknowledged. A glass ceiling is the term used to describe this type of discrimination and refers to the invisible barrier that people hit when they try to progress beyond a certain level in some businesses and organisations. Originally coined to illustrate the hidden use of sexual discrimination against women in professional environments, it is now used to describe any form of discrimination, such as racism or ageism, which prevents qualified or experienced employees reaching even basic levels within their organisation. Some reports and studies now suggest that change is happening and that cracks are beginning to appear in the glass. The studies also claim however that change is happening slowly and that the cracks are small.", "hypothesis": "There is no legislation covering discrimination at work so employers have to develop their own ways of preventing it.", "label": "c"} +{"uid": "id_617", "premise": "Whilst Mr Black, Mr Saul and Mr Hardy travel to work by bus, Mr Jones and Mr Peters travel by train. Mr Black and Mr Saul also walk part of the way. Mr Saul, Mr Peters and Mr Hardy have season tickets.", "hypothesis": "one people have neither a season ticket nor walk", "label": "e"} +{"uid": "id_618", "premise": "Whilst Mr Black, Mr Saul and Mr Hardy travel to work by bus, Mr Jones and Mr Peters travel by train. Mr Black and Mr Saul also walk part of the way. 
Mr Saul, Mr Peters and Mr Hardy have season tickets.", "hypothesis": "Mr Black travels by bus, but does not have a season ticket", "label": "e"} +{"uid": "id_619", "premise": "Whilst Mr Black, Mr Saul and Mr Hardy travel to work by bus, Mr Jones and Mr Peters travel by train. Mr Black and Mr Saul also walk part of the way. Mr Saul, Mr Peters and Mr Hardy have season tickets.", "hypothesis": "Mr Saul has a season ticket, but also walks", "label": "e"} +{"uid": "id_620", "premise": "Whilst Mr Black, Mr Saul and Mr Hardy travel to work by bus, Mr Jones and Mr Peters travel by train. Mr Black and Mr Saul also walk part of the way. Mr Saul, Mr Peters and Mr Hardy have season tickets.", "hypothesis": "Mr Peters lives closest to a bus stop", "label": "n"} +{"uid": "id_621", "premise": "Whilst Mr Black, Mr Saul and Mr Hardy travel to work by bus, Mr Jones and Mr Peters travel by train. Mr Black and Mr Saul also walk part of the way. Mr Saul, Mr Peters and Mr Hardy have season tickets.", "hypothesis": "Mr Jones does not have a season ticket and does not walk", "label": "e"} +{"uid": "id_622", "premise": "Whilst having similar effects on employees, there tend to be major differences between a merger and an acquisition. In an acquisition, power is substantially assumed by the new parent company. Change is often swift and brutal as the acquirer imposes its own control systems and financial restraints. Parties to a merger are likely to be evenly matched in terms of size, and the power and cultural dynamics of the combination are more ambiguous; integration is a more drawn-out process. During an acquisition, there is often more overt conflict and resistance and a sense of powerlessness. In mergers, because of the prolonged period between the initial announcement and full integration, uncertainty and anxiety continue for a much longer time as the organization remains in a state of limbo.", "hypothesis": "Mergers yield a shorter period of anxiety and uncertainty amongst employees.", "label": "c"} +{"uid": "id_623", "premise": "Whilst having similar effects on employees, there tend to be major differences between a merger and an acquisition. In an acquisition, power is substantially assumed by the new parent company. Change is often swift and brutal as the acquirer imposes its own control systems and financial restraints. Parties to a merger are likely to be evenly matched in terms of size, and the power and cultural dynamics of the combination are more ambiguous; integration is a more drawn-out process. During an acquisition, there is often more overt conflict and resistance and a sense of powerlessness. In mergers, because of the prolonged period between the initial announcement and full integration, uncertainty and anxiety continue for a much longer time as the organization remains in a state of limbo.", "hypothesis": "Mergers and acquisitions tend to have distinctly different impacts on employees.", "label": "n"} +{"uid": "id_624", "premise": "Whilst having similar effects on employees, there tend to be major differences between a merger and an acquisition. In an acquisition, power is substantially assumed by the new parent company. Change is often swift and brutal as the acquirer imposes its own control systems and financial restraints. Parties to a merger are likely to be evenly matched in terms of size, and the power and cultural dynamics of the combination are more ambiguous; integration is a more drawn-out process. 
During an acquisition, there is often more overt conflict and resistance and a sense of powerlessness. In mergers, because of the prolonged period between the initial announcement and full integration, uncertainty and anxiety continue for a much longer time as the organization remains in a state of limbo.", "hypothesis": "There tends to be a major power difference between parties in an acquisition.", "label": "e"} +{"uid": "id_625", "premise": "Whilst having similar effects on employees, there tend to be major difference between a merger and an acquisition. In an acquisition, power is substantially assumed by the new parent company. Change is often swift and brutal as the acquirer imposes its own control systems and financial restraints. Parties to a merger are likely to be evenly matched in terms of size, and the power and cultural dynamics of the combination are more ambiguous, integration is a more drawn out process. During an acquisition, there is often more overt conflict and resistance and a sense of powerlessness. In mergers, because of the prolonged period between the initial announcement and full integration, uncertainty and anxiety continue for a much longer time as the organization remains in a state of limbo.", "hypothesis": "Mergers and acquisition tend to have distinctly different impacts on employees.", "label": "c"} +{"uid": "id_626", "premise": "Whilst high visibility crime such as night-time drunken disturbance has increased, total urban and rural crime, both reported and unreported, has fallen over the last two years, yet paradoxically people feel less safe, believing that the converse is the case. This fall in crime has coincided with a drop in the number of police officer on the street. A citizens fear of crime seems not to be a matter of reality at all- the visibility of law enforcement officials has a greater impact on their view of reality than hard facts.", "hypothesis": "Reducing the number of police officer has led to a reduction in crime.", "label": "n"} +{"uid": "id_627", "premise": "Whilst high visibility crime such as night-time drunken disturbance has increased, total urban and rural crime, both reported and unreported, has fallen over the last two years, yet paradoxically people feel less safe, believing that the converse is the case. This fall in crime has coincided with a drop in the number of police officer on the street. A citizens fear of crime seems not to be a matter of reality at all- the visibility of law enforcement officials has a greater impact on their view of reality than hard facts.", "hypothesis": "Crime statistics support popular belief about the level of crime.", "label": "c"} +{"uid": "id_628", "premise": "Whilst high visibility crime such as night-time drunken disturbance has increased, total urban and rural crime, both reported and unreported, has fallen over the last two years, yet paradoxically people feel less safe, believing that the converse is the case. This fall in crime has coincided with a drop in the number of police officer on the street. 
A citizens fear of crime seems not to be a matter of reality at all- the visibility of law enforcement officials has a greater impact on their view of reality than hard facts.", "hypothesis": "People feel safer when there are more police on the street.", "label": "e"} +{"uid": "id_629", "premise": "Whilst high visibility crime such as night-time drunken disturbance has increased, total urban and rural crime, both reported and unreported, has fallen over the last two years, yet people feel less safe, believing that the converse is the case. This fall in crime has coincided with a drop in the number of police officer on the street. A citizens fear of seems not to be a matter of reality at all; the visibility of law enforcement officials has a greater impact on their view of reality than hard facts.", "hypothesis": "Reducing the number of police officer has led to a reduction in crime.", "label": "n"} +{"uid": "id_630", "premise": "Whilst high visibility crime such as night-time drunken disturbance has increased, total urban and rural crime, both reported and unreported, has fallen over the last two years, yet people feel less safe, believing that the converse is the case. This fall in crime has coincided with a drop in the number of police officer on the street. A citizens fear of seems not to be a matter of reality at all; the visibility of law enforcement officials has a greater impact on their view of reality than hard facts.", "hypothesis": "Crime statistics support popular belief about the level of crime.", "label": "c"} +{"uid": "id_631", "premise": "Whilst high visibility crime such as night-time drunken disturbance has increased, total urban and rural crime, both reported and unreported, has fallen over the last two years, yet people feel less safe, believing that the converse is the case. This fall in crime has coincided with a drop in the number of police officer on the street. A citizens fear of seems not to be a matter of reality at all; the visibility of law enforcement officials has a greater impact on their view of reality than hard facts.", "hypothesis": "People feel safer when there are more police on the street.", "label": "n"} +{"uid": "id_632", "premise": "Whiskers weighs less than Paws. Whiskers weighs more than Tabby.", "hypothesis": "Of the three cats, Tabby weighs the least.", "label": "e"} +{"uid": "id_633", "premise": "Why Pagodas Dont Fall Down In a land swept by typhoons and shaken by earthquakes, how have Japans tallest and seemingly flimsiest old buildings 500 or so wooden pagodas remained standing for centuries? Records show that only two have collapsed during the past 1400 years. Those that have disappeared were destroyed by fire as a result of lightning or civil war. The disastrous Hanshin earthquake in 1995 killed 6,400 people, toppled elevated highways, flattened office blocks and devastated the port area of Kobe. Yet it left the magnificent five-storey pagoda at the Toji temple in nearby Kyoto unscathed, though it levelled a number of buildings in the neighbourhood. Japanese scholars have been mystified for ages about why these tall, slender buildings are so stable. It was only thirty years ago that the building industry felt confident enough to erect office blocks of steel and reinforced concrete that had more than a dozen floors. 
With its special shock absorbers to dampen the effect of sudden sideways movements from an earthquake, the thirty-six-storey Kasumigaseki building in central Tokyo Japans first skyscraper was considered a masterpiece of modern engineering when it was built in 1968. Yet in 826, with only pegs and wedges to keep his wooden structure upright, the master builder Kobodaishi had no hesitation in sending his majestic Toji pagoda soaring fifty-five metres into the sky nearly half as high as the Kasumigaseki skyscraper built some eleven centuries later. Clearly, Japanese carpenters of the day knew a few tricks about allowing a building to sway and settle itself rather than fight natures forces. But what sort of tricks? The multi-storey pagoda came to Japan from China in the sixth century. As in China, they were first introduced with Buddhism and were attached to important temples. The Chinese built their pagodas in brick or stone, with inner staircases, and used them in later centuries mainly as watchtowers. When the pagoda reached Japan, however, its architecture was freely adapted to local conditions they were built less high, typically five rather than nine storeys, made mainly of wood and the staircase was dispensed with because the Japanese pagoda did not have any practical use but became more of an art object. Because of the typhoons that batter Japan in the summer, Japanese builders learned to extend the eaves of buildings further beyond the walls. This prevents rainwater gushing down the walls. Pagodas in China and Korea have nothing like the overhang that is found on pagodas in Japan. The roof of a Japanese temple building can be made to overhang the sides of the structure by fifty per cent or more of the buildings overall width. For the same reason, the builders of Japanese pagodas seem to have further increased their weight by choosing to cover these extended eaves not with the porcelain tiles of many Chinese pagodas but with much heavier earthenware tiles. But this does not totally explain the great resilience of Japanese pagodas. Is the answer that, like a tall pine tree, the Japanese pagoda with its massive trunk-like central pillar known as shinbashira simply flexes and sways during a typhoon or earthquake? For centuries, many thought so. But the answer is not so simple because the startling thing is that the shinbashira actually carries no load at all. In fact, in some pagoda designs, it does not even rest on the ground, but is suspended from the top of the pagoda hanging loosely down through the middle of the building. The weight of the building is supported entirely by twelve outer and four inner columns. And what is the role of the shinbashira, the central pillar? The best way to understand the shinbashiras role is to watch a video made by Shuzo Ishida, a structural engineer at Kyoto Institute of Technology. Mr Ishida, known to his students as Professor Pagoda because of his passion to understand the pagoda, has built a series of models and tested them on a shake- table in his laboratory. In short, the shinbashira was acting like an enormous stationary pendulum. The ancient craftsmen, apparently without the assistance of very advanced mathematics, seemed to grasp the principles that were, more than a thousand years later, applied in the construction of Japans first skyscraper. What those early craftsmen had found by trial and error was that under pressure a pagodas loose stack of floors could be made to slither to and fro independent of one another. 
Viewed from the side, the pagoda seemed to be doing a snake dance with each consecutive floor moving in the opposite direction to its neighbours above and below. The shinbashira, running up through a hole in the centre of the building, constrained individual storeys from moving too far because, after moving a certain distance, they banged into it, transmitting energy away along the column. Another strange feature of the Japanese pagoda is that, because the building tapers, with each successive floor plan being smaller than the one below, none of the vertical pillars that carry the weight of the building is connected to its corresponding pillar above. In other words, a five- storey pagoda contains not even one pillar that travels right up through the building to carry the structural loads from the top to the bottom. More surprising is the fact that the individual storeys of a Japanese pagoda, unlike their counterparts elsewhere, are not actually connected to each other. They are simply stacked one on top of another like a pile of hats. Interestingly, such a design would not be permitted under current Japanese building regulations. And the extra-wide eaves? Think of them as a tightrope walkers balancing pole. The bigger the mass at each end of the pole, the easier it is for the tightrope walker to maintain his or her balance. The same holds true for a pagoda. With the eaves extending out on all sides like balancing poles, says Mr Ishida, the building responds to even the most powerful jolt of an earthquake with a graceful swaying, never an abrupt shaking. Here again, Japanese master builders of a thousand years ago anticipated concepts of modern structural engineering.", "hypothesis": "Only two Japanese pagodas have collapsed in 1400 years.", "label": "e"} +{"uid": "id_634", "premise": "Why Pagodas Dont Fall Down In a land swept by typhoons and shaken by earthquakes, how have Japans tallest and seemingly flimsiest old buildings 500 or so wooden pagodas remained standing for centuries? Records show that only two have collapsed during the past 1400 years. Those that have disappeared were destroyed by fire as a result of lightning or civil war. The disastrous Hanshin earthquake in 1995 killed 6,400 people, toppled elevated highways, flattened office blocks and devastated the port area of Kobe. Yet it left the magnificent five-storey pagoda at the Toji temple in nearby Kyoto unscathed, though it levelled a number of buildings in the neighbourhood. Japanese scholars have been mystified for ages about why these tall, slender buildings are so stable. It was only thirty years ago that the building industry felt confident enough to erect office blocks of steel and reinforced concrete that had more than a dozen floors. With its special shock absorbers to dampen the effect of sudden sideways movements from an earthquake, the thirty-six-storey Kasumigaseki building in central Tokyo Japans first skyscraper was considered a masterpiece of modern engineering when it was built in 1968. Yet in 826, with only pegs and wedges to keep his wooden structure upright, the master builder Kobodaishi had no hesitation in sending his majestic Toji pagoda soaring fifty-five metres into the sky nearly half as high as the Kasumigaseki skyscraper built some eleven centuries later. Clearly, Japanese carpenters of the day knew a few tricks about allowing a building to sway and settle itself rather than fight natures forces. But what sort of tricks? The multi-storey pagoda came to Japan from China in the sixth century. 
As in China, they were first introduced with Buddhism and were attached to important temples. The Chinese built their pagodas in brick or stone, with inner staircases, and used them in later centuries mainly as watchtowers. When the pagoda reached Japan, however, its architecture was freely adapted to local conditions they were built less high, typically five rather than nine storeys, made mainly of wood and the staircase was dispensed with because the Japanese pagoda did not have any practical use but became more of an art object. Because of the typhoons that batter Japan in the summer, Japanese builders learned to extend the eaves of buildings further beyond the walls. This prevents rainwater gushing down the walls. Pagodas in China and Korea have nothing like the overhang that is found on pagodas in Japan. The roof of a Japanese temple building can be made to overhang the sides of the structure by fifty per cent or more of the buildings overall width. For the same reason, the builders of Japanese pagodas seem to have further increased their weight by choosing to cover these extended eaves not with the porcelain tiles of many Chinese pagodas but with much heavier earthenware tiles. But this does not totally explain the great resilience of Japanese pagodas. Is the answer that, like a tall pine tree, the Japanese pagoda with its massive trunk-like central pillar known as shinbashira simply flexes and sways during a typhoon or earthquake? For centuries, many thought so. But the answer is not so simple because the startling thing is that the shinbashira actually carries no load at all. In fact, in some pagoda designs, it does not even rest on the ground, but is suspended from the top of the pagoda hanging loosely down through the middle of the building. The weight of the building is supported entirely by twelve outer and four inner columns. And what is the role of the shinbashira, the central pillar? The best way to understand the shinbashiras role is to watch a video made by Shuzo Ishida, a structural engineer at Kyoto Institute of Technology. Mr Ishida, known to his students as Professor Pagoda because of his passion to understand the pagoda, has built a series of models and tested them on a shake- table in his laboratory. In short, the shinbashira was acting like an enormous stationary pendulum. The ancient craftsmen, apparently without the assistance of very advanced mathematics, seemed to grasp the principles that were, more than a thousand years later, applied in the construction of Japans first skyscraper. What those early craftsmen had found by trial and error was that under pressure a pagodas loose stack of floors could be made to slither to and fro independent of one another. Viewed from the side, the pagoda seemed to be doing a snake dance with each consecutive floor moving in the opposite direction to its neighbours above and below. The shinbashira, running up through a hole in the centre of the building, constrained individual storeys from moving too far because, after moving a certain distance, they banged into it, transmitting energy away along the column. Another strange feature of the Japanese pagoda is that, because the building tapers, with each successive floor plan being smaller than the one below, none of the vertical pillars that carry the weight of the building is connected to its corresponding pillar above. 
In other words, a five- storey pagoda contains not even one pillar that travels right up through the building to carry the structural loads from the top to the bottom. More surprising is the fact that the individual storeys of a Japanese pagoda, unlike their counterparts elsewhere, are not actually connected to each other. They are simply stacked one on top of another like a pile of hats. Interestingly, such a design would not be permitted under current Japanese building regulations. And the extra-wide eaves? Think of them as a tightrope walkers balancing pole. The bigger the mass at each end of the pole, the easier it is for the tightrope walker to maintain his or her balance. The same holds true for a pagoda. With the eaves extending out on all sides like balancing poles, says Mr Ishida, the building responds to even the most powerful jolt of an earthquake with a graceful swaying, never an abrupt shaking. Here again, Japanese master builders of a thousand years ago anticipated concepts of modern structural engineering.", "hypothesis": "The Hanshin earthquake of 1995 destroyed the pagoda at the Toji temple.", "label": "c"} +{"uid": "id_635", "premise": "Why Pagodas Dont Fall Down In a land swept by typhoons and shaken by earthquakes, how have Japans tallest and seemingly flimsiest old buildings 500 or so wooden pagodas remained standing for centuries? Records show that only two have collapsed during the past 1400 years. Those that have disappeared were destroyed by fire as a result of lightning or civil war. The disastrous Hanshin earthquake in 1995 killed 6,400 people, toppled elevated highways, flattened office blocks and devastated the port area of Kobe. Yet it left the magnificent five-storey pagoda at the Toji temple in nearby Kyoto unscathed, though it levelled a number of buildings in the neighbourhood. Japanese scholars have been mystified for ages about why these tall, slender buildings are so stable. It was only thirty years ago that the building industry felt confident enough to erect office blocks of steel and reinforced concrete that had more than a dozen floors. With its special shock absorbers to dampen the effect of sudden sideways movements from an earthquake, the thirty-six-storey Kasumigaseki building in central Tokyo Japans first skyscraper was considered a masterpiece of modern engineering when it was built in 1968. Yet in 826, with only pegs and wedges to keep his wooden structure upright, the master builder Kobodaishi had no hesitation in sending his majestic Toji pagoda soaring fifty-five metres into the sky nearly half as high as the Kasumigaseki skyscraper built some eleven centuries later. Clearly, Japanese carpenters of the day knew a few tricks about allowing a building to sway and settle itself rather than fight natures forces. But what sort of tricks? The multi-storey pagoda came to Japan from China in the sixth century. As in China, they were first introduced with Buddhism and were attached to important temples. The Chinese built their pagodas in brick or stone, with inner staircases, and used them in later centuries mainly as watchtowers. When the pagoda reached Japan, however, its architecture was freely adapted to local conditions they were built less high, typically five rather than nine storeys, made mainly of wood and the staircase was dispensed with because the Japanese pagoda did not have any practical use but became more of an art object. 
Because of the typhoons that batter Japan in the summer, Japanese builders learned to extend the eaves of buildings further beyond the walls. This prevents rainwater gushing down the walls. Pagodas in China and Korea have nothing like the overhang that is found on pagodas in Japan. The roof of a Japanese temple building can be made to overhang the sides of the structure by fifty per cent or more of the buildings overall width. For the same reason, the builders of Japanese pagodas seem to have further increased their weight by choosing to cover these extended eaves not with the porcelain tiles of many Chinese pagodas but with much heavier earthenware tiles. But this does not totally explain the great resilience of Japanese pagodas. Is the answer that, like a tall pine tree, the Japanese pagoda with its massive trunk-like central pillar known as shinbashira simply flexes and sways during a typhoon or earthquake? For centuries, many thought so. But the answer is not so simple because the startling thing is that the shinbashira actually carries no load at all. In fact, in some pagoda designs, it does not even rest on the ground, but is suspended from the top of the pagoda hanging loosely down through the middle of the building. The weight of the building is supported entirely by twelve outer and four inner columns. And what is the role of the shinbashira, the central pillar? The best way to understand the shinbashiras role is to watch a video made by Shuzo Ishida, a structural engineer at Kyoto Institute of Technology. Mr Ishida, known to his students as Professor Pagoda because of his passion to understand the pagoda, has built a series of models and tested them on a shake- table in his laboratory. In short, the shinbashira was acting like an enormous stationary pendulum. The ancient craftsmen, apparently without the assistance of very advanced mathematics, seemed to grasp the principles that were, more than a thousand years later, applied in the construction of Japans first skyscraper. What those early craftsmen had found by trial and error was that under pressure a pagodas loose stack of floors could be made to slither to and fro independent of one another. Viewed from the side, the pagoda seemed to be doing a snake dance with each consecutive floor moving in the opposite direction to its neighbours above and below. The shinbashira, running up through a hole in the centre of the building, constrained individual storeys from moving too far because, after moving a certain distance, they banged into it, transmitting energy away along the column. Another strange feature of the Japanese pagoda is that, because the building tapers, with each successive floor plan being smaller than the one below, none of the vertical pillars that carry the weight of the building is connected to its corresponding pillar above. In other words, a five- storey pagoda contains not even one pillar that travels right up through the building to carry the structural loads from the top to the bottom. More surprising is the fact that the individual storeys of a Japanese pagoda, unlike their counterparts elsewhere, are not actually connected to each other. They are simply stacked one on top of another like a pile of hats. Interestingly, such a design would not be permitted under current Japanese building regulations. And the extra-wide eaves? Think of them as a tightrope walkers balancing pole. The bigger the mass at each end of the pole, the easier it is for the tightrope walker to maintain his or her balance. 
The same holds true for a pagoda. With the eaves extending out on all sides like balancing poles, says Mr Ishida, the building responds to even the most powerful jolt of an earthquake with a graceful swaying, never an abrupt shaking. Here again, Japanese master builders of a thousand years ago anticipated concepts of modern structural engineering.", "hypothesis": "The builders of pagodas knew how to absorb some of the power produced by severe weather conditions.", "label": "e"} +{"uid": "id_636", "premise": "Why Pagodas Dont Fall Down In a land swept by typhoons and shaken by earthquakes, how have Japans tallest and seemingly flimsiest old buildings 500 or so wooden pagodas remained standing for centuries? Records show that only two have collapsed during the past 1400 years. Those that have disappeared were destroyed by fire as a result of lightning or civil war. The disastrous Hanshin earthquake in 1995 killed 6,400 people, toppled elevated highways, flattened office blocks and devastated the port area of Kobe. Yet it left the magnificent five-storey pagoda at the Toji temple in nearby Kyoto unscathed, though it levelled a number of buildings in the neighbourhood. Japanese scholars have been mystified for ages about why these tall, slender buildings are so stable. It was only thirty years ago that the building industry felt confident enough to erect office blocks of steel and reinforced concrete that had more than a dozen floors. With its special shock absorbers to dampen the effect of sudden sideways movements from an earthquake, the thirty-six-storey Kasumigaseki building in central Tokyo Japans first skyscraper was considered a masterpiece of modern engineering when it was built in 1968. Yet in 826, with only pegs and wedges to keep his wooden structure upright, the master builder Kobodaishi had no hesitation in sending his majestic Toji pagoda soaring fifty-five metres into the sky nearly half as high as the Kasumigaseki skyscraper built some eleven centuries later. Clearly, Japanese carpenters of the day knew a few tricks about allowing a building to sway and settle itself rather than fight natures forces. But what sort of tricks? The multi-storey pagoda came to Japan from China in the sixth century. As in China, they were first introduced with Buddhism and were attached to important temples. The Chinese built their pagodas in brick or stone, with inner staircases, and used them in later centuries mainly as watchtowers. When the pagoda reached Japan, however, its architecture was freely adapted to local conditions they were built less high, typically five rather than nine storeys, made mainly of wood and the staircase was dispensed with because the Japanese pagoda did not have any practical use but became more of an art object. Because of the typhoons that batter Japan in the summer, Japanese builders learned to extend the eaves of buildings further beyond the walls. This prevents rainwater gushing down the walls. Pagodas in China and Korea have nothing like the overhang that is found on pagodas in Japan. The roof of a Japanese temple building can be made to overhang the sides of the structure by fifty per cent or more of the buildings overall width. For the same reason, the builders of Japanese pagodas seem to have further increased their weight by choosing to cover these extended eaves not with the porcelain tiles of many Chinese pagodas but with much heavier earthenware tiles. But this does not totally explain the great resilience of Japanese pagodas. 
Is the answer that, like a tall pine tree, the Japanese pagoda with its massive trunk-like central pillar known as shinbashira simply flexes and sways during a typhoon or earthquake? For centuries, many thought so. But the answer is not so simple because the startling thing is that the shinbashira actually carries no load at all. In fact, in some pagoda designs, it does not even rest on the ground, but is suspended from the top of the pagoda hanging loosely down through the middle of the building. The weight of the building is supported entirely by twelve outer and four inner columns. And what is the role of the shinbashira, the central pillar? The best way to understand the shinbashiras role is to watch a video made by Shuzo Ishida, a structural engineer at Kyoto Institute of Technology. Mr Ishida, known to his students as Professor Pagoda because of his passion to understand the pagoda, has built a series of models and tested them on a shake- table in his laboratory. In short, the shinbashira was acting like an enormous stationary pendulum. The ancient craftsmen, apparently without the assistance of very advanced mathematics, seemed to grasp the principles that were, more than a thousand years later, applied in the construction of Japans first skyscraper. What those early craftsmen had found by trial and error was that under pressure a pagodas loose stack of floors could be made to slither to and fro independent of one another. Viewed from the side, the pagoda seemed to be doing a snake dance with each consecutive floor moving in the opposite direction to its neighbours above and below. The shinbashira, running up through a hole in the centre of the building, constrained individual storeys from moving too far because, after moving a certain distance, they banged into it, transmitting energy away along the column. Another strange feature of the Japanese pagoda is that, because the building tapers, with each successive floor plan being smaller than the one below, none of the vertical pillars that carry the weight of the building is connected to its corresponding pillar above. In other words, a five- storey pagoda contains not even one pillar that travels right up through the building to carry the structural loads from the top to the bottom. More surprising is the fact that the individual storeys of a Japanese pagoda, unlike their counterparts elsewhere, are not actually connected to each other. They are simply stacked one on top of another like a pile of hats. Interestingly, such a design would not be permitted under current Japanese building regulations. And the extra-wide eaves? Think of them as a tightrope walkers balancing pole. The bigger the mass at each end of the pole, the easier it is for the tightrope walker to maintain his or her balance. The same holds true for a pagoda. With the eaves extending out on all sides like balancing poles, says Mr Ishida, the building responds to even the most powerful jolt of an earthquake with a graceful swaying, never an abrupt shaking. Here again, Japanese master builders of a thousand years ago anticipated concepts of modern structural engineering.", "hypothesis": "The other buildings near the Toji pagoda had been built in the last 30 years.", "label": "n"} +{"uid": "id_637", "premise": "Why are so few tigers man-eaters? As you leave the Bandhavgarh National Park in central India, there is a notice which shows a huge, placid tiger. The notice says, You may not have seen me, but I have seen you. 
There are more than a billion people In India and Indian tigers probably see humans every single day of their lives. Tigers can and do kill almost everything they meet in the jungle, they will kill even attack elephants and rhino. Surely, then, it is a little strange that attacks on humans are not more frequent. Some people might argue that these attacks were in fact common in the past. British writers of adventure stories, such as Jim Corbett, gave the impression that village life in India in the early years of the twentieth century involved a stage of constant siege by man-eating tigers. But they may have overstated the terror spread by tigers. There were also far more tigers around in those days (probably 60.000 in the subcontinent compared to just 3000 today). So in proportion, attacks appear to have been as rare then as they are today. It is widely assumed that the constraint is fear; but what exactly are tigers afraid of? Can they really know that we may be even better armed than they are? Surely not. Has the species programmed the experiences of all tigers with humans its genes to be inherited as instinct? Perhaps. But I think the explanation may be more simple and, in a way, more intriguing. Since the growth of ethology in the 1950s. we have tried to understand animal behaviour from the animals point of view. Until the first elegant experiments by pioneers in the field such as Konrad Lorenz, naturalists wrote about animals as if they were slightly less intelligent humans. Jim Corbetts breathless accounts of his duels with a an-eaters in truth tell us more about Jim Corbett than they do about the animals. The principle of ethology, on the other hand, requires us to attempt to think in the same way as the animal we are studying thinks, and to observe every tiny detail of its behaviour without imposing our own human significances on its actions. I suspect that a tigers afraid of humans lies not in some preprogramed ancestral logic but in the way he actually perceives us visually. If you think like a tiger, a human in a car might appear just to be a part of the car, and because tigers dont eat cars the human is safe-unless the car is menacing the tiger or its cubs, in which case a brave or enraged tiger may charge. A human on foot is a different sort of puzzle. Imagine a tiger sees a man who is 1.8m tall. A tiger is less than 1m tall but they may be up to 3m long from head to tail. So when a tiger sees the man face on, it might not be unreasonable for him to assume that the man is 6m long. If he meet a deer of this size, he might attack the animal by leaping on its back, but when he looks behind the mind he cant see a back. From the front the man is huge, but looked at from the side he all but disappears. This must be very disconcerting. A hunter has to be confident that it can tackle its prey, and no one is confident when they are disconcerted. This is especially true of a solitary hunter such as the tiger and may explain why lions-particularly young lionesses who tend to encourage one another to take risks are more dangerous than tigers. If the theory that a tiger is disconcerted to find that a standing human is both very big and yet somehow invisible is correct, the opposite should be true of a squatting human. A squatting human is half he size and presents twice the spread of back, and more closely resembles a medium-sized deer. If tigers were simply frightened of all humans, then a squatting person would be no more attractive as a target than a standing one. 
This, however appears not to be the case. Many incidents of attacks on people involving villagers squatting or bending over to cut grass for fodder or building material. The fact that humans stand upright may therefore not just be something that distinguishes them from nearly all other species, but also a factor that helped them to survive in a dangerous and unpredictable environment.", "hypothesis": "Some writers of fiction have exaggerated the danger of tigers to man.", "label": "e"} +{"uid": "id_638", "premise": "Why are so few tigers man-eaters? As you leave the Bandhavgarh National Park in central India, there is a notice which shows a huge, placid tiger. The notice says, You may not have seen me, but I have seen you. There are more than a billion people In India and Indian tigers probably see humans every single day of their lives. Tigers can and do kill almost everything they meet in the jungle, they will kill even attack elephants and rhino. Surely, then, it is a little strange that attacks on humans are not more frequent. Some people might argue that these attacks were in fact common in the past. British writers of adventure stories, such as Jim Corbett, gave the impression that village life in India in the early years of the twentieth century involved a stage of constant siege by man-eating tigers. But they may have overstated the terror spread by tigers. There were also far more tigers around in those days (probably 60.000 in the subcontinent compared to just 3000 today). So in proportion, attacks appear to have been as rare then as they are today. It is widely assumed that the constraint is fear; but what exactly are tigers afraid of? Can they really know that we may be even better armed than they are? Surely not. Has the species programmed the experiences of all tigers with humans its genes to be inherited as instinct? Perhaps. But I think the explanation may be more simple and, in a way, more intriguing. Since the growth of ethology in the 1950s. we have tried to understand animal behaviour from the animals point of view. Until the first elegant experiments by pioneers in the field such as Konrad Lorenz, naturalists wrote about animals as if they were slightly less intelligent humans. Jim Corbetts breathless accounts of his duels with a an-eaters in truth tell us more about Jim Corbett than they do about the animals. The principle of ethology, on the other hand, requires us to attempt to think in the same way as the animal we are studying thinks, and to observe every tiny detail of its behaviour without imposing our own human significances on its actions. I suspect that a tigers afraid of humans lies not in some preprogramed ancestral logic but in the way he actually perceives us visually. If you think like a tiger, a human in a car might appear just to be a part of the car, and because tigers dont eat cars the human is safe-unless the car is menacing the tiger or its cubs, in which case a brave or enraged tiger may charge. A human on foot is a different sort of puzzle. Imagine a tiger sees a man who is 1.8m tall. A tiger is less than 1m tall but they may be up to 3m long from head to tail. So when a tiger sees the man face on, it might not be unreasonable for him to assume that the man is 6m long. If he meet a deer of this size, he might attack the animal by leaping on its back, but when he looks behind the mind he cant see a back. From the front the man is huge, but looked at from the side he all but disappears. This must be very disconcerting. 
A hunter has to be confident that it can tackle its prey, and no one is confident when they are disconcerted. This is especially true of a solitary hunter such as the tiger and may explain why lions-particularly young lionesses who tend to encourage one another to take risks are more dangerous than tigers. If the theory that a tiger is disconcerted to find that a standing human is both very big and yet somehow invisible is correct, the opposite should be true of a squatting human. A squatting human is half he size and presents twice the spread of back, and more closely resembles a medium-sized deer. If tigers were simply frightened of all humans, then a squatting person would be no more attractive as a target than a standing one. This, however appears not to be the case. Many incidents of attacks on people involving villagers squatting or bending over to cut grass for fodder or building material. The fact that humans stand upright may therefore not just be something that distinguishes them from nearly all other species, but also a factor that helped them to survive in a dangerous and unpredictable environment.", "hypothesis": "The fear of humans may be passed down in a tigers genes.", "label": "e"} +{"uid": "id_639", "premise": "Why are so few tigers man-eaters? As you leave the Bandhavgarh National Park in central India, there is a notice which shows a huge, placid tiger. The notice says, You may not have seen me, but I have seen you. There are more than a billion people In India and Indian tigers probably see humans every single day of their lives. Tigers can and do kill almost everything they meet in the jungle, they will kill even attack elephants and rhino. Surely, then, it is a little strange that attacks on humans are not more frequent. Some people might argue that these attacks were in fact common in the past. British writers of adventure stories, such as Jim Corbett, gave the impression that village life in India in the early years of the twentieth century involved a stage of constant siege by man-eating tigers. But they may have overstated the terror spread by tigers. There were also far more tigers around in those days (probably 60.000 in the subcontinent compared to just 3000 today). So in proportion, attacks appear to have been as rare then as they are today. It is widely assumed that the constraint is fear; but what exactly are tigers afraid of? Can they really know that we may be even better armed than they are? Surely not. Has the species programmed the experiences of all tigers with humans its genes to be inherited as instinct? Perhaps. But I think the explanation may be more simple and, in a way, more intriguing. Since the growth of ethology in the 1950s. we have tried to understand animal behaviour from the animals point of view. Until the first elegant experiments by pioneers in the field such as Konrad Lorenz, naturalists wrote about animals as if they were slightly less intelligent humans. Jim Corbetts breathless accounts of his duels with a an-eaters in truth tell us more about Jim Corbett than they do about the animals. The principle of ethology, on the other hand, requires us to attempt to think in the same way as the animal we are studying thinks, and to observe every tiny detail of its behaviour without imposing our own human significances on its actions. I suspect that a tigers afraid of humans lies not in some preprogramed ancestral logic but in the way he actually perceives us visually. 
If you think like a tiger, a human in a car might appear just to be a part of the car, and because tigers dont eat cars the human is safe-unless the car is menacing the tiger or its cubs, in which case a brave or enraged tiger may charge. A human on foot is a different sort of puzzle. Imagine a tiger sees a man who is 1.8m tall. A tiger is less than 1m tall but they may be up to 3m long from head to tail. So when a tiger sees the man face on, it might not be unreasonable for him to assume that the man is 6m long. If he meet a deer of this size, he might attack the animal by leaping on its back, but when he looks behind the mind he cant see a back. From the front the man is huge, but looked at from the side he all but disappears. This must be very disconcerting. A hunter has to be confident that it can tackle its prey, and no one is confident when they are disconcerted. This is especially true of a solitary hunter such as the tiger and may explain why lions-particularly young lionesses who tend to encourage one another to take risks are more dangerous than tigers. If the theory that a tiger is disconcerted to find that a standing human is both very big and yet somehow invisible is correct, the opposite should be true of a squatting human. A squatting human is half he size and presents twice the spread of back, and more closely resembles a medium-sized deer. If tigers were simply frightened of all humans, then a squatting person would be no more attractive as a target than a standing one. This, however appears not to be the case. Many incidents of attacks on people involving villagers squatting or bending over to cut grass for fodder or building material. The fact that humans stand upright may therefore not just be something that distinguishes them from nearly all other species, but also a factor that helped them to survive in a dangerous and unpredictable environment.", "hypothesis": "Konrad Lorenz claimed that some animals are more intelligent than humans.", "label": "n"} +{"uid": "id_640", "premise": "Why are so few tigers man-eaters? As you leave the Bandhavgarh National Park in central India, there is a notice which shows a huge, placid tiger. The notice says, You may not have seen me, but I have seen you. There are more than a billion people In India and Indian tigers probably see humans every single day of their lives. Tigers can and do kill almost everything they meet in the jungle, they will kill even attack elephants and rhino. Surely, then, it is a little strange that attacks on humans are not more frequent. Some people might argue that these attacks were in fact common in the past. British writers of adventure stories, such as Jim Corbett, gave the impression that village life in India in the early years of the twentieth century involved a stage of constant siege by man-eating tigers. But they may have overstated the terror spread by tigers. There were also far more tigers around in those days (probably 60.000 in the subcontinent compared to just 3000 today). So in proportion, attacks appear to have been as rare then as they are today. It is widely assumed that the constraint is fear; but what exactly are tigers afraid of? Can they really know that we may be even better armed than they are? Surely not. Has the species programmed the experiences of all tigers with humans its genes to be inherited as instinct? Perhaps. But I think the explanation may be more simple and, in a way, more intriguing. Since the growth of ethology in the 1950s. 
we have tried to understand animal behaviour from the animals point of view. Until the first elegant experiments by pioneers in the field such as Konrad Lorenz, naturalists wrote about animals as if they were slightly less intelligent humans. Jim Corbetts breathless accounts of his duels with a an-eaters in truth tell us more about Jim Corbett than they do about the animals. The principle of ethology, on the other hand, requires us to attempt to think in the same way as the animal we are studying thinks, and to observe every tiny detail of its behaviour without imposing our own human significances on its actions. I suspect that a tigers afraid of humans lies not in some preprogramed ancestral logic but in the way he actually perceives us visually. If you think like a tiger, a human in a car might appear just to be a part of the car, and because tigers dont eat cars the human is safe-unless the car is menacing the tiger or its cubs, in which case a brave or enraged tiger may charge. A human on foot is a different sort of puzzle. Imagine a tiger sees a man who is 1.8m tall. A tiger is less than 1m tall but they may be up to 3m long from head to tail. So when a tiger sees the man face on, it might not be unreasonable for him to assume that the man is 6m long. If he meet a deer of this size, he might attack the animal by leaping on its back, but when he looks behind the mind he cant see a back. From the front the man is huge, but looked at from the side he all but disappears. This must be very disconcerting. A hunter has to be confident that it can tackle its prey, and no one is confident when they are disconcerted. This is especially true of a solitary hunter such as the tiger and may explain why lions-particularly young lionesses who tend to encourage one another to take risks are more dangerous than tigers. If the theory that a tiger is disconcerted to find that a standing human is both very big and yet somehow invisible is correct, the opposite should be true of a squatting human. A squatting human is half he size and presents twice the spread of back, and more closely resembles a medium-sized deer. If tigers were simply frightened of all humans, then a squatting person would be no more attractive as a target than a standing one. This, however appears not to be the case. Many incidents of attacks on people involving villagers squatting or bending over to cut grass for fodder or building material. The fact that humans stand upright may therefore not just be something that distinguishes them from nearly all other species, but also a factor that helped them to survive in a dangerous and unpredictable environment.", "hypothesis": "Ethology involves applying principles of human behaviour to animals.", "label": "c"} +{"uid": "id_641", "premise": "Why are so few tigers man-eaters? As you leave the Bandhavgarh National Park in central India, there is a notice which shows a huge, placid tiger. The notice says, You may not have seen me, but I have seen you. There are more than a billion people In India and Indian tigers probably see humans every single day of their lives. Tigers can and do kill almost everything they meet in the jungle, they will kill even attack elephants and rhino. Surely, then, it is a little strange that attacks on humans are not more frequent. Some people might argue that these attacks were in fact common in the past. 
British writers of adventure stories, such as Jim Corbett, gave the impression that village life in India in the early years of the twentieth century involved a stage of constant siege by man-eating tigers. But they may have overstated the terror spread by tigers. There were also far more tigers around in those days (probably 60.000 in the subcontinent compared to just 3000 today). So in proportion, attacks appear to have been as rare then as they are today. It is widely assumed that the constraint is fear; but what exactly are tigers afraid of? Can they really know that we may be even better armed than they are? Surely not. Has the species programmed the experiences of all tigers with humans its genes to be inherited as instinct? Perhaps. But I think the explanation may be more simple and, in a way, more intriguing. Since the growth of ethology in the 1950s. we have tried to understand animal behaviour from the animals point of view. Until the first elegant experiments by pioneers in the field such as Konrad Lorenz, naturalists wrote about animals as if they were slightly less intelligent humans. Jim Corbetts breathless accounts of his duels with a an-eaters in truth tell us more about Jim Corbett than they do about the animals. The principle of ethology, on the other hand, requires us to attempt to think in the same way as the animal we are studying thinks, and to observe every tiny detail of its behaviour without imposing our own human significances on its actions. I suspect that a tigers afraid of humans lies not in some preprogramed ancestral logic but in the way he actually perceives us visually. If you think like a tiger, a human in a car might appear just to be a part of the car, and because tigers dont eat cars the human is safe-unless the car is menacing the tiger or its cubs, in which case a brave or enraged tiger may charge. A human on foot is a different sort of puzzle. Imagine a tiger sees a man who is 1.8m tall. A tiger is less than 1m tall but they may be up to 3m long from head to tail. So when a tiger sees the man face on, it might not be unreasonable for him to assume that the man is 6m long. If he meet a deer of this size, he might attack the animal by leaping on its back, but when he looks behind the mind he cant see a back. From the front the man is huge, but looked at from the side he all but disappears. This must be very disconcerting. A hunter has to be confident that it can tackle its prey, and no one is confident when they are disconcerted. This is especially true of a solitary hunter such as the tiger and may explain why lions-particularly young lionesses who tend to encourage one another to take risks are more dangerous than tigers. If the theory that a tiger is disconcerted to find that a standing human is both very big and yet somehow invisible is correct, the opposite should be true of a squatting human. A squatting human is half he size and presents twice the spread of back, and more closely resembles a medium-sized deer. If tigers were simply frightened of all humans, then a squatting person would be no more attractive as a target than a standing one. This, however appears not to be the case. Many incidents of attacks on people involving villagers squatting or bending over to cut grass for fodder or building material. 
The fact that humans stand upright may therefore not just be something that distinguishes them from nearly all other species, but also a factor that helped them to survive in a dangerous and unpredictable environment.", "hypothesis": "Tigers in the Bandhavgarh National Park are a protected species.", "label": "n"} +{"uid": "id_642", "premise": "Why companies should welcome disorder Organisation is big business. Whether it is of our lives - all those inboxes and calendars - or how companies are structured, a multi-billion dollar industry helps to meet this need. We have more strategies for time management, project management and self-organisation than at any other time in human history. We are told that we ought to organise our company, our home life, our week, our day and even our sleep, all as a means to becoming more productive. Every week, countless seminars and workshops take place around the world to tell a paying public that they ought to structure their lives in order to achieve this. This rhetoric has also crept into the thinking of business leaders and entrepreneurs, much to the delight of self-proclaimed perfectionists with the need to get everything right. The number of business schools and graduates has massively increased over the past 50 years, essentially teaching people how to organise well. Ironically, however, the number of businesses that fail has also steadily increased. Work-related stress has increased. A large proportion of workers from all demographics claim to be dissatisfied with the way their work is structured and the way they are managed. This begs the question: what has gone wrong? Why is it that on paper the drive for organisation seems a sure shot for increasing productivity, but in reality falls well short of what is expected? This has been a problem for a while now. Frederick Taylor was one of the forefathers of scientific management. Writing in the first half of the 20th century, he designed a number of principles to improve the efficiency of the work process, which have since become widespread in modern companies. So the approach has been around for a while. New research suggests that this obsession with efficiency is misguided. The problem is not necessarily the management theories or strategies we use to organise our work; it's the basic assumptions we hold in approaching how we work. Here it's the assumption that order is a necessary condition for productivity. This assumption has also fostered the idea that disorder must be detrimental to organisational productivity. The result is that businesses and people spend time and money organising themselves for the sake of organising, rather than actually looking at the end goal and usefulness of such an effort. What's more, recent studies show that order actually has diminishing returns. Order does increase productivity to a certain extent, but eventually the usefulness of the process of organisation, and the benefit it yields, reduce until the point where any further increase in order reduces productivity. Some argue that in a business, if the cost of formally structuring something outweighs the benefit of doing it, then that thing ought not to be formally structured. Instead, the resources involved can be better used elsewhere. In fact, research shows that, when innovating, the best approach is to create an environment devoid of structure and hierarchy and enable everyone involved to engage as one organic group. 
These environments can lead to new solutions that, under conventionally structured environments (filled with bottlenecks in terms of information flow, power structures, rules, and routines) would never be reached. In recent times companies have slowly started to embrace this disorganisation. Many of them embrace it in terms of perception (embracing the idea of disorder, as opposed to fearing it) and in terms of process (putting mechanisms in place to reduce structure). For example, Oticon, a large Danish manufacturer of hearing aids, used what it called a 'spaghetti' structure in order to reduce the organisation's rigid hierarchies. This involved scrapping formal job titles and giving staff huge amounts of ownership over their own time and projects. This approach proved to be highly successful initially, with clear improvements in worker productivity in all facets of the business. In similar fashion, the former chairman of General Electric embraced disorganisation, putting forward the idea of the 'boundaryless' organisation. Again, it involves breaking down the barriers between different parts of a company and encouraging virtual collaboration and flexible working. Google and a number of other tech companies have embraced (at least in part) these kinds of flexible structures, facilitated by technology and strong company values which glue people together. A word of warning to others thinking of jumping on this bandwagon: the evidence so far suggests disorder, much like order, also seems to have diminishing utility, and can also have detrimental effects on performance if overused. Like order, disorder should be embraced only so far as it is useful. But we should not fear it - nor venerate one over the other. This research also shows that we should continually question whether or not our existing assumptions work.", "hypothesis": "Google was inspired to adopt flexibility by the success of General Electric.", "label": "n"} +{"uid": "id_643", "premise": "Why companies should welcome disorder Organisation is big business. Whether it is of our lives - all those inboxes and calendars - or how companies are structured, a multi-billion dollar industry helps to meet this need. We have more strategies for time management, project management and self-organisation than at any other time in human history. We are told that we ought to organise our company, our home life, our week, our day and even our sleep, all as a means to becoming more productive. Every week, countless seminars and workshops take place around the world to tell a paying public that they ought to structure their lives in order to achieve this. This rhetoric has also crept into the thinking of business leaders and entrepreneurs, much to the delight of self-proclaimed perfectionists with the need to get everything right. The number of business schools and graduates has massively increased over the past 50 years, essentially teaching people how to organise well. Ironically, however, the number of businesses that fail has also steadily increased. Work-related stress has increased. A large proportion of workers from all demographics claim to be dissatisfied with the way their work is structured and the way they are managed. This begs the question: what has gone wrong? Why is it that on paper the drive for organisation seems a sure shot for increasing productivity, but in reality falls well short of what is expected? This has been a problem for a while now. Frederick Taylor was one of the forefathers of scientific management. 
Writing in the first half of the 20th century, he designed a number of principles to improve the efficiency of the work process, which have since become widespread in modern companies. So the approach has been around for a while. New research suggests that this obsession with efficiency is misguided. The problem is not necessarily the management theories or strategies we use to organise our work; it's the basic assumptions we hold in approaching how we work. Here it's the assumption that order is a necessary condition for productivity. This assumption has also fostered the idea that disorder must be detrimental to organisational productivity. The result is that businesses and people spend time and money organising themselves for the sake of organising, rather than actually looking at the end goal and usefulness of such an effort. What's more, recent studies show that order actually has diminishing returns. Order does increase productivity to a certain extent, but eventually the usefulness of the process of organisation, and the benefit it yields, reduce until the point where any further increase in order reduces productivity. Some argue that in a business, if the cost of formally structuring something outweighs the benefit of doing it, then that thing ought not to be formally structured. Instead, the resources involved can be better used elsewhere. In fact, research shows that, when innovating, the best approach is to create an environment devoid of structure and hierarchy and enable everyone involved to engage as one organic group. These environments can lead to new solutions that, under conventionally structured environments (filled with bottlenecks in terms of information flow, power structures, rules, and routines) would never be reached. In recent times companies have slowly started to embrace this disorganisation. Many of them embrace it in terms of perception (embracing the idea of disorder, as opposed to fearing it) and in terms of process (putting mechanisms in place to reduce structure). For example, Oticon, a large Danish manufacturer of hearing aids, used what it called a 'spaghetti' structure in order to reduce the organisation's rigid hierarchies. This involved scrapping formal job titles and giving staff huge amounts of ownership over their own time and projects. This approach proved to be highly successful initially, with clear improvements in worker productivity in all facets of the business. In similar fashion, the former chairman of General Electric embraced disorganisation, putting forward the idea of the 'boundaryless' organisation. Again, it involves breaking down the barriers between different parts of a company and encouraging virtual collaboration and flexible working. Google and a number of other tech companies have embraced (at least in part) these kinds of flexible structures, facilitated by technology and strong company values which glue people together. A word of warning to others thinking of jumping on this bandwagon: the evidence so far suggests disorder, much like order, also seems to have diminishing utility, and can also have detrimental effects on performance if overused. Like order, disorder should be embraced only so far as it is useful. But we should not fear it - nor venerate one over the other. 
This research also shows that we should continually question whether or not our existing assumptions work.", "hypothesis": "Innovation is most successful if the people involved have distinct roles.", "label": "c"} +{"uid": "id_644", "premise": "Why companies should welcome disorder Organisation is big business. Whether it is of our lives - all those inboxes and calendars - or how companies are structured, a multi-billion dollar industry helps to meet this need. We have more strategies for time management, project management and self-organisation than at any other time in human history. We are told that we ought to organise our company, our home life, our week, our day and even our sleep, all as a means to becoming more productive. Every week, countless seminars and workshops take place around the world to tell a paying public that they ought to structure their lives in order to achieve this. This rhetoric has also crept into the thinking of business leaders and entrepreneurs, much to the delight of self-proclaimed perfectionists with the need to get everything right. The number of business schools and graduates has massively increased over the past 50 years, essentially teaching people how to organise well. Ironically, however, the number of businesses that fail has also steadily increased. Work-related stress has increased. A large proportion of workers from all demographics claim to be dissatisfied with the way their work is structured and the way they are managed. This begs the question: what has gone wrong? Why is it that on paper the drive for organisation seems a sure shot for increasing productivity, but in reality falls well short of what is expected? This has been a problem for a while now. Frederick Taylor was one of the forefathers of scientific management. Writing in the first half of the 20th century, he designed a number of principles to improve the efficiency of the work process, which have since become widespread in modern companies. So the approach has been around for a while. New research suggests that this obsession with efficiency is misguided. The problem is not necessarily the management theories or strategies we use to organise our work; it's the basic assumptions we hold in approaching how we work. Here it's the assumption that order is a necessary condition for productivity. This assumption has also fostered the idea that disorder must be detrimental to organisational productivity. The result is that businesses and people spend time and money organising themselves for the sake of organising, rather than actually looking at the end goal and usefulness of such an effort. What's more, recent studies show that order actually has diminishing returns. Order does increase productivity to a certain extent, but eventually the usefulness of the process of organisation, and the benefit it yields, reduce until the point where any further increase in order reduces productivity. Some argue that in a business, if the cost of formally structuring something outweighs the benefit of doing it, then that thing ought not to be formally structured. Instead, the resources involved can be better used elsewhere. In fact, research shows that, when innovating, the best approach is to create an environment devoid of structure and hierarchy and enable everyone involved to engage as one organic group. 
These environments can lead to new solutions that, under conventionally structured environments (filled with bottlenecks in terms of information flow, power structures, rules, and routines) would never be reached. In recent times companies have slowly started to embrace this disorganisation. Many of them embrace it in terms of perception (embracing the idea of disorder, as opposed to fearing it) and in terms of process (putting mechanisms in place to reduce structure). For example, Oticon, a large Danish manufacturer of hearing aids, used what it called a 'spaghetti' structure in order to reduce the organisation's rigid hierarchies. This involved scrapping formal job titles and giving staff huge amounts of ownership over their own time and projects. This approach proved to be highly successful initially, with clear improvements in worker productivity in all facets of the business. In similar fashion, the former chairman of General Electric embraced disorganisation, putting forward the idea of the 'boundaryless' organisation. Again, it involves breaking down the barriers between different parts of a company and encouraging virtual collaboration and flexible working. Google and a number of other tech companies have embraced (at least in part) these kinds of flexible structures, facilitated by technology and strong company values which glue people together. A word of warning to others thinking of jumping on this bandwagon: the evidence so far suggests disorder, much like order, also seems to have diminishing utility, and can also have detrimental effects on performance if overused. Like order, disorder should be embraced only so far as it is useful. But we should not fear it - nor venerate one over the other. This research also shows that we should continually question whether or not our existing assumptions work.", "hypothesis": "Both businesses and people aim at order without really considering its value.", "label": "e"} +{"uid": "id_645", "premise": "Why dont you go to the court if the employer does not pay you the Provident Fund contribution?", "hypothesis": "It is obligatory for the employer to pay the Provident Fund contribution to the Employees.", "label": "e"} +{"uid": "id_646", "premise": "Why dont you go to the court if the employer does not pay you the Provident Fund contribution?", "hypothesis": "Courts can intervene in matters of dispute between employer and employees", "label": "e"} +{"uid": "id_647", "premise": "Why pagodas don't fall down In a land swept by typhoons and shaken by earthquakes, how have Japan's tallest and seemingly flimsiest old buildings 500 or so wooden pagodas remained standing for centuries? Records show that only two have collapsed during the past 1400 years. Those that have disappeared were destroyed by fire as a result of lightning or civil war. The disastrous Hanshin earthquake in 1995 killed 6,400 people, toppled elevated highways, flattened office blocks and devastated the port area of Kobe. Yet it left the magnificent five-storey pagoda at the Toji temple in nearby Kyoto unscathed, though it levelled a number of buildings in the neighbourhood. Japanese scholars have been mystified for ages about why these tall, slender buildings are so stable. It was only thirty years ago that the building industry felt confident enough to erect office blocks of steel and reinforced concrete that had more than a dozen floors. 
With its special shock absorbers to dampen the effect of sudden sideways movements from an earthquake, the thirty-six-storey Kasumigaseki building in central Tokyo Japan's first skyscraper was considered a masterpiece of modern engineering when it was built in 1968. Yet in 826, with only pegs and wedges to keep his wooden structure upright, the master builder Kobodaishi had no hesitation in sending his majestic Toji pagoda soaring fifty-five metres into the sky nearly half as high as the Kasumigaseki skyscraper built some eleven centuries later. Clearly, Japanese carpenters of the day knew a few tricks about allowing a building to sway and settle itself rather than fight nature's forces. But what sort of tricks? The multi-storey pagoda came to Japan from China in the sixth century. As in China, they were first introduced with Buddhism and were attached to important temples. The Chinese built their pagodas in brick or stone, with inner staircases, and used them in later centuries mainly as watchtowers. When the pagoda reached Japan, however, its architecture was freely adapted to local conditions they were built less high, typically five rather than nine storeys, made mainly of wood and the staircase was dispensed with because the Japanese pagoda did not have any practical use but became more of an art object. Because of the typhoons that batter Japan in the summer, Japanese builders learned to extend the eaves of buildings further beyond the walls. This prevents rainwater gushing down the walls. Pagodas in China and Korea have nothing like the overhang that is found on pagodas in Japan. The roof of a Japanese temple building can be made to overhang the sides of the structure by fifty per cent or more of the building's overall width. For the same reason, the builders of Japanese pagodas seem to have further increased their weight by choosing to cover these extended eaves not with the porcelain tiles of many Chinese pagodas but with much heavier earthenware tiles. But this does not totally explain the great resilience of Japanese pagodas. Is the answer that, like a tall pine tree, the Japanese pagoda with its massive trunk-like central pillar known as shinbashira simply flexes and sways during a typhoon or earthquake? For centuries, many thought so. But the answer is not so simple because the startling thing is that the shinbashira actually carries no load at all. In fact, in some pagoda designs, it does not even rest on the ground, but is suspended from the top of the pagoda hanging loosely down through the middle of the building. The weight of the building is supported entirely by twelve outer and four inner columns. And what is the role of the shinbashira, the central pillar? The best way to understand the shinbashira's role is to watch a video made by Shuzo Ishida, a structural engineer at Kyoto Institute of Technology. Mr Ishida, known to his students as 'Professor Pagoda' because of his passion to understand the pagoda, has built a series of models and tested them on a 'shake-table' in his laboratory. In short, the shinbashira was acting like an enormous stationary pendulum. The ancient craftsmen, apparently without the assistance of very advanced mathematics, seemed to grasp the principles that were, more than a thousand years later, applied in the construction of Japan's first skyscraper. What those early craftsmen had found by trial and error was that under pressure a pagoda's loose stack of floors could be made to slither to and fro independent of one another. 
Viewed from the side, the pagoda seemed to be doing a snake dance with each consecutive floor moving in the opposite direction to its neighbours above and below. The shinbashira, running up through a hole in the centre of the building, constrained individual stories from moving too far because, after moving a certain distance, they banged into it, transmitting energy away along the column. Another strange feature of the Japanese pagoda is that, because the building tapers, with each successive floor plan being smaller than the one below, none of the vertical pillars that carry the weight of the building is connected to its corresponding pillar above. In other words, a five-storey pagoda contains not even one pillar that travels right up through the building to carry the structural loads from the top to the bottom. More surprising is the fact that the individual stories of a Japanese pagoda, unlike their counterparts elsewhere, are not actually connected to each other. They are simply stacked one on top of another like a pile of hats. Interestingly, such a design would not be permitted under current Japanese building regulations. And the extra-wide eaves? Think of them as a tightrope walker's balancing pole. The bigger the mass at each end of the pole, the easier it is for the tightrope walker to maintain his or her balance. The same holds true for a pagoda. 'With the eaves extending out on all sides like balancing poles, ' says Mr Ishida, 'the building responds to even the most powerful jolt of an earthquake with a graceful swaying, never an abrupt shaking. ' Here again, Japanese master builders of a thousand years ago anticipated concepts of modern structural engineering.", "hypothesis": "The builders of pagodas knew how to absorb some of the power produced by severe weather conditions.", "label": "e"} +{"uid": "id_648", "premise": "Why pagodas don't fall down In a land swept by typhoons and shaken by earthquakes, how have Japan's tallest and seemingly flimsiest old buildings 500 or so wooden pagodas remained standing for centuries? Records show that only two have collapsed during the past 1400 years. Those that have disappeared were destroyed by fire as a result of lightning or civil war. The disastrous Hanshin earthquake in 1995 killed 6,400 people, toppled elevated highways, flattened office blocks and devastated the port area of Kobe. Yet it left the magnificent five-storey pagoda at the Toji temple in nearby Kyoto unscathed, though it levelled a number of buildings in the neighbourhood. Japanese scholars have been mystified for ages about why these tall, slender buildings are so stable. It was only thirty years ago that the building industry felt confident enough to erect office blocks of steel and reinforced concrete that had more than a dozen floors. With its special shock absorbers to dampen the effect of sudden sideways movements from an earthquake, the thirty-six-storey Kasumigaseki building in central Tokyo Japan's first skyscraper was considered a masterpiece of modern engineering when it was built in 1968. Yet in 826, with only pegs and wedges to keep his wooden structure upright, the master builder Kobodaishi had no hesitation in sending his majestic Toji pagoda soaring fifty-five metres into the sky nearly half as high as the Kasumigaseki skyscraper built some eleven centuries later. Clearly, Japanese carpenters of the day knew a few tricks about allowing a building to sway and settle itself rather than fight nature's forces. But what sort of tricks? 
The multi-storey pagoda came to Japan from China in the sixth century. As in China, they were first introduced with Buddhism and were attached to important temples. The Chinese built their pagodas in brick or stone, with inner staircases, and used them in later centuries mainly as watchtowers. When the pagoda reached Japan, however, its architecture was freely adapted to local conditions they were built less high, typically five rather than nine storeys, made mainly of wood and the staircase was dispensed with because the Japanese pagoda did not have any practical use but became more of an art object. Because of the typhoons that batter Japan in the summer, Japanese builders learned to extend the eaves of buildings further beyond the walls. This prevents rainwater gushing down the walls. Pagodas in China and Korea have nothing like the overhang that is found on pagodas in Japan. The roof of a Japanese temple building can be made to overhang the sides of the structure by fifty per cent or more of the building's overall width. For the same reason, the builders of Japanese pagodas seem to have further increased their weight by choosing to cover these extended eaves not with the porcelain tiles of many Chinese pagodas but with much heavier earthenware tiles. But this does not totally explain the great resilience of Japanese pagodas. Is the answer that, like a tall pine tree, the Japanese pagoda with its massive trunk-like central pillar known as shinbashira simply flexes and sways during a typhoon or earthquake? For centuries, many thought so. But the answer is not so simple because the startling thing is that the shinbashira actually carries no load at all. In fact, in some pagoda designs, it does not even rest on the ground, but is suspended from the top of the pagoda hanging loosely down through the middle of the building. The weight of the building is supported entirely by twelve outer and four inner columns. And what is the role of the shinbashira, the central pillar? The best way to understand the shinbashira's role is to watch a video made by Shuzo Ishida, a structural engineer at Kyoto Institute of Technology. Mr Ishida, known to his students as 'Professor Pagoda' because of his passion to understand the pagoda, has built a series of models and tested them on a 'shake-table' in his laboratory. In short, the shinbashira was acting like an enormous stationary pendulum. The ancient craftsmen, apparently without the assistance of very advanced mathematics, seemed to grasp the principles that were, more than a thousand years later, applied in the construction of Japan's first skyscraper. What those early craftsmen had found by trial and error was that under pressure a pagoda's loose stack of floors could be made to slither to and fro independent of one another. Viewed from the side, the pagoda seemed to be doing a snake dance with each consecutive floor moving in the opposite direction to its neighbours above and below. The shinbashira, running up through a hole in the centre of the building, constrained individual stories from moving too far because, after moving a certain distance, they banged into it, transmitting energy away along the column. Another strange feature of the Japanese pagoda is that, because the building tapers, with each successive floor plan being smaller than the one below, none of the vertical pillars that carry the weight of the building is connected to its corresponding pillar above. 
In other words, a five-storey pagoda contains not even one pillar that travels right up through the building to carry the structural loads from the top to the bottom. More surprising is the fact that the individual stories of a Japanese pagoda, unlike their counterparts elsewhere, are not actually connected to each other. They are simply stacked one on top of another like a pile of hats. Interestingly, such a design would not be permitted under current Japanese building regulations. And the extra-wide eaves? Think of them as a tightrope walker's balancing pole. The bigger the mass at each end of the pole, the easier it is for the tightrope walker to maintain his or her balance. The same holds true for a pagoda. 'With the eaves extending out on all sides like balancing poles, ' says Mr Ishida, 'the building responds to even the most powerful jolt of an earthquake with a graceful swaying, never an abrupt shaking. ' Here again, Japanese master builders of a thousand years ago anticipated concepts of modern structural engineering.", "hypothesis": "The other buildings near the Toji pagoda had been built in the last 30 years.", "label": "n"} +{"uid": "id_649", "premise": "Why pagodas don't fall down In a land swept by typhoons and shaken by earthquakes, how have Japan's tallest and seemingly flimsiest old buildings 500 or so wooden pagodas remained standing for centuries? Records show that only two have collapsed during the past 1400 years. Those that have disappeared were destroyed by fire as a result of lightning or civil war. The disastrous Hanshin earthquake in 1995 killed 6,400 people, toppled elevated highways, flattened office blocks and devastated the port area of Kobe. Yet it left the magnificent five-storey pagoda at the Toji temple in nearby Kyoto unscathed, though it levelled a number of buildings in the neighbourhood. Japanese scholars have been mystified for ages about why these tall, slender buildings are so stable. It was only thirty years ago that the building industry felt confident enough to erect office blocks of steel and reinforced concrete that had more than a dozen floors. With its special shock absorbers to dampen the effect of sudden sideways movements from an earthquake, the thirty-six-storey Kasumigaseki building in central Tokyo Japan's first skyscraper was considered a masterpiece of modern engineering when it was built in 1968. Yet in 826, with only pegs and wedges to keep his wooden structure upright, the master builder Kobodaishi had no hesitation in sending his majestic Toji pagoda soaring fifty-five metres into the sky nearly half as high as the Kasumigaseki skyscraper built some eleven centuries later. Clearly, Japanese carpenters of the day knew a few tricks about allowing a building to sway and settle itself rather than fight nature's forces. But what sort of tricks? The multi-storey pagoda came to Japan from China in the sixth century. As in China, they were first introduced with Buddhism and were attached to important temples. The Chinese built their pagodas in brick or stone, with inner staircases, and used them in later centuries mainly as watchtowers. When the pagoda reached Japan, however, its architecture was freely adapted to local conditions they were built less high, typically five rather than nine storeys, made mainly of wood and the staircase was dispensed with because the Japanese pagoda did not have any practical use but became more of an art object. 
Because of the typhoons that batter Japan in the summer, Japanese builders learned to extend the eaves of buildings further beyond the walls. This prevents rainwater gushing down the walls. Pagodas in China and Korea have nothing like the overhang that is found on pagodas in Japan. The roof of a Japanese temple building can be made to overhang the sides of the structure by fifty per cent or more of the building's overall width. For the same reason, the builders of Japanese pagodas seem to have further increased their weight by choosing to cover these extended eaves not with the porcelain tiles of many Chinese pagodas but with much heavier earthenware tiles. But this does not totally explain the great resilience of Japanese pagodas. Is the answer that, like a tall pine tree, the Japanese pagoda with its massive trunk-like central pillar known as shinbashira simply flexes and sways during a typhoon or earthquake? For centuries, many thought so. But the answer is not so simple because the startling thing is that the shinbashira actually carries no load at all. In fact, in some pagoda designs, it does not even rest on the ground, but is suspended from the top of the pagoda hanging loosely down through the middle of the building. The weight of the building is supported entirely by twelve outer and four inner columns. And what is the role of the shinbashira, the central pillar? The best way to understand the shinbashira's role is to watch a video made by Shuzo Ishida, a structural engineer at Kyoto Institute of Technology. Mr Ishida, known to his students as 'Professor Pagoda' because of his passion to understand the pagoda, has built a series of models and tested them on a 'shake-table' in his laboratory. In short, the shinbashira was acting like an enormous stationary pendulum. The ancient craftsmen, apparently without the assistance of very advanced mathematics, seemed to grasp the principles that were, more than a thousand years later, applied in the construction of Japan's first skyscraper. What those early craftsmen had found by trial and error was that under pressure a pagoda's loose stack of floors could be made to slither to and fro independent of one another. Viewed from the side, the pagoda seemed to be doing a snake dance with each consecutive floor moving in the opposite direction to its neighbours above and below. The shinbashira, running up through a hole in the centre of the building, constrained individual stories from moving too far because, after moving a certain distance, they banged into it, transmitting energy away along the column. Another strange feature of the Japanese pagoda is that, because the building tapers, with each successive floor plan being smaller than the one below, none of the vertical pillars that carry the weight of the building is connected to its corresponding pillar above. In other words, a five-storey pagoda contains not even one pillar that travels right up through the building to carry the structural loads from the top to the bottom. More surprising is the fact that the individual stories of a Japanese pagoda, unlike their counterparts elsewhere, are not actually connected to each other. They are simply stacked one on top of another like a pile of hats. Interestingly, such a design would not be permitted under current Japanese building regulations. And the extra-wide eaves? Think of them as a tightrope walker's balancing pole. The bigger the mass at each end of the pole, the easier it is for the tightrope walker to maintain his or her balance. 
The same holds true for a pagoda. 'With the eaves extending out on all sides like balancing poles, ' says Mr Ishida, 'the building responds to even the most powerful jolt of an earthquake with a graceful swaying, never an abrupt shaking. ' Here again, Japanese master builders of a thousand years ago anticipated concepts of modern structural engineering.", "hypothesis": "The Hanshin earthquake of 1995 destroyed the pagoda at the Toji temple.", "label": "c"} +{"uid": "id_650", "premise": "Why pagodas don't fall down In a land swept by typhoons and shaken by earthquakes, how have Japan's tallest and seemingly flimsiest old buildings 500 or so wooden pagodas remained standing for centuries? Records show that only two have collapsed during the past 1400 years. Those that have disappeared were destroyed by fire as a result of lightning or civil war. The disastrous Hanshin earthquake in 1995 killed 6,400 people, toppled elevated highways, flattened office blocks and devastated the port area of Kobe. Yet it left the magnificent five-storey pagoda at the Toji temple in nearby Kyoto unscathed, though it levelled a number of buildings in the neighbourhood. Japanese scholars have been mystified for ages about why these tall, slender buildings are so stable. It was only thirty years ago that the building industry felt confident enough to erect office blocks of steel and reinforced concrete that had more than a dozen floors. With its special shock absorbers to dampen the effect of sudden sideways movements from an earthquake, the thirty-six-storey Kasumigaseki building in central Tokyo Japan's first skyscraper was considered a masterpiece of modern engineering when it was built in 1968. Yet in 826, with only pegs and wedges to keep his wooden structure upright, the master builder Kobodaishi had no hesitation in sending his majestic Toji pagoda soaring fifty-five metres into the sky nearly half as high as the Kasumigaseki skyscraper built some eleven centuries later. Clearly, Japanese carpenters of the day knew a few tricks about allowing a building to sway and settle itself rather than fight nature's forces. But what sort of tricks? The multi-storey pagoda came to Japan from China in the sixth century. As in China, they were first introduced with Buddhism and were attached to important temples. The Chinese built their pagodas in brick or stone, with inner staircases, and used them in later centuries mainly as watchtowers. When the pagoda reached Japan, however, its architecture was freely adapted to local conditions they were built less high, typically five rather than nine storeys, made mainly of wood and the staircase was dispensed with because the Japanese pagoda did not have any practical use but became more of an art object. Because of the typhoons that batter Japan in the summer, Japanese builders learned to extend the eaves of buildings further beyond the walls. This prevents rainwater gushing down the walls. Pagodas in China and Korea have nothing like the overhang that is found on pagodas in Japan. The roof of a Japanese temple building can be made to overhang the sides of the structure by fifty per cent or more of the building's overall width. For the same reason, the builders of Japanese pagodas seem to have further increased their weight by choosing to cover these extended eaves not with the porcelain tiles of many Chinese pagodas but with much heavier earthenware tiles. But this does not totally explain the great resilience of Japanese pagodas. 
Is the answer that, like a tall pine tree, the Japanese pagoda with its massive trunk-like central pillar known as shinbashira simply flexes and sways during a typhoon or earthquake? For centuries, many thought so. But the answer is not so simple because the startling thing is that the shinbashira actually carries no load at all. In fact, in some pagoda designs, it does not even rest on the ground, but is suspended from the top of the pagoda hanging loosely down through the middle of the building. The weight of the building is supported entirely by twelve outer and four inner columns. And what is the role of the shinbashira, the central pillar? The best way to understand the shinbashira's role is to watch a video made by Shuzo Ishida, a structural engineer at Kyoto Institute of Technology. Mr Ishida, known to his students as 'Professor Pagoda' because of his passion to understand the pagoda, has built a series of models and tested them on a 'shake-table' in his laboratory. In short, the shinbashira was acting like an enormous stationary pendulum. The ancient craftsmen, apparently without the assistance of very advanced mathematics, seemed to grasp the principles that were, more than a thousand years later, applied in the construction of Japan's first skyscraper. What those early craftsmen had found by trial and error was that under pressure a pagoda's loose stack of floors could be made to slither to and fro independent of one another. Viewed from the side, the pagoda seemed to be doing a snake dance with each consecutive floor moving in the opposite direction to its neighbours above and below. The shinbashira, running up through a hole in the centre of the building, constrained individual stories from moving too far because, after moving a certain distance, they banged into it, transmitting energy away along the column. Another strange feature of the Japanese pagoda is that, because the building tapers, with each successive floor plan being smaller than the one below, none of the vertical pillars that carry the weight of the building is connected to its corresponding pillar above. In other words, a five-storey pagoda contains not even one pillar that travels right up through the building to carry the structural loads from the top to the bottom. More surprising is the fact that the individual stories of a Japanese pagoda, unlike their counterparts elsewhere, are not actually connected to each other. They are simply stacked one on top of another like a pile of hats. Interestingly, such a design would not be permitted under current Japanese building regulations. And the extra-wide eaves? Think of them as a tightrope walker's balancing pole. The bigger the mass at each end of the pole, the easier it is for the tightrope walker to maintain his or her balance. The same holds true for a pagoda. 'With the eaves extending out on all sides like balancing poles, ' says Mr Ishida, 'the building responds to even the most powerful jolt of an earthquake with a graceful swaying, never an abrupt shaking. ' Here again, Japanese master builders of a thousand years ago anticipated concepts of modern structural engineering.", "hypothesis": "Only two Japanese pagodas have collapsed in 1400 years.", "label": "e"} +{"uid": "id_651", "premise": "Why pagodas dont fall down. In a land swept by typhoons and shaken by earthquakes, how have Japans tallest and seemingly flimsiest old buildings 500 or so wooden pagodas remained standing for centuries? Records show that only two have collapsed during the past 1400 years. 
Those that have disappeared were destroyed by fire as a result of lightning or civil war. The disastrous Hanshin earthquake in 1995 killed 6,400 people, toppled elevated highways, flattened office blocks and devastated the port area of Kobe. Yet it left the magnificent five-storey pagoda at the Toji temple in nearby Kyoto unscathed, though it levelled a number of buildings in the neighbourhood. Japanese scholars have been mystified for ages about why these tall, slender buildings are so stable. It was only thirty years ago that the building industry felt confident enough to erect office blocks of steel and reinforced concrete that had more than a dozen floors. With its special shock absorbers to dampen the effect of sudden sideways movements from an earthquake, the thirty-six-storey Kasumigaseki building in central Tokyo Japans first skyscraper was considered a masterpiece of modern engineering when it was built in 1968. Yet in 826, with only pegs and wedges to keep his wooden structure upright, the master builder Kobodaishi had no hesitation in sending his majestic Toji pagoda soaring fifty-five metres into the sky nearly half as high as the Kasumigaseki skyscraper built some eleven centuries later. Clearly, Japanese carpenters of the day knew a few tricks about allowing a building to sway and settle itself rather than fight natures forces. But what sort of tricks? The multi-storey pagoda came to Japan from China in the sixth century. As in China, they were first introduced with Buddhism and were attached to important temples. The Chinese built their pagodas in brick or stone, with inner staircases, and used them in later centuries mainly as watchtowers. When the pagoda reached Japan, however, its architecture was freely adapted to local conditions they were built less high, typically five rather than nine storeys, made mainly of wood and the staircase was dispensed with because the Japanese pagoda did not have any practical use but became more of an art object. Because of the typhoons that batter Japan in the summer, Japanese builders learned to extend the eaves of buildings further beyond the walls. This prevents rainwater gushing down the walls. Pagodas in China and Korea have nothing like the overhang that is found on pagodas in Japan. The roof of a Japanese temple building can be made to overhang the sides of the structure by fifty per cent or more of the buildings overall width. For the same reason, the builders of Japanese pagodas seem to have further increased their weight by choosing to cover these extended eaves not with the porcelain tiles of many Chinese pagodas but with much heavier earthenware tiles. But this does not totally explain the great resilience of Japanese pagodas. Is the answer that, like a tall pine tree, the Japanese pagoda with its massive trunk-like central pillar known as shinbashira simply flexes and sways during a typhoon or earthquake? For centuries, many thought so. But the answer is not so simple because the startling thing is that the shinbashira actually carries no load at all. In fact, in some pagoda designs, it does not even rest on the ground, but is suspended from the top of the pagoda hanging loosely down through the middle of the building. The weight of the building is supported entirely by twelve outer and four inner columns. And what is the role of the shinbashira, the central pillar? The best way to understand the shinbashiras role is to watch a video made by Shuzo Ishida, a structural engineer at Kyoto Institute of Technology. 
Mr Ishida, known to his students as Professor Pagoda because of his passion to understand the pagoda, has built a series of models and tested them on a shake-table in his laboratory. In short, the shinbashira was acting like an enormous stationary pendulum. The ancient craftsmen, apparently without the assistance of very advanced mathematics, seemed to grasp the principles that were, more than a thousand years later, applied in the construction of Japans first skyscraper. What those early craftsmen had found by trial and error was that under pressure a pagodas loose stack of floors could be made to slither to and fro independent of one another. Viewed from the side, the pagoda seemed to be doing a snake dance with each consecutive floor moving in the opposite direction to its neighbours above and below. The shinbashira, running up through a hole in the centre of the building, constrained individual stories from moving too far because, after moving a certain distance, they banged into it, transmitting energy away along the column. Another strange feature of the Japanese pagoda is that, because the building tapers, with each successive floor plan being smaller than the one below, none of the vertical pillars that carry the weight of the building is connected to its corresponding pillar above. In other words, a five-storey pagoda contains not even one pillar that travels right up through the building to carry the structural loads from the top to the bottom. More surprising is the fact that the individual stories of a Japanese pagoda, unlike their counterparts elsewhere, are not actually connected to each other. They are simply stacked one on top of another like a pile of hats. Interestingly, such a design would not be permitted under current Japanese building regulations. And the extra-wide eaves? Think of them as a tightrope walkers balancing pole. The bigger the mass at each end of the pole, the easier it is for the tightrope walker to maintain his or her balance. The same holds true for a pagoda. With the eaves extending out on all sides like balancing poles, says Mr Ishida, the building responds to even the most powerful jolt of an earthquake with a graceful swaying, never an abrupt shaking. Here again, Japanese master builders of a thousand years ago anticipated concepts of modern structural engineering.", "hypothesis": "The builders of pagodas knew how to absorb some of the power produced by severe weather conditions.", "label": "e"} +{"uid": "id_652", "premise": "Why pagodas dont fall down. In a land swept by typhoons and shaken by earthquakes, how have Japans tallest and seemingly flimsiest old buildings 500 or so wooden pagodas remained standing for centuries? Records show that only two have collapsed during the past 1400 years. Those that have disappeared were destroyed by fire as a result of lightning or civil war. The disastrous Hanshin earthquake in 1995 killed 6,400 people, toppled elevated highways, flattened office blocks and devastated the port area of Kobe. Yet it left the magnificent five-storey pagoda at the Toji temple in nearby Kyoto unscathed, though it levelled a number of buildings in the neighbourhood. Japanese scholars have been mystified for ages about why these tall, slender buildings are so stable. It was only thirty years ago that the building industry felt confident enough to erect office blocks of steel and reinforced concrete that had more than a dozen floors. 
With its special shock absorbers to dampen the effect of sudden sideways movements from an earthquake, the thirty-six-storey Kasumigaseki building in central Tokyo Japans first skyscraper was considered a masterpiece of modern engineering when it was built in 1968. Yet in 826, with only pegs and wedges to keep his wooden structure upright, the master builder Kobodaishi had no hesitation in sending his majestic Toji pagoda soaring fifty-five metres into the sky nearly half as high as the Kasumigaseki skyscraper built some eleven centuries later. Clearly, Japanese carpenters of the day knew a few tricks about allowing a building to sway and settle itself rather than fight natures forces. But what sort of tricks? The multi-storey pagoda came to Japan from China in the sixth century. As in China, they were first introduced with Buddhism and were attached to important temples. The Chinese built their pagodas in brick or stone, with inner staircases, and used them in later centuries mainly as watchtowers. When the pagoda reached Japan, however, its architecture was freely adapted to local conditions they were built less high, typically five rather than nine storeys, made mainly of wood and the staircase was dispensed with because the Japanese pagoda did not have any practical use but became more of an art object. Because of the typhoons that batter Japan in the summer, Japanese builders learned to extend the eaves of buildings further beyond the walls. This prevents rainwater gushing down the walls. Pagodas in China and Korea have nothing like the overhang that is found on pagodas in Japan. The roof of a Japanese temple building can be made to overhang the sides of the structure by fifty per cent or more of the buildings overall width. For the same reason, the builders of Japanese pagodas seem to have further increased their weight by choosing to cover these extended eaves not with the porcelain tiles of many Chinese pagodas but with much heavier earthenware tiles. But this does not totally explain the great resilience of Japanese pagodas. Is the answer that, like a tall pine tree, the Japanese pagoda with its massive trunk-like central pillar known as shinbashira simply flexes and sways during a typhoon or earthquake? For centuries, many thought so. But the answer is not so simple because the startling thing is that the shinbashira actually carries no load at all. In fact, in some pagoda designs, it does not even rest on the ground, but is suspended from the top of the pagoda hanging loosely down through the middle of the building. The weight of the building is supported entirely by twelve outer and four inner columns. And what is the role of the shinbashira, the central pillar? The best way to understand the shinbashiras role is to watch a video made by Shuzo Ishida, a structural engineer at Kyoto Institute of Technology. Mr Ishida, known to his students as Professor Pagoda because of his passion to understand the pagoda, has built a series of models and tested them on a shake-table in his laboratory. In short, the shinbashira was acting like an enormous stationary pendulum. The ancient craftsmen, apparently without the assistance of very advanced mathematics, seemed to grasp the principles that were, more than a thousand years later, applied in the construction of Japans first skyscraper. What those early craftsmen had found by trial and error was that under pressure a pagodas loose stack of floors could be made to slither to and fro independent of one another. 
Viewed from the side, the pagoda seemed to be doing a snake dance with each consecutive floor moving in the opposite direction to its neighbours above and below. The shinbashira, running up through a hole in the centre of the building, constrained individual stories from moving too far because, after moving a certain distance, they banged into it, transmitting energy away along the column. Another strange feature of the Japanese pagoda is that, because the building tapers, with each successive floor plan being smaller than the one below, none of the vertical pillars that carry the weight of the building is connected to its corresponding pillar above. In other words, a five-storey pagoda contains not even one pillar that travels right up through the building to carry the structural loads from the top to the bottom. More surprising is the fact that the individual stories of a Japanese pagoda, unlike their counterparts elsewhere, are not actually connected to each other. They are simply stacked one on top of another like a pile of hats. Interestingly, such a design would not be permitted under current Japanese building regulations. And the extra-wide eaves? Think of them as a tightrope walkers balancing pole. The bigger the mass at each end of the pole, the easier it is for the tightrope walker to maintain his or her balance. The same holds true for a pagoda. With the eaves extending out on all sides like balancing poles, says Mr Ishida, the building responds to even the most powerful jolt of an earthquake with a graceful swaying, never an abrupt shaking. Here again, Japanese master builders of a thousand years ago anticipated concepts of modern structural engineering.", "hypothesis": "The other buildings near the Toji pagoda had been built in the last 30 years.", "label": "n"} +{"uid": "id_653", "premise": "Why pagodas dont fall down. In a land swept by typhoons and shaken by earthquakes, how have Japans tallest and seemingly flimsiest old buildings 500 or so wooden pagodas remained standing for centuries? Records show that only two have collapsed during the past 1400 years. Those that have disappeared were destroyed by fire as a result of lightning or civil war. The disastrous Hanshin earthquake in 1995 killed 6,400 people, toppled elevated highways, flattened office blocks and devastated the port area of Kobe. Yet it left the magnificent five-storey pagoda at the Toji temple in nearby Kyoto unscathed, though it levelled a number of buildings in the neighbourhood. Japanese scholars have been mystified for ages about why these tall, slender buildings are so stable. It was only thirty years ago that the building industry felt confident enough to erect office blocks of steel and reinforced concrete that had more than a dozen floors. With its special shock absorbers to dampen the effect of sudden sideways movements from an earthquake, the thirty-six-storey Kasumigaseki building in central Tokyo Japans first skyscraper was considered a masterpiece of modern engineering when it was built in 1968. Yet in 826, with only pegs and wedges to keep his wooden structure upright, the master builder Kobodaishi had no hesitation in sending his majestic Toji pagoda soaring fifty-five metres into the sky nearly half as high as the Kasumigaseki skyscraper built some eleven centuries later. Clearly, Japanese carpenters of the day knew a few tricks about allowing a building to sway and settle itself rather than fight natures forces. But what sort of tricks? 
The multi-storey pagoda came to Japan from China in the sixth century. As in China, they were first introduced with Buddhism and were attached to important temples. The Chinese built their pagodas in brick or stone, with inner staircases, and used them in later centuries mainly as watchtowers. When the pagoda reached Japan, however, its architecture was freely adapted to local conditions they were built less high, typically five rather than nine storeys, made mainly of wood and the staircase was dispensed with because the Japanese pagoda did not have any practical use but became more of an art object. Because of the typhoons that batter Japan in the summer, Japanese builders learned to extend the eaves of buildings further beyond the walls. This prevents rainwater gushing down the walls. Pagodas in China and Korea have nothing like the overhang that is found on pagodas in Japan. The roof of a Japanese temple building can be made to overhang the sides of the structure by fifty per cent or more of the buildings overall width. For the same reason, the builders of Japanese pagodas seem to have further increased their weight by choosing to cover these extended eaves not with the porcelain tiles of many Chinese pagodas but with much heavier earthenware tiles. But this does not totally explain the great resilience of Japanese pagodas. Is the answer that, like a tall pine tree, the Japanese pagoda with its massive trunk-like central pillar known as shinbashira simply flexes and sways during a typhoon or earthquake? For centuries, many thought so. But the answer is not so simple because the startling thing is that the shinbashira actually carries no load at all. In fact, in some pagoda designs, it does not even rest on the ground, but is suspended from the top of the pagoda hanging loosely down through the middle of the building. The weight of the building is supported entirely by twelve outer and four inner columns. And what is the role of the shinbashira, the central pillar? The best way to understand the shinbashiras role is to watch a video made by Shuzo Ishida, a structural engineer at Kyoto Institute of Technology. Mr Ishida, known to his students as Professor Pagoda because of his passion to understand the pagoda, has built a series of models and tested them on a shake-table in his laboratory. In short, the shinbashira was acting like an enormous stationary pendulum. The ancient craftsmen, apparently without the assistance of very advanced mathematics, seemed to grasp the principles that were, more than a thousand years later, applied in the construction of Japans first skyscraper. What those early craftsmen had found by trial and error was that under pressure a pagodas loose stack of floors could be made to slither to and fro independent of one another. Viewed from the side, the pagoda seemed to be doing a snake dance with each consecutive floor moving in the opposite direction to its neighbours above and below. The shinbashira, running up through a hole in the centre of the building, constrained individual stories from moving too far because, after moving a certain distance, they banged into it, transmitting energy away along the column. Another strange feature of the Japanese pagoda is that, because the building tapers, with each successive floor plan being smaller than the one below, none of the vertical pillars that carry the weight of the building is connected to its corresponding pillar above. 
In other words, a five-storey pagoda contains not even one pillar that travels right up through the building to carry the structural loads from the top to the bottom. More surprising is the fact that the individual stories of a Japanese pagoda, unlike their counterparts elsewhere, are not actually connected to each other. They are simply stacked one on top of another like a pile of hats. Interestingly, such a design would not be permitted under current Japanese building regulations. And the extra-wide eaves? Think of them as a tightrope walkers balancing pole. The bigger the mass at each end of the pole, the easier it is for the tightrope walker to maintain his or her balance. The same holds true for a pagoda. With the eaves extending out on all sides like balancing poles, says Mr Ishida, the building responds to even the most powerful jolt of an earthquake with a graceful swaying, never an abrupt shaking. Here again, Japanese master builders of a thousand years ago anticipated concepts of modern structural engineering.", "hypothesis": "The Hanshin earthquake of 1995 destroyed the pagoda at the Toji temple.", "label": "c"} +{"uid": "id_654", "premise": "Why pagodas dont fall down. In a land swept by typhoons and shaken by earthquakes, how have Japans tallest and seemingly flimsiest old buildings 500 or so wooden pagodas remained standing for centuries? Records show that only two have collapsed during the past 1400 years. Those that have disappeared were destroyed by fire as a result of lightning or civil war. The disastrous Hanshin earthquake in 1995 killed 6,400 people, toppled elevated highways, flattened office blocks and devastated the port area of Kobe. Yet it left the magnificent five-storey pagoda at the Toji temple in nearby Kyoto unscathed, though it levelled a number of buildings in the neighbourhood. Japanese scholars have been mystified for ages about why these tall, slender buildings are so stable. It was only thirty years ago that the building industry felt confident enough to erect office blocks of steel and reinforced concrete that had more than a dozen floors. With its special shock absorbers to dampen the effect of sudden sideways movements from an earthquake, the thirty-six-storey Kasumigaseki building in central Tokyo Japans first skyscraper was considered a masterpiece of modern engineering when it was built in 1968. Yet in 826, with only pegs and wedges to keep his wooden structure upright, the master builder Kobodaishi had no hesitation in sending his majestic Toji pagoda soaring fifty-five metres into the sky nearly half as high as the Kasumigaseki skyscraper built some eleven centuries later. Clearly, Japanese carpenters of the day knew a few tricks about allowing a building to sway and settle itself rather than fight natures forces. But what sort of tricks? The multi-storey pagoda came to Japan from China in the sixth century. As in China, they were first introduced with Buddhism and were attached to important temples. The Chinese built their pagodas in brick or stone, with inner staircases, and used them in later centuries mainly as watchtowers. When the pagoda reached Japan, however, its architecture was freely adapted to local conditions they were built less high, typically five rather than nine storeys, made mainly of wood and the staircase was dispensed with because the Japanese pagoda did not have any practical use but became more of an art object. 
Because of the typhoons that batter Japan in the summer, Japanese builders learned to extend the eaves of buildings further beyond the walls. This prevents rainwater gushing down the walls. Pagodas in China and Korea have nothing like the overhang that is found on pagodas in Japan. The roof of a Japanese temple building can be made to overhang the sides of the structure by fifty per cent or more of the buildings overall width. For the same reason, the builders of Japanese pagodas seem to have further increased their weight by choosing to cover these extended eaves not with the porcelain tiles of many Chinese pagodas but with much heavier earthenware tiles. But this does not totally explain the great resilience of Japanese pagodas. Is the answer that, like a tall pine tree, the Japanese pagoda with its massive trunk-like central pillar known as shinbashira simply flexes and sways during a typhoon or earthquake? For centuries, many thought so. But the answer is not so simple because the startling thing is that the shinbashira actually carries no load at all. In fact, in some pagoda designs, it does not even rest on the ground, but is suspended from the top of the pagoda hanging loosely down through the middle of the building. The weight of the building is supported entirely by twelve outer and four inner columns. And what is the role of the shinbashira, the central pillar? The best way to understand the shinbashiras role is to watch a video made by Shuzo Ishida, a structural engineer at Kyoto Institute of Technology. Mr Ishida, known to his students as Professor Pagoda because of his passion to understand the pagoda, has built a series of models and tested them on a shake-table in his laboratory. In short, the shinbashira was acting like an enormous stationary pendulum. The ancient craftsmen, apparently without the assistance of very advanced mathematics, seemed to grasp the principles that were, more than a thousand years later, applied in the construction of Japans first skyscraper. What those early craftsmen had found by trial and error was that under pressure a pagodas loose stack of floors could be made to slither to and fro independent of one another. Viewed from the side, the pagoda seemed to be doing a snake dance with each consecutive floor moving in the opposite direction to its neighbours above and below. The shinbashira, running up through a hole in the centre of the building, constrained individual stories from moving too far because, after moving a certain distance, they banged into it, transmitting energy away along the column. Another strange feature of the Japanese pagoda is that, because the building tapers, with each successive floor plan being smaller than the one below, none of the vertical pillars that carry the weight of the building is connected to its corresponding pillar above. In other words, a five-storey pagoda contains not even one pillar that travels right up through the building to carry the structural loads from the top to the bottom. More surprising is the fact that the individual stories of a Japanese pagoda, unlike their counterparts elsewhere, are not actually connected to each other. They are simply stacked one on top of another like a pile of hats. Interestingly, such a design would not be permitted under current Japanese building regulations. And the extra-wide eaves? Think of them as a tightrope walkers balancing pole. The bigger the mass at each end of the pole, the easier it is for the tightrope walker to maintain his or her balance. 
The same holds true for a pagoda. With the eaves extending out on all sides like balancing poles, says Mr Ishida, the building responds to even the most powerful jolt of an earthquake with a graceful swaying, never an abrupt shaking. Here again, Japanese master builders of a thousand years ago anticipated concepts of modern structural engineering.", "hypothesis": "Only two Japanese pagodas have collapsed in 1400 years.", "label": "e"} +{"uid": "id_655", "premise": "Why zoos are good Scientist David Hone makes the case for zoos In my view, it is perfectly possible for many species of animals living in zoos or wildlife parks to have a quality of life as high as, or higher than, in the wild. Animals in good zoos get a varied and high-quality diet with all the supplements required, and any illnesses they might have will be treated. Their movement might be somewhat restricted, but they have a safe environment in which to live, and they are spared bullying and social ostracism by others of their kind. They do not suffer from the threat or stress of predators, or the irritation and pain of parasites or injuries. The average captive animal will have a greater life expectancy compared with its wild counterpart, and will not die of drought, of starvation or in the jaws of a predator. A lot of very nasty things happen to truly 'wild' animals that simply don't happen in good zoos, and to view a life that is 'free' as one that is automatically 'good' is, I think, an error. Furthermore, zoos serve several key purposes. Firstly, zoos aid conservation. Colossal numbers of species are becoming extinct across the world, and many more are increasingly threatened and therefore risk extinction. Moreover, some of these collapses have been sudden, dramatic and unexpected, or were simply discovered very late in the day. A species protected in captivity can be bred up to provide a reservoir population against a population crash or extinction in the wild. A good number of species only exist in captivity, with many of these living in zoos. Still more only exist in the wild because they have been reintroduced from zoos, or have wild populations that have been boosted by captive bred animals. Without these efforts there would be fewer species alive today. Although reintroduction successes are few and far between, the numbers are increasing, and the very fact that species have been saved or reintroduced as a result of captive breeding proves the value of such initiatives. Zoos also provide education. Many children and adults, especially those in cities, will never see a wild animal beyond a fox or pigeon. While it is true that television documentaries are becoming ever more detailed and impressive, and many natural history specimens are on display in museums, there really is nothing to compare with seeing a living creature in the flesh, hearing it, smelling it, watching what it does and having the time to absorb details. That alone will bring a greater understanding and perspective to many, and hopefully give them a greater appreciation for wildlife, conservation efforts and how they can contribute. In addition to this, there is also the education that can take place in zoos through signs, talks and presentations which directly communicate information to visitors about the animals they are seeing and their place in the world. This was an area where zoos used to be lacking, but they are now increasingly sophisticated in their communication and outreach work. 
Many zoos also work directly to educate conservation workers in other countries, or send their animal keepers abroad to contribute their knowledge and skills to those working in zoos and reserves, thereby helping to improve conditions and reintroductions all over the world. Zoos also play a key role in research. If we are to save wild species and restore and repair ecosystems we need to know about how key species live, act and react. Being able to undertake research on animals in zoos where there is less risk and fewer variables means real changes can be effected on wild populations. Finding out about, for example, the oestrus cycle of an animal or its breeding rate helps us manage wild populations. Procedures such as capturing and moving at-risk or dangerous individuals are bolstered by knowledge gained in zoos about doses for anaesthetics, and by experience in handling arid transporting animals. This can make a real difference to conservation efforts and to the reduction of human-animal conflicts, and can provide a knowledge base for helping with the increasing threats of habitat destruction and other problems. In conclusion, considering the many ongoing global threats to the environment, it is hard for me to see zoos as anything other than essential to the long-term survival of numerous species. They are vital not just in terms of protecting animals, but as a means of learning about them to aid those still in the wild, as well as educating and informing the general population about these animals and their world so that they can assist or at least accept the need to be more environmentally conscious. Without them, the world would be, and would increasingly become, a much poorer place.", "hypothesis": "Improvements in the quality of TV wildlife documentaries have resulted in increased numbers of zoo visitors.", "label": "n"} +{"uid": "id_656", "premise": "Why zoos are good Scientist David Hone makes the case for zoos In my view, it is perfectly possible for many species of animals living in zoos or wildlife parks to have a quality of life as high as, or higher than, in the wild. Animals in good zoos get a varied and high-quality diet with all the supplements required, and any illnesses they might have will be treated. Their movement might be somewhat restricted, but they have a safe environment in which to live, and they are spared bullying and social ostracism by others of their kind. They do not suffer from the threat or stress of predators, or the irritation and pain of parasites or injuries. The average captive animal will have a greater life expectancy compared with its wild counterpart, and will not die of drought, of starvation or in the jaws of a predator. A lot of very nasty things happen to truly 'wild' animals that simply don't happen in good zoos, and to view a life that is 'free' as one that is automatically 'good' is, I think, an error. Furthermore, zoos serve several key purposes. Firstly, zoos aid conservation. Colossal numbers of species are becoming extinct across the world, and many more are increasingly threatened and therefore risk extinction. Moreover, some of these collapses have been sudden, dramatic and unexpected, or were simply discovered very late in the day. A species protected in captivity can be bred up to provide a reservoir population against a population crash or extinction in the wild. A good number of species only exist in captivity, with many of these living in zoos. 
Still more only exist in the wild because they have been reintroduced from zoos, or have wild populations that have been boosted by captive bred animals. Without these efforts there would be fewer species alive today. Although reintroduction successes are few and far between, the numbers are increasing, and the very fact that species have been saved or reintroduced as a result of captive breeding proves the value of such initiatives. Zoos also provide education. Many children and adults, especially those in cities, will never see a wild animal beyond a fox or pigeon. While it is true that television documentaries are becoming ever more detailed and impressive, and many natural history specimens are on display in museums, there really is nothing to compare with seeing a living creature in the flesh, hearing it, smelling it, watching what it does and having the time to absorb details. That alone will bring a greater understanding and perspective to many, and hopefully give them a greater appreciation for wildlife, conservation efforts and how they can contribute. In addition to this, there is also the education that can take place in zoos through signs, talks and presentations which directly communicate information to visitors about the animals they are seeing and their place in the world. This was an area where zoos used to be lacking, but they are now increasingly sophisticated in their communication and outreach work. Many zoos also work directly to educate conservation workers in other countries, or send their animal keepers abroad to contribute their knowledge and skills to those working in zoos and reserves, thereby helping to improve conditions and reintroductions all over the world. Zoos also play a key role in research. If we are to save wild species and restore and repair ecosystems we need to know about how key species live, act and react. Being able to undertake research on animals in zoos where there is less risk and fewer variables means real changes can be effected on wild populations. Finding out about, for example, the oestrus cycle of an animal or its breeding rate helps us manage wild populations. Procedures such as capturing and moving at-risk or dangerous individuals are bolstered by knowledge gained in zoos about doses for anaesthetics, and by experience in handling arid transporting animals. This can make a real difference to conservation efforts and to the reduction of human-animal conflicts, and can provide a knowledge base for helping with the increasing threats of habitat destruction and other problems. In conclusion, considering the many ongoing global threats to the environment, it is hard for me to see zoos as anything other than essential to the long-term survival of numerous species. They are vital not just in terms of protecting animals, but as a means of learning about them to aid those still in the wild, as well as educating and informing the general population about these animals and their world so that they can assist or at least accept the need to be more environmentally conscious. Without them, the world would be, and would increasingly become, a much poorer place.", "hypothesis": "There are some species in zoos which can no longer be found in the wild.", "label": "e"} +{"uid": "id_657", "premise": "Why zoos are good Scientist David Hone makes the case for zoos In my view, it is perfectly possible for many species of animals living in zoos or wildlife parks to have a quality of life as high as, or higher than, in the wild. 
Animals in good zoos get a varied and high-quality diet with all the supplements required, and any illnesses they might have will be treated. Their movement might be somewhat restricted, but they have a safe environment in which to live, and they are spared bullying and social ostracism by others of their kind. They do not suffer from the threat or stress of predators, or the irritation and pain of parasites or injuries. The average captive animal will have a greater life expectancy compared with its wild counterpart, and will not die of drought, of starvation or in the jaws of a predator. A lot of very nasty things happen to truly 'wild' animals that simply don't happen in good zoos, and to view a life that is 'free' as one that is automatically 'good' is, I think, an error. Furthermore, zoos serve several key purposes. Firstly, zoos aid conservation. Colossal numbers of species are becoming extinct across the world, and many more are increasingly threatened and therefore risk extinction. Moreover, some of these collapses have been sudden, dramatic and unexpected, or were simply discovered very late in the day. A species protected in captivity can be bred up to provide a reservoir population against a population crash or extinction in the wild. A good number of species only exist in captivity, with many of these living in zoos. Still more only exist in the wild because they have been reintroduced from zoos, or have wild populations that have been boosted by captive bred animals. Without these efforts there would be fewer species alive today. Although reintroduction successes are few and far between, the numbers are increasing, and the very fact that species have been saved or reintroduced as a result of captive breeding proves the value of such initiatives. Zoos also provide education. Many children and adults, especially those in cities, will never see a wild animal beyond a fox or pigeon. While it is true that television documentaries are becoming ever more detailed and impressive, and many natural history specimens are on display in museums, there really is nothing to compare with seeing a living creature in the flesh, hearing it, smelling it, watching what it does and having the time to absorb details. That alone will bring a greater understanding and perspective to many, and hopefully give them a greater appreciation for wildlife, conservation efforts and how they can contribute. In addition to this, there is also the education that can take place in zoos through signs, talks and presentations which directly communicate information to visitors about the animals they are seeing and their place in the world. This was an area where zoos used to be lacking, but they are now increasingly sophisticated in their communication and outreach work. Many zoos also work directly to educate conservation workers in other countries, or send their animal keepers abroad to contribute their knowledge and skills to those working in zoos and reserves, thereby helping to improve conditions and reintroductions all over the world. Zoos also play a key role in research. If we are to save wild species and restore and repair ecosystems we need to know about how key species live, act and react. Being able to undertake research on animals in zoos where there is less risk and fewer variables means real changes can be effected on wild populations. Finding out about, for example, the oestrus cycle of an animal or its breeding rate helps us manage wild populations. 
Procedures such as capturing and moving at-risk or dangerous individuals are bolstered by knowledge gained in zoos about doses for anaesthetics, and by experience in handling arid transporting animals. This can make a real difference to conservation efforts and to the reduction of human-animal conflicts, and can provide a knowledge base for helping with the increasing threats of habitat destruction and other problems. In conclusion, considering the many ongoing global threats to the environment, it is hard for me to see zoos as anything other than essential to the long-term survival of numerous species. They are vital not just in terms of protecting animals, but as a means of learning about them to aid those still in the wild, as well as educating and informing the general population about these animals and their world so that they can assist or at least accept the need to be more environmentally conscious. Without them, the world would be, and would increasingly become, a much poorer place.", "hypothesis": "An animal is likely to live longer in a zoo than in the wild.", "label": "e"} +{"uid": "id_658", "premise": "Why zoos are good Scientist David Hone makes the case for zoos In my view, it is perfectly possible for many species of animals living in zoos or wildlife parks to have a quality of life as high as, or higher than, in the wild. Animals in good zoos get a varied and high-quality diet with all the supplements required, and any illnesses they might have will be treated. Their movement might be somewhat restricted, but they have a safe environment in which to live, and they are spared bullying and social ostracism by others of their kind. They do not suffer from the threat or stress of predators, or the irritation and pain of parasites or injuries. The average captive animal will have a greater life expectancy compared with its wild counterpart, and will not die of drought, of starvation or in the jaws of a predator. A lot of very nasty things happen to truly 'wild' animals that simply don't happen in good zoos, and to view a life that is 'free' as one that is automatically 'good' is, I think, an error. Furthermore, zoos serve several key purposes. Firstly, zoos aid conservation. Colossal numbers of species are becoming extinct across the world, and many more are increasingly threatened and therefore risk extinction. Moreover, some of these collapses have been sudden, dramatic and unexpected, or were simply discovered very late in the day. A species protected in captivity can be bred up to provide a reservoir population against a population crash or extinction in the wild. A good number of species only exist in captivity, with many of these living in zoos. Still more only exist in the wild because they have been reintroduced from zoos, or have wild populations that have been boosted by captive bred animals. Without these efforts there would be fewer species alive today. Although reintroduction successes are few and far between, the numbers are increasing, and the very fact that species have been saved or reintroduced as a result of captive breeding proves the value of such initiatives. Zoos also provide education. Many children and adults, especially those in cities, will never see a wild animal beyond a fox or pigeon. 
While it is true that television documentaries are becoming ever more detailed and impressive, and many natural history specimens are on display in museums, there really is nothing to compare with seeing a living creature in the flesh, hearing it, smelling it, watching what it does and having the time to absorb details. That alone will bring a greater understanding and perspective to many, and hopefully give them a greater appreciation for wildlife, conservation efforts and how they can contribute. In addition to this, there is also the education that can take place in zoos through signs, talks and presentations which directly communicate information to visitors about the animals they are seeing and their place in the world. This was an area where zoos used to be lacking, but they are now increasingly sophisticated in their communication and outreach work. Many zoos also work directly to educate conservation workers in other countries, or send their animal keepers abroad to contribute their knowledge and skills to those working in zoos and reserves, thereby helping to improve conditions and reintroductions all over the world. Zoos also play a key role in research. If we are to save wild species and restore and repair ecosystems we need to know about how key species live, act and react. Being able to undertake research on animals in zoos where there is less risk and fewer variables means real changes can be effected on wild populations. Finding out about, for example, the oestrus cycle of an animal or its breeding rate helps us manage wild populations. Procedures such as capturing and moving at-risk or dangerous individuals are bolstered by knowledge gained in zoos about doses for anaesthetics, and by experience in handling arid transporting animals. This can make a real difference to conservation efforts and to the reduction of human-animal conflicts, and can provide a knowledge base for helping with the increasing threats of habitat destruction and other problems. In conclusion, considering the many ongoing global threats to the environment, it is hard for me to see zoos as anything other than essential to the long-term survival of numerous species. They are vital not just in terms of protecting animals, but as a means of learning about them to aid those still in the wild, as well as educating and informing the general population about these animals and their world so that they can assist or at least accept the need to be more environmentally conscious. Without them, the world would be, and would increasingly become, a much poorer place.", "hypothesis": "Zoos have always excelled at transmitting information about animals to the public.", "label": "c"} +{"uid": "id_659", "premise": "Why zoos are good Scientist David Hone makes the case for zoos In my view, it is perfectly possible for many species of animals living in zoos or wildlife parks to have a quality of life as high as, or higher than, in the wild. Animals in good zoos get a varied and high-quality diet with all the supplements required, and any illnesses they might have will be treated. Their movement might be somewhat restricted, but they have a safe environment in which to live, and they are spared bullying and social ostracism by others of their kind. They do not suffer from the threat or stress of predators, or the irritation and pain of parasites or injuries. 
The average captive animal will have a greater life expectancy compared with its wild counterpart, and will not die of drought, of starvation or in the jaws of a predator. A lot of very nasty things happen to truly 'wild' animals that simply don't happen in good zoos, and to view a life that is 'free' as one that is automatically 'good' is, I think, an error. Furthermore, zoos serve several key purposes. Firstly, zoos aid conservation. Colossal numbers of species are becoming extinct across the world, and many more are increasingly threatened and therefore risk extinction. Moreover, some of these collapses have been sudden, dramatic and unexpected, or were simply discovered very late in the day. A species protected in captivity can be bred up to provide a reservoir population against a population crash or extinction in the wild. A good number of species only exist in captivity, with many of these living in zoos. Still more only exist in the wild because they have been reintroduced from zoos, or have wild populations that have been boosted by captive bred animals. Without these efforts there would be fewer species alive today. Although reintroduction successes are few and far between, the numbers are increasing, and the very fact that species have been saved or reintroduced as a result of captive breeding proves the value of such initiatives. Zoos also provide education. Many children and adults, especially those in cities, will never see a wild animal beyond a fox or pigeon. While it is true that television documentaries are becoming ever more detailed and impressive, and many natural history specimens are on display in museums, there really is nothing to compare with seeing a living creature in the flesh, hearing it, smelling it, watching what it does and having the time to absorb details. That alone will bring a greater understanding and perspective to many, and hopefully give them a greater appreciation for wildlife, conservation efforts and how they can contribute. In addition to this, there is also the education that can take place in zoos through signs, talks and presentations which directly communicate information to visitors about the animals they are seeing and their place in the world. This was an area where zoos used to be lacking, but they are now increasingly sophisticated in their communication and outreach work. Many zoos also work directly to educate conservation workers in other countries, or send their animal keepers abroad to contribute their knowledge and skills to those working in zoos and reserves, thereby helping to improve conditions and reintroductions all over the world. Zoos also play a key role in research. If we are to save wild species and restore and repair ecosystems we need to know about how key species live, act and react. Being able to undertake research on animals in zoos where there is less risk and fewer variables means real changes can be effected on wild populations. Finding out about, for example, the oestrus cycle of an animal or its breeding rate helps us manage wild populations. Procedures such as capturing and moving at-risk or dangerous individuals are bolstered by knowledge gained in zoos about doses for anaesthetics, and by experience in handling arid transporting animals. This can make a real difference to conservation efforts and to the reduction of human-animal conflicts, and can provide a knowledge base for helping with the increasing threats of habitat destruction and other problems. 
In conclusion, considering the many ongoing global threats to the environment, it is hard for me to see zoos as anything other than essential to the long-term survival of numerous species. They are vital not just in terms of protecting animals, but as a means of learning about them to aid those still in the wild, as well as educating and informing the general population about these animals and their world so that they can assist or at least accept the need to be more environmentally conscious. Without them, the world would be, and would increasingly become, a much poorer place.", "hypothesis": "Studying animals in zoos is less stressful for the animals than studying them in the wild.", "label": "n"} +{"uid": "id_660", "premise": "Wildlife expert Dr Ellen Boyle spoke at this years Wildlife Conservation Conference about the lack of a strategy to prevent the imminent extinction of a number of primate species across Asia. Her talk focused on the dangers of hunters and the clearing of tropical rainforests across the continent. Several species of Asian monkey, some only recently discovered, are facing these dual threats to their natural habitats. Dr Boyle differentiated the most at-risk primates as critically endangered, but the talk stressed that other species were also living under constant threat. Dr Boyles scientific paper presented compelling evidence of the need to halt deforestation, but in some developing Asian economies, where wood is collected for fuel and for sale and land is cleared for farming, human interests currently take precedence over the fate of the regions lower-order primates.", "hypothesis": "The deforestation of rainforests is the only threat to Asian primates.", "label": "c"} +{"uid": "id_661", "premise": "Wildlife expert Dr Ellen Boyle spoke at this years Wildlife Conservation Conference about the lack of a strategy to prevent the imminent extinction of a number of primate species across Asia. Her talk focused on the dangers of hunters and the clearing of tropical rainforests across the continent. Several species of Asian monkey, some only recently discovered, are facing these dual threats to their natural habitats. Dr Boyle differentiated the most at-risk primates as critically endangered, but the talk stressed that other species were also living under constant threat.
Dr Boyles scientific paper presented compelling evidence of the need to halt deforestation, but in some developing Asian economies, where wood is collected for fuel and for sale and land is cleared for farming, human interests currently take precedence over the fate of the regions lower-order primates.", "hypothesis": "Dr Boyle suggests that all Asian countries prioritise economics over the survival of monkeys.", "label": "c"} +{"uid": "id_662", "premise": "Wildlife expert Dr Ellen Boyle spoke at this years Wildlife Conservation Conference about the lack of a strategy to prevent the imminent extinction of a number of primate species across Asia. Her talk focused on the dangers of hunters and the clearing of tropical rainforests across the continent. Several species of Asian monkey, some only recently discovered, are facing these dual threats to their natural habitats. Dr Boyle differentiated the most at-risk primates as critically endangered, but the talk stressed that other species were also living under constant threat. Dr Boyles scientific paper presented compelling evidence of the need to halt deforestation, but in some developing Asian economies, where wood is collected for fuel and for sale and land is cleared for farming, human interests currently take precedence over the fate of the regions lower-order primates.", "hypothesis": "The passage gives three reasons for the destruction of tropical rainforests.", "label": "e"} +{"uid": "id_663", "premise": "Wildlife expert Dr Ellen Boyle spoke at this years Wildlife Conservation Conference about the lack of a strategy to prevent the imminent extinction of a number of primate species across Asia. Her talk focused on the dangers of hunters and the clearing of tropical rainforests across the continent. Several species of Asian monkey, some only recently discovered, are facing these dual threats to their natural habitats. Dr Boyle differentiated the most at-risk primates as critically endangered, but the talk stressed that other species were also living under constant threat. Dr Boyles scientific paper presented compelling evidence of the need to halt deforestation, but in some developing Asian economies, where wood is collected for fuel and for sale and land is cleared for farming, human interests currently take precedence over the fate of the regions lower-order primates.", "hypothesis": "Dr Boyles talk at the Wildlife Conservation Conference explained the strategy for protecting Asian primates.", "label": "c"} +{"uid": "id_664", "premise": "Wildlife expert Dr Ellen Boyle spoke at this years Wildlife Conservation Conference about the lack of a strategy to prevent the imminent extinction of a number of primate species across Asia. Her talk focused on the dangers of hunters and the clearing of tropical rainforests across the continent.
Several species of Asian monkey, some only recently discovered, are facing these dual threats to their natural habitats. Dr Boyle differentiated the most at-risk primates as critically endangered, but the talk stressed that other species were also living under constant threat. Dr Boyles scientific paper presented compelling evidence of the need to halt deforestation, but in some developing Asian economies, where wood is collected for fuel and for sale and land is cleared for farming, human interests currently take precedence over the fate of the regions lower-order primates.", "hypothesis": "All Asian primates are threatened by extinction.", "label": "c"} +{"uid": "id_665", "premise": "Will the electric vehicle known as the Segway alter the ways that individuals get around? Dean Kamer, the inventor of the Segway, believes that this revolutionary vehicle will someday substitute for the bicycles and automobiles that now crowd our cities. When he introduced the Segway in 2001, he believed it would change our lives. Although the Segway uses up-to-the-minute technology, it looks very ordinary. The metal framework of the Segway consists of a platform where an individual stands. Attached to the front of the platform is a tall post with handles for the driver to hold. On each side of the platform is a wide, rubber wheel. Except for these two wheels, there are no mechanical parts on the Segway. It has no engine, no brakes, no pedal power, no gears, and no steering wheel. Instead it uses a computer system that imitates the ability of humans to keep their balance. This system seems to move to the driver's thoughts. For example, when the driver thinks \"Go forward\", the Segway moves forwards, and when the driver thinks, \"Stop\", it stops. The Segway is not really responding to the driver's thoughts, but to the tiny changes in balance that the driver makes as he prepares his body to move forward or to stop. For example, when the driver thinks about moving forward, he actually leans slightly forward, and when he thinks of stopping or slowing, the driver leans slightly back. The Segway is powered by batteries that allow it to travel about 17 miles on one battery charge. It is designed for short-range, low-speed operation. It has three speed settings. The slowest is the setting for learning, with speeds of up to miles per hour. Next is the sidewalk setting, with speeds of up to 9 miles per hour. The highest setting allows the driver to travel up to 12.5 miles per hour in open, flat areas. At all three speed settings, the Segway can go wherever a person can walk, both indoors and outdoors. Workers who must walk a lot in their jobs might be the primary users of Segways. For example, police officers could drive Segways to patrol city streets, and mail carriers could drive from house to house to deliver letters and packages. Farmers could quickly inspect distant fields and barns, and rangers, or parks. Security guards could protect neighborhoods or large buildings. Any task requiring a lot of walking could be made easier. In cities, shoppers could leave their cars at home and ride Segway from store to store.
Also, people who cannot comfortably walk due to age, illness, or injury could minimize their walking but still be able to go many places on a Segway. Why is it, then, that our job sites, parks, and shopping centers have not been subsequently filled with Segways since they were introduced in 2001? Why hasn't the expected revolution taken place? Studies have shown that Segways can help workers get more done in a shorter time. This saves money. Engineers admire Segways as a technological marvel. Business, government agencies, and individuals, however, have been unwilling to accept the Segway. Yes, there have been some successes. In a few cities, for example, mail carriers drive Segway on their routes, and police officers patrol on Segways. San Francisco, California, and Florence, Italy, are among several cities in the world that offer tours on Segways for a small fee. Occasionally you will see golfers riding Segways around golf courses. Throughout the world more than 150 security agencies use Segways, and China has recently entered the overseas market. These examples are encouraging, but can hardly be called a revolution. The primary reason seems to be that people have an inherent fear of doing something new. They fear others will laugh at them for buying a \"toy\". They fear losing control of the vehicle. They fear being injured. They fear not knowing the rules for using a Segway. They fear making people angry if they ride on the sidewalk. All these fears and others have kept sales low. The inventor explained why people have been slow to accept the Segway. He said, \"We didn't realize that although technology moves very quickly, people's mind-set changes very slowly. \" Perhaps a hundred years from now millions of people around the world will be riding Segways.", "hypothesis": "The driver can alter the direction of the Segway by leaning to the left or right", "label": "c"} +{"uid": "id_666", "premise": "Will the electric vehicle known as the Segway alter the ways that individuals get around? Dean Kamer, the inventor of the Segway, believes that this revolutionary vehicle will someday substitute for the bicycles and automobiles that now crowd our cities. When he introduced the Segway in 2001, he believed it would change our lives. Although the Segway uses up-to-the-minute technology, it looks very ordinary. The metal framework of the Segway consists of a platform where an individual stands. Attached to the front of the platform is a tall post with handles for the driver to hold. On each side of the platform is a wide, rubber wheel. Except for these two wheels, there are no mechanical parts on the Segway. It has no engine, no brakes, no pedal power, no gears, and no steering wheel. Instead it uses a computer system that imitates the ability of humans to keep their balance. This system seems to move to the driver's thoughts. For example, when the driver thinks \"Go forward\", the Segway moves forwards, and when the driver thinks, \"Stop\", it stops. The Segway is not really responding to the driver's thoughts, but to the tiny changes in balance that the driver makes as he prepares his body to move forward or to stop. For example, when the driver thinks about moving forward, he actually leans slightly forward, and when he thinks of stopping or slowing, the driver leans slightly back. The Segway is powered by batteries that allow it to travel about 17 miles on one battery charge. It is designed for short-range, low-speed operation. It has three speed settings.
The slowest is the setting for learning, with speeds of up to miles per hour. Next is the sidewalk setting, with speeds of up to 9 miles per hour. The highest setting allows the driver to travel up to 12.5 miles per hour in open, flat areas. At all three speed settings, the Segway can go wherever a person can walk, both indoors and outdoors. Workers who must walk a lot in their jobs might be the primary users of Segways. For example, police officers could drive Segways to patrol city streets, and mail carriers could drive from house to house to deliver letters and packages. Farmers could quickly inspect distant fields and barns, and rangers, or parks. Security guards could protect neighborhoods or large buildings. Any task requiring a lot of walking could be made easier. In cities, shoppers could leave their cars at home and ride Segway from store to store. Also, people who cannot comfortably walk due to age, illness, or injury could minimize their walking but still be able to go many places on a Segway. Why is it, then, that our job sites, parks, and shopping centers have not been subsequently filled with Segways since they were introduced in 2001? Why hasn't the expected revolution taken place? Studies have shown that Segways can help workers get more done in a shorter time. This saves money. Engineers admire Segways as a technological marvel. Business, government agencies, and individuals, however, have been unwilling to accept the Segway. Yes, there have been some successes. In a few cities, for example, mail carriers drive Segway on their routes, and police officers patrol on Segways. San Francisco, California, and Florence, Italy, are among several cities in the world that offer tours on Segways for a small fee. Occasionally you will see golfers riding Segways around golf courses. Throughout the world more than 150 security agencies use Segways, and China has recently entered the overseas market. These examples are encouraging, but can hardly be called a revolution. The primary reason seems to be that people have an inherent fear of doing something new. They fear others will laugh at them for buying a \"toy\". They fear losing control of the vehicle. They fear being injured. They fear not knowing the rules for using a Segway. They fear making people angry if they ride on the sidewalk. All these fears and others have kept sales low. The inventor explained why people have been slow to accept the Segway. He said, \"We didn't realize that although technology moves very quickly, people's mind-set changes very slowly. \" Perhaps a hundred years from now millions of people around the world will be riding Segways.", "hypothesis": "The Segway's framework consists of a platform and a post with handles", "label": "e"} +{"uid": "id_667", "premise": "Will the electric vehicle known as the Segway alter the ways that individuals get around? Dean Kamer, the inventor of the Segway, believes that this revolutionary vehicle will someday substitute for the bicycles and automobiles that now crowd our cities. When he introduced the Segway in 2001, he believed it would change our lives. Although the Segway uses up-to-the-minute technology, it looks very ordinary. The metal framework of the Segway consists of a platform where an individual stands. Attached to the front of the platform is a tall post with handles for the driver to hold. On each side of the platform is a wide, rubber wheel. Except for these two wheels, there are no mechanical parts on the Segway.
It has no engine, no brakes, no pedal power, no gears, and no steering wheel. Instead it uses a computer system that imitates the ability of humans to keep their balance. This system seems to move to the driver's thoughts. For example, when the driver thinks \"Go forward\", the Segway moves forwards, and when the driver thinks, \"Stop\", it stops. The Segway is not really responding to the driver's thoughts, but to the tiny changes in balance that the driver makes as he prepares his body to move forward or to stop. For example, when the driver thinks about moving forward, he actually leans slightly forward, and when he thinks of stopping or slowing, the driver leans slightly back. The Segway is powered by batteries that allow it to travel about 17 miles on one battery charge. It is designed for short-range, low-speed operation. It has three speed settings. The slowest is the setting for learning, with speeds of up to miles per hour. Next is the sidewalk setting, with speeds of up to 9 miles per hour. The highest setting allows the driver to travel up to 12.5 miles per hour in open, flat areas. At all three speed settings, the Segway can go wherever a person can walk, both indoors and outdoors. Workers who must walk a lot in their jobs might be the primary users of Segways. For example, police officers could drive Segways to patrol city streets, and mail carriers could drive from house to house to deliver letters and packages. Farmers could quickly inspect distant fields and barns, and rangers, or parks. Security guards could protect neighborhoods or large buildings. Any task requiring a lot of walking could be made easier. In cities, shoppers could leave their cars at home and ride Segway from store to store. Also, people who cannot comfortably walk due to age, illness, or injury could minimize their walking but still be able to go many places on a Segway. Why is it, then, that our job sites, parks, and shopping centers have not been subsequently filled with Segways since they were introduced in 2001? Why hasn't the expected revolution taken place? Studies have shown that Segways can help workers get more done in a shorter time. This saves money. Engineers admire Segways as a technological marvel. Business, government agencies, and individuals, however, have been unwilling to accept the Segway. Yes, there have been some successes. In a few cities, for example, mail carriers drive Segway on their routes, and police officers patrol on Segways. San Francisco, California, and Florence, Italy, are among several cities in the world that offer tours on Segways for a small fee. Occasionally you will see golfers riding Segways around golf courses. Throughout the world more than 150 security agencies use Segways, and China has recently entered the overseas market. These examples are encouraging, but can hardly be called a revolution. The primary reason seems to be that people have an inherent fear of doing something new. They fear others will laugh at them for buying a \"toy\". They fear losing control of the vehicle. They fear being injured. They fear not knowing the rules for using a Segway. They fear making people angry if they ride on the sidewalk. All these fears and others have kept sales low. The inventor explained why people have been slow to accept the Segway. He said, \"We didn't realize that although technology moves very quickly, people's mind-set changes very slowly. 
\" Perhaps a hundred years from now millions of people around the world will be riding Segways.", "hypothesis": "The Segway was primarily designed for student to make their travel much more comfortable", "label": "n"} +{"uid": "id_668", "premise": "William Gilbert and Magnetism 16th and 17th centuries saw two great pioneers of modem science: Galileo and Gilbert. The impact of their findings is eminent. Gilbert was the first modem scientist, also the accredited father of the science of electricity and magnetism, an Englishman of learning and a physician at the court of Elizabeth. Prior to him, all that was known of electricity and magnetism was what the ancients knew, nothing more than that the: lodestone possessed magnetic properties and that amber and jet, when rubbed, would attract bits of paper or other substances of small specific gravity. However, he is less well-known than he deserves. Gilberts birth predated Galileo. Born in an eminent local family in Colchester county in the UK, on May 24,1544, he went to grammar school, and then studied medicine at St. Johns College, Cambridge, graduating in 1573. Later he traveled in the continent and eventually settled down in London. He was a very successful and eminent doctor. All this culminated in his election to the president of the Royal Science Society. He was also appointed the personal physician to the Queen (Elizabeth I) , and later knighted by the Queen. He faithfully served her until her death. However, he didnt outlive the Queen for long and died on December 10, 1603, only a few months after his appointment as a personal physician to King James. Gilbert was first interested in chemistry but later changed his focus due to the large portion of the mysticism of alchemy involved (such as the transmutation of metal). He gradually developed his interest in physics after the great minds of the ancient, particularly about the knowledge the ancient Greeks had about lodestones, strange minerals with the power to attract iron. In the meantime, Britain became a major seafaring nation in 1588 when the Spanish Armada was defeated, opening the way to the British settlement of America. British ships depended on the magnetic: compass, yet no one understood why it worked. Did the pole star attract it, as Columbus once speculated; or was there a magnetic mountain at the pole, as described in Odyssey which ships would never approach because the sailors thought its pull would yank out all their iron nails and fittings? For nearly 20 years William Gilbert conducted ingenious experiments to understand magnetism. His works include On the Magnet and Magnetic Bodies, Great Magnet of the Earth. Gilberts discovery was so important to modem physics. He investigated the nature of magnetism and electricity. He even coined the word electric. Though the early beliefs of magnetism were also largely entangled with superstitions such as that rubbing garlic on lodestone can neutralize its magnetism, one example being that sailors even believed the smell of garlic would even interfere with the action of the compass, which is why helmsmen were forbidden to eat it near a ships compass. Gilbert also found that metals can be magnetized by rubbing materials such as fur, plastic or the like on them. He named the ends of a magnet north pole and south pole. The magnetic poles can attract or repel, depending on polarity. In addition, however, ordinary iron is always attracted to a magnet. 
Though he started to study the relationship between magnetism and electricity, sadly he didnt complete it. His research of static electricity using amber and jet only demonstrated that objects with electrical charges can work like magnets attracting small pieces of paper and stuff. It is a French guy named du Fay that discovered that there are actually two electrical charges, positive and negative. He also questioned the traditional astronomical beliefs. Though a Copernican, he didnt express in his quintessential beliefs whether the earth is at the center of the universe or in orbit around the sun. However, he believed that stars are not equidistant from the earth, but have their own earth-like planets orbiting around them. The earth is itself like a giant magnet, which is also why compasses always point north. They spin on an axis that is aligned with the earths polarity. He even likened the polarity of the magnet to the polarity of the earth and built an entire magnetic philosophy on this analogy. In his explanation, magnetism was the soul of the earth. Thus a perfectly spherical lodestone, when aligned with the earths poles, would wobble all by itself in 24 hours. Further, he also believed that suns and other stars wobble just like the earth does around a crystal core, and speculated that the moon might also be a magnet caused to orbit by its magnetic attraction to the earth. This was perhaps the first proposal that a force might cause a heavenly orbit. His research method was revolutionary in that he used experiments rather than pure logic and reasoning like the ancient Greek philosophers did. It was a new attitude toward the scientific investigation. Until then, scientific experiments were not in fashion. It was because of this scientific attitude, together with his contribution to our knowledge of magnetism, that a unit of magnetomotive force, also known as magnetic potential, was named Gilbert in his honor. His approach of careful observation and experimentation rather than the authoritative opinion or deductive philosophy of others had laid the very foundation for modem science.", "hypothesis": "He was famous as a doctor before he was employed by the Queen", "label": "e"} +{"uid": "id_669", "premise": "William Gilbert and Magnetism 16th and 17th centuries saw two great pioneers of modem science: Galileo and Gilbert. The impact of their findings is eminent. Gilbert was the first modem scientist, also the accredited father of the science of electricity and magnetism, an Englishman of learning and a physician at the court of Elizabeth. Prior to him, all that was known of electricity and magnetism was what the ancients knew, nothing more than that the: lodestone possessed magnetic properties and that amber and jet, when rubbed, would attract bits of paper or other substances of small specific gravity. However, he is less well-known than he deserves. Gilberts birth predated Galileo. Born in an eminent local family in Colchester county in the UK, on May 24,1544, he went to grammar school, and then studied medicine at St. Johns College, Cambridge, graduating in 1573. Later he traveled in the continent and eventually settled down in London. He was a very successful and eminent doctor. All this culminated in his election to the president of the Royal Science Society. He was also appointed the personal physician to the Queen (Elizabeth I) , and later knighted by the Queen. He faithfully served her until her death. 
However, he didnt outlive the Queen for long and died on December 10, 1603, only a few months after his appointment as a personal physician to King James. Gilbert was first interested in chemistry but later changed his focus due to the large portion of the mysticism of alchemy involved (such as the transmutation of metal). He gradually developed his interest in physics after the great minds of the ancient, particularly about the knowledge the ancient Greeks had about lodestones, strange minerals with the power to attract iron. In the meantime, Britain became a major seafaring nation in 1588 when the Spanish Armada was defeated, opening the way to the British settlement of America. British ships depended on the magnetic: compass, yet no one understood why it worked. Did the pole star attract it, as Columbus once speculated; or was there a magnetic mountain at the pole, as described in Odyssey which ships would never approach because the sailors thought its pull would yank out all their iron nails and fittings? For nearly 20 years William Gilbert conducted ingenious experiments to understand magnetism. His works include On the Magnet and Magnetic Bodies, Great Magnet of the Earth. Gilberts discovery was so important to modem physics. He investigated the nature of magnetism and electricity. He even coined the word electric. Though the early beliefs of magnetism were also largely entangled with superstitions such as that rubbing garlic on lodestone can neutralize its magnetism, one example being that sailors even believed the smell of garlic would even interfere with the action of the compass, which is why helmsmen were forbidden to eat it near a ships compass. Gilbert also found that metals can be magnetized by rubbing materials such as fur, plastic or the like on them. He named the ends of a magnet north pole and south pole. The magnetic poles can attract or repel, depending on polarity. In addition, however, ordinary iron is always attracted to a magnet. Though he started to study the relationship between magnetism and electricity, sadly he didnt complete it. His research of static electricity using amber and jet only demonstrated that objects with electrical charges can work like magnets attracting small pieces of paper and stuff. It is a French guy named du Fay that discovered that there are actually two electrical charges, positive and negative. He also questioned the traditional astronomical beliefs. Though a Copernican, he didnt express in his quintessential beliefs whether the earth is at the center of the universe or in orbit around the sun. However, he believed that stars are not equidistant from the earth, but have their own earth-like planets orbiting around them. The earth is itself like a giant magnet, which is also why compasses always point north. They spin on an axis that is aligned with the earths polarity. He even likened the polarity of the magnet to the polarity of the earth and built an entire magnetic philosophy on this analogy. In his explanation, magnetism was the soul of the earth. Thus a perfectly spherical lodestone, when aligned with the earths poles, would wobble all by itself in 24 hours. Further, he also believed that suns and other stars wobble just like the earth does around a crystal core, and speculated that the moon might also be a magnet caused to orbit by its magnetic attraction to the earth. This was perhaps the first proposal that a force might cause a heavenly orbit. 
His research method was revolutionary in that he used experiments rather than pure logic and reasoning like the ancient Greek philosophers did. It was a new attitude toward the scientific investigation. Until then, scientific experiments were not in fashion. It was because of this scientific attitude, together with his contribution to our knowledge of magnetism, that a unit of magnetomotive force, also known as magnetic potential, was named Gilbert in his honor. His approach of careful observation and experimentation rather than the authoritative opinion or deductive philosophy of others had laid the very foundation for modern science.", "hypothesis": "He lost faith in the medical theories of his time.", "label": "n"} +{"uid": "id_670", "premise": "William Gilbert and Magnetism 16th and 17th centuries saw two great pioneers of modern science: Galileo and Gilbert. The impact of their findings is eminent. Gilbert was the first modern scientist, also the accredited father of the science of electricity and magnetism, an Englishman of learning and a physician at the court of Elizabeth. Prior to him, all that was known of electricity and magnetism was what the ancients knew, nothing more than that the lodestone possessed magnetic properties and that amber and jet, when rubbed, would attract bits of paper or other substances of small specific gravity. However, he is less well-known than he deserves. Gilberts birth predated Galileo. Born in an eminent local family in Colchester county in the UK, on May 24, 1544, he went to grammar school, and then studied medicine at St. Johns College, Cambridge, graduating in 1573. Later he traveled in the continent and eventually settled down in London. He was a very successful and eminent doctor. All this culminated in his election to the president of the Royal Science Society. He was also appointed the personal physician to the Queen (Elizabeth I), and later knighted by the Queen. He faithfully served her until her death. However, he didnt outlive the Queen for long and died on December 10, 1603, only a few months after his appointment as a personal physician to King James. Gilbert was first interested in chemistry but later changed his focus due to the large portion of the mysticism of alchemy involved (such as the transmutation of metal). He gradually developed his interest in physics after the great minds of the ancient, particularly about the knowledge the ancient Greeks had about lodestones, strange minerals with the power to attract iron. In the meantime, Britain became a major seafaring nation in 1588 when the Spanish Armada was defeated, opening the way to the British settlement of America. British ships depended on the magnetic compass, yet no one understood why it worked. Did the pole star attract it, as Columbus once speculated; or was there a magnetic mountain at the pole, as described in Odyssey which ships would never approach because the sailors thought its pull would yank out all their iron nails and fittings? For nearly 20 years William Gilbert conducted ingenious experiments to understand magnetism. His works include On the Magnet and Magnetic Bodies, Great Magnet of the Earth. Gilberts discovery was so important to modern physics. He investigated the nature of magnetism and electricity. He even coined the word electric.
Though the early beliefs of magnetism were also largely entangled with superstitions such as that rubbing garlic on lodestone can neutralize its magnetism, one example being that sailors even believed the smell of garlic would even interfere with the action of the compass, which is why helmsmen were forbidden to eat it near a ships compass. Gilbert also found that metals can be magnetized by rubbing materials such as fur, plastic or the like on them. He named the ends of a magnet north pole and south pole. The magnetic poles can attract or repel, depending on polarity. In addition, however, ordinary iron is always attracted to a magnet. Though he started to study the relationship between magnetism and electricity, sadly he didnt complete it. His research of static electricity using amber and jet only demonstrated that objects with electrical charges can work like magnets attracting small pieces of paper and stuff. It is a French guy named du Fay that discovered that there are actually two electrical charges, positive and negative. He also questioned the traditional astronomical beliefs. Though a Copernican, he didnt express in his quintessential beliefs whether the earth is at the center of the universe or in orbit around the sun. However, he believed that stars are not equidistant from the earth, but have their own earth-like planets orbiting around them. The earth is itself like a giant magnet, which is also why compasses always point north. They spin on an axis that is aligned with the earths polarity. He even likened the polarity of the magnet to the polarity of the earth and built an entire magnetic philosophy on this analogy. In his explanation, magnetism was the soul of the earth. Thus a perfectly spherical lodestone, when aligned with the earths poles, would wobble all by itself in 24 hours. Further, he also believed that suns and other stars wobble just like the earth does around a crystal core, and speculated that the moon might also be a magnet caused to orbit by its magnetic attraction to the earth. This was perhaps the first proposal that a force might cause a heavenly orbit. His research method was revolutionary in that he used experiments rather than pure logic and reasoning like the ancient Greek philosophers did. It was a new attitude toward the scientific investigation. Until then, scientific experiments were not in fashion. It was because of this scientific attitude, together with his contribution to our knowledge of magnetism, that a unit of magnetomotive force, also known as magnetic potential, was named Gilbert in his honor. His approach of careful observation and experimentation rather than the authoritative opinion or deductive philosophy of others had laid the very foundation for modern science.", "hypothesis": "He is less famous than he should be.", "label": "e"} +{"uid": "id_671", "premise": "William Henry Perkin The man who invented synthetic dyes William Henry Perkin was born on March 12, 1838, in London, England. As a boy, Perkins curiosity prompted early interests in the arts, sciences, photography, and engineering. But it was a chance stumbling upon a run-down, yet functional, laboratory in his late grandfathers home that solidified the young mans enthusiasm for chemistry. As a student at the City of London School, Perkin became immersed in the study of chemistry.
His talent and devotion to the subject were perceived by his teacher, Thomas Hall, who encouraged him to attend a series of lectures given by the eminent scientist Michael Faraday at the Royal Institution. Those speeches fired the young chemists enthusiasm further, and he later went on to attend the Royal College of Chemistry, which he succeeded in entering in 1853, at the age of 15. At the time of Perkins enrolment, the Royal College of Chemistry was headed by the noted German chemist August Wilhelm Hofmann. Perkins scientific gifts soon caught Hofmanns attention and, within two years, he became Hofmanns youngest assistant. Not long after that, Perkin made the scientific breakthrough that would bring him both fame and fortune. At the time, quinine was the only viable medical treatment for malaria. The drug is derived from the bark of the cinchona tree, native to South America, and by 1856 demand for the drug was surpassing the available supply. Thus, when Hofmann made some passing comments about the desirability of a synthetic substitute for quinine, it was unsurprising that his star pupil was moved to take up the challenge. During his vacation in 1856, Perkin spent his time in the laboratory on the top floor of his familys house. He was attempting to manufacture quinine from aniline, an inexpensive and readily available coal tar waste product. Despite his best efforts, however, he did not end up with quinine. Instead, he produced a mysterious dark sludge. Luckily, Perkins scientific training and nature prompted him to investigate the substance further. Incorporating potassium dichromate and alcohol into the aniline at various stages of the experimental process, he finally produced a deep purple solution. And, proving the truth of the famous scientist Louis Pasteurs words chance favours only the prepared mind, Perkin saw the potential of his unexpected find. Historically, textile dyes were made from such natural sources as plants and animal excretions. Some of these, such as the glandular mucus of snails, were difficult to obtain and outrageously expensive. Indeed, the purple colour extracted from a snail was once so costly that in society at the time only the rich could afford it. Further, natural dyes tended to be muddy in hue and fade quickly. It was against this backdrop that Perkins discovery was made. Perkin quickly grasped that his purple solution could be used to colour fabric, thus making it the worlds first synthetic dye. Realising the importance of this breakthrough, he lost no time in patenting it. But perhaps the most fascinating of all Perkins reactions to his find was his nearly instant recognition that the new dye had commercial possibilities. 28Perkin originally named his dye Tyrian Purple, but it later became commonly known as mauve (from the French for the plant used to make the colour violet). He asked advice of Scottish dye works owner Robert Pullar, who assured him that manufacturing the dye would be well worth it if the colour remained fast (i. e. would not fade) and the cost was relatively low. So, over the fierce objections of his mentor Hofmann, he left college to give birth to the modern chemical industry. With the help of his father and brother, Perkin set up a factory not far from London. Utilising the cheap and plentiful coal tar that was an almost unlimited by product of Londons gas street lighting, the dye works began producing the worlds first synthetically dyed material in 1857. 
The company received a commercial boost from the Empress Eugenie of France, when she decided the new colour flattered her. Very soon, mauve was the necessary shade for all the fashionable ladies in that country. Not to be outdone, Englands Queen Victoria also appeared in public wearing a mauve gown, thus making it all the rage in England as well. The dye was bold and fast, and the public clamoured for more. Perkin went back to the drawing board. Although Perkins fame was achieved and fortune assured by his first discovery, the chemist continued his research. Among other dyes he developed and introduced were aniline red (1859) and aniline black (1863) and, in the late 1860s, Perkins green. It is important to note that Perkins synthetic dye discoveries had outcomes far beyond the merely decorative. The dyes also became vital to medical research in many ways. For instance, they were used to stain previously invisible microbes and bacteria, allowing researchers to identify such bacilli as tuberculosis, cholera, and anthrax. Artificial dyes continue to play a crucial role today. And, in what would have been particularly pleasing to Perkin, their current use is in the search for a vaccine against malaria.", "hypothesis": "Perkin was still young when he made the discovery that made him rich and famous.", "label": "e"} +{"uid": "id_672", "premise": "William Henry Perkin The man who invented synthetic dyes William Henry Perkin was born on March 12,1838, in London, England. As a boy, Perkins curiosity prompted early interests in the arts, sciences, photography, and engineering. But it was a chance stumbling upon a run-down, yet functional, laboratory in his late grandfathers home that solidified the young mans enthusiasm for chemistry. As a student at the City of London School, Perkin became immersed in the study of chemistry. His talent and devotion to the subject were perceived by his teacher, Thomas Hall, who encouraged him to attend a series of lectures given by the eminent scientist Michael Faraday at the Royal Institution. Those speeches fired the young chemists enthusiasm further, and he later went on to attend the Royal College of Chemistry, which he succeeded in entering in 1853, at the age of 15. At the time of Perkins enrolment, the Royal College of Chemistry was headed by the noted German chemist August Wilhelm Hofmann. Perkins scientific gifts soon caught Hofmanns attention and, within two years, he became Hofmanns youngest assistant. Not long after that, Perkin made the scientific breakthrough that would bring him both fame and fortune. At the time, quinine was the only viable medical treatment for malaria. The drug is derived from the bark of the cinchona tree, native to South America, and by 1856 demand for the drug was surpassing the available supply. Thus, when Hofmann made some passing comments about the desirability of a synthetic substitute for quinine, it was unsurprising that his star pupil was moved to take up the challenge. During his vacation in 1856, Perkin spent his time in the laboratory on the top floor of his familys house. He was attempting to manufacture quinine from aniline, an inexpensive and readily available coal tar waste product. Despite his best efforts, however, he did not end up with quinine. Instead, he produced a mysterious dark sludge. Luckily, Perkins scientific training and nature prompted him to investigate the substance further. 
Incorporating potassium dichromate and alcohol into the aniline at various stages of the experimental process, he finally produced a deep purple solution. And, proving the truth of the famous scientist Louis Pasteurs words chance favours only the prepared mind, Perkin saw the potential of his unexpected find. Historically, textile dyes were made from such natural sources as plants and animal excretions. Some of these, such as the glandular mucus of snails, were difficult to obtain and outrageously expensive. Indeed, the purple colour extracted from a snail was once so costly that in society at the time only the rich could afford it. Further, natural dyes tended to be muddy in hue and fade quickly. It was against this backdrop that Perkins discovery was made. Perkin quickly grasped that his purple solution could be used to colour fabric, thus making it the worlds first synthetic dye. Realising the importance of this breakthrough, he lost no time in patenting it. But perhaps the most fascinating of all Perkins reactions to his find was his nearly instant recognition that the new dye had commercial possibilities. 28Perkin originally named his dye Tyrian Purple, but it later became commonly known as mauve (from the French for the plant used to make the colour violet). He asked advice of Scottish dye works owner Robert Pullar, who assured him that manufacturing the dye would be well worth it if the colour remained fast (i. e. would not fade) and the cost was relatively low. So, over the fierce objections of his mentor Hofmann, he left college to give birth to the modern chemical industry. With the help of his father and brother, Perkin set up a factory not far from London. Utilising the cheap and plentiful coal tar that was an almost unlimited by product of Londons gas street lighting, the dye works began producing the worlds first synthetically dyed material in 1857. The company received a commercial boost from the Empress Eugenie of France, when she decided the new colour flattered her. Very soon, mauve was the necessary shade for all the fashionable ladies in that country. Not to be outdone, Englands Queen Victoria also appeared in public wearing a mauve gown, thus making it all the rage in England as well. The dye was bold and fast, and the public clamoured for more. Perkin went back to the drawing board. Although Perkins fame was achieved and fortune assured by his first discovery, the chemist continued his research. Among other dyes he developed and introduced were aniline red (1859) and aniline black (1863) and, in the late 1860s, Perkins green. It is important to note that Perkins synthetic dye discoveries had outcomes far beyond the merely decorative. The dyes also became vital to medical research in many ways. For instance, they were used to stain previously invisible microbes and bacteria, allowing researchers to identify such bacilli as tuberculosis, cholera, and anthrax. Artificial dyes continue to play a crucial role today. And, in what would have been particularly pleasing to Perkin, their current use is in the search for a vaccine against malaria.", "hypothesis": "The trees from which quinine is derived grow only in South America.", "label": "n"} +{"uid": "id_673", "premise": "William Henry Perkin The man who invented synthetic dyes William Henry Perkin was born on March 12,1838, in London, England. As a boy, Perkins curiosity prompted early interests in the arts, sciences, photography, and engineering. 
But it was a chance stumbling upon a run-down, yet functional, laboratory in his late grandfathers home that solidified the young mans enthusiasm for chemistry. As a student at the City of London School, Perkin became immersed in the study of chemistry. His talent and devotion to the subject were perceived by his teacher, Thomas Hall, who encouraged him to attend a series of lectures given by the eminent scientist Michael Faraday at the Royal Institution. Those speeches fired the young chemists enthusiasm further, and he later went on to attend the Royal College of Chemistry, which he succeeded in entering in 1853, at the age of 15. At the time of Perkins enrolment, the Royal College of Chemistry was headed by the noted German chemist August Wilhelm Hofmann. Perkins scientific gifts soon caught Hofmanns attention and, within two years, he became Hofmanns youngest assistant. Not long after that, Perkin made the scientific breakthrough that would bring him both fame and fortune. At the time, quinine was the only viable medical treatment for malaria. The drug is derived from the bark of the cinchona tree, native to South America, and by 1856 demand for the drug was surpassing the available supply. Thus, when Hofmann made some passing comments about the desirability of a synthetic substitute for quinine, it was unsurprising that his star pupil was moved to take up the challenge. During his vacation in 1856, Perkin spent his time in the laboratory on the top floor of his familys house. He was attempting to manufacture quinine from aniline, an inexpensive and readily available coal tar waste product. Despite his best efforts, however, he did not end up with quinine. Instead, he produced a mysterious dark sludge. Luckily, Perkins scientific training and nature prompted him to investigate the substance further. Incorporating potassium dichromate and alcohol into the aniline at various stages of the experimental process, he finally produced a deep purple solution. And, proving the truth of the famous scientist Louis Pasteurs words chance favours only the prepared mind, Perkin saw the potential of his unexpected find. Historically, textile dyes were made from such natural sources as plants and animal excretions. Some of these, such as the glandular mucus of snails, were difficult to obtain and outrageously expensive. Indeed, the purple colour extracted from a snail was once so costly that in society at the time only the rich could afford it. Further, natural dyes tended to be muddy in hue and fade quickly. It was against this backdrop that Perkins discovery was made. Perkin quickly grasped that his purple solution could be used to colour fabric, thus making it the worlds first synthetic dye. Realising the importance of this breakthrough, he lost no time in patenting it. But perhaps the most fascinating of all Perkins reactions to his find was his nearly instant recognition that the new dye had commercial possibilities. 28Perkin originally named his dye Tyrian Purple, but it later became commonly known as mauve (from the French for the plant used to make the colour violet). He asked advice of Scottish dye works owner Robert Pullar, who assured him that manufacturing the dye would be well worth it if the colour remained fast (i. e. would not fade) and the cost was relatively low. So, over the fierce objections of his mentor Hofmann, he left college to give birth to the modern chemical industry. With the help of his father and brother, Perkin set up a factory not far from London. 
Utilising the cheap and plentiful coal tar that was an almost unlimited by product of Londons gas street lighting, the dye works began producing the worlds first synthetically dyed material in 1857. The company received a commercial boost from the Empress Eugenie of France, when she decided the new colour flattered her. Very soon, mauve was the necessary shade for all the fashionable ladies in that country. Not to be outdone, Englands Queen Victoria also appeared in public wearing a mauve gown, thus making it all the rage in England as well. The dye was bold and fast, and the public clamoured for more. Perkin went back to the drawing board. Although Perkins fame was achieved and fortune assured by his first discovery, the chemist continued his research. Among other dyes he developed and introduced were aniline red (1859) and aniline black (1863) and, in the late 1860s, Perkins green. It is important to note that Perkins synthetic dye discoveries had outcomes far beyond the merely decorative. The dyes also became vital to medical research in many ways. For instance, they were used to stain previously invisible microbes and bacteria, allowing researchers to identify such bacilli as tuberculosis, cholera, and anthrax. Artificial dyes continue to play a crucial role today. And, in what would have been particularly pleasing to Perkin, their current use is in the search for a vaccine against malaria.", "hypothesis": "Michael Faraday was the first person to recognise Perkins ability as a student of chemistry.", "label": "c"} +{"uid": "id_674", "premise": "William Henry Perkin The man who invented synthetic dyes William Henry Perkin was born on March 12,1838, in London, England. As a boy, Perkins curiosity prompted early interests in the arts, sciences, photography, and engineering. But it was a chance stumbling upon a run-down, yet functional, laboratory in his late grandfathers home that solidified the young mans enthusiasm for chemistry. As a student at the City of London School, Perkin became immersed in the study of chemistry. His talent and devotion to the subject were perceived by his teacher, Thomas Hall, who encouraged him to attend a series of lectures given by the eminent scientist Michael Faraday at the Royal Institution. Those speeches fired the young chemists enthusiasm further, and he later went on to attend the Royal College of Chemistry, which he succeeded in entering in 1853, at the age of 15. At the time of Perkins enrolment, the Royal College of Chemistry was headed by the noted German chemist August Wilhelm Hofmann. Perkins scientific gifts soon caught Hofmanns attention and, within two years, he became Hofmanns youngest assistant. Not long after that, Perkin made the scientific breakthrough that would bring him both fame and fortune. At the time, quinine was the only viable medical treatment for malaria. The drug is derived from the bark of the cinchona tree, native to South America, and by 1856 demand for the drug was surpassing the available supply. Thus, when Hofmann made some passing comments about the desirability of a synthetic substitute for quinine, it was unsurprising that his star pupil was moved to take up the challenge. During his vacation in 1856, Perkin spent his time in the laboratory on the top floor of his familys house. He was attempting to manufacture quinine from aniline, an inexpensive and readily available coal tar waste product. Despite his best efforts, however, he did not end up with quinine. Instead, he produced a mysterious dark sludge. 
Luckily, Perkins scientific training and nature prompted him to investigate the substance further. Incorporating potassium dichromate and alcohol into the aniline at various stages of the experimental process, he finally produced a deep purple solution. And, proving the truth of the famous scientist Louis Pasteurs words chance favours only the prepared mind, Perkin saw the potential of his unexpected find. Historically, textile dyes were made from such natural sources as plants and animal excretions. Some of these, such as the glandular mucus of snails, were difficult to obtain and outrageously expensive. Indeed, the purple colour extracted from a snail was once so costly that in society at the time only the rich could afford it. Further, natural dyes tended to be muddy in hue and fade quickly. It was against this backdrop that Perkins discovery was made. Perkin quickly grasped that his purple solution could be used to colour fabric, thus making it the worlds first synthetic dye. Realising the importance of this breakthrough, he lost no time in patenting it. But perhaps the most fascinating of all Perkins reactions to his find was his nearly instant recognition that the new dye had commercial possibilities. 28Perkin originally named his dye Tyrian Purple, but it later became commonly known as mauve (from the French for the plant used to make the colour violet). He asked advice of Scottish dye works owner Robert Pullar, who assured him that manufacturing the dye would be well worth it if the colour remained fast (i. e. would not fade) and the cost was relatively low. So, over the fierce objections of his mentor Hofmann, he left college to give birth to the modern chemical industry. With the help of his father and brother, Perkin set up a factory not far from London. Utilising the cheap and plentiful coal tar that was an almost unlimited by product of Londons gas street lighting, the dye works began producing the worlds first synthetically dyed material in 1857. The company received a commercial boost from the Empress Eugenie of France, when she decided the new colour flattered her. Very soon, mauve was the necessary shade for all the fashionable ladies in that country. Not to be outdone, Englands Queen Victoria also appeared in public wearing a mauve gown, thus making it all the rage in England as well. The dye was bold and fast, and the public clamoured for more. Perkin went back to the drawing board. Although Perkins fame was achieved and fortune assured by his first discovery, the chemist continued his research. Among other dyes he developed and introduced were aniline red (1859) and aniline black (1863) and, in the late 1860s, Perkins green. It is important to note that Perkins synthetic dye discoveries had outcomes far beyond the merely decorative. The dyes also became vital to medical research in many ways. For instance, they were used to stain previously invisible microbes and bacteria, allowing researchers to identify such bacilli as tuberculosis, cholera, and anthrax. Artificial dyes continue to play a crucial role today. And, in what would have been particularly pleasing to Perkin, their current use is in the search for a vaccine against malaria.", "hypothesis": "Michael Faraday suggested Perkin should enrol in the Royal College of Chemistry.", "label": "n"} +{"uid": "id_675", "premise": "William Henry Perkin The man who invented synthetic dyes William Henry Perkin was born on March 12,1838, in London, England. 
As a boy, Perkins curiosity prompted early interests in the arts, sciences, photography, and engineering. But it was a chance stumbling upon a run-down, yet functional, laboratory in his late grandfathers home that solidified the young mans enthusiasm for chemistry. As a student at the City of London School, Perkin became immersed in the study of chemistry. His talent and devotion to the subject were perceived by his teacher, Thomas Hall, who encouraged him to attend a series of lectures given by the eminent scientist Michael Faraday at the Royal Institution. Those speeches fired the young chemists enthusiasm further, and he later went on to attend the Royal College of Chemistry, which he succeeded in entering in 1853, at the age of 15. At the time of Perkins enrolment, the Royal College of Chemistry was headed by the noted German chemist August Wilhelm Hofmann. Perkins scientific gifts soon caught Hofmanns attention and, within two years, he became Hofmanns youngest assistant. Not long after that, Perkin made the scientific breakthrough that would bring him both fame and fortune. At the time, quinine was the only viable medical treatment for malaria. The drug is derived from the bark of the cinchona tree, native to South America, and by 1856 demand for the drug was surpassing the available supply. Thus, when Hofmann made some passing comments about the desirability of a synthetic substitute for quinine, it was unsurprising that his star pupil was moved to take up the challenge. During his vacation in 1856, Perkin spent his time in the laboratory on the top floor of his familys house. He was attempting to manufacture quinine from aniline, an inexpensive and readily available coal tar waste product. Despite his best efforts, however, he did not end up with quinine. Instead, he produced a mysterious dark sludge. Luckily, Perkins scientific training and nature prompted him to investigate the substance further. Incorporating potassium dichromate and alcohol into the aniline at various stages of the experimental process, he finally produced a deep purple solution. And, proving the truth of the famous scientist Louis Pasteurs words chance favours only the prepared mind, Perkin saw the potential of his unexpected find. Historically, textile dyes were made from such natural sources as plants and animal excretions. Some of these, such as the glandular mucus of snails, were difficult to obtain and outrageously expensive. Indeed, the purple colour extracted from a snail was once so costly that in society at the time only the rich could afford it. Further, natural dyes tended to be muddy in hue and fade quickly. It was against this backdrop that Perkins discovery was made. Perkin quickly grasped that his purple solution could be used to colour fabric, thus making it the worlds first synthetic dye. Realising the importance of this breakthrough, he lost no time in patenting it. But perhaps the most fascinating of all Perkins reactions to his find was his nearly instant recognition that the new dye had commercial possibilities. 28Perkin originally named his dye Tyrian Purple, but it later became commonly known as mauve (from the French for the plant used to make the colour violet). He asked advice of Scottish dye works owner Robert Pullar, who assured him that manufacturing the dye would be well worth it if the colour remained fast (i. e. would not fade) and the cost was relatively low. So, over the fierce objections of his mentor Hofmann, he left college to give birth to the modern chemical industry. 
With the help of his father and brother, Perkin set up a factory not far from London. Utilising the cheap and plentiful coal tar that was an almost unlimited by product of Londons gas street lighting, the dye works began producing the worlds first synthetically dyed material in 1857. The company received a commercial boost from the Empress Eugenie of France, when she decided the new colour flattered her. Very soon, mauve was the necessary shade for all the fashionable ladies in that country. Not to be outdone, Englands Queen Victoria also appeared in public wearing a mauve gown, thus making it all the rage in England as well. The dye was bold and fast, and the public clamoured for more. Perkin went back to the drawing board. Although Perkins fame was achieved and fortune assured by his first discovery, the chemist continued his research. Among other dyes he developed and introduced were aniline red (1859) and aniline black (1863) and, in the late 1860s, Perkins green. It is important to note that Perkins synthetic dye discoveries had outcomes far beyond the merely decorative. The dyes also became vital to medical research in many ways. For instance, they were used to stain previously invisible microbes and bacteria, allowing researchers to identify such bacilli as tuberculosis, cholera, and anthrax. Artificial dyes continue to play a crucial role today. And, in what would have been particularly pleasing to Perkin, their current use is in the search for a vaccine against malaria.", "hypothesis": "Perkin was inspired by the discoveries of the famous scientist Louis Pasteur.", "label": "n"} +{"uid": "id_676", "premise": "William Henry Perkin The man who invented synthetic dyes William Henry Perkin was born on March 12,1838, in London, England. As a boy, Perkins curiosity prompted early interests in the arts, sciences, photography, and engineering. But it was a chance stumbling upon a run-down, yet functional, laboratory in his late grandfathers home that solidified the young mans enthusiasm for chemistry. As a student at the City of London School, Perkin became immersed in the study of chemistry. His talent and devotion to the subject were perceived by his teacher, Thomas Hall, who encouraged him to attend a series of lectures given by the eminent scientist Michael Faraday at the Royal Institution. Those speeches fired the young chemists enthusiasm further, and he later went on to attend the Royal College of Chemistry, which he succeeded in entering in 1853, at the age of 15. At the time of Perkins enrolment, the Royal College of Chemistry was headed by the noted German chemist August Wilhelm Hofmann. Perkins scientific gifts soon caught Hofmanns attention and, within two years, he became Hofmanns youngest assistant. Not long after that, Perkin made the scientific breakthrough that would bring him both fame and fortune. At the time, quinine was the only viable medical treatment for malaria. The drug is derived from the bark of the cinchona tree, native to South America, and by 1856 demand for the drug was surpassing the available supply. Thus, when Hofmann made some passing comments about the desirability of a synthetic substitute for quinine, it was unsurprising that his star pupil was moved to take up the challenge. During his vacation in 1856, Perkin spent his time in the laboratory on the top floor of his familys house. He was attempting to manufacture quinine from aniline, an inexpensive and readily available coal tar waste product. 
Despite his best efforts, however, he did not end up with quinine. Instead, he produced a mysterious dark sludge. Luckily, Perkins scientific training and nature prompted him to investigate the substance further. Incorporating potassium dichromate and alcohol into the aniline at various stages of the experimental process, he finally produced a deep purple solution. And, proving the truth of the famous scientist Louis Pasteurs words chance favours only the prepared mind, Perkin saw the potential of his unexpected find. Historically, textile dyes were made from such natural sources as plants and animal excretions. Some of these, such as the glandular mucus of snails, were difficult to obtain and outrageously expensive. Indeed, the purple colour extracted from a snail was once so costly that in society at the time only the rich could afford it. Further, natural dyes tended to be muddy in hue and fade quickly. It was against this backdrop that Perkins discovery was made. Perkin quickly grasped that his purple solution could be used to colour fabric, thus making it the worlds first synthetic dye. Realising the importance of this breakthrough, he lost no time in patenting it. But perhaps the most fascinating of all Perkins reactions to his find was his nearly instant recognition that the new dye had commercial possibilities. 28Perkin originally named his dye Tyrian Purple, but it later became commonly known as mauve (from the French for the plant used to make the colour violet). He asked advice of Scottish dye works owner Robert Pullar, who assured him that manufacturing the dye would be well worth it if the colour remained fast (i. e. would not fade) and the cost was relatively low. So, over the fierce objections of his mentor Hofmann, he left college to give birth to the modern chemical industry. With the help of his father and brother, Perkin set up a factory not far from London. Utilising the cheap and plentiful coal tar that was an almost unlimited by product of Londons gas street lighting, the dye works began producing the worlds first synthetically dyed material in 1857. The company received a commercial boost from the Empress Eugenie of France, when she decided the new colour flattered her. Very soon, mauve was the necessary shade for all the fashionable ladies in that country. Not to be outdone, Englands Queen Victoria also appeared in public wearing a mauve gown, thus making it all the rage in England as well. The dye was bold and fast, and the public clamoured for more. Perkin went back to the drawing board. Although Perkins fame was achieved and fortune assured by his first discovery, the chemist continued his research. Among other dyes he developed and introduced were aniline red (1859) and aniline black (1863) and, in the late 1860s, Perkins green. It is important to note that Perkins synthetic dye discoveries had outcomes far beyond the merely decorative. The dyes also became vital to medical research in many ways. For instance, they were used to stain previously invisible microbes and bacteria, allowing researchers to identify such bacilli as tuberculosis, cholera, and anthrax. Artificial dyes continue to play a crucial role today. 
And, in what would have been particularly pleasing to Perkin, their current use is in the search for a vaccine against malaria.", "hypothesis": "Perkin employed August Wilhelm Hofmann as his assistant.", "label": "c"} +{"uid": "id_677", "premise": "William Henry Perkin The man who invented synthetic dyes William Henry Perkin was born on March 12,1838, in London, England. As a boy, Perkins curiosity prompted early interests in the arts, sciences, photography, and engineering. But it was a chance stumbling upon a run-down, yet functional, laboratory in his late grandfathers home that solidified the young mans enthusiasm for chemistry. As a student at the City of London School, Perkin became immersed in the study of chemistry. His talent and devotion to the subject were perceived by his teacher, Thomas Hall, who encouraged him to attend a series of lectures given by the eminent scientist Michael Faraday at the Royal Institution. Those speeches fired the young chemists enthusiasm further, and he later went on to attend the Royal College of Chemistry, which he succeeded in entering in 1853, at the age of 15. At the time of Perkins enrolment, the Royal College of Chemistry was headed by the noted German chemist August Wilhelm Hofmann. Perkins scientific gifts soon caught Hofmanns attention and, within two years, he became Hofmanns youngest assistant. Not long after that, Perkin made the scientific breakthrough that would bring him both fame and fortune. At the time, quinine was the only viable medical treatment for malaria. The drug is derived from the bark of the cinchona tree, native to South America, and by 1856 demand for the drug was surpassing the available supply. Thus, when Hofmann made some passing comments about the desirability of a synthetic substitute for quinine, it was unsurprising that his star pupil was moved to take up the challenge. During his vacation in 1856, Perkin spent his time in the laboratory on the top floor of his familys house. He was attempting to manufacture quinine from aniline, an inexpensive and readily available coal tar waste product. Despite his best efforts, however, he did not end up with quinine. Instead, he produced a mysterious dark sludge. Luckily, Perkins scientific training and nature prompted him to investigate the substance further. Incorporating potassium dichromate and alcohol into the aniline at various stages of the experimental process, he finally produced a deep purple solution. And, proving the truth of the famous scientist Louis Pasteurs words chance favours only the prepared mind, Perkin saw the potential of his unexpected find. Historically, textile dyes were made from such natural sources as plants and animal excretions. Some of these, such as the glandular mucus of snails, were difficult to obtain and outrageously expensive. Indeed, the purple colour extracted from a snail was once so costly that in society at the time only the rich could afford it. Further, natural dyes tended to be muddy in hue and fade quickly. It was against this backdrop that Perkins discovery was made. Perkin quickly grasped that his purple solution could be used to colour fabric, thus making it the worlds first synthetic dye. Realising the importance of this breakthrough, he lost no time in patenting it. But perhaps the most fascinating of all Perkins reactions to his find was his nearly instant recognition that the new dye had commercial possibilities. 
Perkin originally named his dye Tyrian Purple, but it later became commonly known as mauve (from the French for the plant used to make the colour violet). He asked the advice of Scottish dye works owner Robert Pullar, who assured him that manufacturing the dye would be well worth it if the colour remained fast (i.e. would not fade) and the cost was relatively low. So, over the fierce objections of his mentor Hofmann, he left college to give birth to the modern chemical industry. With the help of his father and brother, Perkin set up a factory not far from London. Utilising the cheap and plentiful coal tar that was an almost unlimited by-product of Londons gas street lighting, the dye works began producing the worlds first synthetically dyed material in 1857. The company received a commercial boost from the Empress Eugenie of France, when she decided the new colour flattered her. Very soon, mauve was the necessary shade for all the fashionable ladies in that country. Not to be outdone, Englands Queen Victoria also appeared in public wearing a mauve gown, thus making it all the rage in England as well. The dye was bold and fast, and the public clamoured for more. Perkin went back to the drawing board. Although Perkins fame was achieved and fortune assured by his first discovery, the chemist continued his research. Among other dyes he developed and introduced were aniline red (1859) and aniline black (1863) and, in the late 1860s, Perkins green. It is important to note that Perkins synthetic dye discoveries had outcomes far beyond the merely decorative. The dyes also became vital to medical research in many ways. For instance, they were used to stain previously invisible microbes and bacteria, allowing researchers to identify such bacilli as tuberculosis, cholera, and anthrax. Artificial dyes continue to play a crucial role today. And, in what would have been particularly pleasing to Perkin, their current use is in the search for a vaccine against malaria.", "hypothesis": "Perkin hoped to manufacture a drug from a coal tar waste product.", "label": "e"} +{"uid": "id_678", "premise": "William Henry Perkin, The man who invented synthetic dyes. William Henry Perkin was born on March 12, 1838, in London, England. As a boy, Perkins curiosity prompted early interests in the arts, sciences, photography, and engineering. But it was a chance stumbling upon a run-down, yet functional, laboratory in his late grandfathers home that solidified the young mans enthusiasm for chemistry. As a student at the City of London School, Perkin became immersed in the study of chemistry. His talent and devotion to the subject were perceived by his teacher, Thomas Hall, who encouraged him to attend a series of lectures given by the eminent scientist Michael Faraday at the Royal Institution. Those speeches fired the young chemists enthusiasm further, and he later went on to attend the Royal College of Chemistry, which he succeeded in entering in 1853, at the age of 15. At the time of Perkins enrolment, the Royal College of Chemistry was headed by the noted German chemist August Wilhelm Hofmann. Perkins scientific gifts soon caught Hofmanns attention and, within two years, he became Hofmanns youngest assistant. Not long after that, Perkin made the scientific breakthrough that would bring him both fame and fortune. At the time, quinine was the only viable medical treatment for malaria. 
The drug is derived from the bark of the cinchona tree, native to South America, and by 1856 demand for the drug was surpassing the available supply. Thus, when Hofmann made some passing comments about the desirability of a synthetic substitute for quinine, it was unsurprising that his star pupil was moved to take up the challenge. During his vacation in 1856, Perkin spent his time in the laboratory on the top floor of his familys house. He was attempting to manufacture quinine from aniline, an inexpensive and readily available coal tar waste product. Despite his best efforts, however, he did not end up with quinine. Instead, he produced a mysterious dark sludge. Luckily, Perkins scientific training and nature prompted him to investigate the substance further. Incorporating potassium dichromate and alcohol into the aniline at various stages of the experimental process, he finally produced a deep purple solution. And, proving the truth of the famous scientist Louis Pasteurs words chance favours only the prepared mind, Perkin saw the potential of his unexpected find. Historically, textile dyes were made from such natural sources as plants and animal excretions. Some of these, such as the glandular mucus of snails, were difficult to obtain and outrageously expensive. Indeed, the purple colour extracted from a snail was once so costly in society at the time only the rich could afford it. Further, natural dyes tended to be muddy in hue and fade quickly. It was against this backdrop that Perkins discovery was made. Perkin quickly grasped that his purple solution could be used to colour fabric, thus making it the worlds first synthetic dye. Realising the importance of this breakthrough, he lost no time in patenting it. But perhaps the most fascinating of all Perkins reactions to his find was his nearly instant recognition that the new dye had commercial possibilities. Perkin originally named his dye Tyrian Purple, but it later became commonly known as mauve (from the French for the plant used to make the colour violet). He asked advice of Scottish dye works owner Robert Pullar, who assured him that manufacturing the dye would be well worth it if the colour remained fast (i. e. would not fade) and the cost was relatively low. So, over the fierce objections of his mentor Hofmann, he left college to give birth to the modern chemical industry. With the help of his father and brother, Perkin set up a factory not far from London. Utilising the cheap and plentiful coal tar that was an almost unlimited byproduct of Londons gas street lighting, the dye works began producing the worlds first synthetically dyed material in 1857. The company received a commercial boost from the Empress Eugenie of France, when she decided the new colour flattered her. Very soon, mauve was the necessary shade for all the fashionable ladies in that country. Not to be outdone, Englands Queen Victoria also appeared in public wearing a mauve gown, thus making it all the rage in England as well. The dye was bold and fast, and the public clamoured for more. Perkin went back to the drawing board. Although Perkins fame was achieved and fortune assured by his first discovery, the chemist continued his research. Among other dyes he developed and introduced were aniline red (1859) and aniline black (1863) and, in the late 1860s, Perkins green. It is important to note that Perkins synthetic dye discoveries had outcomes far beyond the merely decorative. The dyes also became vital to medical research in many ways. 
For instance, they were used to stain previously invisible microbes and bacteria, allowing researchers to identify such bacilli as tuberculosis, cholera, and anthrax. Artificial dyes continue to play a crucial role today. And, in what would have been particularly pleasing to Perkin, their current use is in the search for a vaccine against malaria.", "hypothesis": "Michael Faraday was the first person to recognise Perkins ability as a student of chemistry.", "label": "c"} +{"uid": "id_679", "premise": "William Henry Perkin, The man who invented synthetic dyes. William Henry Perkin was born on March 12, 1838, in London, England. As a boy, Perkins curiosity prompted early interests in the arts, sciences, photography, and engineering. But it was a chance stumbling upon a run-down, yet functional, laboratory in his late grandfathers home that solidified the young mans enthusiasm for chemistry. As a student at the City of London School, Perkin became immersed in the study of chemistry. His talent and devotion to the subject were perceived by his teacher, Thomas Hall, who encouraged him to attend a series of lectures given by the eminent scientist Michael Faraday at the Royal Institution. Those speeches fired the young chemists enthusiasm further, and he later went on to attend the Royal College of Chemistry, which he succeeded in entering in 1853, at the age of 15. At the time of Perkins enrolment, the Royal College of Chemistry was headed by the noted German chemist August Wilhelm Hofmann. Perkins scientific gifts soon caught Hofmanns attention and, within two years, he became Hofmanns youngest assistant. Not long after that, Perkin made the scientific breakthrough that would bring him both fame and fortune. At the time, quinine was the only viable medical treatment for malaria. The drug is derived from the bark of the cinchona tree, native to South America, and by 1856 demand for the drug was surpassing the available supply. Thus, when Hofmann made some passing comments about the desirability of a synthetic substitute for quinine, it was unsurprising that his star pupil was moved to take up the challenge. During his vacation in 1856, Perkin spent his time in the laboratory on the top floor of his familys house. He was attempting to manufacture quinine from aniline, an inexpensive and readily available coal tar waste product. Despite his best efforts, however, he did not end up with quinine. Instead, he produced a mysterious dark sludge. Luckily, Perkins scientific training and nature prompted him to investigate the substance further. Incorporating potassium dichromate and alcohol into the aniline at various stages of the experimental process, he finally produced a deep purple solution. And, proving the truth of the famous scientist Louis Pasteurs words chance favours only the prepared mind, Perkin saw the potential of his unexpected find. Historically, textile dyes were made from such natural sources as plants and animal excretions. Some of these, such as the glandular mucus of snails, were difficult to obtain and outrageously expensive. Indeed, the purple colour extracted from a snail was once so costly in society at the time only the rich could afford it. Further, natural dyes tended to be muddy in hue and fade quickly. It was against this backdrop that Perkins discovery was made. Perkin quickly grasped that his purple solution could be used to colour fabric, thus making it the worlds first synthetic dye. Realising the importance of this breakthrough, he lost no time in patenting it. 
But perhaps the most fascinating of all Perkins reactions to his find was his nearly instant recognition that the new dye had commercial possibilities. Perkin originally named his dye Tyrian Purple, but it later became commonly known as mauve (from the French for the plant used to make the colour violet). He asked advice of Scottish dye works owner Robert Pullar, who assured him that manufacturing the dye would be well worth it if the colour remained fast (i. e. would not fade) and the cost was relatively low. So, over the fierce objections of his mentor Hofmann, he left college to give birth to the modern chemical industry. With the help of his father and brother, Perkin set up a factory not far from London. Utilising the cheap and plentiful coal tar that was an almost unlimited byproduct of Londons gas street lighting, the dye works began producing the worlds first synthetically dyed material in 1857. The company received a commercial boost from the Empress Eugenie of France, when she decided the new colour flattered her. Very soon, mauve was the necessary shade for all the fashionable ladies in that country. Not to be outdone, Englands Queen Victoria also appeared in public wearing a mauve gown, thus making it all the rage in England as well. The dye was bold and fast, and the public clamoured for more. Perkin went back to the drawing board. Although Perkins fame was achieved and fortune assured by his first discovery, the chemist continued his research. Among other dyes he developed and introduced were aniline red (1859) and aniline black (1863) and, in the late 1860s, Perkins green. It is important to note that Perkins synthetic dye discoveries had outcomes far beyond the merely decorative. The dyes also became vital to medical research in many ways. For instance, they were used to stain previously invisible microbes and bacteria, allowing researchers to identify such bacilli as tuberculosis, cholera, and anthrax. Artificial dyes continue to play a crucial role today. And, in what would have been particularly pleasing to Perkin, their current use is in the search for a vaccine against malaria.", "hypothesis": "Perkin employed August Wilhelm Hofmann as his assistant.", "label": "c"} +{"uid": "id_680", "premise": "William Henry Perkin, The man who invented synthetic dyes. William Henry Perkin was born on March 12, 1838, in London, England. As a boy, Perkins curiosity prompted early interests in the arts, sciences, photography, and engineering. But it was a chance stumbling upon a run-down, yet functional, laboratory in his late grandfathers home that solidified the young mans enthusiasm for chemistry. As a student at the City of London School, Perkin became immersed in the study of chemistry. His talent and devotion to the subject were perceived by his teacher, Thomas Hall, who encouraged him to attend a series of lectures given by the eminent scientist Michael Faraday at the Royal Institution. Those speeches fired the young chemists enthusiasm further, and he later went on to attend the Royal College of Chemistry, which he succeeded in entering in 1853, at the age of 15. At the time of Perkins enrolment, the Royal College of Chemistry was headed by the noted German chemist August Wilhelm Hofmann. Perkins scientific gifts soon caught Hofmanns attention and, within two years, he became Hofmanns youngest assistant. Not long after that, Perkin made the scientific breakthrough that would bring him both fame and fortune. At the time, quinine was the only viable medical treatment for malaria. 
The drug is derived from the bark of the cinchona tree, native to South America, and by 1856 demand for the drug was surpassing the available supply. Thus, when Hofmann made some passing comments about the desirability of a synthetic substitute for quinine, it was unsurprising that his star pupil was moved to take up the challenge. During his vacation in 1856, Perkin spent his time in the laboratory on the top floor of his familys house. He was attempting to manufacture quinine from aniline, an inexpensive and readily available coal tar waste product. Despite his best efforts, however, he did not end up with quinine. Instead, he produced a mysterious dark sludge. Luckily, Perkins scientific training and nature prompted him to investigate the substance further. Incorporating potassium dichromate and alcohol into the aniline at various stages of the experimental process, he finally produced a deep purple solution. And, proving the truth of the famous scientist Louis Pasteurs words chance favours only the prepared mind, Perkin saw the potential of his unexpected find. Historically, textile dyes were made from such natural sources as plants and animal excretions. Some of these, such as the glandular mucus of snails, were difficult to obtain and outrageously expensive. Indeed, the purple colour extracted from a snail was once so costly in society at the time only the rich could afford it. Further, natural dyes tended to be muddy in hue and fade quickly. It was against this backdrop that Perkins discovery was made. Perkin quickly grasped that his purple solution could be used to colour fabric, thus making it the worlds first synthetic dye. Realising the importance of this breakthrough, he lost no time in patenting it. But perhaps the most fascinating of all Perkins reactions to his find was his nearly instant recognition that the new dye had commercial possibilities. Perkin originally named his dye Tyrian Purple, but it later became commonly known as mauve (from the French for the plant used to make the colour violet). He asked advice of Scottish dye works owner Robert Pullar, who assured him that manufacturing the dye would be well worth it if the colour remained fast (i. e. would not fade) and the cost was relatively low. So, over the fierce objections of his mentor Hofmann, he left college to give birth to the modern chemical industry. With the help of his father and brother, Perkin set up a factory not far from London. Utilising the cheap and plentiful coal tar that was an almost unlimited byproduct of Londons gas street lighting, the dye works began producing the worlds first synthetically dyed material in 1857. The company received a commercial boost from the Empress Eugenie of France, when she decided the new colour flattered her. Very soon, mauve was the necessary shade for all the fashionable ladies in that country. Not to be outdone, Englands Queen Victoria also appeared in public wearing a mauve gown, thus making it all the rage in England as well. The dye was bold and fast, and the public clamoured for more. Perkin went back to the drawing board. Although Perkins fame was achieved and fortune assured by his first discovery, the chemist continued his research. Among other dyes he developed and introduced were aniline red (1859) and aniline black (1863) and, in the late 1860s, Perkins green. It is important to note that Perkins synthetic dye discoveries had outcomes far beyond the merely decorative. The dyes also became vital to medical research in many ways. 
For instance, they were used to stain previously invisible microbes and bacteria, allowing researchers to identify such bacilli as tuberculosis, cholera, and anthrax. Artificial dyes continue to play a crucial role today. And, in what would have been particularly pleasing to Perkin, their current use is in the search for a vaccine against malaria.", "hypothesis": "Michael Faraday suggested Perkin should enrol in the Royal College of Chemistry.", "label": "n"} +{"uid": "id_681", "premise": "William Henry Perkin, The man who invented synthetic dyes. William Henry Perkin was born on March 12, 1838, in London, England. As a boy, Perkins curiosity prompted early interests in the arts, sciences, photography, and engineering. But it was a chance stumbling upon a run-down, yet functional, laboratory in his late grandfathers home that solidified the young mans enthusiasm for chemistry. As a student at the City of London School, Perkin became immersed in the study of chemistry. His talent and devotion to the subject were perceived by his teacher, Thomas Hall, who encouraged him to attend a series of lectures given by the eminent scientist Michael Faraday at the Royal Institution. Those speeches fired the young chemists enthusiasm further, and he later went on to attend the Royal College of Chemistry, which he succeeded in entering in 1853, at the age of 15. At the time of Perkins enrolment, the Royal College of Chemistry was headed by the noted German chemist August Wilhelm Hofmann. Perkins scientific gifts soon caught Hofmanns attention and, within two years, he became Hofmanns youngest assistant. Not long after that, Perkin made the scientific breakthrough that would bring him both fame and fortune. At the time, quinine was the only viable medical treatment for malaria. The drug is derived from the bark of the cinchona tree, native to South America, and by 1856 demand for the drug was surpassing the available supply. Thus, when Hofmann made some passing comments about the desirability of a synthetic substitute for quinine, it was unsurprising that his star pupil was moved to take up the challenge. During his vacation in 1856, Perkin spent his time in the laboratory on the top floor of his familys house. He was attempting to manufacture quinine from aniline, an inexpensive and readily available coal tar waste product. Despite his best efforts, however, he did not end up with quinine. Instead, he produced a mysterious dark sludge. Luckily, Perkins scientific training and nature prompted him to investigate the substance further. Incorporating potassium dichromate and alcohol into the aniline at various stages of the experimental process, he finally produced a deep purple solution. And, proving the truth of the famous scientist Louis Pasteurs words chance favours only the prepared mind, Perkin saw the potential of his unexpected find. Historically, textile dyes were made from such natural sources as plants and animal excretions. Some of these, such as the glandular mucus of snails, were difficult to obtain and outrageously expensive. Indeed, the purple colour extracted from a snail was once so costly in society at the time only the rich could afford it. Further, natural dyes tended to be muddy in hue and fade quickly. It was against this backdrop that Perkins discovery was made. Perkin quickly grasped that his purple solution could be used to colour fabric, thus making it the worlds first synthetic dye. Realising the importance of this breakthrough, he lost no time in patenting it. 
But perhaps the most fascinating of all Perkins reactions to his find was his nearly instant recognition that the new dye had commercial possibilities. Perkin originally named his dye Tyrian Purple, but it later became commonly known as mauve (from the French for the plant used to make the colour violet). He asked advice of Scottish dye works owner Robert Pullar, who assured him that manufacturing the dye would be well worth it if the colour remained fast (i. e. would not fade) and the cost was relatively low. So, over the fierce objections of his mentor Hofmann, he left college to give birth to the modern chemical industry. With the help of his father and brother, Perkin set up a factory not far from London. Utilising the cheap and plentiful coal tar that was an almost unlimited byproduct of Londons gas street lighting, the dye works began producing the worlds first synthetically dyed material in 1857. The company received a commercial boost from the Empress Eugenie of France, when she decided the new colour flattered her. Very soon, mauve was the necessary shade for all the fashionable ladies in that country. Not to be outdone, Englands Queen Victoria also appeared in public wearing a mauve gown, thus making it all the rage in England as well. The dye was bold and fast, and the public clamoured for more. Perkin went back to the drawing board. Although Perkins fame was achieved and fortune assured by his first discovery, the chemist continued his research. Among other dyes he developed and introduced were aniline red (1859) and aniline black (1863) and, in the late 1860s, Perkins green. It is important to note that Perkins synthetic dye discoveries had outcomes far beyond the merely decorative. The dyes also became vital to medical research in many ways. For instance, they were used to stain previously invisible microbes and bacteria, allowing researchers to identify such bacilli as tuberculosis, cholera, and anthrax. Artificial dyes continue to play a crucial role today. And, in what would have been particularly pleasing to Perkin, their current use is in the search for a vaccine against malaria.", "hypothesis": "The trees from which quinine is derived grow only in South America.", "label": "n"} +{"uid": "id_682", "premise": "William Henry Perkin, The man who invented synthetic dyes. William Henry Perkin was born on March 12, 1838, in London, England. As a boy, Perkins curiosity prompted early interests in the arts, sciences, photography, and engineering. But it was a chance stumbling upon a run-down, yet functional, laboratory in his late grandfathers home that solidified the young mans enthusiasm for chemistry. As a student at the City of London School, Perkin became immersed in the study of chemistry. His talent and devotion to the subject were perceived by his teacher, Thomas Hall, who encouraged him to attend a series of lectures given by the eminent scientist Michael Faraday at the Royal Institution. Those speeches fired the young chemists enthusiasm further, and he later went on to attend the Royal College of Chemistry, which he succeeded in entering in 1853, at the age of 15. At the time of Perkins enrolment, the Royal College of Chemistry was headed by the noted German chemist August Wilhelm Hofmann. Perkins scientific gifts soon caught Hofmanns attention and, within two years, he became Hofmanns youngest assistant. Not long after that, Perkin made the scientific breakthrough that would bring him both fame and fortune. 
At the time, quinine was the only viable medical treatment for malaria. The drug is derived from the bark of the cinchona tree, native to South America, and by 1856 demand for the drug was surpassing the available supply. Thus, when Hofmann made some passing comments about the desirability of a synthetic substitute for quinine, it was unsurprising that his star pupil was moved to take up the challenge. During his vacation in 1856, Perkin spent his time in the laboratory on the top floor of his familys house. He was attempting to manufacture quinine from aniline, an inexpensive and readily available coal tar waste product. Despite his best efforts, however, he did not end up with quinine. Instead, he produced a mysterious dark sludge. Luckily, Perkins scientific training and nature prompted him to investigate the substance further. Incorporating potassium dichromate and alcohol into the aniline at various stages of the experimental process, he finally produced a deep purple solution. And, proving the truth of the famous scientist Louis Pasteurs words chance favours only the prepared mind, Perkin saw the potential of his unexpected find. Historically, textile dyes were made from such natural sources as plants and animal excretions. Some of these, such as the glandular mucus of snails, were difficult to obtain and outrageously expensive. Indeed, the purple colour extracted from a snail was once so costly in society at the time only the rich could afford it. Further, natural dyes tended to be muddy in hue and fade quickly. It was against this backdrop that Perkins discovery was made. Perkin quickly grasped that his purple solution could be used to colour fabric, thus making it the worlds first synthetic dye. Realising the importance of this breakthrough, he lost no time in patenting it. But perhaps the most fascinating of all Perkins reactions to his find was his nearly instant recognition that the new dye had commercial possibilities. Perkin originally named his dye Tyrian Purple, but it later became commonly known as mauve (from the French for the plant used to make the colour violet). He asked advice of Scottish dye works owner Robert Pullar, who assured him that manufacturing the dye would be well worth it if the colour remained fast (i. e. would not fade) and the cost was relatively low. So, over the fierce objections of his mentor Hofmann, he left college to give birth to the modern chemical industry. With the help of his father and brother, Perkin set up a factory not far from London. Utilising the cheap and plentiful coal tar that was an almost unlimited byproduct of Londons gas street lighting, the dye works began producing the worlds first synthetically dyed material in 1857. The company received a commercial boost from the Empress Eugenie of France, when she decided the new colour flattered her. Very soon, mauve was the necessary shade for all the fashionable ladies in that country. Not to be outdone, Englands Queen Victoria also appeared in public wearing a mauve gown, thus making it all the rage in England as well. The dye was bold and fast, and the public clamoured for more. Perkin went back to the drawing board. Although Perkins fame was achieved and fortune assured by his first discovery, the chemist continued his research. Among other dyes he developed and introduced were aniline red (1859) and aniline black (1863) and, in the late 1860s, Perkins green. It is important to note that Perkins synthetic dye discoveries had outcomes far beyond the merely decorative. 
The dyes also became vital to medical research in many ways. For instance, they were used to stain previously invisible microbes and bacteria, allowing researchers to identify such bacilli as tuberculosis, cholera, and anthrax. Artificial dyes continue to play a crucial role today. And, in what would have been particularly pleasing to Perkin, their current use is in the search for a vaccine against malaria.", "hypothesis": "Perkin hoped to manufacture a drug from a coal tar waste product.", "label": "e"} +{"uid": "id_683", "premise": "William Henry Perkin, The man who invented synthetic dyes. William Henry Perkin was born on March 12, 1838, in London, England. As a boy, Perkins curiosity prompted early interests in the arts, sciences, photography, and engineering. But it was a chance stumbling upon a run-down, yet functional, laboratory in his late grandfathers home that solidified the young mans enthusiasm for chemistry. As a student at the City of London School, Perkin became immersed in the study of chemistry. His talent and devotion to the subject were perceived by his teacher, Thomas Hall, who encouraged him to attend a series of lectures given by the eminent scientist Michael Faraday at the Royal Institution. Those speeches fired the young chemists enthusiasm further, and he later went on to attend the Royal College of Chemistry, which he succeeded in entering in 1853, at the age of 15. At the time of Perkins enrolment, the Royal College of Chemistry was headed by the noted German chemist August Wilhelm Hofmann. Perkins scientific gifts soon caught Hofmanns attention and, within two years, he became Hofmanns youngest assistant. Not long after that, Perkin made the scientific breakthrough that would bring him both fame and fortune. At the time, quinine was the only viable medical treatment for malaria. The drug is derived from the bark of the cinchona tree, native to South America, and by 1856 demand for the drug was surpassing the available supply. Thus, when Hofmann made some passing comments about the desirability of a synthetic substitute for quinine, it was unsurprising that his star pupil was moved to take up the challenge. During his vacation in 1856, Perkin spent his time in the laboratory on the top floor of his familys house. He was attempting to manufacture quinine from aniline, an inexpensive and readily available coal tar waste product. Despite his best efforts, however, he did not end up with quinine. Instead, he produced a mysterious dark sludge. Luckily, Perkins scientific training and nature prompted him to investigate the substance further. Incorporating potassium dichromate and alcohol into the aniline at various stages of the experimental process, he finally produced a deep purple solution. And, proving the truth of the famous scientist Louis Pasteurs words chance favours only the prepared mind, Perkin saw the potential of his unexpected find. Historically, textile dyes were made from such natural sources as plants and animal excretions. Some of these, such as the glandular mucus of snails, were difficult to obtain and outrageously expensive. Indeed, the purple colour extracted from a snail was once so costly in society at the time only the rich could afford it. Further, natural dyes tended to be muddy in hue and fade quickly. It was against this backdrop that Perkins discovery was made. Perkin quickly grasped that his purple solution could be used to colour fabric, thus making it the worlds first synthetic dye. 
Realising the importance of this breakthrough, he lost no time in patenting it. But perhaps the most fascinating of all Perkins reactions to his find was his nearly instant recognition that the new dye had commercial possibilities. Perkin originally named his dye Tyrian Purple, but it later became commonly known as mauve (from the French for the plant used to make the colour violet). He asked advice of Scottish dye works owner Robert Pullar, who assured him that manufacturing the dye would be well worth it if the colour remained fast (i. e. would not fade) and the cost was relatively low. So, over the fierce objections of his mentor Hofmann, he left college to give birth to the modern chemical industry. With the help of his father and brother, Perkin set up a factory not far from London. Utilising the cheap and plentiful coal tar that was an almost unlimited byproduct of Londons gas street lighting, the dye works began producing the worlds first synthetically dyed material in 1857. The company received a commercial boost from the Empress Eugenie of France, when she decided the new colour flattered her. Very soon, mauve was the necessary shade for all the fashionable ladies in that country. Not to be outdone, Englands Queen Victoria also appeared in public wearing a mauve gown, thus making it all the rage in England as well. The dye was bold and fast, and the public clamoured for more. Perkin went back to the drawing board. Although Perkins fame was achieved and fortune assured by his first discovery, the chemist continued his research. Among other dyes he developed and introduced were aniline red (1859) and aniline black (1863) and, in the late 1860s, Perkins green. It is important to note that Perkins synthetic dye discoveries had outcomes far beyond the merely decorative. The dyes also became vital to medical research in many ways. For instance, they were used to stain previously invisible microbes and bacteria, allowing researchers to identify such bacilli as tuberculosis, cholera, and anthrax. Artificial dyes continue to play a crucial role today. And, in what would have been particularly pleasing to Perkin, their current use is in the search for a vaccine against malaria.", "hypothesis": "Perkin was still young when he made the discovery that made him rich and famous.", "label": "e"} +{"uid": "id_684", "premise": "William Henry Perkin, The man who invented synthetic dyes. William Henry Perkin was born on March 12, 1838, in London, England. As a boy, Perkins curiosity prompted early interests in the arts, sciences, photography, and engineering. But it was a chance stumbling upon a run-down, yet functional, laboratory in his late grandfathers home that solidified the young mans enthusiasm for chemistry. As a student at the City of London School, Perkin became immersed in the study of chemistry. His talent and devotion to the subject were perceived by his teacher, Thomas Hall, who encouraged him to attend a series of lectures given by the eminent scientist Michael Faraday at the Royal Institution. Those speeches fired the young chemists enthusiasm further, and he later went on to attend the Royal College of Chemistry, which he succeeded in entering in 1853, at the age of 15. At the time of Perkins enrolment, the Royal College of Chemistry was headed by the noted German chemist August Wilhelm Hofmann. Perkins scientific gifts soon caught Hofmanns attention and, within two years, he became Hofmanns youngest assistant. 
Not long after that, Perkin made the scientific breakthrough that would bring him both fame and fortune. At the time, quinine was the only viable medical treatment for malaria. The drug is derived from the bark of the cinchona tree, native to South America, and by 1856 demand for the drug was surpassing the available supply. Thus, when Hofmann made some passing comments about the desirability of a synthetic substitute for quinine, it was unsurprising that his star pupil was moved to take up the challenge. During his vacation in 1856, Perkin spent his time in the laboratory on the top floor of his familys house. He was attempting to manufacture quinine from aniline, an inexpensive and readily available coal tar waste product. Despite his best efforts, however, he did not end up with quinine. Instead, he produced a mysterious dark sludge. Luckily, Perkins scientific training and nature prompted him to investigate the substance further. Incorporating potassium dichromate and alcohol into the aniline at various stages of the experimental process, he finally produced a deep purple solution. And, proving the truth of the famous scientist Louis Pasteurs words chance favours only the prepared mind, Perkin saw the potential of his unexpected find. Historically, textile dyes were made from such natural sources as plants and animal excretions. Some of these, such as the glandular mucus of snails, were difficult to obtain and outrageously expensive. Indeed, the purple colour extracted from a snail was once so costly in society at the time only the rich could afford it. Further, natural dyes tended to be muddy in hue and fade quickly. It was against this backdrop that Perkins discovery was made. Perkin quickly grasped that his purple solution could be used to colour fabric, thus making it the worlds first synthetic dye. Realising the importance of this breakthrough, he lost no time in patenting it. But perhaps the most fascinating of all Perkins reactions to his find was his nearly instant recognition that the new dye had commercial possibilities. Perkin originally named his dye Tyrian Purple, but it later became commonly known as mauve (from the French for the plant used to make the colour violet). He asked advice of Scottish dye works owner Robert Pullar, who assured him that manufacturing the dye would be well worth it if the colour remained fast (i. e. would not fade) and the cost was relatively low. So, over the fierce objections of his mentor Hofmann, he left college to give birth to the modern chemical industry. With the help of his father and brother, Perkin set up a factory not far from London. Utilising the cheap and plentiful coal tar that was an almost unlimited byproduct of Londons gas street lighting, the dye works began producing the worlds first synthetically dyed material in 1857. The company received a commercial boost from the Empress Eugenie of France, when she decided the new colour flattered her. Very soon, mauve was the necessary shade for all the fashionable ladies in that country. Not to be outdone, Englands Queen Victoria also appeared in public wearing a mauve gown, thus making it all the rage in England as well. The dye was bold and fast, and the public clamoured for more. Perkin went back to the drawing board. Although Perkins fame was achieved and fortune assured by his first discovery, the chemist continued his research. Among other dyes he developed and introduced were aniline red (1859) and aniline black (1863) and, in the late 1860s, Perkins green. 
It is important to note that Perkins synthetic dye discoveries had outcomes far beyond the merely decorative. The dyes also became vital to medical research in many ways. For instance, they were used to stain previously invisible microbes and bacteria, allowing researchers to identify such bacilli as tuberculosis, cholera, and anthrax. Artificial dyes continue to play a crucial role today. And, in what would have been particularly pleasing to Perkin, their current use is in the search for a vaccine against malaria.", "hypothesis": "Perkin was inspired by the discoveries of the famous scientist Louis Pasteur.", "label": "n"} +{"uid": "id_685", "premise": "With increased demands on business executives to travel in the globalised economy, how do globe-trotting executives manager their travel demands? A highly invaluable resources tapped by senior executives is the use of one or more personal assistants. More than glorified receptionists, PAs hold considerable power in the work place, deciding who gains access to their employer and when, being privy to highly sensitive information and maintaining order in the executives absence. Having this extra helping hand can allow executives to focus on the more important tasks and objectives, allowing them to save time, effort and improve efficiency.", "hypothesis": "Personal assistants are expensive.", "label": "n"} +{"uid": "id_686", "premise": "With increased demands on business executives to travel in the globalised economy, how do globe-trotting executives manager their travel demands? A highly invaluable resources tapped by senior executives is the use of one or more personal assistants. More than glorified receptionists, PAs hold considerable power in the work place, deciding who gains access to their employer and when, being privy to highly sensitive information and maintaining order in the executives absence. Having this extra helping hand can allow executives to focus on the more important tasks and objectives, allowing them to save time, effort and improve efficiency.", "hypothesis": "Personal assistants are glorified receptionists.", "label": "c"} +{"uid": "id_687", "premise": "With increased demands on business executives to travel in the globalised economy, how do globe-trotting executives manager their travel demands? A highly invaluable resources tapped by senior executives is the use of one or more personal assistants. More than glorified receptionists, PAs hold considerable power in the work place, deciding who gains access to their employer and when, being privy to highly sensitive information and maintaining order in the executives absence. Having this extra helping hand can allow executives to focus on the more important tasks and objectives, allowing them to save time, effort and improve efficiency.", "hypothesis": "Personal assistants are efficient.", "label": "n"} +{"uid": "id_688", "premise": "With increased demands on business executives to travel in the globalised economy, how do globe-trotting executives manager their travel demands? A highly invaluable resources tapped by senior executives is the use of one or more personal assistants. More than glorified receptionists, PAs hold considerable power in the work place, deciding who gains access to their employer and when, being privy to highly sensitive information and maintaining order in the executives absence. 
Having this extra helping hand can allow executives to focus on the more important tasks and objectives, allowing them to save time, effort and improve efficiency.", "hypothesis": "Personal assistants are privy to sensitive information.", "label": "e"} +{"uid": "id_689", "premise": "With more than 32 million smart phones in the United Kingdom alone, the number of mobile phone applications or apps is rapidly increasing. These apps are used for gaming, travel, shopping, and banking and soon the department of health will be encouraging the development of medical apps to help manage medical conditions. Potentially popular apps could include blood pressure monitors, blood sugar monitors and contraceptive choice apps. These apps could make managing disease far more convenient and efficient, improving the quality of life for millions in the UK.", "hypothesis": "Heart rate monitor is listed as a potential popular app", "label": "n"} +{"uid": "id_690", "premise": "With more than 32 million smart phones in the United Kingdom alone, the number of mobile phone applications or apps is rapidly increasing. These apps are used for gaming, travel, shopping, and banking and soon the department of health will be encouraging the development of medical apps to help manage medical conditions. Potentially popular apps could include blood pressure monitors, blood sugar monitors and contraceptive choice apps. These apps could make managing disease far more convenient and efficient, improving the quality of life for millions in the UK.", "hypothesis": "Blood sugar monitor is listed as a potential popular app", "label": "e"} +{"uid": "id_691", "premise": "With over half a billion citizens in the European Union, the stability of its food market is important. The Common Agricultural Policy (also known as CAP) is an EU initiative designed to provide farmers with the economic support to help them withstand the outcomes of unexpected natural events like heavy rains, floods, cold temperatures, and fires. For the CAP to improve, information regarding the current economic situation of farms across the EU is needed. As of January 4th 2010, each farm in the EU should provide a document called \"Farm Return\", containing two types of data: income assessment and a description of the farm's business operation. The EU then uses these data to predict the implications of changes made to the CAP on the farmers, as well as to understand the current situation better. The Farm Return's route from the farm to the EU begins with a local agency called the Liaison Agency. This agency then passes the report on to a National Committee, which hands it over to the EU.", "hypothesis": "The EU has been collecting data on farms since before January 4th 2010.", "label": "n"} +{"uid": "id_692", "premise": "With over half a billion citizens in the European Union, the stability of its food market is important. The Common Agricultural Policy (also known as CAP) is an EU initiative designed to provide farmers with the economic support to help them withstand the outcomes of unexpected natural events like heavy rains, floods, cold temperatures, and fires. For the CAP to improve, information regarding the current economic situation of farms across the EU is needed. As of January 4th 2010, each farm in the EU should provide a document called \"Farm Return\", containing two types of data: income assessment and a description of the farm's business operation. 
The EU then uses these data to predict the implications of changes made to the CAP on the farmers, as well as to understand the current situation better. The Farm Return's route from the farm to the EU begins with a local agency called the Liaison Agency. This agency then passes the report on to a National Committee, which hands it over to the EU.", "hypothesis": "The Farm Return would be submitted by the farm to the National Committee, a local agency, which would then hand it over to the EU.", "label": "c"} +{"uid": "id_693", "premise": "With over half a billion citizens in the European Union, the stability of its food market is important. The Common Agricultural Policy (also known as CAP) is an EU initiative designed to provide farmers with the economic support to help them withstand the outcomes of unexpected natural events like heavy rains, floods, cold temperatures, and fires. For the CAP to improve, information regarding the current economic situation of farms across the EU is needed. As of January 4th 2010, each farm in the EU should provide a document called \"Farm Return\", containing two types of data: income assessment and a description of the farm's business operation. The EU then uses these data to predict the implications of changes made to the CAP on the farmers, as well as to understand the current situation better. The Farm Return's route from the farm to the EU begins with a local agency called the Liaison Agency. This agency then passes the report on to a National Committee, which hands it over to the EU.", "hypothesis": "The EU could use the data collected via the Farm Return to simulate potential consequences of different CAP policy scenarios.", "label": "e"} +{"uid": "id_694", "premise": "With the rapid technological advancement today, bridges are becoming increasingly more sophisticated, and are spanning significantly greater distances. Earthquakes, however, remain a potential threat to these immense structures as they may do irreparable and costly damage to an important bridge. As a bridges major vulnerability to earth movement lies in its supportive structures, a promising solution has been found to be a self-anchored suspension bridge. This bridge design is one in which the pull of the cables is opposed by the push of the deck, thus eliminating the supporting anchorages.", "hypothesis": "The self-anchored suspension bridge is the established solution to the threat of the earthquake damage.", "label": "c"} +{"uid": "id_695", "premise": "With the rapid technological advancement today, bridges are becoming increasingly more sophisticated, and are spanning significantly greater distances. Earthquakes, however, remain a potential threat to these immense structures as they may do irreparable and costly damage to an important bridge. As a bridges major vulnerability to earth movement lies in its supportive structures, a promising solution has been found to be a self-anchored suspension bridge. This bridge design is one in which the pull of the cables is opposed by the push of the deck, thus eliminating the supporting anchorages.", "hypothesis": "A possible solution to the risk of earthquake damage is the self-anchored suspension bridge as the forces of the cables and the anchorages oppose each other.", "label": "e"} +{"uid": "id_696", "premise": "With the rapid technological advancement today, bridges are becoming increasingly more sophisticated, and are spanning significantly greater distances. 
Earthquakes, however, remain a potential threat to these immense structures as they may do irreparable and costly damage to an important bridge. As a bridges major vulnerability to earth movement lies in its supportive structures, a promising solution has been found to be a self-anchored suspension bridge. This bridge design is one in which the pull of the cables is opposed by the push of the deck, thus eliminating the supporting anchorages.", "hypothesis": "Earthquakes inevitably cause costly damage to the bridges structure.", "label": "n"} +{"uid": "id_697", "premise": "With the rapid technological advancement today, bridges are becoming increasingly more sophisticated, and are spanning significantly greater distances. Earthquakes, however, remain a potential threat to these immense structures as they may do irreparable and costly damage to an important bridge. As a bridges major vulnerability to earth movement lies in its supportive structures, a promising solution has been found to be a self-anchored suspension bridge. This bridge design is one in which the pull of the cables is opposed by the push of the deck, thus eliminating the supporting anchorages.", "hypothesis": "Modern bridges have different structural features to those built before technological advancement of today.", "label": "n"} +{"uid": "id_698", "premise": "With the rapid technological advancement today, bridges are becoming increasingly more sophisticated, and are spanning significantly greater distances. Earthquakes, however, remain a potential threat to these immense structures as they may do irreparable and costly damage to an important bridge. As a bridges major vulnerability to earth movement lies in its supportive structures, a promising solution has been found to be a self-anchored suspension bridge. This bridge design is one in which the pull of the cables is opposed by the push of the deck, thus eliminating the supporting anchorages.", "hypothesis": "The elimination of the anchorages has been a proposed solution to the threat of damage caused by seismic activity.", "label": "e"} +{"uid": "id_699", "premise": "With the rapid technological advancement today, bridges are becoming increasingly more sophisticated, and are spanning significantly greater distances. Earthquakes, however, remain a potential threat to these immense structures as they may do irreparable and costly damage to an important bridge. As a bridges major vulnerability to earth movement lies in its supportive structures, a promising solution has been found to be a self-anchored suspension bridge. This bridge design is one in which the pull of the cables is opposed by the push of the deck, thus eliminating the supporting anchorages.", "hypothesis": "Modern bridges have different structural features to those built before the technological advancement of today.", "label": "n"} +{"uid": "id_700", "premise": "With the rapid technological advancement today, bridges are becoming increasingly more sophisticated, and are spanning significantly greater distances. Earthquakes, however, remain a potential threat to these immense structures as they may do irreparable and costly damage to an important bridge. As a bridges major vulnerability to earth movement lies in its supportive structures, a promising solution has been found to be a self-anchored suspension bridge. 
This bridge design is one in which the pull of the cables is opposed by the push of the deck, thus eliminating the supporting anchorages.", "hypothesis": "The elimination of the anchorages has been a proposed solution to the threat of damage caused by seismic activity.", "label": "e"} +{"uid": "id_701", "premise": "With the rapid technological advancement today, bridges are becoming increasingly more sophisticated, and are spanning significantly greater distances. Earthquakes, however, remain a potential threat to these immense structures as they may do irreparable and costly damage to an important bridge. As a bridges major vulnerability to earth movement lies in its supportive structures, a promising solution has been found to be a self-anchored suspension bridge. This bridge design is one in which the pull of the cables is opposed by the push of the deck, thus eliminating the supporting anchorages.", "hypothesis": "The self-anchored suspension bridge is established solution to the threat of earthquake damage", "label": "c"} +{"uid": "id_702", "premise": "With the rapid technological advancement today, bridges are becoming increasingly more sophisticated, and are spanning significantly greater distances. Earthquakes, however, remain a potential threat to these immense structures as they may do irreparable and costly damage to an important bridge. As a bridges major vulnerability to earth movement lies in its supportive structures, a promising solution has been found to be a self-anchored suspension bridge. This bridge design is one in which the pull of the cables is opposed by the push of the deck, thus eliminating the supporting anchorages.", "hypothesis": "Earthquakes inevitably cause costly damage to the bridges structure.", "label": "c"} +{"uid": "id_703", "premise": "With the rapid technological advancement today, bridges are becoming increasingly more sophisticated, and are spanning significantly greater distances. Earthquakes, however, remain a potential threat to these immense structures as they may do irreparable and costly damage to an important bridge. As a bridges major vulnerability to earth movement lies in its supportive structures, a promising solution has been found to be a self-anchored suspension bridge. This bridge design is one in which the pull of the cables is opposed by the push of the deck, thus eliminating the supporting anchorages.", "hypothesis": "A possible solution to the risk of earthquake damage is the self-anchored suspension bridge as the forces of cables and the anchorages oppose each other.", "label": "c"} +{"uid": "id_704", "premise": "Within the next decade, the weakening rural economy will be the biggest challenge faced by rural areas. Agriculture, which supplies a quarter of rural job opportunities, is experiencing a recession, while tourism provides fewer than half of the job opportunities provided by agriculture. However, rural manufacturing has developed dramatically in the past decade. Even so, fewer than one in every 20 people in rural areas are working in rural manufacturing. Rural manufacturing is threatened by companies in industrial areas, because rural areas have a larger skilled worker team and better developed transportation system.", "hypothesis": "In the future, agriculture is likely to provide more job opportunities.", "label": "c"} +{"uid": "id_705", "premise": "Within the next decade, the weakening rural economy will be the biggest challenge faced by rural areas. 
Agriculture, which supplies a quarter of rural job opportunities, is experiencing a recession, while tourism provides fewer than half of the job opportunities provided by agriculture. However, rural manufacturing has developed dramatically in the past decade. Even so, fewer than one in every 20 people in rural areas are working in rural manufacturing. Rural manufacturing is threatened by companies in industrial areas, because rural areas have a larger skilled worker team and better developed transportation system.", "hypothesis": "In rural areas, manufacture industry provides the fewest job opportunities.", "label": "n"} +{"uid": "id_706", "premise": "Within the next decade, the weakening rural economy will be the biggest challenge faced by rural areas. Agriculture, which supplies a quarter of rural job opportunities, is experiencing a recession, while tourism provides fewer than half of the job opportunities provided by agriculture. However, rural manufacturing has developed dramatically in the past decade. Even so, fewer than one in every 20 people in rural areas are working in rural manufacturing. Rural manufacturing is threatened by companies in industrial areas, because rural areas have a larger skilled worker team and better developed transportation system.", "hypothesis": "In the next decade, rural economy is expected to become stronger.", "label": "n"} +{"uid": "id_707", "premise": "Without exception, living non-human primates habitually move around on all fours, or quadrupedally, when they are on the ground. Scientists generally assume therefore that the last common ancestor of humans and chimpanzees (our closest living relative) was also a quadruped. Exactly when the last common ancestor lived is unknown, but clear indications of bipedalism, the trait that distinguishes ancient humans from other apes, are evident in the oldest known species of Australopithecus, which lived in Africa roughly four million years ago.", "hypothesis": "Bipedal apes are more evolutionarily advantaged than quadrupedal ones.", "label": "n"} +{"uid": "id_708", "premise": "Without exception, living non-human primates habitually move around on all fours, or quadrupedally, when they are on the ground. Scientists generally assume therefore that the last common ancestor of humans and chimpanzees (our closest living relative) was also a quadruped. Exactly when the last common ancestor lived is unknown, but clear indications of bipedalism, the trait that distinguishes ancient humans from other apes, are evident in the oldest known species of Australopithecus, which lived in Africa roughly four million years ago.", "hypothesis": "Australopithecus is as closely related to ancient man as to the chimpanzee.", "label": "c"} +{"uid": "id_709", "premise": "Without exception, living non-human primates habitually move around on all fours, or quadrupedally, when they are on the ground. Scientists generally assume therefore that the last common ancestor of humans and chimpanzees (our closest living relative) was also a quadruped. 
Exactly when the last common ancestor lived is unknown, but clear indications of bipedalism, the trait that distinguishes ancient humans from other apes, are evident in the oldest known species of Australopithecus, which lived in Africa roughly four million years ago.", "hypothesis": "Bipedalism is the main trait that distinguishes ancient humans from Australopithecus.", "label": "c"} +{"uid": "id_710", "premise": "Without exception, living non-human primates habitually move around on all fours, or quadrupedally, when they are on the ground. Scientists generally assume therefore that the last common ancestor of humans and chimpanzees (our closest living relative) was also a quadruped. Exactly when the last common ancestor lived is unknown, but clear indications of bipedalism, the trait that distinguishes ancient humans from other apes, are evident in the oldest known species of Australopithecus, which lived in Africa roughly four million years ago.", "hypothesis": "Australopithecus is as closely related to ancient man as to the chimpanzee.", "label": "n"} +{"uid": "id_711", "premise": "Words fail them It seems companies will soon begin to say goodbye to the written word. The basic unit of communication will no longer be typed out in e-mails. It will be shot in pictures and shown on video. Companies have already discovered that the written word is failing them. Its feebleness compared with the moving image was rammed home in 2010 when the sight of BP's oil spewing out into the Gulf of Mexico on YouTube sent a message to the world far more compelling than any written statement could ever be. If the word has become weak at conveying big corporate messages, it has become even weaker at conveying small ones. For years the in-boxes of all office workers have been overflowing with unread e-mails. But managers will do something about it and desist from communicating with staff in this way. E-mail will still exist as a way of talking to one person at a time, but as a means of mass communication it will be finished. Companies will find instead that to get a message over to employees, customers, shareholders and the outside world, video is far more effective. In the past three years video has come from nothing to make up nearly half of internet traffic; in another three, it is likely to be more than three-quarters. So far corporations have taken a back seat in this growth, but they will soon need to climb into the front and start to drive it. This shift in communications will have three important effects. It will change the sort of person who makes it to the corner office. It will alter the way that businesses are managed. And it will shift the position corporations occupy in society and possibly make us like some of them just a little bit more. The new corporate leaders will no longer be pen pushers and bean counters. The 20-year reign of faceless bosses will come to an end. Charisma will be back in: all successful business chiefs will have to be storytellers and performers. Just as political leaders have long had to be dynamite on TV to stand much hope of election or survival, so too will corporate leaders. They must be able to sell not only their vision of their companies but their vision of themselves. The new big boss will be expected to set an example; any leaders showing signs of human frailty will be out on their ears. The moral majority will tighten its hold on corporate life, first in America, but then elsewhere too. With this shift will come a change in management style. 
Numbers and facts will be supplanted by appeals to emotion to make employees and customers do what they are told. The businessperson's emotion may be no more genuine than the politician's, but successful bosses will get good at faking it. Others will struggle: prepare to cringe in as corporate leaders spout a lot of phoney stuff that used to look bad enough when written down, but will sound even worse spoken. One good consequence of the change, however, will be a greater clarity in the way companies think about their businesses. The written word was a forgiving medium for over-complicated, ill-conceived messages. Video demands simplicity. The best companies will use this to their advantage by thinking through more rigorously what it is they are trying to say and do.", "hypothesis": "A business leaders ability to sell themselves will become more important.", "label": "e"} +{"uid": "id_712", "premise": "Words fail them It seems companies will soon begin to say goodbye to the written word. The basic unit of communication will no longer be typed out in e-mails. It will be shot in pictures and shown on video. Companies have already discovered that the written word is failing them. Its feebleness compared with the moving image was rammed home in 2010 when the sight of BP's oil spewing out into the Gulf of Mexico on YouTube sent a message to the world far more compelling than any written statement could ever be. If the word has become weak at conveying big corporate messages, it has become even weaker at conveying small ones. For years the in-boxes of all office workers have been overflowing with unread e-mails. But managers will do something about it and desist from communicating with staff in this way. E-mail will still exist as a way of talking to one person at a time, but as a means of mass communication it will be finished. Companies will find instead that to get a message over to employees, customers, shareholders and the outside world, video is far more effective. In the past three years video has come from nothing to make up nearly half of internet traffic; in another three, it is likely to be more than three-quarters. So far corporations have taken a back seat in this growth, but they will soon need to climb into the front and start to drive it. This shift in communications will have three important effects. It will change the sort of person who makes it to the corner office. It will alter the way that businesses are managed. And it will shift the position corporations occupy in society and possibly make us like some of them just a little bit more. The new corporate leaders will no longer be pen pushers and bean counters. The 20-year reign of faceless bosses will come to an end. Charisma will be back in: all successful business chiefs will have to be storytellers and performers. Just as political leaders have long had to be dynamite on TV to stand much hope of election or survival, so too will corporate leaders. They must be able to sell not only their vision of their companies but their vision of themselves. The new big boss will be expected to set an example; any leaders showing signs of human frailty will be out on their ears. The moral majority will tighten its hold on corporate life, first in America, but then elsewhere too. With this shift will come a change in management style. Numbers and facts will be supplanted by appeals to emotion to make employees and customers do what they are told. 
The businessperson's emotion may be no more genuine than the politician's, but successful bosses will get good at faking it. Others will struggle: prepare to cringe in as corporate leaders spout a lot of phoney stuff that used to look bad enough when written down, but will sound even worse spoken. One good consequence of the change, however, will be a greater clarity in the way companies think about their businesses. The written word was a forgiving medium for over-complicated, ill-conceived messages. Video demands simplicity. The best companies will use this to their advantage by thinking through more rigorously what it is they are trying to say and do.", "hypothesis": "The new bosses will have to be physically stronger.", "label": "n"} +{"uid": "id_713", "premise": "Words fail them It seems companies will soon begin to say goodbye to the written word. The basic unit of communication will no longer be typed out in e-mails. It will be shot in pictures and shown on video. Companies have already discovered that the written word is failing them. Its feebleness compared with the moving image was rammed home in 2010 when the sight of BP's oil spewing out into the Gulf of Mexico on YouTube sent a message to the world far more compelling than any written statement could ever be. If the word has become weak at conveying big corporate messages, it has become even weaker at conveying small ones. For years the in-boxes of all office workers have been overflowing with unread e-mails. But managers will do something about it and desist from communicating with staff in this way. E-mail will still exist as a way of talking to one person at a time, but as a means of mass communication it will be finished. Companies will find instead that to get a message over to employees, customers, shareholders and the outside world, video is far more effective. In the past three years video has come from nothing to make up nearly half of internet traffic; in another three, it is likely to be more than three-quarters. So far corporations have taken a back seat in this growth, but they will soon need to climb into the front and start to drive it. This shift in communications will have three important effects. It will change the sort of person who makes it to the corner office. It will alter the way that businesses are managed. And it will shift the position corporations occupy in society and possibly make us like some of them just a little bit more. The new corporate leaders will no longer be pen pushers and bean counters. The 20-year reign of faceless bosses will come to an end. Charisma will be back in: all successful business chiefs will have to be storytellers and performers. Just as political leaders have long had to be dynamite on TV to stand much hope of election or survival, so too will corporate leaders. They must be able to sell not only their vision of their companies but their vision of themselves. The new big boss will be expected to set an example; any leaders showing signs of human frailty will be out on their ears. The moral majority will tighten its hold on corporate life, first in America, but then elsewhere too. With this shift will come a change in management style. Numbers and facts will be supplanted by appeals to emotion to make employees and customers do what they are told. The businessperson's emotion may be no more genuine than the politician's, but successful bosses will get good at faking it. 
Others will struggle: prepare to cringe in as corporate leaders spout a lot of phoney stuff that used to look bad enough when written down, but will sound even worse spoken. One good consequence of the change, however, will be a greater clarity in the way companies think about their businesses. The written word was a forgiving medium for over-complicated, ill-conceived messages. Video demands simplicity. The best companies will use this to their advantage by thinking through more rigorously what it is they are trying to say and do.", "hypothesis": "Business leaders will have to be seen in public.", "label": "e"} +{"uid": "id_714", "premise": "Words fail them It seems companies will soon begin to say goodbye to the written word. The basic unit of communication will no longer be typed out in e-mails. It will be shot in pictures and shown on video. Companies have already discovered that the written word is failing them. Its feebleness compared with the moving image was rammed home in 2010 when the sight of BP's oil spewing out into the Gulf of Mexico on YouTube sent a message to the world far more compelling than any written statement could ever be. If the word has become weak at conveying big corporate messages, it has become even weaker at conveying small ones. For years the in-boxes of all office workers have been overflowing with unread e-mails. But managers will do something about it and desist from communicating with staff in this way. E-mail will still exist as a way of talking to one person at a time, but as a means of mass communication it will be finished. Companies will find instead that to get a message over to employees, customers, shareholders and the outside world, video is far more effective. In the past three years video has come from nothing to make up nearly half of internet traffic; in another three, it is likely to be more than three-quarters. So far corporations have taken a back seat in this growth, but they will soon need to climb into the front and start to drive it. This shift in communications will have three important effects. It will change the sort of person who makes it to the corner office. It will alter the way that businesses are managed. And it will shift the position corporations occupy in society and possibly make us like some of them just a little bit more. The new corporate leaders will no longer be pen pushers and bean counters. The 20-year reign of faceless bosses will come to an end. Charisma will be back in: all successful business chiefs will have to be storytellers and performers. Just as political leaders have long had to be dynamite on TV to stand much hope of election or survival, so too will corporate leaders. They must be able to sell not only their vision of their companies but their vision of themselves. The new big boss will be expected to set an example; any leaders showing signs of human frailty will be out on their ears. The moral majority will tighten its hold on corporate life, first in America, but then elsewhere too. With this shift will come a change in management style. Numbers and facts will be supplanted by appeals to emotion to make employees and customers do what they are told. The businessperson's emotion may be no more genuine than the politician's, but successful bosses will get good at faking it. Others will struggle: prepare to cringe in as corporate leaders spout a lot of phoney stuff that used to look bad enough when written down, but will sound even worse spoken. 
One good consequence of the change, however, will be a greater clarity in the way companies think about their businesses. The written word was a forgiving medium for over-complicated, ill-conceived messages. Video demands simplicity. The best companies will use this to their advantage by thinking through more rigorously what it is they are trying to say and do.", "hypothesis": "Large corporations are already using video extensively.", "label": "c"} +{"uid": "id_715", "premise": "Words fail them It seems companies will soon begin to say goodbye to the written word. The basic unit of communication will no longer be typed out in e-mails. It will be shot in pictures and shown on video. Companies have already discovered that the written word is failing them. Its feebleness compared with the moving image was rammed home in 2010 when the sight of BP's oil spewing out into the Gulf of Mexico on YouTube sent a message to the world far more compelling than any written statement could ever be. If the word has become weak at conveying big corporate messages, it has become even weaker at conveying small ones. For years the in-boxes of all office workers have been overflowing with unread e-mails. But managers will do something about it and desist from communicating with staff in this way. E-mail will still exist as a way of talking to one person at a time, but as a means of mass communication it will be finished. Companies will find instead that to get a message over to employees, customers, shareholders and the outside world, video is far more effective. In the past three years video has come from nothing to make up nearly half of internet traffic; in another three, it is likely to be more than three-quarters. So far corporations have taken a back seat in this growth, but they will soon need to climb into the front and start to drive it. This shift in communications will have three important effects. It will change the sort of person who makes it to the corner office. It will alter the way that businesses are managed. And it will shift the position corporations occupy in society and possibly make us like some of them just a little bit more. The new corporate leaders will no longer be pen pushers and bean counters. The 20-year reign of faceless bosses will come to an end. Charisma will be back in: all successful business chiefs will have to be storytellers and performers. Just as political leaders have long had to be dynamite on TV to stand much hope of election or survival, so too will corporate leaders. They must be able to sell not only their vision of their companies but their vision of themselves. The new big boss will be expected to set an example; any leaders showing signs of human frailty will be out on their ears. The moral majority will tighten its hold on corporate life, first in America, but then elsewhere too. With this shift will come a change in management style. Numbers and facts will be supplanted by appeals to emotion to make employees and customers do what they are told. The businessperson's emotion may be no more genuine than the politician's, but successful bosses will get good at faking it. Others will struggle: prepare to cringe in as corporate leaders spout a lot of phoney stuff that used to look bad enough when written down, but will sound even worse spoken. One good consequence of the change, however, will be a greater clarity in the way companies think about their businesses. The written word was a forgiving medium for over-complicated, ill-conceived messages. 
Video demands simplicity. The best companies will use this to their advantage by thinking through more rigorously what it is they are trying to say and do.", "hypothesis": "We will probably like the managers of corporations a lot more.", "label": "c"} +{"uid": "id_716", "premise": "Work-related stress is one of the biggest causes of sick leave in the UK. If you've noticed you always seem to be rushing about, or miss meal breaks, take work home or dont have enough time for relaxation, seeing your family or for exercise, then you may well find yourself under stress, especially at work. There is often no single cause of work-related stress, but it can be caused by poor working conditions, long hours, relationship problems with colleagues, or lack of job security. Stress is often the result of a combination of these factors that builds up over time. Work-related stress can result in both physical problems such as headaches, muscular tension, back or neck pain, tiredness, digestive problems and sweating; or emotional problems, such as a lower sex drive, feelings of inadequacy, irritability and lack of concentration. According to recent surveys, one in six of the UK working population said their job is very stressful, and thirty percent of men said that the demands of their job interfere with their private lives.", "hypothesis": "If you spend more time with your family, you will not suffer from stress.", "label": "n"} +{"uid": "id_717", "premise": "Work-related stress is one of the biggest causes of sick leave in the UK. If you've noticed you always seem to be rushing about, or miss meal breaks, take work home or dont have enough time for relaxation, seeing your family or for exercise, then you may well find yourself under stress, especially at work. There is often no single cause of work-related stress, but it can be caused by poor working conditions, long hours, relationship problems with colleagues, or lack of job security. Stress is often the result of a combination of these factors that builds up over time. Work-related stress can result in both physical problems such as headaches, muscular tension, back or neck pain, tiredness, digestive problems and sweating; or emotional problems, such as a lower sex drive, feelings of inadequacy, irritability and lack of concentration. According to recent surveys, one in six of the UK working population said their job is very stressful, and thirty percent of men said that the demands of their job interfere with their private lives.", "hypothesis": "One in six working men say their job is very stressful.", "label": "n"} +{"uid": "id_718", "premise": "Work-related stress is one of the biggest causes of sick leave in the UK. If you've noticed you always seem to be rushing about, or miss meal breaks, take work home or dont have enough time for relaxation, seeing your family or for exercise, then you may well find yourself under stress, especially at work. There is often no single cause of work-related stress, but it can be caused by poor working conditions, long hours, relationship problems with colleagues, or lack of job security. Stress is often the result of a combination of these factors that builds up over time. Work-related stress can result in both physical problems such as headaches, muscular tension, back or neck pain, tiredness, digestive problems and sweating; or emotional problems, such as a lower sex drive, feelings of inadequacy, irritability and lack of concentration. 
According to recent surveys, one in six of the UK working population said their job is very stressful, and thirty percent of men said that the demands of their job interfere with their private lives.", "hypothesis": "Work-related stress can result in tiredness and a lack of concentration.", "label": "e"} +{"uid": "id_719", "premise": "Work-related stress is one of the biggest causes of sick leave in the UK. If you've noticed you always seem to be rushing about, or miss meal breaks, take work home or dont have enough time for relaxation, seeing your family or for exercise, then you may well find yourself under stress, especially at work. There is often no single cause of work-related stress, but it can be caused by poor working conditions, long hours, relationship problems with colleagues, or lack of job security. Stress is often the result of a combination of these factors that builds up over time. Work-related stress can result in both physical problems such as headaches, muscular tension, back or neck pain, tiredness, digestive problems and sweating; or emotional problems, such as a lower sex drive, feelings of inadequacy, irritability and lack of concentration. According to recent surveys, one in six of the UK working population said their job is very stressful, and thirty percent of men said that the demands of their job interfere with their private lives.", "hypothesis": "Stress at work is often caused by relationship problems with your partner.", "label": "n"} +{"uid": "id_720", "premise": "Workers are becoming increasingly concerned about company relocation due to its association with employee distress and isolation, which can be caused by issues such as the management of property transitions and loss of community ties. Furthermore, moving home can put a strain on workers financial resources and close relationships, especially for those working parents who may feel guilty about moving children to new schools. Regardless of the disruption created, some individuals are very willing to relocate, due to the potential for enhanced career prospects and long-term financial stability.", "hypothesis": "The potential benefits of job relocation are seen, by some, to be worth the associated distress and strain.", "label": "e"} +{"uid": "id_721", "premise": "Workers are becoming increasingly concerned about company relocation due to its association with employee distress and isolation, which can be caused by issues such as the management of property transitions and loss of community ties. Furthermore, moving home can put a strain on workers financial resources and close relationships, especially for those working parents who may feel guilty about moving children to new schools. Regardless of the disruption created, some individuals are very willing to relocate, due to the potential for enhanced career prospects and long-term financial stability.", "hypothesis": "The majority of employees feel isolated following relocation.", "label": "n"} +{"uid": "id_722", "premise": "Workers are becoming increasingly concerned about company relocation due to its association with employee distress and isolation, which can be caused by issues such as the management of property transitions and loss of community ties. Furthermore, moving home can put a strain on workers financial resources and close relationships, especially for those working parents who may feel guilty about moving children to new schools. 
Regardless of the disruption created, some individuals are very willing to relocate, due to the potential for enhanced career prospects and long-term financial stability.", "hypothesis": "Some people may feel guilty about the consequences of relocating.", "label": "e"} +{"uid": "id_723", "premise": "Workers are becoming increasingly concerned about company relocation due to its association with employee distress and isolation, which can be caused by issues such as the management of property transitions and loss of community ties. Furthermore, moving home can put a strain on workers financial resources and close relationships, especially for those working parents who may feel guilty about moving children to new schools. Regardless of the disruption created, some individuals are very willing to relocate, due to the potential for enhanced career prospects and long-term financial stability.", "hypothesis": "Company relocation has increased.", "label": "n"} +{"uid": "id_724", "premise": "Workers are becoming increasingly concerned about company relocation due to its association with employee distress and isolation, which can be caused by issues such as the management of property transitions and loss of community ties. Furthermore, moving home can put a strain on workers financial resources and close relationships, especially for those working parents who may feel guilty about moving children to new schools. Regardless of the disruption created, some individuals are very willing to relocate, due to the potential for enhanced career prospects and long; term financial stability.", "hypothesis": "Some people may feel guilty about the consequences of relocating.", "label": "e"} +{"uid": "id_725", "premise": "Workers are becoming increasingly concerned about company relocation due to its association with employee distress and isolation, which can be caused by issues such as the management of property transitions and loss of community ties. Furthermore, moving home can put a strain on workers financial resources and close relationships, especially for those working parents who may feel guilty about moving children to new schools. Regardless of the disruption created, some individuals are very willing to relocate, due to the potential for enhanced career prospects and long; term financial stability.", "hypothesis": "Company relocation has increased.", "label": "n"} +{"uid": "id_726", "premise": "Workers are becoming increasingly concerned about company relocation due to its association with employee distress and isolation, which can be caused by issues such as the management of property transitions and loss of community ties. Furthermore, moving home can put a strain on workers financial resources and close relationships, especially for those working parents who may feel guilty about moving children to new schools. Regardless of the disruption created, some individuals are very willing to relocate, due to the potential for enhanced career prospects and long; term financial stability.", "hypothesis": "The potential benefits of job relocation are seen, by some, to be worth the associated distress and strain.", "label": "e"} +{"uid": "id_727", "premise": "Workers now caught by the top rate of income tax include university lecturers, mid-ranking civil servants and officers of local authorities, specialist nurses and sisters, police inspectors and senior officers in the ambulance and fire service. 
This trend means that an extra 3.5 million workers are liable for the higher rate of tax compared to 10 years ago. More than 1 million extra people pay tax at the higher rate because growth in pay has increased faster than inflation-linked tax allowances. Over the period, these allowances have been increased in line with or less than inflation, while wages have increased at a rate of more than inflation. As a result, every year more people find themselves taxed at the highest rate for the first time. The Treasury defends the trend on the basis that the increase in numbers is a result of rising incomes and living standards. Critics point out that the higher rate of tax begins at a far lower point that in other countries. In Spain, the highest rate of tax is not applied until income is 2.5 times the average wage, while in the UK the highest rate is paid by anyone who earns 1.3 times the average wage.", "hypothesis": "Linking tax allowances to inflation has caused over 3 million people to pay the higher rate of tax.", "label": "c"} +{"uid": "id_728", "premise": "Workers now caught by the top rate of income tax include university lecturers, mid-ranking civil servants and officers of local authorities, specialist nurses and sisters, police inspectors and senior officers in the ambulance and fire service. This trend means that an extra 3.5 million workers are liable for the higher rate of tax compared to 10 years ago. More than 1 million extra people pay tax at the higher rate because growth in pay has increased faster than inflation-linked tax allowances. Over the period, these allowances have been increased in line with or less than inflation, while wages have increased at a rate of more than inflation. As a result, every year more people find themselves taxed at the highest rate for the first time. The Treasury defends the trend on the basis that the increase in numbers is a result of rising incomes and living standards. Critics point out that the higher rate of tax begins at a far lower point that in other countries. In Spain, the highest rate of tax is not applied until income is 2.5 times the average wage, while in the UK the highest rate is paid by anyone who earns 1.3 times the average wage.", "hypothesis": "The cause of the increase can correctly be summarized as growth in pay having outstripped inflation-linked tax allowances, so the number of people paying tax at the highest rate has increased.", "label": "e"} +{"uid": "id_729", "premise": "Workers now caught by the top rate of income tax include university lecturers, mid-ranking civil servants and officers of local authorities, specialist nurses and sisters, police inspectors and senior officers in the ambulance and fire service. This trend means that an extra 3.5 million workers are liable for the higher rate of tax compared to 10 years ago. More than 1 million extra people pay tax at the higher rate because growth in pay has increased faster than inflation-linked tax allowances. Over the period, these allowances have been increased in line with or less than inflation, while wages have increased at a rate of more than inflation. As a result, every year more people find themselves taxed at the highest rate for the first time. The Treasury defends the trend on the basis that the increase in numbers is a result of rising incomes and living standards. Critics point out that the higher rate of tax begins at a far lower point that in other countries. 
In Spain, the highest rate of tax is not applied until income is 2.5 times the average wage, while in the UK the highest rate is paid by anyone who earns 1.3 times the average wage.", "hypothesis": "The trend to which the passage refers is of wages increasing at a rate higher than inflation.", "label": "c"} +{"uid": "id_730", "premise": "Working in the movies When people ask French translator Virginie Verdier what she does for a living, it must be tempting to say enigmatically: Oh me? Im in the movies. Its strictly true, but her starring role is behind the scenes. As translating goes, it doesnt get more entertaining or glamorous than subtitling films. If youre very lucky, you get to work on the new blockbuster films before theyre in the cinema, and if youre just plain lucky, you get to work on the blockbuster movies that are going to video or DVD. The process starts when you get the original script and a tape. We would start with translating and adapting the film script. The next step is what we call timing, which means synchronising the subtitles to the dialogue and pictures. This task requires discipline. You play the film, listen to the voice and the subtitles are up on your screen ready to be timed. You insert your subtitle when you hear the corresponding dialogue and delete . it when the dialogue finishes. The video tape carries a time code which runs in hours, minutes, seconds and frames. Think of it as a clock. The subtitling unit has an insert key to capture the time code where you want the subtitle to appear. When you press the delete key, it captures the time code where you want the subtitle to disappear. So each subtitle would Subtitling is an exacting part of the translation profession. Melanie Leyshon talks to Virginie Verdier of London translation company VSI about the glamour and the grind. Virginie is quick to point out that this is as exacting as any translating job. You work hard. Its not all entertainment as you are doing the translating. You need all the skills of a good translator and those of a top-notch editor. You have to be precise and, of course, much more concise than in traditional translation work. have an in point and an out point which represent the exact time when the subtitle comes in and goes out. This process is then followed by a manual review, subtitle by subtitle, and time- codes are adjusted to improve synchronisation and respect shot changes. This process involves playing the film literally frame by frame as it is essential the subtitles respect the visual rhythm of the film. Different subtitlers use different techniques. I would go through the film and do the whole translation and then go right back from the beginning and start the timing process. But you could do it in different stages, translate lets say 20 minutes of the film, then time this section and translate the next 20 minutes, and so on. Its just a different method. For multi-lingual projects, the timing is done first to create what is called a spotting list, a subtitle template, which is in effect a list of English subtitles pre-timed and edited for translation purposes. This is then translated and the timing is adapted to the target language with the help of the translator for quality control. Like any translation work, you cant hurry subtitling, says Virginie. If subtitles are translated and timed in a rush, the quality will be affected and it will show. Mistakes usually occur when the translator does not master the source language and misunderstands the original dialogue. 
Our work also involves checking and reworking subtitles when the translation is not up to standard. However, the reason for redoing subtitles is not just because of poor quality translation. We may need to adapt subtitles to a new version of the film: the time code may be different. The film may have been edited or the subtitles may have been created for the cinema rather than video. If subtitles were done for cinema on 35mm, we would need to reformat the timing for video, as subtitles could be out of synch or too fast. If the translation is good, we would obviously respect the work of the original translator. On a more practical level, there are general subtitling rules to follow, says Virginie. Subtitles should appear at the bottom of the screen and usually in the centre. She says that different countries use different standards and rules. In Scandinavian countries and Holland, for example, subtitles are traditionally left justified. Characters usually appear in white with a thin black border for easy reading against a white or light background. We can also use different colours for each speaker when subtitling for the hearing impaired. Subtitles should have a maximum of two lines and the maximum number of characters on each line should be between 32 and 39. Our company standard is 37 (different companies and countries have different standards). Translators often have a favourite genre, whether its war films, musicals, comedies (one of the most difficult because of the subtleties and nuances of comedy in different countries), drama or corporate programmes. Each requires a certain tone and style. VSI employs American subtitlers, which is incredibly useful as many of the films we subtitle are American, says Virginie. For an English person, it would not be so easy to understand the meaning behind typically American expressions, and vice-versa.", "hypothesis": "For translators, all subtitling work on films is desirable.", "label": "e"} +{"uid": "id_731", "premise": "Working in the movies When people ask French translator Virginie Verdier what she does for a living, it must be tempting to say enigmatically: Oh me? Im in the movies. Its strictly true, but her starring role is behind the scenes. As translating goes, it doesnt get more entertaining or glamorous than subtitling films. If youre very lucky, you get to work on the new blockbuster films before theyre in the cinema, and if youre just plain lucky, you get to work on the blockbuster movies that are going to video or DVD. The process starts when you get the original script and a tape. We would start with translating and adapting the film script. The next step is what we call timing, which means synchronising the subtitles to the dialogue and pictures. This task requires discipline. You play the film, listen to the voice and the subtitles are up on your screen ready to be timed. You insert your subtitle when you hear the corresponding dialogue and delete . it when the dialogue finishes. The video tape carries a time code which runs in hours, minutes, seconds and frames. Think of it as a clock. The subtitling unit has an insert key to capture the time code where you want the subtitle to appear. When you press the delete key, it captures the time code where you want the subtitle to disappear. So each subtitle would Subtitling is an exacting part of the translation profession. Melanie Leyshon talks to Virginie Verdier of London translation company VSI about the glamour and the grind. 
Virginie is quick to point out that this is as exacting as any translating job. You work hard. Its not all entertainment as you are doing the translating. You need all the skills of a good translator and those of a top-notch editor. You have to be precise and, of course, much more concise than in traditional translation work. have an in point and an out point which represent the exact time when the subtitle comes in and goes out. This process is then followed by a manual review, subtitle by subtitle, and time- codes are adjusted to improve synchronisation and respect shot changes. This process involves playing the film literally frame by frame as it is essential the subtitles respect the visual rhythm of the film. Different subtitlers use different techniques. I would go through the film and do the whole translation and then go right back from the beginning and start the timing process. But you could do it in different stages, translate lets say 20 minutes of the film, then time this section and translate the next 20 minutes, and so on. Its just a different method. For multi-lingual projects, the timing is done first to create what is called a spotting list, a subtitle template, which is in effect a list of English subtitles pre-timed and edited for translation purposes. This is then translated and the timing is adapted to the target language with the help of the translator for quality control. Like any translation work, you cant hurry subtitling, says Virginie. If subtitles are translated and timed in a rush, the quality will be affected and it will show. Mistakes usually occur when the translator does not master the source language and misunderstands the original dialogue. Our work also involves checking and reworking subtitles when the translation is not up to standard. However, the reason for redoing subtitles is not just because of poor quality translation. We may need to adapt subtitles to a new version of the film: the time code may be different. The film may have been edited or the subtitles may have been created for the cinema rather than video. If subtitles were done for cinema on 35mm, we would need to reformat the timing for video, as subtitles could be out of synch or too fast. If the translation is good, we would obviously respect the work of the original translator. On a more practical level, there are general subtitling rules to follow, says Virginie. Subtitles should appear at the bottom of the screen and usually in the centre. She says that different countries use different standards and rules. In Scandinavian countries and Holland, for example, subtitles are traditionally left justified. Characters usually appear in white with a thin black border for easy reading against a white or light background. We can also use different colours for each speaker when subtitling for the hearing impaired. Subtitles should have a maximum of two lines and the maximum number of characters on each line should be between 32 and 39. Our company standard is 37 (different companies and countries have different standards). Translators often have a favourite genre, whether its war films, musicals, comedies (one of the most difficult because of the subtleties and nuances of comedy in different countries), drama or corporate programmes. Each requires a certain tone and style. VSI employs American subtitlers, which is incredibly useful as many of the films we subtitle are American, says Virginie. 
For an English person, it would not be so easy to understand the meaning behind typically American expressions, and vice-versa.", "hypothesis": "Some subtitling techniques work better than others.", "label": "c"} +{"uid": "id_732", "premise": "Working in the movies When people ask French translator Virginie Verdier what she does for a living, it must be tempting to say enigmatically: Oh me? Im in the movies. Its strictly true, but her starring role is behind the scenes. As translating goes, it doesnt get more entertaining or glamorous than subtitling films. If youre very lucky, you get to work on the new blockbuster films before theyre in the cinema, and if youre just plain lucky, you get to work on the blockbuster movies that are going to video or DVD. The process starts when you get the original script and a tape. We would start with translating and adapting the film script. The next step is what we call timing, which means synchronising the subtitles to the dialogue and pictures. This task requires discipline. You play the film, listen to the voice and the subtitles are up on your screen ready to be timed. You insert your subtitle when you hear the corresponding dialogue and delete . it when the dialogue finishes. The video tape carries a time code which runs in hours, minutes, seconds and frames. Think of it as a clock. The subtitling unit has an insert key to capture the time code where you want the subtitle to appear. When you press the delete key, it captures the time code where you want the subtitle to disappear. So each subtitle would Subtitling is an exacting part of the translation profession. Melanie Leyshon talks to Virginie Verdier of London translation company VSI about the glamour and the grind. Virginie is quick to point out that this is as exacting as any translating job. You work hard. Its not all entertainment as you are doing the translating. You need all the skills of a good translator and those of a top-notch editor. You have to be precise and, of course, much more concise than in traditional translation work. have an in point and an out point which represent the exact time when the subtitle comes in and goes out. This process is then followed by a manual review, subtitle by subtitle, and time- codes are adjusted to improve synchronisation and respect shot changes. This process involves playing the film literally frame by frame as it is essential the subtitles respect the visual rhythm of the film. Different subtitlers use different techniques. I would go through the film and do the whole translation and then go right back from the beginning and start the timing process. But you could do it in different stages, translate lets say 20 minutes of the film, then time this section and translate the next 20 minutes, and so on. Its just a different method. For multi-lingual projects, the timing is done first to create what is called a spotting list, a subtitle template, which is in effect a list of English subtitles pre-timed and edited for translation purposes. This is then translated and the timing is adapted to the target language with the help of the translator for quality control. Like any translation work, you cant hurry subtitling, says Virginie. If subtitles are translated and timed in a rush, the quality will be affected and it will show. Mistakes usually occur when the translator does not master the source language and misunderstands the original dialogue. Our work also involves checking and reworking subtitles when the translation is not up to standard. 
However, the reason for redoing subtitles is not just because of poor quality translation. We may need to adapt subtitles to a new version of the film: the time code may be different. The film may have been edited or the subtitles may have been created for the cinema rather than video. If subtitles were done for cinema on 35mm, we would need to reformat the timing for video, as subtitles could be out of synch or too fast. If the translation is good, we would obviously respect the work of the original translator. On a more practical level, there are general subtitling rules to follow, says Virginie. Subtitles should appear at the bottom of the screen and usually in the centre. She says that different countries use different standards and rules. In Scandinavian countries and Holland, for example, subtitles are traditionally left justified. Characters usually appear in white with a thin black border for easy reading against a white or light background. We can also use different colours for each speaker when subtitling for the hearing impaired. Subtitles should have a maximum of two lines and the maximum number of characters on each line should be between 32 and 39. Our company standard is 37 (different companies and countries have different standards). Translators often have a favourite genre, whether its war films, musicals, comedies (one of the most difficult because of the subtleties and nuances of comedy in different countries), drama or corporate programmes. Each requires a certain tone and style. VSI employs American subtitlers, which is incredibly useful as many of the films we subtitle are American, says Virginie. For an English person, it would not be so easy to understand the meaning behind typically American expressions, and vice-versa.", "hypothesis": "Subtitling work involves a requirement that does not apply to other translation work.", "label": "e"} +{"uid": "id_733", "premise": "Working in the movies When people ask French translator Virginie Verdier what she does for a living, it must be tempting to say enigmatically: Oh me? Im in the movies. Its strictly true, but her starring role is behind the scenes. As translating goes, it doesnt get more entertaining or glamorous than subtitling films. If youre very lucky, you get to work on the new blockbuster films before theyre in the cinema, and if youre just plain lucky, you get to work on the blockbuster movies that are going to video or DVD. The process starts when you get the original script and a tape. We would start with translating and adapting the film script. The next step is what we call timing, which means synchronising the subtitles to the dialogue and pictures. This task requires discipline. You play the film, listen to the voice and the subtitles are up on your screen ready to be timed. You insert your subtitle when you hear the corresponding dialogue and delete . it when the dialogue finishes. The video tape carries a time code which runs in hours, minutes, seconds and frames. Think of it as a clock. The subtitling unit has an insert key to capture the time code where you want the subtitle to appear. When you press the delete key, it captures the time code where you want the subtitle to disappear. So each subtitle would Subtitling is an exacting part of the translation profession. Melanie Leyshon talks to Virginie Verdier of London translation company VSI about the glamour and the grind. Virginie is quick to point out that this is as exacting as any translating job. You work hard. 
Its not all entertainment as you are doing the translating. You need all the skills of a good translator and those of a top-notch editor. You have to be precise and, of course, much more concise than in traditional translation work. have an in point and an out point which represent the exact time when the subtitle comes in and goes out. This process is then followed by a manual review, subtitle by subtitle, and time- codes are adjusted to improve synchronisation and respect shot changes. This process involves playing the film literally frame by frame as it is essential the subtitles respect the visual rhythm of the film. Different subtitlers use different techniques. I would go through the film and do the whole translation and then go right back from the beginning and start the timing process. But you could do it in different stages, translate lets say 20 minutes of the film, then time this section and translate the next 20 minutes, and so on. Its just a different method. For multi-lingual projects, the timing is done first to create what is called a spotting list, a subtitle template, which is in effect a list of English subtitles pre-timed and edited for translation purposes. This is then translated and the timing is adapted to the target language with the help of the translator for quality control. Like any translation work, you cant hurry subtitling, says Virginie. If subtitles are translated and timed in a rush, the quality will be affected and it will show. Mistakes usually occur when the translator does not master the source language and misunderstands the original dialogue. Our work also involves checking and reworking subtitles when the translation is not up to standard. However, the reason for redoing subtitles is not just because of poor quality translation. We may need to adapt subtitles to a new version of the film: the time code may be different. The film may have been edited or the subtitles may have been created for the cinema rather than video. If subtitles were done for cinema on 35mm, we would need to reformat the timing for video, as subtitles could be out of synch or too fast. If the translation is good, we would obviously respect the work of the original translator. On a more practical level, there are general subtitling rules to follow, says Virginie. Subtitles should appear at the bottom of the screen and usually in the centre. She says that different countries use different standards and rules. In Scandinavian countries and Holland, for example, subtitles are traditionally left justified. Characters usually appear in white with a thin black border for easy reading against a white or light background. We can also use different colours for each speaker when subtitling for the hearing impaired. Subtitles should have a maximum of two lines and the maximum number of characters on each line should be between 32 and 39. Our company standard is 37 (different companies and countries have different standards). Translators often have a favourite genre, whether its war films, musicals, comedies (one of the most difficult because of the subtleties and nuances of comedy in different countries), drama or corporate programmes. Each requires a certain tone and style. VSI employs American subtitlers, which is incredibly useful as many of the films we subtitle are American, says Virginie. 
For an English person, it would not be so easy to understand the meaning behind typically American expressions, and vice-versa.", "hypothesis": "Few people are completely successful at subtitling comedies.", "label": "n"} +{"uid": "id_734", "premise": "Worlds language. The pre-eminence of the English language globally may be under threat. One billion people in the world speak Mandarin, the dominant language of China, more than three times the number who speak English. If economic trends continue then China is set to dominate world trade and quite possibly global communication with it. It is perhaps surprising then that learning English is growing fast in China, where there are more English-language teaching jobs than in any other country. The International English Language Testing System or IELTS is taken by more than one million people worldwide, and last year 270,000 tests were taken in China. This fact belies the notion that the number of speakers of a language determines its status. English is set to remain influential because it is seen as the language of academia, diplomacy and especially science, where 95 per cent of scientific publications worldwide are written in English. The language of English is robust because it has a great literary heritage and prestige (though notably so did Latin, which subsequently declined), and it is the main language of the prosperous and stable nations of the West. The use of English became widespread following the expansion of the British Empire, and it remains the primary language of at least 45 countries and the official language of many international organizations. Above all else, it is the popularity of English as a second and third language that confirms its status as the worlds language. Globally there are almost three times as many non-native speakers of English as native speakers. The number of people who can speak English in India now exceeds the number in the United States. In Nigeria, more people can speak English (pidgin) than in the UK.", "hypothesis": "More people speak English outside of the UK than in the UK.", "label": "e"} +{"uid": "id_735", "premise": "Worlds language. The pre-eminence of the English language globally may be under threat. One billion people in the world speak Mandarin, the dominant language of China, more than three times the number who speak English. If economic trends continue then China is set to dominate world trade and quite possibly global communication with it. It is perhaps surprising then that learning English is growing fast in China, where there are more English-language teaching jobs than in any other country. The International English Language Testing System or IELTS is taken by more than one million people worldwide, and last year 270,000 tests were taken in China. This fact belies the notion that the number of speakers of a language determines its status. English is set to remain influential because it is seen as the language of academia, diplomacy and especially science, where 95 per cent of scientific publications worldwide are written in English. The language of English is robust because it has a great literary heritage and prestige (though notably so did Latin, which subsequently declined), and it is the main language of the prosperous and stable nations of the West. The use of English became widespread following the expansion of the British Empire, and it remains the primary language of at least 45 countries and the official language of many international organizations. 
Above all else, it is the popularity of English as a second and third language that confirms its status as the worlds language. Globally there are almost three times as many non-native speakers of English as native speakers. The number of people who can speak English in India now exceeds the number in the United States. In Nigeria, more people can speak English (pidgin) than in the UK.", "hypothesis": "In terms of English speakers, four countries are ranked as follows: India, United States, Nigeria, UK (highest number first).", "label": "n"} +{"uid": "id_736", "premise": "Worlds language. The pre-eminence of the English language globally may be under threat. One billion people in the world speak Mandarin, the dominant language of China, more than three times the number who speak English. If economic trends continue then China is set to dominate world trade and quite possibly global communication with it. It is perhaps surprising then that learning English is growing fast in China, where there are more English-language teaching jobs than in any other country. The International English Language Testing System or IELTS is taken by more than one million people worldwide, and last year 270,000 tests were taken in China. This fact belies the notion that the number of speakers of a language determines its status. English is set to remain influential because it is seen as the language of academia, diplomacy and especially science, where 95 per cent of scientific publications worldwide are written in English. The language of English is robust because it has a great literary heritage and prestige (though notably so did Latin, which subsequently declined), and it is the main language of the prosperous and stable nations of the West. The use of English became widespread following the expansion of the British Empire, and it remains the primary language of at least 45 countries and the official language of many international organizations. Above all else, it is the popularity of English as a second and third language that confirms its status as the worlds language. Globally there are almost three times as many non-native speakers of English as native speakers. The number of people who can speak English in India now exceeds the number in the United States. In Nigeria, more people can speak English (pidgin) than in the UK.", "hypothesis": "There are fewer English-language teaching jobs in the UK than in China.", "label": "e"} +{"uid": "id_737", "premise": "Worlds language. The pre-eminence of the English language globally may be under threat. One billion people in the world speak Mandarin, the dominant language of China, more than three times the number who speak English. If economic trends continue then China is set to dominate world trade and quite possibly global communication with it. It is perhaps surprising then that learning English is growing fast in China, where there are more English-language teaching jobs than in any other country. The International English Language Testing System or IELTS is taken by more than one million people worldwide, and last year 270,000 tests were taken in China. This fact belies the notion that the number of speakers of a language determines its status. English is set to remain influential because it is seen as the language of academia, diplomacy and especially science, where 95 per cent of scientific publications worldwide are written in English. 
The language of English is robust because it has a great literary heritage and prestige (though notably so did Latin, which subsequently declined), and it is the main language of the prosperous and stable nations of the West. The use of English became widespread following the expansion of the British Empire, and it remains the primary language of at least 45 countries and the official language of many international organizations. Above all else, it is the popularity of English as a second and third language that confirms its status as the worlds language. Globally there are almost three times as many non-native speakers of English as native speakers. The number of people who can speak English in India now exceeds the number in the United States. In Nigeria, more people can speak English (pidgin) than in the UK.", "hypothesis": "The language of Latin was gradually displaced by English.", "label": "n"} +{"uid": "id_738", "premise": "YoGo is a company that makes low-fat dairy products. It built its reputation making virtually fat-free yogurts, but has since branched out to produce low fat ice-creams, milkshakes and cooking sauces. YoGos biggest competitor is DairyFree, a company that makes fat-free, dairy-free products. In order to compete with DairyFree, YoGo is trying to lower the cost of its products. It hopes to do this by buying its ingredients in bulk, using automated production lines and reducing the amount of packaging. Since implementing these changes, YoGo has seen an increase in its profit margin but sales figures are yet to change. In comparison, DairyFree has out-sold its target for this month, as a result of a marketing scheme. This scheme included the giving away free samples and discount vouchers, a marketing ploy that YoGo will not be able to compete with.", "hypothesis": "YoGo has lowered the costs and given away free samples", "label": "c"} +{"uid": "id_739", "premise": "YoGo is a company that makes low-fat dairy products. It built its reputation making virtually fat-free yogurts, but has since branched out to produce low fat ice-creams, milkshakes and cooking sauces. YoGos biggest competitor is DairyFree, a company that makes fat-free, dairy-free products. In order to compete with DairyFree, YoGo is trying to lower the cost of its products. It hopes to do this by buying its ingredients in bulk, using automated production lines and reducing the amount of packaging. Since implementing these changes, YoGo has seen an increase in its profit margin but sales figures are yet to change. In comparison, DairyFree has out-sold its target for this month, as a result of a marketing scheme. This scheme included the giving away free samples and discount vouchers, a marketing ploy that YoGo will not be able to compete with.", "hypothesis": "YoGo will go into administration.", "label": "n"} +{"uid": "id_740", "premise": "YoGo is a company that makes low-fat dairy products. It built its reputation making virtually fat-free yogurts, but has since branched out to produce low fat ice-creams, milkshakes and cooking sauces. YoGos biggest competitor is DairyFree, a company that makes fat-free, dairy-free products. In order to compete with DairyFree, YoGo is trying to lower the cost of its products. It hopes to do this by buying its ingredients in bulk, using automated production lines and reducing the amount of packaging. Since implementing these changes, YoGo has seen an increase in its profit margin but sales figures are yet to change. 
In comparison, DairyFree has out-sold its target for this month, as a result of a marketing scheme. This scheme included the giving away free samples and discount vouchers, a marketing ploy that YoGo will not be able to compete with.", "hypothesis": "YoGo implemented a scheme that aims to reduce the cost of production in attempt to compete with DairyFree", "label": "e"} +{"uid": "id_741", "premise": "YoGo is a company that makes low-fat dairy products. It built its reputation making virtually fat-free yogurts, but has since branched out to produce low fat ice-creams, milkshakes and cooking sauces. YoGos biggest competitor is DairyFree, a company that makes fat-free, dairy-free products. In order to compete with DairyFree, YoGo is trying to lower the cost of its products. It hopes to do this by buying its ingredients in bulk, using automated production lines and reducing the amount of packaging. Since implementing these changes, YoGo has seen an increase in its profit margin but sales figures are yet to change. In comparison, DairyFree has out-sold its target for this month, as a result of a marketing scheme. This scheme included the giving away free samples and discount vouchers, a marketing ploy that YoGo will not be able to compete with.", "hypothesis": "YoGo implemented a scheme that gives away free samples and discount vouchers in attempt to compete with DairyFree", "label": "c"} +{"uid": "id_742", "premise": "You and your CV It is the first thing a future employer sees about you, and if its not right, may be the last. An employer will do no more than glance at your CV its estimated that most employers spend more than twenty seconds looking at each CV, so you have very little time to make the impression. Heres some advice to help you make the most of those twenty seconds. What it should look like The first rule of all CVs is to keep them clear and simple anything complicated or long tends to get rejected instantly. Achieving that is a matter of making good use of lists, bullet points and note form, and of keeping your CV to the right length. There are no fixed rules on how long it should be, and it will vary, of course, according to your age, experience, etc. , but keep it to one page if you can this length is convenient for your reader to work with. As for style, there are different kinds of layouts you can follow look at the examples on this site to see which one you prefer but the basic rule is to use headings well to signal clearly where all the relevant information is. Make sure you include these sections: qualifications, skills, education, work experience, references, personal interests/hobbies, personal qualities, then label them clearly so that your prospective employer can find the information they want quickly and easily. Content CVs tend to follow a fixed order. They start with your personal details such as name, address and contact details, then go on to personal qualities such as those things in your personality that might attract an employer e. g. conscientious, adventurous, punctual, etc. , and your career goals. After this comes the main part of your CV starting with education, then work experience. Use reverse chronological order to list these, starting with what youre doing now. Its most common to go back no more than 10 years. Give your job details such as job titles, the names of the organisations you worked for, an outline of your job duties and then note your particular achievements. 
Then go on to your personal interests and finish up with the details of some good, reliable referees. Your future employer may not follow up on these, but they do make an impression. Dos and donts A glance at your CV should create a good impression. Dont make spelling mistakes, and dont send in anything crumpled or with coffee stains on it. Anything like that leads to instant rejection. Use good quality A4 paper and dont send in anything other than a cover letter. Diplomas, testimonials, etc. , will be requested later ~ theyre interested in you. When you think youve finished writing your CV, read it over very carefully. Check your full stops, use of bullets, indentation, use of capital letters, etc. And never include in your CV anything thats not true. Its very easy for an employer to check, and if your CV doesnt match what they find out, then your chances of getting that job are probably gone. Finally, carry out the instructions in the job ad very carefully. If they require three copies, then send them three copies, not two or four. Make sure you meet the deadline too, and put the right stamp on your envelope. Youll need to accompany your CV with a cover letter. This should be tailored to each job you apply for. Follow the link below for advice on how to write a cover letter. And last of all Good luck! Remember to include: Career history Skills and strengths Awards and achievements Contact details", "hypothesis": "The style of CVs varies from country to country.", "label": "n"} +{"uid": "id_743", "premise": "You and your CV It is the first thing a future employer sees about you, and if its not right, may be the last. An employer will do no more than glance at your CV its estimated that most employers spend more than twenty seconds looking at each CV, so you have very little time to make the impression. Heres some advice to help you make the most of those twenty seconds. What it should look like The first rule of all CVs is to keep them clear and simple anything complicated or long tends to get rejected instantly. Achieving that is a matter of making good use of lists, bullet points and note form, and of keeping your CV to the right length. There are no fixed rules on how long it should be, and it will vary, of course, according to your age, experience, etc. , but keep it to one page if you can this length is convenient for your reader to work with. As for style, there are different kinds of layouts you can follow look at the examples on this site to see which one you prefer but the basic rule is to use headings well to signal clearly where all the relevant information is. Make sure you include these sections: qualifications, skills, education, work experience, references, personal interests/hobbies, personal qualities, then label them clearly so that your prospective employer can find the information they want quickly and easily. Content CVs tend to follow a fixed order. They start with your personal details such as name, address and contact details, then go on to personal qualities such as those things in your personality that might attract an employer e. g. conscientious, adventurous, punctual, etc. , and your career goals. After this comes the main part of your CV starting with education, then work experience. Use reverse chronological order to list these, starting with what youre doing now. Its most common to go back no more than 10 years. 
Give your job details such as job titles, the names of the organisations you worked for, an outline of your job duties and then note your particular achievements. Then go on to your personal interests and finish up with the details of some good, reliable referees. Your future employer may not follow up on these, but they do make an impression. Dos and donts A glance at your CV should create a good impression. Dont make spelling mistakes, and dont send in anything crumpled or with coffee stains on it. Anything like that leads to instant rejection. Use good quality A4 paper and dont send in anything other than a cover letter. Diplomas, testimonials, etc. , will be requested later ~ theyre interested in you. When you think youve finished writing your CV, read it over very carefully. Check your full stops, use of bullets, indentation, use of capital letters, etc. And never include in your CV anything thats not true. Its very easy for an employer to check, and if your CV doesnt match what they find out, then your chances of getting that job are probably gone. Finally, carry out the instructions in the job ad very carefully. If they require three copies, then send them three copies, not two or four. Make sure you meet the deadline too, and put the right stamp on your envelope. Youll need to accompany your CV with a cover letter. This should be tailored to each job you apply for. Follow the link below for advice on how to write a cover letter. And last of all Good luck! Remember to include: Career history Skills and strengths Awards and achievements Contact details", "hypothesis": "Employers spend a long time reading applicants CVs.", "label": "c"} +{"uid": "id_744", "premise": "You and your CV It is the first thing a future employer sees about you, and if its not right, may be the last. An employer will do no more than glance at your CV its estimated that most employers spend more than twenty seconds looking at each CV, so you have very little time to make the impression. Heres some advice to help you make the most of those twenty seconds. What it should look like The first rule of all CVs is to keep them clear and simple anything complicated or long tends to get rejected instantly. Achieving that is a matter of making good use of lists, bullet points and note form, and of keeping your CV to the right length. There are no fixed rules on how long it should be, and it will vary, of course, according to your age, experience, etc. , but keep it to one page if you can this length is convenient for your reader to work with. As for style, there are different kinds of layouts you can follow look at the examples on this site to see which one you prefer but the basic rule is to use headings well to signal clearly where all the relevant information is. Make sure you include these sections: qualifications, skills, education, work experience, references, personal interests/hobbies, personal qualities, then label them clearly so that your prospective employer can find the information they want quickly and easily. Content CVs tend to follow a fixed order. They start with your personal details such as name, address and contact details, then go on to personal qualities such as those things in your personality that might attract an employer e. g. conscientious, adventurous, punctual, etc. , and your career goals. After this comes the main part of your CV starting with education, then work experience. Use reverse chronological order to list these, starting with what youre doing now. 
Its most common to go back no more than 10 years. Give your job details such as job titles, the names of the organisations you worked for, an outline of your job duties and then note your particular achievements. Then go on to your personal interests and finish up with the details of some good, reliable referees. Your future employer may not follow up on these, but they do make an impression. Dos and donts A glance at your CV should create a good impression. Dont make spelling mistakes, and dont send in anything crumpled or with coffee stains on it. Anything like that leads to instant rejection. Use good quality A4 paper and dont send in anything other than a cover letter. Diplomas, testimonials, etc. , will be requested later ~ theyre interested in you. When you think youve finished writing your CV, read it over very carefully. Check your full stops, use of bullets, indentation, use of capital letters, etc. And never include in your CV anything thats not true. Its very easy for an employer to check, and if your CV doesnt match what they find out, then your chances of getting that job are probably gone. Finally, carry out the instructions in the job ad very carefully. If they require three copies, then send them three copies, not two or four. Make sure you meet the deadline too, and put the right stamp on your envelope. Youll need to accompany your CV with a cover letter. This should be tailored to each job you apply for. Follow the link below for advice on how to write a cover letter. And last of all Good luck! Remember to include: Career history Skills and strengths Awards and achievements Contact details", "hypothesis": "CVs are essential when applying for jobs.", "label": "e"} +{"uid": "id_745", "premise": "You can find out so much about people on the internet these days that civil liberty campaigners are arguing for new laws so that people can get back some vestige of control over their personal data. The 1998 Data Protection Act gives us the right to know the personal information companies are holding. But the new threat to personal liberty is quite the opposite it is the threat of complete strangers finding out our personal details. Undertake an internet search on someone you know with any of the main search engines and you are likely to obtain thousands of results which if trawled through can provide particulars of employment, a work phone number and e-mail address. Find a CV belonging to that person and you will get hold of their home address, date of birth, home telephone number, personal e-mail address and a listing of their educational history and interests. If the person for whom you are searching is active on a social network site or an internet specialist interest forum then you may well be able to identify a database of friends and contacts and by reading recent postings obtain a flavour of their views and preferences. Search the database of a genealogy site and you may well be able to identify generations of family members.", "hypothesis": "The threat to personal liberty is no longer one of secrecy and finding out what organizations know about us.", "label": "c"} +{"uid": "id_746", "premise": "You can find out so much about people on the internet these days that civil liberty campaigners are arguing for new laws so that people can get back some vestige of control over their personal data. The 1998 Data Protection Act gives us the right to know the personal information companies are holding. 
But the new threat to personal liberty is quite the opposite it is the threat of complete strangers finding out our personal details. Undertake an internet search on someone you know with any of the main search engines and you are likely to obtain thousands of results which if trawled through can provide particulars of employment, a work phone number and e-mail address. Find a CV belonging to that person and you will get hold of their home address, date of birth, home telephone number, personal e-mail address and a listing of their educational history and interests. If the person for whom you are searching is active on a social network site or an internet specialist interest forum then you may well be able to identify a database of friends and contacts and by reading recent postings obtain a flavour of their views and preferences. Search the database of a genealogy site and you may well be able to identify generations of family members.", "hypothesis": "The penultimate sentence of the passage illustrates the sort of things that people post on the internet.", "label": "e"} +{"uid": "id_747", "premise": "You cannot be very intelligent if you do not know how smart you are until you have been told your IQ rate. It is probably unwise to take an IQ test because if you do then you risk feeling either superior or disappointed when you get the result, and neither of these sentiments is beneficial. What difference would it make to your life anyway, if you were to find out that you have the IQ of a genius or well below average? These considerations did not stop almost half a million Europeans from taking part in an internet IQ test. In the test, men scored 110 while women scored 105; left- handed people scored much higher than right-handed people; and people with brown eyes scored best while people with red hair scored the least.", "hypothesis": "The results suggest that men are more intelligent than women.", "label": "c"} +{"uid": "id_748", "premise": "You cannot be very intelligent if you do not know how smart you are until you have been told your IQ rate. It is probably unwise to take an IQ test because if you do then you risk feeling either superior or disappointed when you get the result, and neither of these sentiments is beneficial. What difference would it make to your life anyway, if you were to find out that you have the IQ of a genius or well below average? These considerations did not stop almost half a million Europeans from taking part in an internet IQ test. In the test, men scored 110 while women scored 105; left- handed people scored much higher than right-handed people; and people with brown eyes scored best while people with red hair scored the least.", "hypothesis": "It is reasonable to surmise that the author would have difficulty understanding why someone would want to know their IQ.", "label": "e"} +{"uid": "id_749", "premise": "You cannot be very intelligent if you do not know how smart you are until you have been told your IQ rate. It is probably unwise to take an IQ test because if you do then you risk feeling either superior or disappointed when you get the result, and neither of these sentiments is beneficial. What difference would it make to your life anyway, if you were to find out that you have the IQ of a genius or well below average? These considerations did not stop almost half a million Europeans from taking part in an internet IQ test. 
In the test, men scored 110 while women scored 105; left- handed people scored much higher than right-handed people; and people with brown eyes scored best while people with red hair scored the least.", "hypothesis": "The passage is written in a satirical style.", "label": "c"} +{"uid": "id_750", "premise": "Your New Electron Washing Machine These introductory notes will outline some basic information regarding your new Electron Washing Machine. Read the notes carefully, as you will avoid some possible problems. 1. Remember, always get your washing machine installed by a qualified installer. This will include all qualified electricians and plumbers. Your retailer will probably be able to recommend someone or provide the service. Dont use friends or try it yourself and beware of cowboys! Non-qualified installation will lead to the nullification of the guarantee. 2. Your new Electron Washing Machine will work with any good quality washing detergent, but it has been designed to work with some better brands. See the main users guide for a list of recommended detergents. 3. It is very possible that the water where you live is hard. Prolonged use with hard water will lead to scale calcification in all washing machines, and no technology can stop this. To avoid this, it is recommended that you install a water softener to the washing machine water supply. Local plumbers will be able to advise you of your areas water type and what water softener would be suitable if applicable. 4. All new Electron Washing Machines come with a standard 2 year manufacturers guarantee. While we are confident that your new Electron Washing Machine has been manufactured to the highest possible quality standards, if you would like to invest in a 5 year guarantee, this can be purchased online on our website, www. electronmachines. com. We believe its the best thing to do for peace of mind. 5. Before washing clothes for the first time in your new Electron Washing Machine, it is important that you run the machine one time with no clothes. You can use detergent if you wish, but this is not necessary. Use setting 8 at 40 degrees for best results. 6. Remember, before washing clothes, check all pockets etc. for any coins, tissues or other belongings. Coins and tissues can sometimes get into the machinery and cause your new Electron Washing Machine to break down. 7. Some minor faults with your new Electron Washing Machine can be fixed without having to call in expensive help. To help you with this, we have created a troubleshooting guide on our website, www. electronmachines. com. There are straightforward questions that cover most possible problems and, once diagnosed, the problems can usually be dealt with by yourself without having to call the plumber or electrician. 8. Why not take a little time to register your new Electron Washing Machine with us? It doesnt take much time, and if we know who you are, we will be able to service you better. Just go to the appropriate icon on our website (www. electronmachines. com) to register. You will be taken to a page where you will be asked for a few details. 9. Finally, we are extremely interested to know what your experience is like with your new Electron Washing Machine. On our website, www. electronmachines. com, we have feedback pages, blogs and forums where you can have your say. 
Come and share with us!", "hypothesis": "The Electron washing machine includes technology that stops calcification", "label": "c"} +{"uid": "id_751", "premise": "Your New Electron Washing Machine These introductory notes will outline some basic information regarding your new Electron Washing Machine. Read the notes carefully, as you will avoid some possible problems. 1. Remember, always get your washing machine installed by a qualified installer. This will include all qualified electricians and plumbers. Your retailer will probably be able to recommend someone or provide the service. Dont use friends or try it yourself and beware of cowboys! Non-qualified installation will lead to the nullification of the guarantee. 2. Your new Electron Washing Machine will work with any good quality washing detergent, but it has been designed to work with some better brands. See the main users guide for a list of recommended detergents. 3. It is very possible that the water where you live is hard. Prolonged use with hard water will lead to scale calcification in all washing machines, and no technology can stop this. To avoid this, it is recommended that you install a water softener to the washing machine water supply. Local plumbers will be able to advise you of your areas water type and what water softener would be suitable if applicable. 4. All new Electron Washing Machines come with a standard 2 year manufacturers guarantee. While we are confident that your new Electron Washing Machine has been manufactured to the highest possible quality standards, if you would like to invest in a 5 year guarantee, this can be purchased online on our website, www. electronmachines. com. We believe its the best thing to do for peace of mind. 5. Before washing clothes for the first time in your new Electron Washing Machine, it is important that you run the machine one time with no clothes. You can use detergent if you wish, but this is not necessary. Use setting 8 at 40 degrees for best results. 6. Remember, before washing clothes, check all pockets etc. for any coins, tissues or other belongings. Coins and tissues can sometimes get into the machinery and cause your new Electron Washing Machine to break down. 7. Some minor faults with your new Electron Washing Machine can be fixed without having to call in expensive help. To help you with this, we have created a troubleshooting guide on our website, www. electronmachines. com. There are straightforward questions that cover most possible problems and, once diagnosed, the problems can usually be dealt with by yourself without having to call the plumber or electrician. 8. Why not take a little time to register your new Electron Washing Machine with us? It doesnt take much time, and if we know who you are, we will be able to service you better. Just go to the appropriate icon on our website (www. electronmachines. com) to register. You will be taken to a page where you will be asked for a few details. 9. Finally, we are extremely interested to know what your experience is like with your new Electron Washing Machine. On our website, www. electronmachines. com, we have feedback pages, blogs and forums where you can have your say. Come and share with us!", "hypothesis": "Registering the washing machine requires giving an email address.", "label": "n"} +{"uid": "id_752", "premise": "Your New Electron Washing Machine These introductory notes will outline some basic information regarding your new Electron Washing Machine. 
Read the notes carefully, as you will avoid some possible problems. 1. Remember, always get your washing machine installed by a qualified installer. This will include all qualified electricians and plumbers. Your retailer will probably be able to recommend someone or provide the service. Dont use friends or try it yourself and beware of cowboys! Non-qualified installation will lead to the nullification of the guarantee. 2. Your new Electron Washing Machine will work with any good quality washing detergent, but it has been designed to work with some better brands. See the main users guide for a list of recommended detergents. 3. It is very possible that the water where you live is hard. Prolonged use with hard water will lead to scale calcification in all washing machines, and no technology can stop this. To avoid this, it is recommended that you install a water softener to the washing machine water supply. Local plumbers will be able to advise you of your areas water type and what water softener would be suitable if applicable. 4. All new Electron Washing Machines come with a standard 2 year manufacturers guarantee. While we are confident that your new Electron Washing Machine has been manufactured to the highest possible quality standards, if you would like to invest in a 5 year guarantee, this can be purchased online on our website, www. electronmachines. com. We believe its the best thing to do for peace of mind. 5. Before washing clothes for the first time in your new Electron Washing Machine, it is important that you run the machine one time with no clothes. You can use detergent if you wish, but this is not necessary. Use setting 8 at 40 degrees for best results. 6. Remember, before washing clothes, check all pockets etc. for any coins, tissues or other belongings. Coins and tissues can sometimes get into the machinery and cause your new Electron Washing Machine to break down. 7. Some minor faults with your new Electron Washing Machine can be fixed without having to call in expensive help. To help you with this, we have created a troubleshooting guide on our website, www. electronmachines. com. There are straightforward questions that cover most possible problems and, once diagnosed, the problems can usually be dealt with by yourself without having to call the plumber or electrician. 8. Why not take a little time to register your new Electron Washing Machine with us? It doesnt take much time, and if we know who you are, we will be able to service you better. Just go to the appropriate icon on our website (www. electronmachines. com) to register. You will be taken to a page where you will be asked for a few details. 9. Finally, we are extremely interested to know what your experience is like with your new Electron Washing Machine. On our website, www. electronmachines. com, we have feedback pages, blogs and forums where you can have your say. Come and share with us!", "hypothesis": "Reading the troubleshooting guide can save you money.", "label": "e"} +{"uid": "id_753", "premise": "Your New Electron Washing Machine These introductory notes will outline some basic information regarding your new Electron Washing Machine. Read the notes carefully, as you will avoid some possible problems. 1. Remember, always get your washing machine installed by a qualified installer. This will include all qualified electricians and plumbers. Your retailer will probably be able to recommend someone or provide the service. Dont use friends or try it yourself and beware of cowboys! 
Non-qualified installation will lead to the nullification of the guarantee. 2. Your new Electron Washing Machine will work with any good quality washing detergent, but it has been designed to work with some better brands. See the main users guide for a list of recommended detergents. 3. It is very possible that the water where you live is hard. Prolonged use with hard water will lead to scale calcification in all washing machines, and no technology can stop this. To avoid this, it is recommended that you install a water softener to the washing machine water supply. Local plumbers will be able to advise you of your areas water type and what water softener would be suitable if applicable. 4. All new Electron Washing Machines come with a standard 2 year manufacturers guarantee. While we are confident that your new Electron Washing Machine has been manufactured to the highest possible quality standards, if you would like to invest in a 5 year guarantee, this can be purchased online on our website, www. electronmachines. com. We believe its the best thing to do for peace of mind. 5. Before washing clothes for the first time in your new Electron Washing Machine, it is important that you run the machine one time with no clothes. You can use detergent if you wish, but this is not necessary. Use setting 8 at 40 degrees for best results. 6. Remember, before washing clothes, check all pockets etc. for any coins, tissues or other belongings. Coins and tissues can sometimes get into the machinery and cause your new Electron Washing Machine to break down. 7. Some minor faults with your new Electron Washing Machine can be fixed without having to call in expensive help. To help you with this, we have created a troubleshooting guide on our website, www. electronmachines. com. There are straightforward questions that cover most possible problems and, once diagnosed, the problems can usually be dealt with by yourself without having to call the plumber or electrician. 8. Why not take a little time to register your new Electron Washing Machine with us? It doesnt take much time, and if we know who you are, we will be able to service you better. Just go to the appropriate icon on our website (www. electronmachines. com) to register. You will be taken to a page where you will be asked for a few details. 9. Finally, we are extremely interested to know what your experience is like with your new Electron Washing Machine. On our website, www. electronmachines. com, we have feedback pages, blogs and forums where you can have your say. Come and share with us!", "hypothesis": "Buying an extended warranty is a good idea.", "label": "e"} +{"uid": "id_754", "premise": "Your New Electron Washing Machine These introductory notes will outline some basic information regarding your new Electron Washing Machine. Read the notes carefully, as you will avoid some possible problems. 1. Remember, always get your washing machine installed by a qualified installer. This will include all qualified electricians and plumbers. Your retailer will probably be able to recommend someone or provide the service. Dont use friends or try it yourself and beware of cowboys! Non-qualified installation will lead to the nullification of the guarantee. 2. Your new Electron Washing Machine will work with any good quality washing detergent, but it has been designed to work with some better brands. See the main users guide for a list of recommended detergents. 3. It is very possible that the water where you live is hard. 
Prolonged use with hard water will lead to scale calcification in all washing machines, and no technology can stop this. To avoid this, it is recommended that you install a water softener to the washing machine water supply. Local plumbers will be able to advise you of your areas water type and what water softener would be suitable if applicable. 4. All new Electron Washing Machines come with a standard 2 year manufacturers guarantee. While we are confident that your new Electron Washing Machine has been manufactured to the highest possible quality standards, if you would like to invest in a 5 year guarantee, this can be purchased online on our website, www. electronmachines. com. We believe its the best thing to do for peace of mind. 5. Before washing clothes for the first time in your new Electron Washing Machine, it is important that you run the machine one time with no clothes. You can use detergent if you wish, but this is not necessary. Use setting 8 at 40 degrees for best results. 6. Remember, before washing clothes, check all pockets etc. for any coins, tissues or other belongings. Coins and tissues can sometimes get into the machinery and cause your new Electron Washing Machine to break down. 7. Some minor faults with your new Electron Washing Machine can be fixed without having to call in expensive help. To help you with this, we have created a troubleshooting guide on our website, www. electronmachines. com. There are straightforward questions that cover most possible problems and, once diagnosed, the problems can usually be dealt with by yourself without having to call the plumber or electrician. 8. Why not take a little time to register your new Electron Washing Machine with us? It doesnt take much time, and if we know who you are, we will be able to service you better. Just go to the appropriate icon on our website (www. electronmachines. com) to register. You will be taken to a page where you will be asked for a few details. 9. Finally, we are extremely interested to know what your experience is like with your new Electron Washing Machine. On our website, www. electronmachines. com, we have feedback pages, blogs and forums where you can have your say. Come and share with us!", "hypothesis": "When using the washing machine for the first time, run a cycle with some old clothes or towels.", "label": "c"} +{"uid": "id_755", "premise": "Your New Electron Washing Machine These introductory notes will outline some basic information regarding your new Electron Washing Machine. Read the notes carefully, as you will avoid some possible problems. 1. Remember, always get your washing machine installed by a qualified installer. This will include all qualified electricians and plumbers. Your retailer will probably be able to recommend someone or provide the service. Dont use friends or try it yourself and beware of cowboys! Non-qualified installation will lead to the nullification of the guarantee. 2. Your new Electron Washing Machine will work with any good quality washing detergent, but it has been designed to work with some better brands. See the main users guide for a list of recommended detergents. 3. It is very possible that the water where you live is hard. Prolonged use with hard water will lead to scale calcification in all washing machines, and no technology can stop this. To avoid this, it is recommended that you install a water softener to the washing machine water supply. 
Local plumbers will be able to advise you of your areas water type and what water softener would be suitable if applicable. 4. All new Electron Washing Machines come with a standard 2 year manufacturers guarantee. While we are confident that your new Electron Washing Machine has been manufactured to the highest possible quality standards, if you would like to invest in a 5 year guarantee, this can be purchased online on our website, www. electronmachines. com. We believe its the best thing to do for peace of mind. 5. Before washing clothes for the first time in your new Electron Washing Machine, it is important that you run the machine one time with no clothes. You can use detergent if you wish, but this is not necessary. Use setting 8 at 40 degrees for best results. 6. Remember, before washing clothes, check all pockets etc. for any coins, tissues or other belongings. Coins and tissues can sometimes get into the machinery and cause your new Electron Washing Machine to break down. 7. Some minor faults with your new Electron Washing Machine can be fixed without having to call in expensive help. To help you with this, we have created a troubleshooting guide on our website, www. electronmachines. com. There are straightforward questions that cover most possible problems and, once diagnosed, the problems can usually be dealt with by yourself without having to call the plumber or electrician. 8. Why not take a little time to register your new Electron Washing Machine with us? It doesnt take much time, and if we know who you are, we will be able to service you better. Just go to the appropriate icon on our website (www. electronmachines. com) to register. You will be taken to a page where you will be asked for a few details. 9. Finally, we are extremely interested to know what your experience is like with your new Electron Washing Machine. On our website, www. electronmachines. com, we have feedback pages, blogs and forums where you can have your say. Come and share with us!", "hypothesis": "Installers must provide the customer with a certificate of installation.", "label": "n"} +{"uid": "id_756", "premise": "Zeus Temple Holds Secrets of Ancient Game Athens already is preparing for the summer games of 2004. But todays games offer a far different spectacle from the contests of ancient Greece, where naked young men with oiled bodies raced and wrestled and boxed to honor their gods. Those great Panhellenic events began more than 2,700 years ago, first in Olympia and later at Delphi, lsthmia and Nemea. And at Nemea, where the games began in 573 B. C. , a Berkeley archaeologist has been patiently reconstructing a site whose legends helped inspire the modern Olympics. For Stephen G. Miller, exploring the site at Nemea, 70 miles from Athens, involves more than analyzing artifacts and ruins, dating ancient rock strata or patiently assembling broken pottery shards. It also means reliving the events he's studying. For the last two summers, large crowds have flocked to an ancient Nemean stadium (capacity 40,000) to watch a modern re-enactment of the ancient Nemean games. Seven hundred runners from 45 nations bare foot and clad in white tunics raced around the reborn stadium in groups of 12. Winners of the races were crowned just as they were in antiquity with wreaths of wild celery. 
Miller is a professor of classics at the University of California at Berkeley, but he also has been a barefoot runner, a slave carrying water for the athletes and a priest presiding over the re-enacted rituals of the legendary Nemean games. Playing those roles gives you a deeper sense of antiquity and a feel for the spirit of the people who lived and worked and played there so long ago, he said recently after returning from this years field work. Excavating the site every summer since 1973, Miller and his crew have found and re-assembled limestone columns that once stood proudly around the Temple of Zeus. Exactly a decade after they began the excavation and just east of the temple, they found the remains of a great altar to Zeus where athletes and their trainers performed sacrifices and swore oaths just before competing. And from ancient Greek records, two years later, Millers team also learned that his Nemea site had once seen major horse races in a hippodrome that must have existed next to the great stadium. In an earthen mound his team could trace the patterns of faint wheel marks indicating that chariots must have raced there too. In 1997 Miller and his crew, seeking more evidence of the hippodrome, dug down into a spot where four low rock walls indicate there might be a structure underneath. There they found a wine jug, drinking mugs, coins and a crude little figure of a centaur. The next summer, after digging down 20 feet, they still hadnt reached bottom. Miller wondered what purpose this deep rock-walled pit might have served, and finally concluded it must have been a reservoir holding copious quantities of water from a river near the site that now irrigates vineyards. The reservoir is a phenomenal find, Miller said, We believe it provided water for as many as 150 horses who raced in the hippodrome during the games. But how were the horses fed? And what did they do with that much manure every day? Trying to answer questions like that is one of the joys of the whole project. Eight months after finding the reservoir Miller and his team uncovered an ancient chamber that served the Nemean athletes as a locker room the apodyterion where they anointed themselves with olive oil. They then would have walked 120 feet through a vaulted entrance tunnel the krypte esodos whose walls are still marked by graffiti scratched by the athletes on their way into the stadium. The wine jug and cups unearthed in one layer of the buried reservoir may have been left by victors in one of the ancient Nemean races, but just what kind of wine they drank remains unknown. Today, the local red wine served in Nemean taverns is called the Blood of Hercules, honoring the hero who strangled the ferocious Nemean lion there more than 5,000 years ago. As in so much of archaeology, the discoveries that Miller has made at Nemea all seem to recall ancient legends and link them to reality. The Berkeley team, for example, has unearthed a tiny bronze figurine identified as the image of an infant named Opheltes, whose fate inspired the first of the Nemean games. As Miller recounts the tale, Opheltes was the son of Lykourgos and Eurydike, who had tried for many years to produce an heir. When the Oracle at Delphi warned them that their child must not touch the ground until he had learned to walk, they ordered a Nemean slave woman to care for the infant day and night. 
One day, when seven warrior heroes passed through Nemea on their way to march against the citadel of Thebes they were the legendary Seven Against Thebes whose bloody war was immortalized by Aeschylus the nurse placed the child on a bed of wild celery while she offered drink to the heroes. Instantly, a serpent lurking in the vegetation killed the infant and the warriors re-named the boy Archemoros, the Beginner-of- Doom, and held the first Nemean games in his honor as a funerary festival. Wreaths of wild celery crowned winners of those games, as they did the modern winners at Nemea last summer. As with all classical archaeologists, whose excavations shed so much surprising light on antiquity, Miller and his students are now ready to organize and classify their treasured finds from the summer season, and to plan for next seasons dig. In the earthen mound where we saw the imprints of wheel cuts, we also have a bronze vessel of the kind that was always used for pouring libations, Miller said. That mound goes back to 600 B. C. , so now we wonder what happened there in that complex of religion and athletics even before the Nemean games. Archaeology doesnt come cheap, and each season at Nemea costs at least $150,000 for the team, the equipment, and the 35 local workers from the nearby town of modern Nemea, whom Miller calls the core of the project. The money all comes from private sources and not the least of Millers jobs is lecturing to the public and combing the territory for contributions.", "hypothesis": "Religion played a key role in the games.", "label": "e"} +{"uid": "id_757", "premise": "Zeus Temple Holds Secrets of Ancient Game Athens already is preparing for the summer games of 2004. But todays games offer a far different spectacle from the contests of ancient Greece, where naked young men with oiled bodies raced and wrestled and boxed to honor their gods. Those great Panhellenic events began more than 2,700 years ago, first in Olympia and later at Delphi, lsthmia and Nemea. And at Nemea, where the games began in 573 B. C. , a Berkeley archaeologist has been patiently reconstructing a site whose legends helped inspire the modern Olympics. For Stephen G. Miller, exploring the site at Nemea, 70 miles from Athens, involves more than analyzing artifacts and ruins, dating ancient rock strata or patiently assembling broken pottery shards. It also means reliving the events he's studying. For the last two summers, large crowds have flocked to an ancient Nemean stadium (capacity 40,000) to watch a modern re-enactment of the ancient Nemean games. Seven hundred runners from 45 nations bare foot and clad in white tunics raced around the reborn stadium in groups of 12. Winners of the races were crowned just as they were in antiquity with wreaths of wild celery. Miller is a professor of classics at the University of California at Berkeley, but he also has been a barefoot runner, a slave carrying water for the athletes and a priest presiding over the re-enacted rituals of the legendary Nemean games. Playing those roles gives you a deeper sense of antiquity and a feel for the spirit of the people who lived and worked and played there so long ago, he said recently after returning from this years field work. Excavating the site every summer since 1973, Miller and his crew have found and re-assembled limestone columns that once stood proudly around the Temple of Zeus. 
Exactly a decade after they began the excavation and just east of the temple, they found the remains of a great altar to Zeus where athletes and their trainers performed sacrifices and swore oaths just before competing. And from ancient Greek records, two years later, Millers team also learned that his Nemea site had once seen major horse races in a hippodrome that must have existed next to the great stadium. In an earthen mound his team could trace the patterns of faint wheel marks indicating that chariots must have raced there too. In 1997 Miller and his crew, seeking more evidence of the hippodrome, dug down into a spot where four low rock walls indicate there might be a structure underneath. There they found a wine jug, drinking mugs, coins and a crude little figure of a centaur. The next summer, after digging down 20 feet, they still hadnt reached bottom. Miller wondered what purpose this deep rock-walled pit might have served, and finally concluded it must have been a reservoir holding copious quantities of water from a river near the site that now irrigates vineyards. The reservoir is a phenomenal find, Miller said, We believe it provided water for as many as 150 horses who raced in the hippodrome during the games. But how were the horses fed? And what did they do with that much manure every day? Trying to answer questions like that is one of the joys of the whole project. Eight months after finding the reservoir Miller and his team uncovered an ancient chamber that served the Nemean athletes as a locker room the apodyterion where they anointed themselves with olive oil. They then would have walked 120 feet through a vaulted entrance tunnel the krypte esodos whose walls are still marked by graffiti scratched by the athletes on their way into the stadium. The wine jug and cups unearthed in one layer of the buried reservoir may have been left by victors in one of the ancient Nemean races, but just what kind of wine they drank remains unknown. Today, the local red wine served in Nemean taverns is called the Blood of Hercules, honoring the hero who strangled the ferocious Nemean lion there more than 5,000 years ago. As in so much of archaeology, the discoveries that Miller has made at Nemea all seem to recall ancient legends and link them to reality. The Berkeley team, for example, has unearthed a tiny bronze figurine identified as the image of an infant named Opheltes, whose fate inspired the first of the Nemean games. As Miller recounts the tale, Opheltes was the son of Lykourgos and Eurydike, who had tried for many years to produce an heir. When the Oracle at Delphi warned them that their child must not touch the ground until he had learned to walk, they ordered a Nemean slave woman to care for the infant day and night. One day, when seven warrior heroes passed through Nemea on their way to march against the citadel of Thebes they were the legendary Seven Against Thebes whose bloody war was immortalized by Aeschylus the nurse placed the child on a bed of wild celery while she offered drink to the heroes. Instantly, a serpent lurking in the vegetation killed the infant and the warriors re-named the boy Archemoros, the Beginner-of- Doom, and held the first Nemean games in his honor as a funerary festival. Wreaths of wild celery crowned winners of those games, as they did the modern winners at Nemea last summer. 
As with all classical archaeologists, whose excavations shed so much surprising light on antiquity, Miller and his students are now ready to organize and classify their treasured finds from the summer season, and to plan for next seasons dig. In the earthen mound where we saw the imprints of wheel cuts, we also have a bronze vessel of the kind that was always used for pouring libations, Miller said. That mound goes back to 600 B. C. , so now we wonder what happened there in that complex of religion and athletics even before the Nemean games. Archaeology doesnt come cheap, and each season at Nemea costs at least $150,000 for the team, the equipment, and the 35 local workers from the nearby town of modern Nemea, whom Miller calls the core of the project. The money all comes from private sources and not the least of Millers jobs is lecturing to the public and combing the territory for contributions.", "hypothesis": "Miller goes far beyond what an archaeologist traditionally normally does.", "label": "e"} +{"uid": "id_758", "premise": "Zeus Temple Holds Secrets of Ancient Game Athens already is preparing for the summer games of 2004. But todays games offer a far different spectacle from the contests of ancient Greece, where naked young men with oiled bodies raced and wrestled and boxed to honor their gods. Those great Panhellenic events began more than 2,700 years ago, first in Olympia and later at Delphi, lsthmia and Nemea. And at Nemea, where the games began in 573 B. C. , a Berkeley archaeologist has been patiently reconstructing a site whose legends helped inspire the modern Olympics. For Stephen G. Miller, exploring the site at Nemea, 70 miles from Athens, involves more than analyzing artifacts and ruins, dating ancient rock strata or patiently assembling broken pottery shards. It also means reliving the events he's studying. For the last two summers, large crowds have flocked to an ancient Nemean stadium (capacity 40,000) to watch a modern re-enactment of the ancient Nemean games. Seven hundred runners from 45 nations bare foot and clad in white tunics raced around the reborn stadium in groups of 12. Winners of the races were crowned just as they were in antiquity with wreaths of wild celery. Miller is a professor of classics at the University of California at Berkeley, but he also has been a barefoot runner, a slave carrying water for the athletes and a priest presiding over the re-enacted rituals of the legendary Nemean games. Playing those roles gives you a deeper sense of antiquity and a feel for the spirit of the people who lived and worked and played there so long ago, he said recently after returning from this years field work. Excavating the site every summer since 1973, Miller and his crew have found and re-assembled limestone columns that once stood proudly around the Temple of Zeus. Exactly a decade after they began the excavation and just east of the temple, they found the remains of a great altar to Zeus where athletes and their trainers performed sacrifices and swore oaths just before competing. And from ancient Greek records, two years later, Millers team also learned that his Nemea site had once seen major horse races in a hippodrome that must have existed next to the great stadium. In an earthen mound his team could trace the patterns of faint wheel marks indicating that chariots must have raced there too. In 1997 Miller and his crew, seeking more evidence of the hippodrome, dug down into a spot where four low rock walls indicate there might be a structure underneath. 
There they found a wine jug, drinking mugs, coins and a crude little figure of a centaur. The next summer, after digging down 20 feet, they still hadnt reached bottom. Miller wondered what purpose this deep rock-walled pit might have served, and finally concluded it must have been a reservoir holding copious quantities of water from a river near the site that now irrigates vineyards. The reservoir is a phenomenal find, Miller said, We believe it provided water for as many as 150 horses who raced in the hippodrome during the games. But how were the horses fed? And what did they do with that much manure every day? Trying to answer questions like that is one of the joys of the whole project. Eight months after finding the reservoir Miller and his team uncovered an ancient chamber that served the Nemean athletes as a locker room the apodyterion where they anointed themselves with olive oil. They then would have walked 120 feet through a vaulted entrance tunnel the krypte esodos whose walls are still marked by graffiti scratched by the athletes on their way into the stadium. The wine jug and cups unearthed in one layer of the buried reservoir may have been left by victors in one of the ancient Nemean races, but just what kind of wine they drank remains unknown. Today, the local red wine served in Nemean taverns is called the Blood of Hercules, honoring the hero who strangled the ferocious Nemean lion there more than 5,000 years ago. As in so much of archaeology, the discoveries that Miller has made at Nemea all seem to recall ancient legends and link them to reality. The Berkeley team, for example, has unearthed a tiny bronze figurine identified as the image of an infant named Opheltes, whose fate inspired the first of the Nemean games. As Miller recounts the tale, Opheltes was the son of Lykourgos and Eurydike, who had tried for many years to produce an heir. When the Oracle at Delphi warned them that their child must not touch the ground until he had learned to walk, they ordered a Nemean slave woman to care for the infant day and night. One day, when seven warrior heroes passed through Nemea on their way to march against the citadel of Thebes they were the legendary Seven Against Thebes whose bloody war was immortalized by Aeschylus the nurse placed the child on a bed of wild celery while she offered drink to the heroes. Instantly, a serpent lurking in the vegetation killed the infant and the warriors re-named the boy Archemoros, the Beginner-of- Doom, and held the first Nemean games in his honor as a funerary festival. Wreaths of wild celery crowned winners of those games, as they did the modern winners at Nemea last summer. As with all classical archaeologists, whose excavations shed so much surprising light on antiquity, Miller and his students are now ready to organize and classify their treasured finds from the summer season, and to plan for next seasons dig. In the earthen mound where we saw the imprints of wheel cuts, we also have a bronze vessel of the kind that was always used for pouring libations, Miller said. That mound goes back to 600 B. C. , so now we wonder what happened there in that complex of religion and athletics even before the Nemean games. Archaeology doesnt come cheap, and each season at Nemea costs at least $150,000 for the team, the equipment, and the 35 local workers from the nearby town of modern Nemea, whom Miller calls the core of the project. 
The money all comes from private sources and not the least of Millers jobs is lecturing to the public and combing the territory for contributions.", "hypothesis": "The games were far more interesting in the past than now.", "label": "n"} +{"uid": "id_759", "premise": "Zeus Temple Holds Secrets of Ancient Game Athens already is preparing for the summer games of 2004. But todays games offer a far different spectacle from the contests of ancient Greece, where naked young men with oiled bodies raced and wrestled and boxed to honor their gods. Those great Panhellenic events began more than 2,700 years ago, first in Olympia and later at Delphi, lsthmia and Nemea. And at Nemea, where the games began in 573 B. C. , a Berkeley archaeologist has been patiently reconstructing a site whose legends helped inspire the modern Olympics. For Stephen G. Miller, exploring the site at Nemea, 70 miles from Athens, involves more than analyzing artifacts and ruins, dating ancient rock strata or patiently assembling broken pottery shards. It also means reliving the events he's studying. For the last two summers, large crowds have flocked to an ancient Nemean stadium (capacity 40,000) to watch a modern re-enactment of the ancient Nemean games. Seven hundred runners from 45 nations bare foot and clad in white tunics raced around the reborn stadium in groups of 12. Winners of the races were crowned just as they were in antiquity with wreaths of wild celery. Miller is a professor of classics at the University of California at Berkeley, but he also has been a barefoot runner, a slave carrying water for the athletes and a priest presiding over the re-enacted rituals of the legendary Nemean games. Playing those roles gives you a deeper sense of antiquity and a feel for the spirit of the people who lived and worked and played there so long ago, he said recently after returning from this years field work. Excavating the site every summer since 1973, Miller and his crew have found and re-assembled limestone columns that once stood proudly around the Temple of Zeus. Exactly a decade after they began the excavation and just east of the temple, they found the remains of a great altar to Zeus where athletes and their trainers performed sacrifices and swore oaths just before competing. And from ancient Greek records, two years later, Millers team also learned that his Nemea site had once seen major horse races in a hippodrome that must have existed next to the great stadium. In an earthen mound his team could trace the patterns of faint wheel marks indicating that chariots must have raced there too. In 1997 Miller and his crew, seeking more evidence of the hippodrome, dug down into a spot where four low rock walls indicate there might be a structure underneath. There they found a wine jug, drinking mugs, coins and a crude little figure of a centaur. The next summer, after digging down 20 feet, they still hadnt reached bottom. Miller wondered what purpose this deep rock-walled pit might have served, and finally concluded it must have been a reservoir holding copious quantities of water from a river near the site that now irrigates vineyards. The reservoir is a phenomenal find, Miller said, We believe it provided water for as many as 150 horses who raced in the hippodrome during the games. But how were the horses fed? And what did they do with that much manure every day? Trying to answer questions like that is one of the joys of the whole project. 
Eight months after finding the reservoir Miller and his team uncovered an ancient chamber that served the Nemean athletes as a locker room the apodyterion where they anointed themselves with olive oil. They then would have walked 120 feet through a vaulted entrance tunnel the krypte esodos whose walls are still marked by graffiti scratched by the athletes on their way into the stadium. The wine jug and cups unearthed in one layer of the buried reservoir may have been left by victors in one of the ancient Nemean races, but just what kind of wine they drank remains unknown. Today, the local red wine served in Nemean taverns is called the Blood of Hercules, honoring the hero who strangled the ferocious Nemean lion there more than 5,000 years ago. As in so much of archaeology, the discoveries that Miller has made at Nemea all seem to recall ancient legends and link them to reality. The Berkeley team, for example, has unearthed a tiny bronze figurine identified as the image of an infant named Opheltes, whose fate inspired the first of the Nemean games. As Miller recounts the tale, Opheltes was the son of Lykourgos and Eurydike, who had tried for many years to produce an heir. When the Oracle at Delphi warned them that their child must not touch the ground until he had learned to walk, they ordered a Nemean slave woman to care for the infant day and night. One day, when seven warrior heroes passed through Nemea on their way to march against the citadel of Thebes they were the legendary Seven Against Thebes whose bloody war was immortalized by Aeschylus the nurse placed the child on a bed of wild celery while she offered drink to the heroes. Instantly, a serpent lurking in the vegetation killed the infant and the warriors re-named the boy Archemoros, the Beginner-of- Doom, and held the first Nemean games in his honor as a funerary festival. Wreaths of wild celery crowned winners of those games, as they did the modern winners at Nemea last summer. As with all classical archaeologists, whose excavations shed so much surprising light on antiquity, Miller and his students are now ready to organize and classify their treasured finds from the summer season, and to plan for next seasons dig. In the earthen mound where we saw the imprints of wheel cuts, we also have a bronze vessel of the kind that was always used for pouring libations, Miller said. That mound goes back to 600 B. C. , so now we wonder what happened there in that complex of religion and athletics even before the Nemean games. Archaeology doesnt come cheap, and each season at Nemea costs at least $150,000 for the team, the equipment, and the 35 local workers from the nearby town of modern Nemea, whom Miller calls the core of the project. The money all comes from private sources and not the least of Millers jobs is lecturing to the public and combing the territory for contributions.", "hypothesis": "The author believes it must be also difficult for Miller to find funds for the excavation.", "label": "e"} +{"uid": "id_760", "premise": "Zeus Temple Holds Secrets of Ancient Game Athens already is preparing for the summer games of 2004. But todays games offer a far different spectacle from the contests of ancient Greece, where naked young men with oiled bodies raced and wrestled and boxed to honor their gods. Those great Panhellenic events began more than 2,700 years ago, first in Olympia and later at Delphi, lsthmia and Nemea. And at Nemea, where the games began in 573 B. C. 
, a Berkeley archaeologist has been patiently reconstructing a site whose legends helped inspire the modern Olympics. For Stephen G. Miller, exploring the site at Nemea, 70 miles from Athens, involves more than analyzing artifacts and ruins, dating ancient rock strata or patiently assembling broken pottery shards. It also means reliving the events he's studying. For the last two summers, large crowds have flocked to an ancient Nemean stadium (capacity 40,000) to watch a modern re-enactment of the ancient Nemean games. Seven hundred runners from 45 nations bare foot and clad in white tunics raced around the reborn stadium in groups of 12. Winners of the races were crowned just as they were in antiquity with wreaths of wild celery. Miller is a professor of classics at the University of California at Berkeley, but he also has been a barefoot runner, a slave carrying water for the athletes and a priest presiding over the re-enacted rituals of the legendary Nemean games. Playing those roles gives you a deeper sense of antiquity and a feel for the spirit of the people who lived and worked and played there so long ago, he said recently after returning from this years field work. Excavating the site every summer since 1973, Miller and his crew have found and re-assembled limestone columns that once stood proudly around the Temple of Zeus. Exactly a decade after they began the excavation and just east of the temple, they found the remains of a great altar to Zeus where athletes and their trainers performed sacrifices and swore oaths just before competing. And from ancient Greek records, two years later, Millers team also learned that his Nemea site had once seen major horse races in a hippodrome that must have existed next to the great stadium. In an earthen mound his team could trace the patterns of faint wheel marks indicating that chariots must have raced there too. In 1997 Miller and his crew, seeking more evidence of the hippodrome, dug down into a spot where four low rock walls indicate there might be a structure underneath. There they found a wine jug, drinking mugs, coins and a crude little figure of a centaur. The next summer, after digging down 20 feet, they still hadnt reached bottom. Miller wondered what purpose this deep rock-walled pit might have served, and finally concluded it must have been a reservoir holding copious quantities of water from a river near the site that now irrigates vineyards. The reservoir is a phenomenal find, Miller said, We believe it provided water for as many as 150 horses who raced in the hippodrome during the games. But how were the horses fed? And what did they do with that much manure every day? Trying to answer questions like that is one of the joys of the whole project. Eight months after finding the reservoir Miller and his team uncovered an ancient chamber that served the Nemean athletes as a locker room the apodyterion where they anointed themselves with olive oil. They then would have walked 120 feet through a vaulted entrance tunnel the krypte esodos whose walls are still marked by graffiti scratched by the athletes on their way into the stadium. The wine jug and cups unearthed in one layer of the buried reservoir may have been left by victors in one of the ancient Nemean races, but just what kind of wine they drank remains unknown. Today, the local red wine served in Nemean taverns is called the Blood of Hercules, honoring the hero who strangled the ferocious Nemean lion there more than 5,000 years ago. 
As in so much of archaeology, the discoveries that Miller has made at Nemea all seem to recall ancient legends and link them to reality. The Berkeley team, for example, has unearthed a tiny bronze figurine identified as the image of an infant named Opheltes, whose fate inspired the first of the Nemean games. As Miller recounts the tale, Opheltes was the son of Lykourgos and Eurydike, who had tried for many years to produce an heir. When the Oracle at Delphi warned them that their child must not touch the ground until he had learned to walk, they ordered a Nemean slave woman to care for the infant day and night. One day, when seven warrior heroes passed through Nemea on their way to march against the citadel of Thebes they were the legendary Seven Against Thebes whose bloody war was immortalized by Aeschylus the nurse placed the child on a bed of wild celery while she offered drink to the heroes. Instantly, a serpent lurking in the vegetation killed the infant and the warriors re-named the boy Archemoros, the Beginner-of- Doom, and held the first Nemean games in his honor as a funerary festival. Wreaths of wild celery crowned winners of those games, as they did the modern winners at Nemea last summer. As with all classical archaeologists, whose excavations shed so much surprising light on antiquity, Miller and his students are now ready to organize and classify their treasured finds from the summer season, and to plan for next seasons dig. In the earthen mound where we saw the imprints of wheel cuts, we also have a bronze vessel of the kind that was always used for pouring libations, Miller said. That mound goes back to 600 B. C. , so now we wonder what happened there in that complex of religion and athletics even before the Nemean games. Archaeology doesnt come cheap, and each season at Nemea costs at least $150,000 for the team, the equipment, and the 35 local workers from the nearby town of modern Nemea, whom Miller calls the core of the project. The money all comes from private sources and not the least of Millers jobs is lecturing to the public and combing the territory for contributions.", "hypothesis": "The Nemean games influenced the modern Olympic Games.", "label": "e"} +{"uid": "id_761", "premise": "Zoo conservation programmes One of London Zoos recent advertisements caused me some irritation, so patently did it distort reality. Headlined Without zoos you might as well tell these animals to get stuffed, it was bordered with illustrations of several endangered species and went on to extol the myth that without zoos like London Zoo these animals will almost certainly disappear forever. With the zoo worlds rather mediocre record on conservation, one might be forgiven for being slightly sceptical about such an advertisement. Zoos were originally created as places of entertainment, and their suggested involvement with conservation didnt seriously arise until about 30 years ago, when the Zoological Society of London held the first formal international meeting on the subject. Eight years later, a series of world conferences took place, entitled The Breeding of Endangered Species, and from this point onwards conservation became the zoo communitys buzzword. 
This commitment has now been clearh defined in The World Zpo Conservation Strategy (WZGS, September 1993), which although an important and welcome document does seem to be based on an unrealistic optimism about the nature of the zoo industry The WZCS estimates that there are about 10,000 zoos in the world, of which around 1,000 represent a core of quality collections capable of participating in co-ordinated conservation programmes. This is probably the documents first failing, as I believe that 10,000 is a serious underestimate of the total number of places masquerading as zoological establishments. Of course it is difficult to get accurate data but, to put the issue into perspective, I have found that, in a year of working in Eastern Europe, I discover fresh zoos on almost a weekly basis. The second flaw in the reasoning of the WZCS document is the naive faith it places in its 1,000 core zoos. One would assume that the calibre of these institutions would have been carefully examined, but it appears that the criterion for inclusion on this select list might merely be that the zoo is a member of a zoo federation or association. This might be a good starting point, working on the premise that members must meet certain standards, but again the facts dont support the theory. The greatly respected American Association of Zoological Parks and Aquariums (AAZPA) has had extremely dubious members, and in the UK the Federation of Zoological Gardens of Great Britain and Ireland has 24Reading occasionally had members that have been roundly censured in the national press. These include Robin Hill Adventure Park on the Isle of Wight, which many considered the most notorious collection of animals in the country. This establishment, which for years was protected by the Isles local council (which viewed it as a tourist amenity), was finally closed down following a damning report by a veterinary inspector appointed under the terms of the Zoo Licensing Act 1981. As it was always a collection of dubious repute, one is obliged to reflect upon the standards that the Zoo Federation sets when granting membership. The situation is even worse in developing countries where little money is available for redevelopment and it is hard to see a way of incorporating collections into the overall scheme of the WZCS. Even assuming that the WZCSs 1,000 core zoos are all of a high standard complete with scientific staff and research facilities, trained and dedicated keepers, accommodation that permits normal or natural behaviour, and a policy of co-operating fully with one another what might be the potential for conservation? Colin Tudge, author of Last Animals at the Zoo (Oxford University Press, 1992), argues that if the worlds zoos worked together in co-operative breeding programmes, then even without further expansion they could save around 2,000 species of endangered land vertebrates. This seems an extremely optimistic proposition from a man who must be aware of the failings and weaknesses of the zoo industry the man who, when a member of the council of London Zoo, had to persuade the zoo to devote more of its activities to conservation. Moreover, where are the facts to support such optimism? Today approximately 16 species might be said to have been saved by captive breeding programmes, although a number of these can hardly be looked upon as resounding successes. Beyond that, about a further 20 species are being seriously considered for zoo conservation programmes. 
Given that the international conference at London Zoo was held 30 years ago, this is pretty slow progress, and a long way off Tudges target of 2,000.", "hypothesis": "The number of successful zoo conservation programmes is unsatisfactory.", "label": "e"} +{"uid": "id_762", "premise": "Zoo conservation programmes One of London Zoos recent advertisements caused me some irritation, so patently did it distort reality. Headlined Without zoos you might as well tell these animals to get stuffed, it was bordered with illustrations of several endangered species and went on to extol the myth that without zoos like London Zoo these animals will almost certainly disappear forever. With the zoo worlds rather mediocre record on conservation, one might be forgiven for being slightly sceptical about such an advertisement. Zoos were originally created as places of entertainment, and their suggested involvement with conservation didnt seriously arise until about 30 years ago, when the Zoological Society of London held the first formal international meeting on the subject. Eight years later, a series of world conferences took place, entitled The Breeding of Endangered Species, and from this point onwards conservation became the zoo communitys buzzword. This commitment has now been clearh defined in The World Zpo Conservation Strategy (WZGS, September 1993), which although an important and welcome document does seem to be based on an unrealistic optimism about the nature of the zoo industry The WZCS estimates that there are about 10,000 zoos in the world, of which around 1,000 represent a core of quality collections capable of participating in co-ordinated conservation programmes. This is probably the documents first failing, as I believe that 10,000 is a serious underestimate of the total number of places masquerading as zoological establishments. Of course it is difficult to get accurate data but, to put the issue into perspective, I have found that, in a year of working in Eastern Europe, I discover fresh zoos on almost a weekly basis. The second flaw in the reasoning of the WZCS document is the naive faith it places in its 1,000 core zoos. One would assume that the calibre of these institutions would have been carefully examined, but it appears that the criterion for inclusion on this select list might merely be that the zoo is a member of a zoo federation or association. This might be a good starting point, working on the premise that members must meet certain standards, but again the facts dont support the theory. The greatly respected American Association of Zoological Parks and Aquariums (AAZPA) has had extremely dubious members, and in the UK the Federation of Zoological Gardens of Great Britain and Ireland has 24Reading occasionally had members that have been roundly censured in the national press. These include Robin Hill Adventure Park on the Isle of Wight, which many considered the most notorious collection of animals in the country. This establishment, which for years was protected by the Isles local council (which viewed it as a tourist amenity), was finally closed down following a damning report by a veterinary inspector appointed under the terms of the Zoo Licensing Act 1981. As it was always a collection of dubious repute, one is obliged to reflect upon the standards that the Zoo Federation sets when granting membership. 
The situation is even worse in developing countries where little money is available for redevelopment and it is hard to see a way of incorporating collections into the overall scheme of the WZCS. Even assuming that the WZCSs 1,000 core zoos are all of a high standard complete with scientific staff and research facilities, trained and dedicated keepers, accommodation that permits normal or natural behaviour, and a policy of co-operating fully with one another what might be the potential for conservation? Colin Tudge, author of Last Animals at the Zoo (Oxford University Press, 1992), argues that if the worlds zoos worked together in co-operative breeding programmes, then even without further expansion they could save around 2,000 species of endangered land vertebrates. This seems an extremely optimistic proposition from a man who must be aware of the failings and weaknesses of the zoo industry the man who, when a member of the council of London Zoo, had to persuade the zoo to devote more of its activities to conservation. Moreover, where are the facts to support such optimism? Today approximately 16 species might be said to have been saved by captive breeding programmes, although a number of these can hardly be looked upon as resounding successes. Beyond that, about a further 20 species are being seriously considered for zoo conservation programmes. Given that the international conference at London Zoo was held 30 years ago, this is pretty slow progress, and a long way off Tudges target of 2,000.", "hypothesis": "Colin Tudge was dissatisfied with the treatment of animals at London Zoo.", "label": "n"} +{"uid": "id_763", "premise": "Zoo conservation programmes One of London Zoos recent advertisements caused me some irritation, so patently did it distort reality. Headlined Without zoos you might as well tell these animals to get stuffed, it was bordered with illustrations of several endangered species and went on to extol the myth that without zoos like London Zoo these animals will almost certainly disappear forever. With the zoo worlds rather mediocre record on conservation, one might be forgiven for being slightly sceptical about such an advertisement. Zoos were originally created as places of entertainment, and their suggested involvement with conservation didnt seriously arise until about 30 years ago, when the Zoological Society of London held the first formal international meeting on the subject. Eight years later, a series of world conferences took place, entitled The Breeding of Endangered Species, and from this point onwards conservation became the zoo communitys buzzword. This commitment has now been clearh defined in The World Zpo Conservation Strategy (WZGS, September 1993), which although an important and welcome document does seem to be based on an unrealistic optimism about the nature of the zoo industry The WZCS estimates that there are about 10,000 zoos in the world, of which around 1,000 represent a core of quality collections capable of participating in co-ordinated conservation programmes. This is probably the documents first failing, as I believe that 10,000 is a serious underestimate of the total number of places masquerading as zoological establishments. Of course it is difficult to get accurate data but, to put the issue into perspective, I have found that, in a year of working in Eastern Europe, I discover fresh zoos on almost a weekly basis. The second flaw in the reasoning of the WZCS document is the naive faith it places in its 1,000 core zoos. 
One would assume that the calibre of these institutions would have been carefully examined, but it appears that the criterion for inclusion on this select list might merely be that the zoo is a member of a zoo federation or association. This might be a good starting point, working on the premise that members must meet certain standards, but again the facts don't support the theory. The greatly respected American Association of Zoological Parks and Aquariums (AAZPA) has had extremely dubious members, and in the UK the Federation of Zoological Gardens of Great Britain and Ireland has occasionally had members that have been roundly censured in the national press. These include Robin Hill Adventure Park on the Isle of Wight, which many considered the most notorious collection of animals in the country. This establishment, which for years was protected by the Isle's local council (which viewed it as a tourist amenity), was finally closed down following a damning report by a veterinary inspector appointed under the terms of the Zoo Licensing Act 1981. As it was always a collection of dubious repute, one is obliged to reflect upon the standards that the Zoo Federation sets when granting membership. The situation is even worse in developing countries where little money is available for redevelopment and it is hard to see a way of incorporating collections into the overall scheme of the WZCS. Even assuming that the WZCS's 1,000 core zoos are all of a high standard (complete with scientific staff and research facilities, trained and dedicated keepers, accommodation that permits normal or natural behaviour, and a policy of co-operating fully with one another), what might be the potential for conservation? Colin Tudge, author of Last Animals at the Zoo (Oxford University Press, 1992), argues that if the world's zoos worked together in co-operative breeding programmes, then even without further expansion they could save around 2,000 species of endangered land vertebrates. This seems an extremely optimistic proposition from a man who must be aware of the failings and weaknesses of the zoo industry, the man who, when a member of the council of London Zoo, had to persuade the zoo to devote more of its activities to conservation. Moreover, where are the facts to support such optimism? Today approximately 16 species might be said to have been saved by captive breeding programmes, although a number of these can hardly be looked upon as resounding successes. Beyond that, about a further 20 species are being seriously considered for zoo conservation programmes. Given that the international conference at London Zoo was held 30 years ago, this is pretty slow progress, and a long way off Tudge's target of 2,000.", "hypothesis": "London Zoo's advertisements are dishonest.", "label": "e"} +{"uid": "id_764", "premise": "Zoo conservation programmes One of London Zoo's recent advertisements caused me some irritation, so patently did it distort reality. Headlined Without zoos you might as well tell these animals to get stuffed, it was bordered with illustrations of several endangered species and went on to extol the myth that without zoos like London Zoo these animals will almost certainly disappear forever. With the zoo world's rather mediocre record on conservation, one might be forgiven for being slightly sceptical about such an advertisement.
Zoos were originally created as places of entertainment, and their suggested involvement with conservation didn't seriously arise until about 30 years ago, when the Zoological Society of London held the first formal international meeting on the subject. Eight years later, a series of world conferences took place, entitled The Breeding of Endangered Species, and from this point onwards conservation became the zoo community's buzzword. This commitment has now been clearly defined in The World Zoo Conservation Strategy (WZCS, September 1993), which, although an important and welcome document, does seem to be based on an unrealistic optimism about the nature of the zoo industry. The WZCS estimates that there are about 10,000 zoos in the world, of which around 1,000 represent a core of quality collections capable of participating in co-ordinated conservation programmes. This is probably the document's first failing, as I believe that 10,000 is a serious underestimate of the total number of places masquerading as zoological establishments. Of course it is difficult to get accurate data but, to put the issue into perspective, I have found that, in a year of working in Eastern Europe, I discover fresh zoos on almost a weekly basis. The second flaw in the reasoning of the WZCS document is the naive faith it places in its 1,000 core zoos. One would assume that the calibre of these institutions would have been carefully examined, but it appears that the criterion for inclusion on this select list might merely be that the zoo is a member of a zoo federation or association. This might be a good starting point, working on the premise that members must meet certain standards, but again the facts don't support the theory. The greatly respected American Association of Zoological Parks and Aquariums (AAZPA) has had extremely dubious members, and in the UK the Federation of Zoological Gardens of Great Britain and Ireland has occasionally had members that have been roundly censured in the national press. These include Robin Hill Adventure Park on the Isle of Wight, which many considered the most notorious collection of animals in the country. This establishment, which for years was protected by the Isle's local council (which viewed it as a tourist amenity), was finally closed down following a damning report by a veterinary inspector appointed under the terms of the Zoo Licensing Act 1981. As it was always a collection of dubious repute, one is obliged to reflect upon the standards that the Zoo Federation sets when granting membership. The situation is even worse in developing countries where little money is available for redevelopment and it is hard to see a way of incorporating collections into the overall scheme of the WZCS. Even assuming that the WZCS's 1,000 core zoos are all of a high standard (complete with scientific staff and research facilities, trained and dedicated keepers, accommodation that permits normal or natural behaviour, and a policy of co-operating fully with one another), what might be the potential for conservation? Colin Tudge, author of Last Animals at the Zoo (Oxford University Press, 1992), argues that if the world's zoos worked together in co-operative breeding programmes, then even without further expansion they could save around 2,000 species of endangered land vertebrates.
This seems an extremely optimistic proposition from a man who must be aware of the failings and weaknesses of the zoo industry, the man who, when a member of the council of London Zoo, had to persuade the zoo to devote more of its activities to conservation. Moreover, where are the facts to support such optimism? Today approximately 16 species might be said to have been saved by captive breeding programmes, although a number of these can hardly be looked upon as resounding successes. Beyond that, about a further 20 species are being seriously considered for zoo conservation programmes. Given that the international conference at London Zoo was held 30 years ago, this is pretty slow progress, and a long way off Tudge's target of 2,000.", "hypothesis": "Zoos made an insignificant contribution to conservation up until 30 years ago.", "label": "e"} +{"uid": "id_765", "premise": "Zoo conservation programmes One of London Zoo's recent advertisements caused me some irritation, so patently did it distort reality. Headlined Without zoos you might as well tell these animals to get stuffed, it was bordered with illustrations of several endangered species and went on to extol the myth that without zoos like London Zoo these animals will almost certainly disappear forever. With the zoo world's rather mediocre record on conservation, one might be forgiven for being slightly sceptical about such an advertisement. Zoos were originally created as places of entertainment, and their suggested involvement with conservation didn't seriously arise until about 30 years ago, when the Zoological Society of London held the first formal international meeting on the subject. Eight years later, a series of world conferences took place, entitled The Breeding of Endangered Species, and from this point onwards conservation became the zoo community's buzzword. This commitment has now been clearly defined in The World Zoo Conservation Strategy (WZCS, September 1993), which, although an important and welcome document, does seem to be based on an unrealistic optimism about the nature of the zoo industry. The WZCS estimates that there are about 10,000 zoos in the world, of which around 1,000 represent a core of quality collections capable of participating in co-ordinated conservation programmes. This is probably the document's first failing, as I believe that 10,000 is a serious underestimate of the total number of places masquerading as zoological establishments. Of course it is difficult to get accurate data but, to put the issue into perspective, I have found that, in a year of working in Eastern Europe, I discover fresh zoos on almost a weekly basis. The second flaw in the reasoning of the WZCS document is the naive faith it places in its 1,000 core zoos. One would assume that the calibre of these institutions would have been carefully examined, but it appears that the criterion for inclusion on this select list might merely be that the zoo is a member of a zoo federation or association. This might be a good starting point, working on the premise that members must meet certain standards, but again the facts don't support the theory. The greatly respected American Association of Zoological Parks and Aquariums (AAZPA) has had extremely dubious members, and in the UK the Federation of Zoological Gardens of Great Britain and Ireland has occasionally had members that have been roundly censured in the national press.
These include Robin Hill Adventure Park on the Isle of Wight, which many considered the most notorious collection of animals in the country. This establishment, which for years was protected by the Isle's local council (which viewed it as a tourist amenity), was finally closed down following a damning report by a veterinary inspector appointed under the terms of the Zoo Licensing Act 1981. As it was always a collection of dubious repute, one is obliged to reflect upon the standards that the Zoo Federation sets when granting membership. The situation is even worse in developing countries where little money is available for redevelopment and it is hard to see a way of incorporating collections into the overall scheme of the WZCS. Even assuming that the WZCS's 1,000 core zoos are all of a high standard (complete with scientific staff and research facilities, trained and dedicated keepers, accommodation that permits normal or natural behaviour, and a policy of co-operating fully with one another), what might be the potential for conservation? Colin Tudge, author of Last Animals at the Zoo (Oxford University Press, 1992), argues that if the world's zoos worked together in co-operative breeding programmes, then even without further expansion they could save around 2,000 species of endangered land vertebrates. This seems an extremely optimistic proposition from a man who must be aware of the failings and weaknesses of the zoo industry, the man who, when a member of the council of London Zoo, had to persuade the zoo to devote more of its activities to conservation. Moreover, where are the facts to support such optimism? Today approximately 16 species might be said to have been saved by captive breeding programmes, although a number of these can hardly be looked upon as resounding successes. Beyond that, about a further 20 species are being seriously considered for zoo conservation programmes. Given that the international conference at London Zoo was held 30 years ago, this is pretty slow progress, and a long way off Tudge's target of 2,000.", "hypothesis": "No-one knew how the animals were being treated at Robin Hill Adventure Park.", "label": "c"} +{"uid": "id_766", "premise": "Zoo conservation programmes One of London Zoo's recent advertisements caused me some irritation, so patently did it distort reality. Headlined Without zoos you might as well tell these animals to get stuffed, it was bordered with illustrations of several endangered species and went on to extol the myth that without zoos like London Zoo these animals will almost certainly disappear forever. With the zoo world's rather mediocre record on conservation, one might be forgiven for being slightly sceptical about such an advertisement. Zoos were originally created as places of entertainment, and their suggested involvement with conservation didn't seriously arise until about 30 years ago, when the Zoological Society of London held the first formal international meeting on the subject. Eight years later, a series of world conferences took place, entitled The Breeding of Endangered Species, and from this point onwards conservation became the zoo community's buzzword.
This commitment has now been clearly defined in The World Zoo Conservation Strategy (WZCS, September 1993), which, although an important and welcome document, does seem to be based on an unrealistic optimism about the nature of the zoo industry. The WZCS estimates that there are about 10,000 zoos in the world, of which around 1,000 represent a core of quality collections capable of participating in co-ordinated conservation programmes. This is probably the document's first failing, as I believe that 10,000 is a serious underestimate of the total number of places masquerading as zoological establishments. Of course it is difficult to get accurate data but, to put the issue into perspective, I have found that, in a year of working in Eastern Europe, I discover fresh zoos on almost a weekly basis. The second flaw in the reasoning of the WZCS document is the naive faith it places in its 1,000 core zoos. One would assume that the calibre of these institutions would have been carefully examined, but it appears that the criterion for inclusion on this select list might merely be that the zoo is a member of a zoo federation or association. This might be a good starting point, working on the premise that members must meet certain standards, but again the facts don't support the theory. The greatly respected American Association of Zoological Parks and Aquariums (AAZPA) has had extremely dubious members, and in the UK the Federation of Zoological Gardens of Great Britain and Ireland has occasionally had members that have been roundly censured in the national press. These include Robin Hill Adventure Park on the Isle of Wight, which many considered the most notorious collection of animals in the country. This establishment, which for years was protected by the Isle's local council (which viewed it as a tourist amenity), was finally closed down following a damning report by a veterinary inspector appointed under the terms of the Zoo Licensing Act 1981. As it was always a collection of dubious repute, one is obliged to reflect upon the standards that the Zoo Federation sets when granting membership. The situation is even worse in developing countries where little money is available for redevelopment and it is hard to see a way of incorporating collections into the overall scheme of the WZCS. Even assuming that the WZCS's 1,000 core zoos are all of a high standard (complete with scientific staff and research facilities, trained and dedicated keepers, accommodation that permits normal or natural behaviour, and a policy of co-operating fully with one another), what might be the potential for conservation? Colin Tudge, author of Last Animals at the Zoo (Oxford University Press, 1992), argues that if the world's zoos worked together in co-operative breeding programmes, then even without further expansion they could save around 2,000 species of endangered land vertebrates. This seems an extremely optimistic proposition from a man who must be aware of the failings and weaknesses of the zoo industry, the man who, when a member of the council of London Zoo, had to persuade the zoo to devote more of its activities to conservation. Moreover, where are the facts to support such optimism? Today approximately 16 species might be said to have been saved by captive breeding programmes, although a number of these can hardly be looked upon as resounding successes. Beyond that, about a further 20 species are being seriously considered for zoo conservation programmes.
Given that the international conference at London Zoo was held 30 years ago, this is pretty slow progress, and a long way off Tudge's target of 2,000.", "hypothesis": "The WZCS document is not known in Eastern Europe.", "label": "n"} +{"uid": "id_767", "premise": "Zoo conservation programmes One of London Zoo's recent advertisements caused me some irritation, so patently did it distort reality. Headlined Without zoos you might as well tell these animals to get stuffed, it was bordered with illustrations of several endangered species and went on to extol the myth that without zoos like London Zoo these animals will almost certainly disappear forever. With the zoo world's rather mediocre record on conservation, one might be forgiven for being slightly sceptical about such an advertisement. Zoos were originally created as places of entertainment, and their suggested involvement with conservation didn't seriously arise until about 30 years ago, when the Zoological Society of London held the first formal international meeting on the subject. Eight years later, a series of world conferences took place, entitled The Breeding of Endangered Species, and from this point onwards conservation became the zoo community's buzzword. This commitment has now been clearly defined in The World Zoo Conservation Strategy (WZCS, September 1993), which, although an important and welcome document, does seem to be based on an unrealistic optimism about the nature of the zoo industry. The WZCS estimates that there are about 10,000 zoos in the world, of which around 1,000 represent a core of quality collections capable of participating in co-ordinated conservation programmes. This is probably the document's first failing, as I believe that 10,000 is a serious underestimate of the total number of places masquerading as zoological establishments. Of course it is difficult to get accurate data but, to put the issue into perspective, I have found that, in a year of working in Eastern Europe, I discover fresh zoos on almost a weekly basis. The second flaw in the reasoning of the WZCS document is the naive faith it places in its 1,000 core zoos. One would assume that the calibre of these institutions would have been carefully examined, but it appears that the criterion for inclusion on this select list might merely be that the zoo is a member of a zoo federation or association. This might be a good starting point, working on the premise that members must meet certain standards, but again the facts don't support the theory. The greatly respected American Association of Zoological Parks and Aquariums (AAZPA) has had extremely dubious members, and in the UK the Federation of Zoological Gardens of Great Britain and Ireland has occasionally had members that have been roundly censured in the national press. These include Robin Hill Adventure Park on the Isle of Wight, which many considered the most notorious collection of animals in the country. This establishment, which for years was protected by the Isle's local council (which viewed it as a tourist amenity), was finally closed down following a damning report by a veterinary inspector appointed under the terms of the Zoo Licensing Act 1981. As it was always a collection of dubious repute, one is obliged to reflect upon the standards that the Zoo Federation sets when granting membership. The situation is even worse in developing countries where little money is available for redevelopment and it is hard to see a way of incorporating collections into the overall scheme of the WZCS.
Even assuming that the WZCS's 1,000 core zoos are all of a high standard (complete with scientific staff and research facilities, trained and dedicated keepers, accommodation that permits normal or natural behaviour, and a policy of co-operating fully with one another), what might be the potential for conservation? Colin Tudge, author of Last Animals at the Zoo (Oxford University Press, 1992), argues that if the world's zoos worked together in co-operative breeding programmes, then even without further expansion they could save around 2,000 species of endangered land vertebrates. This seems an extremely optimistic proposition from a man who must be aware of the failings and weaknesses of the zoo industry, the man who, when a member of the council of London Zoo, had to persuade the zoo to devote more of its activities to conservation. Moreover, where are the facts to support such optimism? Today approximately 16 species might be said to have been saved by captive breeding programmes, although a number of these can hardly be looked upon as resounding successes. Beyond that, about a further 20 species are being seriously considered for zoo conservation programmes. Given that the international conference at London Zoo was held 30 years ago, this is pretty slow progress, and a long way off Tudge's target of 2,000.", "hypothesis": "Zoos in the WZCS select list were carefully inspected.", "label": "c"} +{"uid": "id_768", "premise": "asset liquidity and market liquidity Asset liquidity is influenced by the mobility of the market. The stock market exchanges liquidable financial instruments such as bonds and shares. Some assets are not liquidable on the market and are said to be \"illiquid\".", "hypothesis": "Asset liquidity is influenced by the market mobility.", "label": "e"} +{"uid": "id_769", "premise": "asset liquidity and market liquidity Asset liquidity is influenced by the mobility of the market. The stock market exchanges liquidable financial instruments such as bonds and shares. Some assets are not liquidable on the market and are said to be \"illiquid\".", "hypothesis": "Some assets are not liquidable on the market because they are unsellable.", "label": "n"} +{"uid": "id_770", "premise": "asset liquidity and market liquidity Asset liquidity is influenced by the mobility of the market. The stock market exchanges liquidable financial instruments such as bonds and shares. Some assets are not liquidable on the market and are said to be \"illiquid\".", "hypothesis": "Bonds and stocks are \"illiquid\" on the market.", "label": "c"} +{"uid": "id_771", "premise": "directors should work with all other groups in the community", "hypothesis": "the group's meetings are solely for directors", "label": "c"} +{"uid": "id_772", "premise": "final salary and commission distribution. Final Salary Scheme: -They provide benefits according to a fixed formula. The benefits are based on salary on the date of retirement (guarantees payment of a fraction of the final salary) -Employer assumes all risk -Both the employer and the employee will make contributions into this type of pension scheme -Benefits do not depend on investment returns or annuity rate. Final salary existed before World War Two; the final salary system requires more pension funds that the company has to pay for employees, and distribution salary reduces cost.
Companies adopting distribution salary have fewer employees registering for the scheme.", "hypothesis": "Company wants more to use the distribution salary than final salary.", "label": "e"} +{"uid": "id_773", "premise": "final salary and commission distribution. Final Salary Scheme: -They provide benefits according to a fixed formula. The benefits are based on salary on the date of retirement (guarantees payment of a fraction of the final salary) -Employer assumes all risk -Both the employer and the employee will make contributions into this type of pension scheme -Benefits do not depend on investment returns or annuity rate. Final salary existed before World War Two; the final salary system requires more pension funds that the company has to pay for employees, and distribution salary reduces cost. Companies adopting distribution salary have fewer employees registering for the scheme.", "hypothesis": "The popularization of distribution salary drives its uptake.", "label": "c"} +{"uid": "id_774", "premise": "final salary and commission distribution. Final Salary Scheme: -They provide benefits according to a fixed formula. The benefits are based on salary on the date of retirement (guarantees payment of a fraction of the final salary) -Employer assumes all risk -Both the employer and the employee will make contributions into this type of pension scheme -Benefits do not depend on investment returns or annuity rate. Final salary existed before World War Two; the final salary system requires more pension funds that the company has to pay for employees, and distribution salary reduces cost. Companies adopting distribution salary have fewer employees registering for the scheme.", "hypothesis": "Distribution salary reduces cost", "label": "e"} +{"uid": "id_775", "premise": "final salary and commission distribution. Final Salary Scheme: -They provide benefits according to a fixed formula. The benefits are based on salary on the date of retirement (guarantees payment of a fraction of the final salary) -Employer assumes all risk -Both the employer and the employee will make contributions into this type of pension scheme -Benefits do not depend on investment returns or annuity rate. Final salary existed before World War Two; the final salary system requires more pension funds that the company has to pay for employees, and distribution salary reduces cost. Companies adopting distribution salary have fewer employees registering for the scheme.", "hypothesis": "There are fewer companies that use final salary system.", "label": "n"} +{"uid": "id_776", "premise": "Globalisation is causing a shift in the roles of government and business. Since the end of the Cold War, the rivalry between nations has assumed a predominantly economic form. Foreign policy is increasingly subordinated to commercial policy. Yet at the same time, the joint interests of national governments and corporations are diverging. As corporations become more independent of their national roots, governments will have to attract foreign business investment to become globally competitive. However, because the population at large is unenthusiastic about globalisation, governments risk gaining business while losing votes.", "hypothesis": "The general public does not understand the advantages of competing at the global level.", "label": "n"} +{"uid": "id_777", "premise": "Globalisation is causing a shift in the roles of government and business. Since the end of the Cold War, the rivalry between nations has assumed a predominantly economic form. Foreign policy is increasingly subordinated to commercial policy.
Yet at the same time, the joint interests of national governments and corporations are diverging. As corporations become more independent of their national roots, governments will have to attract foreign business investment to become globally competitive. However, because the population at large is unenthusiastic about globalisation, governments risk gaining business while losing votes.", "hypothesis": "Governments and corporations used to have more similar interests.", "label": "e"} +{"uid": "id_778", "premise": "The Large Hadron Collider (LHC), located underneath the border of France and Switzerland, is currently the biggest experiment in the world. Its construction involved 9,000 magnets, and over 10,000 tons of nitrogen are used for its cooling processes. Scientists and engineers have spent 4.5 billion on building an underground track at CERN, the world's largest particle physics laboratory. This enormous scientific instrument will collect a huge amount of data, but only a small percentage of what is recorded will be useful. When proton atoms travelling almost at light speed collide inside the LHC, theoretical physicists expect new forces and particles to be produced. It may even be possible to study black holes using this experiment.", "hypothesis": "The LHC is the largest experiment ever conducted in Europe.", "label": "e"} +{"uid": "id_779", "premise": "The Large Hadron Collider (LHC), located underneath the border of France and Switzerland, is currently the biggest experiment in the world. Its construction involved 9,000 magnets, and over 10,000 tons of nitrogen are used for its cooling processes. Scientists and engineers have spent 4.5 billion on building an underground track at CERN, the world's largest particle physics laboratory. This enormous scientific instrument will collect a huge amount of data, but only a small percentage of what is recorded will be useful. When proton atoms travelling almost at light speed collide inside the LHC, theoretical physicists expect new forces and particles to be produced. It may even be possible to study black holes using this experiment.", "hypothesis": "The cost of the LHC's track was over 4.5 billion.", "label": "c"} +{"uid": "id_780", "premise": "The Large Hadron Collider (LHC), located underneath the border of France and Switzerland, is currently the biggest experiment in the world. Its construction involved 9,000 magnets, and over 10,000 tons of nitrogen are used for its cooling processes. Scientists and engineers have spent 4.5 billion on building an underground track at CERN, the world's largest particle physics laboratory. This enormous scientific instrument will collect a huge amount of data, but only a small percentage of what is recorded will be useful. When proton atoms travelling almost at light speed collide inside the LHC, theoretical physicists expect new forces and particles to be produced. It may even be possible to study black holes using this experiment.", "hypothesis": "Protons travel around the LHC at light speed.", "label": "c"} +{"uid": "id_781", "premise": "The Large Hadron Collider (LHC), located underneath the border of France and Switzerland, is currently the biggest experiment in the world. Its construction involved 9,000 magnets, and over 10,000 tons of nitrogen are used for its cooling processes. Scientists and engineers have spent 4.5 billion on building an underground track at CERN, the world's largest particle physics laboratory.
This enormous scientific instrument will collect a huge amount of data, but only a small percentage of what is recorded will be useful. When proton atoms travelling almost at light speed collide inside the LHC, theoretical physicists expect new forces and particles to be produced. It may even be possible to study black holes using this experiment.", "hypothesis": "The LHC was designed to study black holes.", "label": "c"} +{"uid": "id_782", "premise": "The Large Hadron Collider (LHC), located underneath the border of France and Switzerland, is currently the biggest experiment in the world. Its construction involved 9,000 magnets, and over 10,000 tons of nitrogen are used for its cooling processes. Scientists and engineers have spent 4.5 billion on building an underground track at CERN, the world's largest particle physics laboratory. This enormous scientific instrument will collect a huge amount of data, but only a small percentage of what is recorded will be useful. When proton atoms travelling almost at light speed collide inside the LHC, theoretical physicists expect new forces and particles to be produced. It may even be possible to study black holes using this experiment.", "hypothesis": "The LHC uses over 10,000 tons of oxygen for its cooling processes.", "label": "c"} +{"uid": "id_783", "premise": "The concept of childhood in the western countries The history of childhood has been a topic of interest in social history since the highly influential 1960 book Centuries of Childhood, written by French historian Aries. He argued that \"childhood\" is a concept created by modern society. A. One of the most hotly debated issues in the history of childhood has been whether childhood is itself a recent invention. The historian Philippe Aries argued that in Western Europe during the Middle Ages (up to about the end of the fifteenth century) children were regarded as miniature adults, with all the intellect and personality that this implies. He scrutinized medieval pictures and diaries, and found no distinction between children and adults as they shared similar leisure activities and often the same type of work. Aries, however, pointed out that this is not to suggest that children were neglected, forsaken or despised. The idea of childhood is not to be confused with affection for children; it corresponds to an awareness of the particular nature of childhood, that particular nature which distinguishes the child from the adult, even the young adult. B. There is a long tradition of the children of the poor playing a functional role in contributing to the family income by working either inside or outside the home. In this sense children are seen as 'useful'. Back in the Middle Ages, children as young as 5 or 6 did important chores for their parents and, from the sixteenth century, were often encouraged (or forced) to leave the family by the age of 9 or 10 to work as servants for wealthier families or to be apprenticed to a trade. C. With industrialization in the eighteenth and nineteenth centuries, a new demand for child labour was created, and many children were forced to work for long hours, in mines, workshops and factories. Social reformers began to question whether labouring long hours from an early age would harm children's growing bodies. They began to recognize the potential of carrying out systematic studies to monitor how far these early deprivations might be affecting children's development. D. Gradually, the concerns of the reformers began to impact on the working conditions of children.
In Britain, the Factory Act of 1833 signified the beginning of legal protection of children from exploitation and was linked to the rise of schools for factory children. The worst forms of child exploitation were gradually eliminated, partly through factory reform but also through the influence of trade unions and economic changes during the nineteenth century which made some forms of child labour redundant. Childhood was increasingly seen as a time for play and education for all children, not just for a privileged minority. Initiating children into work as 'useful' children became less of a priority. As the age for starting full-time work was delayed, so childhood was increasingly understood as a more extended phase of dependency, development and learning. Even so, work continued to play a significant, if less central role in children's lives throughout the later nineteenth and twentieth century. And the 'useful child' has become a controversial image during the first decade of the twenty-first century especially in the context of global concern about large numbers of the world's children engaged in child labour. E. The Factory Act of 1833 established half-time schools which allowed children to work and attend school. But in the 1840s, a large proportion of children never went to school, and if they did, they left by the age of 10 or 11. The situation was very different by the end of the nineteenth century in Britain. The school became central to images of 'a normal' childhood . F. Attending school was no longer a privilege and all children were expected to spend a significant part of their day in a classroom. By going to school, children's lives were now separated from domestic life at home and from the adult world of work. School became an institution dedicated to shaping the minds, behaviour and morals of the young. Education dominated the management of children's waking hours, not just through the hours spent in classrooms but through 'home' work, the growth of 'after school' activities and the importance attached to 'parental involvement. G. Industrialization, urbanization and mass schooling also set new challenges for those responsible for protecting children's welfare, and promoting their learning. Increasingly, children were being treated as a group with distinctive needs and they were organized into groups according to their age. For example, teachers needed to know what to expect of children in their classrooms, what kinds of instruction were appropriate for different age groups and how best to assess children's progress. They also wanted tools that could enable them to sort and select children according to their abilities and potential.", "hypothesis": "During the Middle Age, going to work necessarily means children were unloved indicated by Aries.", "label": "c"} +{"uid": "id_784", "premise": "he concept of childhood in the western countries The history of childhood has been a topic of interest in social history since the highly influential 1960 book Centuries of Childhood, written by French historian Aries. He argued that \"childhood\" is a concept created by modern society. A. One of the most hotly debated issues in the history of childhood has been whether childhood is itself a recent invention. The historian Philippe Aries argued that in Western Europe during the Middle Ages (up to about the end of the fifteenth century) children were regarded as miniature adults, with all the intellect and personality that this implies. 
He scrutinized medieval pictures and diaries, and found no distinction between children and adults as they shared similar leisure activities and often the same type of work. Aries, however, pointed out that this is not to suggest that children were neglected, forsaken or despised. The idea of childhood is not to be confused with affection for children; it corresponds to an awareness of the particular nature of childhood, that particular nature which distinguishes the child from the adult, even the young adult. B. There is a long tradition of the children of the poor playing a functional role in contributing to the family income by working either inside or outside the home. In this sense children are seen as 'useful. Back in the Middle Ages, children as young as 5 or 6 did important chores for their parents and, from the sixteenth century, were often encouraged (or forced) to leave the family by the age of 9 or 10 to work as servants for wealthier families or to be apprenticed to a trade. C. With industrialization in the eighteenth and nineteenth centuries, a newdemand for child labour was created, and many children were forced to work for long hours, in mines, workshops and factories. Social reformers began to question whether labouring long hours from an early age would harm children's growing bodies. They began to recognize the potential of carrying out systematic studies to monitor how far these early deprivations might be affecting children's development. D. Gradually, the concerns of the reformers began to impact on the working conditions of children. In Britain, the Factory Act of 1833 signified the beginning of legal protection of children from exploitation and was linked to the rise of schools for factory children. The worst forms of child exploitation were gradually eliminated, partly through factory reform but also through the influence of trade unions and economic changes during the nineteenth century which made some forms of child labour redundant. Childhood was increasingly seen as a time for play and education for all children, not just for a privileged minority. Initiating children into work as 'useful' children became less of a priority. As the age for starting full-time work was delayed, so childhood was increasingly understood as a more extended phase of dependency, development and learning. Even so, work continued to play a significant, if less central role in children's lives throughout the later nineteenth and twentieth century. And the 'useful child' has become a controversial image during the first decade of the twenty-first century especially in the context of global concern about large numbers of the world's children engaged in child labour. E. The Factory Act of 1833 established half-time schools which allowed children to work and attend school. But in the 1840s, a large proportion of children never went to school, and if they did, they left by the age of 10 or 11. The situation was very different by the end of the nineteenth century in Britain. The school became central to images of 'a normal' childhood . F. Attending school was no longer a privilege and all children were expected to spend a significant part of their day in a classroom. By going to school, children's lives were now separated from domestic life at home and from the adult world of work. School became an institution dedicated to shaping the minds, behaviour and morals of the young. 
Education dominated the management of children's waking hours, not just through the hours spent in classrooms but through 'home' work, the growth of 'after school' activities and the importance attached to 'parental involvement. G. Industrialization, urbanization and mass schooling also set new challenges for those responsible for protecting children's welfare, and promoting their learning. Increasingly, children were being treated as a group with distinctive needs and they were organized into groups according to their age. For example, teachers needed to know what to expect of children in their classrooms, what kinds of instruction were appropriate for different age groups and how best to assess children's progress. They also wanted tools that could enable them to sort and select children according to their abilities and potential.", "hypothesis": "Scientists think that overworked labour damages the health of young children", "label": "e"} +{"uid": "id_785", "premise": "he concept of childhood in the western countries The history of childhood has been a topic of interest in social history since the highly influential 1960 book Centuries of Childhood, written by French historian Aries. He argued that \"childhood\" is a concept created by modern society. A. One of the most hotly debated issues in the history of childhood has been whether childhood is itself a recent invention. The historian Philippe Aries argued that in Western Europe during the Middle Ages (up to about the end of the fifteenth century) children were regarded as miniature adults, with all the intellect and personality that this implies. He scrutinized medieval pictures and diaries, and found no distinction between children and adults as they shared similar leisure activities and often the same type of work. Aries, however, pointed out that this is not to suggest that children were neglected, forsaken or despised. The idea of childhood is not to be confused with affection for children; it corresponds to an awareness of the particular nature of childhood, that particular nature which distinguishes the child from the adult, even the young adult. B. There is a long tradition of the children of the poor playing a functional role in contributing to the family income by working either inside or outside the home. In this sense children are seen as 'useful. Back in the Middle Ages, children as young as 5 or 6 did important chores for their parents and, from the sixteenth century, were often encouraged (or forced) to leave the family by the age of 9 or 10 to work as servants for wealthier families or to be apprenticed to a trade. C. With industrialization in the eighteenth and nineteenth centuries, a newdemand for child labour was created, and many children were forced to work for long hours, in mines, workshops and factories. Social reformers began to question whether labouring long hours from an early age would harm children's growing bodies. They began to recognize the potential of carrying out systematic studies to monitor how far these early deprivations might be affecting children's development. D. Gradually, the concerns of the reformers began to impact on the working conditions of children. In Britain, the Factory Act of 1833 signified the beginning of legal protection of children from exploitation and was linked to the rise of schools for factory children. 
The worst forms of child exploitation were gradually eliminated, partly through factory reform but also through the influence of trade unions and economic changes during the nineteenth century which made some forms of child labour redundant. Childhood was increasingly seen as a time for play and education for all children, not just for a privileged minority. Initiating children into work as 'useful' children became less of a priority. As the age for starting full-time work was delayed, so childhood was increasingly understood as a more extended phase of dependency, development and learning. Even so, work continued to play a significant, if less central role in children's lives throughout the later nineteenth and twentieth century. And the 'useful child' has become a controversial image during the first decade of the twenty-first century especially in the context of global concern about large numbers of the world's children engaged in child labour. E. The Factory Act of 1833 established half-time schools which allowed children to work and attend school. But in the 1840s, a large proportion of children never went to school, and if they did, they left by the age of 10 or 11. The situation was very different by the end of the nineteenth century in Britain. The school became central to images of 'a normal' childhood . F. Attending school was no longer a privilege and all children were expected to spend a significant part of their day in a classroom. By going to school, children's lives were now separated from domestic life at home and from the adult world of work. School became an institution dedicated to shaping the minds, behaviour and morals of the young. Education dominated the management of children's waking hours, not just through the hours spent in classrooms but through 'home' work, the growth of 'after school' activities and the importance attached to 'parental involvement. G. Industrialization, urbanization and mass schooling also set new challenges for those responsible for protecting children's welfare, and promoting their learning. Increasingly, children were being treated as a group with distinctive needs and they were organized into groups according to their age. For example, teachers needed to know what to expect of children in their classrooms, what kinds of instruction were appropriate for different age groups and how best to assess children's progress. They also wanted tools that could enable them to sort and select children according to their abilities and potential.", "hypothesis": "Aries pointed out that children did different types of work as adults during the Middle Age.", "label": "c"} +{"uid": "id_786", "premise": "he concept of childhood in the western countries The history of childhood has been a topic of interest in social history since the highly influential 1960 book Centuries of Childhood, written by French historian Aries. He argued that \"childhood\" is a concept created by modern society. A. One of the most hotly debated issues in the history of childhood has been whether childhood is itself a recent invention. The historian Philippe Aries argued that in Western Europe during the Middle Ages (up to about the end of the fifteenth century) children were regarded as miniature adults, with all the intellect and personality that this implies. He scrutinized medieval pictures and diaries, and found no distinction between children and adults as they shared similar leisure activities and often the same type of work. 
Aries, however, pointed out that this is not to suggest that children were neglected, forsaken or despised. The idea of childhood is not to be confused with affection for children; it corresponds to an awareness of the particular nature of childhood, that particular nature which distinguishes the child from the adult, even the young adult. B. There is a long tradition of the children of the poor playing a functional role in contributing to the family income by working either inside or outside the home. In this sense children are seen as 'useful. Back in the Middle Ages, children as young as 5 or 6 did important chores for their parents and, from the sixteenth century, were often encouraged (or forced) to leave the family by the age of 9 or 10 to work as servants for wealthier families or to be apprenticed to a trade. C. With industrialization in the eighteenth and nineteenth centuries, a newdemand for child labour was created, and many children were forced to work for long hours, in mines, workshops and factories. Social reformers began to question whether labouring long hours from an early age would harm children's growing bodies. They began to recognize the potential of carrying out systematic studies to monitor how far these early deprivations might be affecting children's development. D. Gradually, the concerns of the reformers began to impact on the working conditions of children. In Britain, the Factory Act of 1833 signified the beginning of legal protection of children from exploitation and was linked to the rise of schools for factory children. The worst forms of child exploitation were gradually eliminated, partly through factory reform but also through the influence of trade unions and economic changes during the nineteenth century which made some forms of child labour redundant. Childhood was increasingly seen as a time for play and education for all children, not just for a privileged minority. Initiating children into work as 'useful' children became less of a priority. As the age for starting full-time work was delayed, so childhood was increasingly understood as a more extended phase of dependency, development and learning. Even so, work continued to play a significant, if less central role in children's lives throughout the later nineteenth and twentieth century. And the 'useful child' has become a controversial image during the first decade of the twenty-first century especially in the context of global concern about large numbers of the world's children engaged in child labour. E. The Factory Act of 1833 established half-time schools which allowed children to work and attend school. But in the 1840s, a large proportion of children never went to school, and if they did, they left by the age of 10 or 11. The situation was very different by the end of the nineteenth century in Britain. The school became central to images of 'a normal' childhood . F. Attending school was no longer a privilege and all children were expected to spend a significant part of their day in a classroom. By going to school, children's lives were now separated from domestic life at home and from the adult world of work. School became an institution dedicated to shaping the minds, behaviour and morals of the young. Education dominated the management of children's waking hours, not just through the hours spent in classrooms but through 'home' work, the growth of 'after school' activities and the importance attached to 'parental involvement. G. 
Industrialization, urbanization and mass schooling also set new challenges for those responsible for protecting children's welfare, and promoting their learning. Increasingly, children were being treated as a group with distinctive needs and they were organized into groups according to their age. For example, teachers needed to know what to expect of children in their classrooms, what kinds of instruction were appropriate for different age groups and how best to assess children's progress. They also wanted tools that could enable them to sort and select children according to their abilities and potential.", "hypothesis": "the rise of trade union majorly contributed to the protection children fromexploitation in 19 th century", "label": "n"} +{"uid": "id_787", "premise": "he concept of childhood in the western countries The history of childhood has been a topic of interest in social history since the highly influential 1960 book Centuries of Childhood, written by French historian Aries. He argued that \"childhood\" is a concept created by modern society. A. One of the most hotly debated issues in the history of childhood has been whether childhood is itself a recent invention. The historian Philippe Aries argued that in Western Europe during the Middle Ages (up to about the end of the fifteenth century) children were regarded as miniature adults, with all the intellect and personality that this implies. He scrutinized medieval pictures and diaries, and found no distinction between children and adults as they shared similar leisure activities and often the same type of work. Aries, however, pointed out that this is not to suggest that children were neglected, forsaken or despised. The idea of childhood is not to be confused with affection for children; it corresponds to an awareness of the particular nature of childhood, that particular nature which distinguishes the child from the adult, even the young adult. B. There is a long tradition of the children of the poor playing a functional role in contributing to the family income by working either inside or outside the home. In this sense children are seen as 'useful. Back in the Middle Ages, children as young as 5 or 6 did important chores for their parents and, from the sixteenth century, were often encouraged (or forced) to leave the family by the age of 9 or 10 to work as servants for wealthier families or to be apprenticed to a trade. C. With industrialization in the eighteenth and nineteenth centuries, a newdemand for child labour was created, and many children were forced to work for long hours, in mines, workshops and factories. Social reformers began to question whether labouring long hours from an early age would harm children's growing bodies. They began to recognize the potential of carrying out systematic studies to monitor how far these early deprivations might be affecting children's development. D. Gradually, the concerns of the reformers began to impact on the working conditions of children. In Britain, the Factory Act of 1833 signified the beginning of legal protection of children from exploitation and was linked to the rise of schools for factory children. The worst forms of child exploitation were gradually eliminated, partly through factory reform but also through the influence of trade unions and economic changes during the nineteenth century which made some forms of child labour redundant. Childhood was increasingly seen as a time for play and education for all children, not just for a privileged minority. 
Initiating children into work as 'useful' children became less of a priority. As the age for starting full-time work was delayed, so childhood was increasingly understood as a more extended phase of dependency, development and learning. Even so, work continued to play a significant, if less central role in children's lives throughout the later nineteenth and twentieth century. And the 'useful child' has become a controversial image during the first decade of the twenty-first century especially in the context of global concern about large numbers of the world's children engaged in child labour. E. The Factory Act of 1833 established half-time schools which allowed children to work and attend school. But in the 1840s, a large proportion of children never went to school, and if they did, they left by the age of 10 or 11. The situation was very different by the end of the nineteenth century in Britain. The school became central to images of 'a normal' childhood . F. Attending school was no longer a privilege and all children were expected to spend a significant part of their day in a classroom. By going to school, children's lives were now separated from domestic life at home and from the adult world of work. School became an institution dedicated to shaping the minds, behaviour and morals of the young. Education dominated the management of children's waking hours, not just through the hours spent in classrooms but through 'home' work, the growth of 'after school' activities and the importance attached to 'parental involvement. G. Industrialization, urbanization and mass schooling also set new challenges for those responsible for protecting children's welfare, and promoting their learning. Increasingly, children were being treated as a group with distinctive needs and they were organized into groups according to their age. For example, teachers needed to know what to expect of children in their classrooms, what kinds of instruction were appropriate for different age groups and how best to assess children's progress. They also wanted tools that could enable them to sort and select children according to their abilities and potential.", "hypothesis": "By the aid of half-time schools, most children went to school in the mid of 19 century.", "label": "c"} +{"uid": "id_788", "premise": "he concept of childhood in the western countries The history of childhood has been a topic of interest in social history since the highly influential 1960 book Centuries of Childhood, written by French historian Aries. He argued that \"childhood\" is a concept created by modern society. A. One of the most hotly debated issues in the history of childhood has been whether childhood is itself a recent invention. The historian Philippe Aries argued that in Western Europe during the Middle Ages (up to about the end of the fifteenth century) children were regarded as miniature adults, with all the intellect and personality that this implies. He scrutinized medieval pictures and diaries, and found no distinction between children and adults as they shared similar leisure activities and often the same type of work. Aries, however, pointed out that this is not to suggest that children were neglected, forsaken or despised. The idea of childhood is not to be confused with affection for children; it corresponds to an awareness of the particular nature of childhood, that particular nature which distinguishes the child from the adult, even the young adult. B. 
There is a long tradition of the children of the poor playing a functional role in contributing to the family income by working either inside or outside the home. In this sense children are seen as 'useful'. Back in the Middle Ages, children as young as 5 or 6 did important chores for their parents and, from the sixteenth century, were often encouraged (or forced) to leave the family by the age of 9 or 10 to work as servants for wealthier families or to be apprenticed to a trade. C. With industrialization in the eighteenth and nineteenth centuries, a new demand for child labour was created, and many children were forced to work for long hours, in mines, workshops and factories. Social reformers began to question whether labouring long hours from an early age would harm children's growing bodies. They began to recognize the potential of carrying out systematic studies to monitor how far these early deprivations might be affecting children's development. D. Gradually, the concerns of the reformers began to impact on the working conditions of children. In Britain, the Factory Act of 1833 signified the beginning of legal protection of children from exploitation and was linked to the rise of schools for factory children. The worst forms of child exploitation were gradually eliminated, partly through factory reform but also through the influence of trade unions and economic changes during the nineteenth century which made some forms of child labour redundant. Childhood was increasingly seen as a time for play and education for all children, not just for a privileged minority. Initiating children into work as 'useful' children became less of a priority. As the age for starting full-time work was delayed, so childhood was increasingly understood as a more extended phase of dependency, development and learning. Even so, work continued to play a significant, if less central, role in children's lives throughout the later nineteenth and twentieth century. And the 'useful child' has become a controversial image during the first decade of the twenty-first century, especially in the context of global concern about large numbers of the world's children engaged in child labour. E. The Factory Act of 1833 established half-time schools which allowed children to work and attend school. But in the 1840s, a large proportion of children never went to school, and if they did, they left by the age of 10 or 11. The situation was very different by the end of the nineteenth century in Britain. The school became central to images of 'a normal' childhood. F. Attending school was no longer a privilege and all children were expected to spend a significant part of their day in a classroom. By going to school, children's lives were now separated from domestic life at home and from the adult world of work. School became an institution dedicated to shaping the minds, behaviour and morals of the young. Education dominated the management of children's waking hours, not just through the hours spent in classrooms but through 'home' work, the growth of 'after school' activities and the importance attached to 'parental involvement'. G. Industrialization, urbanization and mass schooling also set new challenges for those responsible for protecting children's welfare, and promoting their learning. Increasingly, children were being treated as a group with distinctive needs and they were organized into groups according to their age. 
For example, teachers needed to know what to expect of children in their classrooms, what kinds of instruction were appropriate for different age groups and how best to assess children's progress. They also wanted tools that could enable them to sort and select children according to their abilities and potential.", "hypothesis": "In the 20th century almost all children needed to go to school on a full-time schedule.", "label": "n"} +{"uid": "id_789", "premise": "The concept of childhood in the western countries The history of childhood has been a topic of interest in social history since the highly influential 1960 book Centuries of Childhood, written by French historian Aries. He argued that \"childhood\" is a concept created by modern society. A. One of the most hotly debated issues in the history of childhood has been whether childhood is itself a recent invention. The historian Philippe Aries argued that in Western Europe during the Middle Ages (up to about the end of the fifteenth century) children were regarded as miniature adults, with all the intellect and personality that this implies. He scrutinized medieval pictures and diaries, and found no distinction between children and adults as they shared similar leisure activities and often the same type of work. Aries, however, pointed out that this is not to suggest that children were neglected, forsaken or despised. The idea of childhood is not to be confused with affection for children; it corresponds to an awareness of the particular nature of childhood, that particular nature which distinguishes the child from the adult, even the young adult. B. There is a long tradition of the children of the poor playing a functional role in contributing to the family income by working either inside or outside the home. In this sense children are seen as 'useful'. Back in the Middle Ages, children as young as 5 or 6 did important chores for their parents and, from the sixteenth century, were often encouraged (or forced) to leave the family by the age of 9 or 10 to work as servants for wealthier families or to be apprenticed to a trade. C. With industrialization in the eighteenth and nineteenth centuries, a new demand for child labour was created, and many children were forced to work for long hours, in mines, workshops and factories. Social reformers began to question whether labouring long hours from an early age would harm children's growing bodies. They began to recognize the potential of carrying out systematic studies to monitor how far these early deprivations might be affecting children's development. D. Gradually, the concerns of the reformers began to impact on the working conditions of children. In Britain, the Factory Act of 1833 signified the beginning of legal protection of children from exploitation and was linked to the rise of schools for factory children. The worst forms of child exploitation were gradually eliminated, partly through factory reform but also through the influence of trade unions and economic changes during the nineteenth century which made some forms of child labour redundant. Childhood was increasingly seen as a time for play and education for all children, not just for a privileged minority. Initiating children into work as 'useful' children became less of a priority. As the age for starting full-time work was delayed, so childhood was increasingly understood as a more extended phase of dependency, development and learning. 
Even so, work continued to play a significant, if less central, role in children's lives throughout the later nineteenth and twentieth century. And the 'useful child' has become a controversial image during the first decade of the twenty-first century, especially in the context of global concern about large numbers of the world's children engaged in child labour. E. The Factory Act of 1833 established half-time schools which allowed children to work and attend school. But in the 1840s, a large proportion of children never went to school, and if they did, they left by the age of 10 or 11. The situation was very different by the end of the nineteenth century in Britain. The school became central to images of 'a normal' childhood. F. Attending school was no longer a privilege and all children were expected to spend a significant part of their day in a classroom. By going to school, children's lives were now separated from domestic life at home and from the adult world of work. School became an institution dedicated to shaping the minds, behaviour and morals of the young. Education dominated the management of children's waking hours, not just through the hours spent in classrooms but through 'home' work, the growth of 'after school' activities and the importance attached to 'parental involvement'. G. Industrialization, urbanization and mass schooling also set new challenges for those responsible for protecting children's welfare, and promoting their learning. Increasingly, children were being treated as a group with distinctive needs and they were organized into groups according to their age. For example, teachers needed to know what to expect of children in their classrooms, what kinds of instruction were appropriate for different age groups and how best to assess children's progress. They also wanted tools that could enable them to sort and select children according to their abilities and potential.", "hypothesis": "Nowadays, children's needs are highly differentiated and categorised based on how old they are", "label": "e"} +{"uid": "id_790", "premise": "Lack of water is an ever-worsening global crisis, with over forty percent of the world's population now suffering from regular and severe water shortage. Increases in population mean that there is less water available per capita. In addition, pollution-related global warming is making some countries, which were already short of water, even hotter and drier. Demand for water is doubling every twenty years and there are predictions that, in the future, nations may go to war to fight for its control.", "hypothesis": "Neither climate change nor population expansion is exacerbating the water shortage problem.", "label": "c"} +{"uid": "id_791", "premise": "Lack of water is an ever-worsening global crisis, with over forty percent of the world's population now suffering from regular and severe water shortage. Increases in population mean that there is less water available per capita. In addition, pollution-related global warming is making some countries, which were already short of water, even hotter and drier. Demand for water is doubling every twenty years and there are predictions that, in the future, nations may go to war to fight for its control.", "hypothesis": "Oil shortages, more than a lack of water, are likely to result in war.", "label": "n"} +{"uid": "id_792", "premise": "Lack of water is an ever-worsening global crisis, with over forty percent of the world's population now suffering from regular and severe water shortage. 
Increases in population mean that there is less water available per capita. In addition, pollution-related global warming is making some countries, which were already short of water, even hotter and drier. Demand for water is doubling every twenty years and there are predictions that, in the future, nations may go to war to fight for its control.", "hypothesis": "Some countries are not affected by global warming.", "label": "n"} +{"uid": "id_793", "premise": "new weapon to fight cancer British scientists are preparing to launch trials of a radical new way to fight cancer, which kills tumours by infecting them with viruses like the common cold. If successful, virus therapy could eventually form a third pillar alongside radiotherapy and chemotherapy in the standard arsenal against cancer, while avoiding some of the debilitating side-effects. Leonard Seymour, a professor of gene therapy at Oxford University, who has been working on the virus therapy with colleagues in London and the US, will lead the trials later this year. Cancer Research UK said yesterday that it was excited by the potential of Prof Seymour's pioneering techniques. One of the country's leading geneticists, Prof Seymour has been working with viruses that kill cancer cells directly, while avoiding harm to healthy tissue. \"In principle, you've got something which could be many times more effective than regular chemotherapy, \" he said. Cancer-killing viruses exploit the fact that cancer cells suppress the body's local immune system. \"If a cancer doesn't do that, the immune system wipes it out. If you can get a virus into a tumour, viruses find them a very good place to be because there's no immune system to stop them replicating. You can regard it as the cancer's Achilles' heel. \" Only a small amount of the virus needs to get to the cancer. \"They replicate, you get a million copies in each cell and the cell bursts and they infect the tumour cells adjacent and repeat the process, \" said Prof Seymour. Preliminary research on mice shows that the viruses work well on tumours resistant to standard cancer drugs. \"It's an interesting possibility that they may have an advantage in killing drug-resistant tumours, which could be quite different to anything we've had before. \" Researchers have known for some time that viruses can kill tumour cells and some aspects of the work have already been published in scientific journals. American scientists have previously injected viruses directly into tumours but this technique will not work if the cancer is inaccessible or has spread throughout the body. Prof Seymour's innovative solution is to mask the virus from the body's immune system, effectively allowing the viruses to do what chemotherapy drugs do - spread through the blood and reach tumours wherever they are. The big hurdle has always been to find a way to deliver viruses to tumours via the bloodstream without the body's immune system destroying them on the way. \"What we've done is make chemical modifications to the virus to put a polymer coat around it - it's a stealth virus when you inject it, \" he said. After the stealth virus infects the tumour, it replicates, but the copies do not have the chemical modifications. If they escape from the tumour, the copies will be quickly recognised and mopped up by the body's immune system. The therapy would be especially useful for secondary cancers, called metastases, which sometimes spread around the body after the first tumour appears. 
\"There's an awful statistic of patients in the west ... with malignant cancers; 75% of them go on to die from metastases, \" said Prof Seymour. Two viruses are likely to be examined in the first clinical trials: adenovirus, which normally causes a cold-like illness, and vaccinia, which causes cowpox and is also used in the vaccine against smallpox. For safety reasons, both will be disabled to make them less pathogenic in the trial, but Prof Seymour said he eventually hopes to use natural viruses. The first trials will use uncoated adenovirus and vaccinia and will be delivered locally to liver tumours, in order to establish whether the treatment is safe in humans and what dose of virus will be needed. Several more years of trials will be needed, eventually also on the polymer-coated viruses, before the therapy can be considered for use in the NHS. Though the approach will be examined at first for cancers that do not respond to conventional treatments, Prof Seymour hopes that one day it might be applied to all cancers.", "hypothesis": "Virus therapy, if successful, has an advantage in eliminating side-effects.", "label": "c"} +{"uid": "id_794", "premise": "new weapon to fight cancer British scientists are preparing to launch trials of a radical new way to fight cancer, which kills tumours by infecting them with viruses like the common cold. If successful, virus therapy could eventually form a third pillar alongside radiotherapy and chemotherapy in the standard arsenal against cancer, while avoiding some of the debilitating side-effects. Leonard Seymour, a professor of gene therapy at Oxford University, who has been working on the virus therapy with colleagues in London and the US, will lead the trials later this year. Cancer Research UK said yesterday that it was excited by the potential of Prof Seymour's pioneering techniques. One of the country's leading geneticists, Prof Seymour has been working with viruses that kill cancer cells directly, while avoiding harm to healthy tissue. \"In principle, you've got something which could be many times more effective than regular chemotherapy, \" he said. Cancer-killing viruses exploit the fact that cancer cells suppress the body's local immune system. \"If a cancer doesn't do that, the immune system wipes it out. If you can get a virus into a tumour, viruses find them a very good place to be because there's no immune system to stop them replicating. You can regard it as the cancer's Achilles' heel. \" Only a small amount of the virus needs to get to the cancer. \"They replicate, you get a million copies in each cell and the cell bursts and they infect the tumour cells adjacent and repeat the process, \" said Prof Seymour. Preliminary research on mice shows that the viruses work well on tumours resistant to standard cancer drugs. \"It's an interesting possibility that they may have an advantage in killing drug-resistant tumours, which could be quite different to anything we've had before. \" Researchers have known for some time that viruses can kill tumour cells and some aspects of the work have already been published in scientific journals. American scientists have previously injected viruses directly into tumours but this technique will not work if the cancer is inaccessible or has spread throughout the body. Prof Seymour's innovative solution is to mask the virus from the body's immune system, effectively allowing the viruses to do what chemotherapy drugs do - spread through the blood and reach tumours wherever they are. 
The big hurdle has always been to find a way to deliver viruses to tumours via the bloodstream without the body's immune system destroying them on the way. \"What we've done is make chemical modifications to the virus to put a polymer coat around it - it's a stealth virus when you inject it, \" he said. After the stealth virus infects the tumour, it replicates, but the copies do not have the chemical modifications. If they escape from the tumour, the copies will be quickly recognised and mopped up by the body's immune system. The therapy would be especially useful for secondary cancers, called metastases, which sometimes spread around the body after the first tumour appears. \"There's an awful statistic of patients in the west ... with malignant cancers; 75% of them go on to die from metastases, \" said Prof Seymour. Two viruses are likely to be examined in the first clinical trials: adenovirus, which normally causes a cold-like illness, and vaccinia, which causes cowpox and is also used in the vaccine against smallpox. For safety reasons, both will be disabled to make them less pathogenic in the trial, but Prof Seymour said he eventually hopes to use natural viruses. The first trials will use uncoated adenovirus and vaccinia and will be delivered locally to liver tumours, in order to establish whether the treatment is safe in humans and what dose of virus will be needed. Several more years of trials will be needed, eventually also on the polymer-coated viruses, before the therapy can be considered for use in the NHS. Though the approach will be examined at first for cancers that do not respond to conventional treatments, Prof Seymour hopes that one day it might be applied to all cancers.", "hypothesis": "Cancer Research UK is quite hopeful about Professor Seymours work on the virus therapy.", "label": "e"} +{"uid": "id_795", "premise": "new weapon to fight cancer British scientists are preparing to launch trials of a radical new way to fight cancer, which kills tumours by infecting them with viruses like the common cold. If successful, virus therapy could eventually form a third pillar alongside radiotherapy and chemotherapy in the standard arsenal against cancer, while avoiding some of the debilitating side-effects. Leonard Seymour, a professor of gene therapy at Oxford University, who has been working on the virus therapy with colleagues in London and the US, will lead the trials later this year. Cancer Research UK said yesterday that it was excited by the potential of Prof Seymour's pioneering techniques. One of the country's leading geneticists, Prof Seymour has been working with viruses that kill cancer cells directly, while avoiding harm to healthy tissue. \"In principle, you've got something which could be many times more effective than regular chemotherapy, \" he said. Cancer-killing viruses exploit the fact that cancer cells suppress the body's local immune system. \"If a cancer doesn't do that, the immune system wipes it out. If you can get a virus into a tumour, viruses find them a very good place to be because there's no immune system to stop them replicating. You can regard it as the cancer's Achilles' heel. \" Only a small amount of the virus needs to get to the cancer. \"They replicate, you get a million copies in each cell and the cell bursts and they infect the tumour cells adjacent and repeat the process, \" said Prof Seymour. Preliminary research on mice shows that the viruses work well on tumours resistant to standard cancer drugs. 
\"It's an interesting possibility that they may have an advantage in killing drug-resistant tumours, which could be quite different to anything we've had before. \" Researchers have known for some time that viruses can kill tumour cells and some aspects of the work have already been published in scientific journals. American scientists have previously injected viruses directly into tumours but this technique will not work if the cancer is inaccessible or has spread throughout the body. Prof Seymour's innovative solution is to mask the virus from the body's immune system, effectively allowing the viruses to do what chemotherapy drugs do - spread through the blood and reach tumours wherever they are. The big hurdle has always been to find a way to deliver viruses to tumours via the bloodstream without the body's immune system destroying them on the way. \"What we've done is make chemical modifications to the virus to put a polymer coat around it - it's a stealth virus when you inject it, \" he said. After the stealth virus infects the tumour, it replicates, but the copies do not have the chemical modifications. If they escape from the tumour, the copies will be quickly recognised and mopped up by the body's immune system. The therapy would be especially useful for secondary cancers, called metastases, which sometimes spread around the body after the first tumour appears. \"There's an awful statistic of patients in the west ... with malignant cancers; 75% of them go on to die from metastases, \" said Prof Seymour. Two viruses are likely to be examined in the first clinical trials: adenovirus, which normally causes a cold-like illness, and vaccinia, which causes cowpox and is also used in the vaccine against smallpox. For safety reasons, both will be disabled to make them less pathogenic in the trial, but Prof Seymour said he eventually hopes to use natural viruses. The first trials will use uncoated adenovirus and vaccinia and will be delivered locally to liver tumours, in order to establish whether the treatment is safe in humans and what dose of virus will be needed. Several more years of trials will be needed, eventually also on the polymer-coated viruses, before the therapy can be considered for use in the NHS. Though the approach will be examined at first for cancers that do not respond to conventional treatments, Prof Seymour hopes that one day it might be applied to all cancers.", "hypothesis": "Virus can kill cancer cells and stop them from growing again.", "label": "n"} +{"uid": "id_796", "premise": "new weapon to fight cancer British scientists are preparing to launch trials of a radical new way to fight cancer, which kills tumours by infecting them with viruses like the common cold. If successful, virus therapy could eventually form a third pillar alongside radiotherapy and chemotherapy in the standard arsenal against cancer, while avoiding some of the debilitating side-effects. Leonard Seymour, a professor of gene therapy at Oxford University, who has been working on the virus therapy with colleagues in London and the US, will lead the trials later this year. Cancer Research UK said yesterday that it was excited by the potential of Prof Seymour's pioneering techniques. One of the country's leading geneticists, Prof Seymour has been working with viruses that kill cancer cells directly, while avoiding harm to healthy tissue. \"In principle, you've got something which could be many times more effective than regular chemotherapy, \" he said. 
Cancer-killing viruses exploit the fact that cancer cells suppress the body's local immune system. \"If a cancer doesn't do that, the immune system wipes it out. If you can get a virus into a tumour, viruses find them a very good place to be because there's no immune system to stop them replicating. You can regard it as the cancer's Achilles' heel. \" Only a small amount of the virus needs to get to the cancer. \"They replicate, you get a million copies in each cell and the cell bursts and they infect the tumour cells adjacent and repeat the process, \" said Prof Seymour. Preliminary research on mice shows that the viruses work well on tumours resistant to standard cancer drugs. \"It's an interesting possibility that they may have an advantage in killing drug-resistant tumours, which could be quite different to anything we've had before. \" Researchers have known for some time that viruses can kill tumour cells and some aspects of the work have already been published in scientific journals. American scientists have previously injected viruses directly into tumours but this technique will not work if the cancer is inaccessible or has spread throughout the body. Prof Seymour's innovative solution is to mask the virus from the body's immune system, effectively allowing the viruses to do what chemotherapy drugs do - spread through the blood and reach tumours wherever they are. The big hurdle has always been to find a way to deliver viruses to tumours via the bloodstream without the body's immune system destroying them on the way. \"What we've done is make chemical modifications to the virus to put a polymer coat around it - it's a stealth virus when you inject it, \" he said. After the stealth virus infects the tumour, it replicates, but the copies do not have the chemical modifications. If they escape from the tumour, the copies will be quickly recognised and mopped up by the body's immune system. The therapy would be especially useful for secondary cancers, called metastases, which sometimes spread around the body after the first tumour appears. \"There's an awful statistic of patients in the west ... with malignant cancers; 75% of them go on to die from metastases, \" said Prof Seymour. Two viruses are likely to be examined in the first clinical trials: adenovirus, which normally causes a cold-like illness, and vaccinia, which causes cowpox and is also used in the vaccine against smallpox. For safety reasons, both will be disabled to make them less pathogenic in the trial, but Prof Seymour said he eventually hopes to use natural viruses. The first trials will use uncoated adenovirus and vaccinia and will be delivered locally to liver tumours, in order to establish whether the treatment is safe in humans and what dose of virus will be needed. Several more years of trials will be needed, eventually also on the polymer-coated viruses, before the therapy can be considered for use in the NHS. Though the approach will be examined at first for cancers that do not respond to conventional treatments, Prof Seymour hopes that one day it might be applied to all cancers.", "hypothesis": "Cancer's Achilles' heel refers to the fact that a virus may stay safely in a tumor and replicate.", "label": "e"} +{"uid": "id_797", "premise": "new weapon to fight cancer British scientists are preparing to launch trials of a radical new way to fight cancer, which kills tumours by infecting them with viruses like the common cold. 
If successful, virus therapy could eventually form a third pillar alongside radiotherapy and chemotherapy in the standard arsenal against cancer, while avoiding some of the debilitating side-effects. Leonard Seymour, a professor of gene therapy at Oxford University, who has been working on the virus therapy with colleagues in London and the US, will lead the trials later this year. Cancer Research UK said yesterday that it was excited by the potential of Prof Seymour's pioneering techniques. One of the country's leading geneticists, Prof Seymour has been working with viruses that kill cancer cells directly, while avoiding harm to healthy tissue. \"In principle, you've got something which could be many times more effective than regular chemotherapy, \" he said. Cancer-killing viruses exploit the fact that cancer cells suppress the body's local immune system. \"If a cancer doesn't do that, the immune system wipes it out. If you can get a virus into a tumour, viruses find them a very good place to be because there's no immune system to stop them replicating. You can regard it as the cancer's Achilles' heel. \" Only a small amount of the virus needs to get to the cancer. \"They replicate, you get a million copies in each cell and the cell bursts and they infect the tumour cells adjacent and repeat the process, \" said Prof Seymour. Preliminary research on mice shows that the viruses work well on tumours resistant to standard cancer drugs. \"It's an interesting possibility that they may have an advantage in killing drug-resistant tumours, which could be quite different to anything we've had before. \" Researchers have known for some time that viruses can kill tumour cells and some aspects of the work have already been published in scientific journals. American scientists have previously injected viruses directly into tumours but this technique will not work if the cancer is inaccessible or has spread throughout the body. Prof Seymour's innovative solution is to mask the virus from the body's immune system, effectively allowing the viruses to do what chemotherapy drugs do - spread through the blood and reach tumours wherever they are. The big hurdle has always been to find a way to deliver viruses to tumours via the bloodstream without the body's immune system destroying them on the way. \"What we've done is make chemical modifications to the virus to put a polymer coat around it - it's a stealth virus when you inject it, \" he said. After the stealth virus infects the tumour, it replicates, but the copies do not have the chemical modifications. If they escape from the tumour, the copies will be quickly recognised and mopped up by the body's immune system. The therapy would be especially useful for secondary cancers, called metastases, which sometimes spread around the body after the first tumour appears. \"There's an awful statistic of patients in the west ... with malignant cancers; 75% of them go on to die from metastases, \" said Prof Seymour. Two viruses are likely to be examined in the first clinical trials: adenovirus, which normally causes a cold-like illness, and vaccinia, which causes cowpox and is also used in the vaccine against smallpox. For safety reasons, both will be disabled to make them less pathogenic in the trial, but Prof Seymour said he eventually hopes to use natural viruses. 
The first trials will use uncoated adenovirus and vaccinia and will be delivered locally to liver tumours, in order to establish whether the treatment is safe in humans and what dose of virus will be needed. Several more years of trials will be needed, eventually also on the polymer-coated viruses, before the therapy can be considered for use in the NHS. Though the approach will be examined at first for cancers that do not respond to conventional treatments, Prof Seymour hopes that one day it might be applied to all cancers.", "hypothesis": "To infect the cancer cells, a good deal of viruses should be injected into the tumor.", "label": "c"} +{"uid": "id_798", "premise": "new weapon to fight cancer British scientists are preparing to launch trials of a radical new way to fight cancer, which kills tumours by infecting them with viruses like the common cold. If successful, virus therapy could eventually form a third pillar alongside radiotherapy and chemotherapy in the standard arsenal against cancer, while avoiding some of the debilitating side-effects. Leonard Seymour, a professor of gene therapy at Oxford University, who has been working on the virus therapy with colleagues in London and the US, will lead the trials later this year. Cancer Research UK said yesterday that it was excited by the potential of Prof Seymour's pioneering techniques. One of the country's leading geneticists, Prof Seymour has been working with viruses that kill cancer cells directly, while avoiding harm to healthy tissue. \"In principle, you've got something which could be many times more effective than regular chemotherapy, \" he said. Cancer-killing viruses exploit the fact that cancer cells suppress the body's local immune system. \"If a cancer doesn't do that, the immune system wipes it out. If you can get a virus into a tumour, viruses find them a very good place to be because there's no immune system to stop them replicating. You can regard it as the cancer's Achilles' heel. \" Only a small amount of the virus needs to get to the cancer. \"They replicate, you get a million copies in each cell and the cell bursts and they infect the tumour cells adjacent and repeat the process, \" said Prof Seymour. Preliminary research on mice shows that the viruses work well on tumours resistant to standard cancer drugs. \"It's an interesting possibility that they may have an advantage in killing drug-resistant tumours, which could be quite different to anything we've had before. \" Researchers have known for some time that viruses can kill tumour cells and some aspects of the work have already been published in scientific journals. American scientists have previously injected viruses directly into tumours but this technique will not work if the cancer is inaccessible or has spread throughout the body. Prof Seymour's innovative solution is to mask the virus from the body's immune system, effectively allowing the viruses to do what chemotherapy drugs do - spread through the blood and reach tumours wherever they are. The big hurdle has always been to find a way to deliver viruses to tumours via the bloodstream without the body's immune system destroying them on the way. \"What we've done is make chemical modifications to the virus to put a polymer coat around it - it's a stealth virus when you inject it, \" he said. After the stealth virus infects the tumour, it replicates, but the copies do not have the chemical modifications. 
If they escape from the tumour, the copies will be quickly recognised and mopped up by the body's immune system. The therapy would be especially useful for secondary cancers, called metastases, which sometimes spread around the body after the first tumour appears. \"There's an awful statistic of patients in the west ... with malignant cancers; 75% of them go on to die from metastases, \" said Prof Seymour. Two viruses are likely to be examined in the first clinical trials: adenovirus, which normally causes a cold-like illness, and vaccinia, which causes cowpox and is also used in the vaccine against smallpox. For safety reasons, both will be disabled to make them less pathogenic in the trial, but Prof Seymour said he eventually hopes to use natural viruses. The first trials will use uncoated adenovirus and vaccinia and will be delivered locally to liver tumours, in order to establish whether the treatment is safe in humans and what dose of virus will be needed. Several more years of trials will be needed, eventually also on the polymer-coated viruses, before the therapy can be considered for use in the NHS. Though the approach will be examined at first for cancers that do not respond to conventional treatments, Prof Seymour hopes that one day it might be applied to all cancers.", "hypothesis": "Research on animals indicates that viruses could be used as a new way to treat drug-resistant tumors.", "label": "e"} +{"uid": "id_799", "premise": "some time on the night of October 1st, the Copacabana Club was burnt to the ground. The police are treating the fire as suspicious. The only facts known at this stage are: The club was insured for more than its real value. The club belonged to John Hodges. Les Braithwaite was known to dislike John Hodges. Between October 1st and October 2nd, Les Braithwaite was away from home on a business trip. There were no fatalities. A plan of the club was found in Les Braithwaite's flat.", "hypothesis": "John Hodges could have been at the club when the fire took place.", "label": "e"} +{"uid": "id_800", "premise": "some time on the night of October 1st, the Copacabana Club was burnt to the ground. The police are treating the fire as suspicious. The only facts known at this stage are: The club was insured for more than its real value. The club belonged to John Hodges. Les Braithwaite was known to dislike John Hodges. Between October 1st and October 2nd, Les Braithwaite was away from home on a business trip. There were no fatalities. A plan of the club was found in Les Braithwaite's flat.", "hypothesis": "The flat where the plan was found is close to the club.", "label": "n"} +{"uid": "id_801", "premise": "some time on the night of October 1st, the Copacabana Club was burnt to the ground. The police are treating the fire as suspicious. The only facts known at this stage are: The club was insured for more than its real value. The club belonged to John Hodges. Les Braithwaite was known to dislike John Hodges. Between October 1st and October 2nd, Les Braithwaite was away from home on a business trip. There were no fatalities. A plan of the club was found in Les Braithwaite's flat.", "hypothesis": "A member of John Hodges' family died in the blaze", "label": "c"} +{"uid": "id_802", "premise": "some time on the night of October 1st, the Copacabana Club was burnt to the ground. The police are treating the fire as suspicious. The only facts known at this stage are: The club was insured for more than its real value. The club belonged to John Hodges. 
Les Braithwaite was known to dislike John Hodges. Between October 1st and October 2nd, Les Braithwaite was away from home on a business trip. There were no fatalities. A plan of the club was found in Les Braithwaite's flat.", "hypothesis": "There are definite grounds to arrest John Hodges for arson.", "label": "c"} +{"uid": "id_803", "premise": "some time on the night of October 1st, the Copacabana Club was burnt to the ground. The police are treating the fire as suspicious. The only facts known at this stage are: The club was insured for more than its real value. The club belonged to John Hodges. Les Braithwaite was known to dislike John Hodges. Between October 1st and October 2nd, Les Braithwaite was away from home on a business trip. There were no fatalities. A plan of the club was found in Les Braithwaite's flat.", "hypothesis": "If the insurance company pays out in full, John Hodges stands to profit from the fire.", "label": "n"}
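A minimal loading sketch follows, purely illustrative and not part of the dataset: it assumes Python 3 with the standard library only, a hypothetical local file path, and that the label codes e, n and c stand for entailment, neutral and contradiction, which the records themselves do not state.

import json
from collections import Counter

def load_records(path="test.jsonl"):  # hypothetical path; point this at wherever the file lives
    # Each non-empty line is assumed to be one JSON object with the keys
    # uid, premise, hypothesis and label.
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

if __name__ == "__main__":
    records = load_records()
    # Tally the label distribution; e/n/c are assumed to mean
    # entailment / neutral / contradiction.
    print(len(records), Counter(r["label"] for r in records))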