id (string, 5 to 6 chars) | input (string, 3 to 301 chars) | output (list) | meta (null)
---|---|---|---|
e5mdxv
|
What were some of the other options brought up before settling on the 3/5ths Compromise at the Constitutional Convention?
|
[
{
"answer": "I'm afraid I can only touch on the Great Compromise a bit, but the 3/5 is in my wheelhouse.\n\nYour student's suggestion was the actual position of the less-enslaving states: enslaved people count zero for representation because they are in essentially no other ways treated like human beings. Counting them for purposes of allocating power, particularly power over white men, would have been illogical, absurd, and morally abhorrent to the white North. The argument was that if enslaved people ought to be counted, then so should livestock. It falls out this way because if representation is contingent on freeing enslaved people, then those people are no longer enslaved and don't count as such...at least as a matter of law. The oft-precarious status of free people of color did not much enter into it. \n\nThe large enslavers, of course, wanted it just the other way: they ought to be able to buy power by buying people. If that meant power flowed directly from enslaving, that was just how things ought to be...though most of them are shy about admitting this at the time. That one's worth and fitness for public office and public life derived almost entirely from one's property value -though also critically from being white and a man- was reasonably uncontroversial. They argued that women and children, and also poor white men, did not have the vote yet they were still to be used for representation. So why not enslaved people? \n\nSo far as the larger dichotomy goes between enslaving people and political power, the white South perceives the two as largely identical. They understand, at least in a nebulous way, that the less-enslaving states to their north have become increasingly hostile to enslaving people. Some of them have already enacted emancipation plans that will, over the course of decades, end slavery within their bounds. That wave of emancipationist feeling might even infect the Upper South (Maryland, Virginia, and company around the Chesapeake at this time) and that would place enslaving in a dangerous position. These fears are always very much overblown, but they're a significant engine of southern politics all through the antebellum. You can get ahead by arguing your opponent is soft on slavery and conjuring an external threat on slavery is the way you build a national coalition in the South among polities that otherwise often disagree. \n\nThus enslaving people needs extra protection, which the white South seeks ardently. This includes protection from democracy, though at the time such a concept isn't seen as inherently problematic since the founders are quite openly authoritarian oligarchs. The extra safety comes in many forms, some of which evolve over time, but the biggest are apportionment in the Senate -which was not *only* because of slavery, but people in the room at the time noted that the issue of small vs. large states rapidly dissolved in favor of division between enslaving and free-r states, with the enslaving very enthusiastic for the Senate as we know it- and the 3/5 ratio. With the sections at rough population parity, or even tilted a bit in the more enslaving states' favor since no one had a national census to work with when the decision was made, those extra representatives mean that the white, enslaving South has a veto on close House votes. 
The exact value of that is hard to assess -you'd actually have to reapportion the House every time around to know for sure and the method for doing that changed a few times- but it's very much the case that in the great sectional controversies to come that small margin is a factor. \n\nThe third proslavery provision of the Constitution is the one that enslavers were most keen to boast: the fugitive slave clause. In South Carolina's ratifying convention they made it very explicit: until and unless they had the Constitution, enslavers had no right to go into another state to recover a person who dared steal themselves. In essence, they compelled the North to recognize the status of Southern slavery even within Northern jurisdictions. Massachusetts or Pennsylvania might abolish slavery, but only so far as people enslaved in Massachusetts originally were concerned. Should an enslaved person from Virginia escape to either place, the Constitution granted their enslaver a new power to go and seize them back. Quite what that was going to mean was left unclear in Philadelphia, but when it came to legislation on the matter in 1793 -after a controversial rendition case- the white South were not at all ambiguous. They asked for essentially what became law in 1850: a complete legal obligation for men in free states to render aid in the recapture and rendition of enslaved people, practically on the enslaver's say-so. They had to settle for rather less during the Washington years, but still a far more substantive right than the literally nothing they'd had beforehand.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "9913492",
"title": "Georgia Platform",
"section": "Section::::Georgia Acts.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 355,
"text": "The task of the convention became the creation of a position that both supported the Compromise of 1850 as the final solution to the sectional disputes over slavery while maintaining a strong position for protecting traditional Southern rights. They did this by approving what came to be known as the Georgia Platform. The document in full is as follows:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19468510",
"title": "United States House of Representatives",
"section": "Section::::History.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 533,
"text": "Eventually, the Convention reached the Connecticut Compromise or Great Compromise, under which one house of Congress (the House of Representatives) would provide representation proportional to each state's population, whereas the other (the Senate) would provide equal representation amongst the states. The Constitution was ratified by the requisite number of states (nine out of the 13) in 1788, but its implementation was set for March 4, 1789. The House began work on April 1, 1789, when it achieved a quorum for the first time.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "770920",
"title": "James Guthrie (Kentucky)",
"section": "Section::::Career.:Civil War.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 587,
"text": "The Compromise Committee proposed a plan that included seven constitutional amendments and relied on Henry Clay's Missouri Compromise as a framework. Under the committee's proposal, 36°30' north latitude would continue to divide slave and free territory in the United States, and no more territory would be annexed except with the consent of equal representation from both slave and free states. The delegates to the convention presented this idea to Congress on February 27, 1861 and asked them to call a national convention to consider the question, but Congress rejected this report.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "381531",
"title": "Crittenden–Johnson Resolution",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 565,
"text": "The resolution is sometimes confused with the \"Crittenden Compromise,\" a series of unsuccessful proposals to amend the United States Constitution that were debated after slave states began seceding, in an attempt to prevent the Confederate States of America from leaving the Union. Both measures are sometimes confused with the Corwin Amendment, a proposal to amend the Constitution that was adopted by the 36th Congress which attempted to put slavery and other states' rights under Constitutional protection; it passed Congress but was not ratified by the states.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1884082",
"title": "Peace Conference of 1861",
"section": "Section::::Conference.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 611,
"text": "On February 6, a separate committee, charged with drafting a proposal for the entire convention to consider, was formed. The committee consisted of one representative from each state and was headed by James Guthrie. The entire convention met for three weeks, and its final product was a proposed seven-point constitutional amendment that differed little from the Crittenden Compromise. The key issue, slavery in the territories, was addressed simply by extending the Missouri Compromise line to the Pacific Ocean, with no provision for newly-acquired territory. That section passed by a 9–8 vote of the states.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "483263",
"title": "Three-Fifths Compromise",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 871,
"text": "The Three-Fifths Compromise was a compromise reached among state delegates during the 1787 United States Constitutional Convention. Whether and, if so, how slaves would be counted when determining a state's total population for legislative representation and taxing purposes was important, as this population number would then be used to determine the number of seats that the state would have in the United States House of Representatives for the next ten years. The compromise solution was to count three out of every five slaves as people for this purpose. Its effect was to give the southern states a third more seats in Congress and a third more electoral votes than if slaves had been ignored, but fewer than if slaves and free people had been counted equally. The compromise was proposed by delegate James Wilson and seconded by Charles Pinckney on June 11, 1787.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23949142",
"title": "Ratification Day (United States)",
"section": "Section::::Jefferson's compromise.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 873,
"text": "Jefferson was elected to head a committee of members of both factions and arrived at a compromise. Assuming that only seven states were present, Congress would pass a resolution stating that the seven states present were unanimously in favor of ratification of the treaty, but were in disagreement as to the competency of Congress to ratify with only seven states. That although only seven states were present, their unanimous agreement in favor of ratification would be used to secure peace. The vote would not set a precedent for future decisions; the document would be forwarded to the U.S. ministers in Europe who would be told to wait until a treaty ratified by nine states could arrive, and to request a delay of three months. However, if Britain insisted, then the ministers should use the seven-state ratification, pleading that a full Congress was not in session.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
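The Three-Fifths Compromise extract in the row above makes a quantitative claim (roughly a third more seats than if enslaved people had been ignored, but fewer than if everyone had been counted equally). The short Python sketch below only illustrates how the counting ratio changes a single state's apportionment base; the population figures are hypothetical round numbers, not census data, and the actual seat totals depended on real populations and the apportionment method in use.

```python
# Illustrative sketch of how the counting rule changes a state's apportionment
# base. The population figures are hypothetical, not historical census data;
# only the 3/5 ratio itself comes from the extract above.

def counted_population(free: int, enslaved: int, ratio: float) -> float:
    """Population counted for House apportionment under a given counting ratio."""
    return free + ratio * enslaved

free, enslaved = 400_000, 300_000  # hypothetical enslaving state

for label, ratio in [("enslaved ignored (0)", 0.0),
                     ("three-fifths (3/5)", 3 / 5),
                     ("counted equally (1)", 1.0)]:
    total = counted_population(free, enslaved, ratio)
    print(f"{label:22s} -> counted population {total:>9,.0f}")

# With these made-up numbers the 3/5 rule yields 580,000 instead of 400,000:
# an apportionment base between the two extremes, which is the pattern the
# extract describes.
```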
10ect6
|
why germany has maintained economic stability while greece has faltered
|
[
{
"answer": "Germany produces a lot of stuff. ",
"provenance": null
},
{
"answer": "I think the answer that terminal_velocity gave has a lot of truth to it; namely Germany has a diverse exporting infrastructure, as well as a controlled import system. \n\nBut the most important thing to remember is that Germany has taxes, okay? Greece in the meanwhile has had huge problems with tax-evasion and under-taxing. [\"With the economy losing as much as $58 billion a year in undeclared income, the administration has made tax collection a priority. But so did previous governments, which failed miserably at the task.\" ](_URL_0_)\n\n**In short, Greece has very little exporting diversity, and horrendous tax policies, while Germany has a fairly rigid tax system and a wide variety of things they sell, that helps them pay for things they need to bring in**",
"provenance": null
},
{
"answer": "Does anyone else think it might not be a coincidence that the northernmost European countries (Norway, UK, Germany etc) seem to be doing fine while the southernmost (Greece, Italy, Spain) all seem to be having troubles? I'm not European, but could geographical culture differences (work ethic, education, traditions etc..) be a factor?",
"provenance": null
},
{
"answer": "Greece didn't falter...Greece didn't have any real economic stability to begin with.\n\nGreece basically lied their way into the EU while the other countries looked the other way, then lied about their finances for the better part of a decade. When things go so bad their couldn't lie any longer, it all fell apart.\n\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "27146868",
"title": "Greek government-debt crisis",
"section": "Section::::Germany's role in Greece.:Pursuit of national self-interest.\n",
"start_paragraph_id": 193,
"start_character": 0,
"end_paragraph_id": 193,
"end_character": 459,
"text": "The version of adjustment offered by Germany and its allies is that austerity will lead to an internal devaluation, i.e. deflation, which would enable Greece gradually to regain competitiveness. This view too has been contested. A February 2013 research note by the Economics Research team at Goldman Sachs claims that the years of recession being endured by Greece \"exacerbate the fiscal difficulties as the denominator of the debt-to-GDP ratio diminishes\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "44891876",
"title": "Greek austerity packages",
"section": "Section::::2012.:February.:Approval.\n",
"start_paragraph_id": 93,
"start_character": 0,
"end_paragraph_id": 93,
"end_character": 303,
"text": "The latest round of austerity measures meant that Greece would face at least another year of recession before the economy started to grow again. Foreign observers were shocked by both the cold-heartedness of German negotiators and a perceived lack of integrity from Greece not honoring its commitments.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "27146868",
"title": "Greek government-debt crisis",
"section": "Section::::Germany's role in Greece.\n",
"start_paragraph_id": 178,
"start_character": 0,
"end_paragraph_id": 178,
"end_character": 482,
"text": "Germany has played a major role in discussion concerning Greece's debt crisis. A key issue has been the benefits it enjoyed through the crisis, including falling borrowing rates (as Germany, along with other strong Western economies, was seen as a safe haven by investors during the crisis), investment influx, and exports boost thanks to Euro's depreciation (with profits that may have reached 100bn Euros, according to some estimates), as well as other profits made through loans\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13224",
"title": "History of Germany",
"section": "Section::::Federal Republic of Germany, 1990–present.:Merkel.\n",
"start_paragraph_id": 355,
"start_character": 0,
"end_paragraph_id": 355,
"end_character": 235,
"text": "In the worldwide economic recession that began in 2008, Germany did relatively well. However, the economic instability of Greece and several other EU nations in 2010–11 forced Germany to reluctantly sponsor a massive financial rescue.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26152387",
"title": "European debt crisis",
"section": "Section::::Evolution of the crisis.:Greece.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 649,
"text": "The Greek economy had fared well for much of the 20th century, with high growth rates and low public debt. By 2007 (i.e., before the Global Financial Crisis of 2007-2008), it was still one of the fastest growing in the eurozone, with a public debt-to-GDP that did not exceed 104% , but it was associated with a large structural deficit. As the world economy was hit by the financial crisis of 2007–08, Greece was hit especially hard because its main industries—shipping and tourism—were especially sensitive to changes in the business cycle. The government spent heavily to keep the economy functioning and the country's debt increased accordingly.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "38289",
"title": "Depression (economics)",
"section": "Section::::Notable depressions.:Greek Depression.\n",
"start_paragraph_id": 34,
"start_character": 0,
"end_paragraph_id": 34,
"end_character": 476,
"text": "Beginning in 2009, Greece sank into a recession that, after two years, became a depression. The country saw an almost 20% drop in economic output, and unemployment soared to near 25%. Greece's high amounts of sovereign debt precipitated the crisis, and the poor performance of its economy since the introduction of severe austerity measures has slowed the entire eurozone's recovery. Greece's continuing troubles have led to discussions about its departure from the eurozone.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6749733",
"title": "Axis occupation of Greece",
"section": "Section::::The Triple Occupation.:The German occupation zone.:Economic exploitation and the Great Famine.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 876,
"text": "Greece suffered greatly during the occupation. The country's economy had already been devastated from the 6-month long war, and to it was added the relentless economic exploitation by the Nazis. Raw materials and food were requisitioned, and the collaborationist government was forced to pay the cost of the occupation, giving rise to inflation. Because the outflows of raw materials and products from Greece towards Germany weren't offset by German payments, substantial imbalances accrued in the settlement accounts at the Greek National Bank. In October 1942 the trading company DEGRIGES was founded, two months later, the Greek collaboration government was forced to agree to treat the balance as a loan without interest that was to be repaid once the war was over. At the end of the war, this forced loan amounted to 476 million Reichsmark (equivalent to billion euros).\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
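The Goldman Sachs note quoted in the provenance above turns on simple ratio arithmetic: a recession shrinks GDP, the denominator of the debt-to-GDP ratio, so the ratio worsens even if the debt itself does not grow. A minimal sketch using made-up round numbers rather than Greek statistics:

```python
# Denominator effect: constant debt, shrinking GDP, rising debt-to-GDP ratio.
# All figures are hypothetical round numbers, not actual Greek data.

debt = 300.0                    # billions, held constant
gdp_before = 230.0              # billions
gdp_after = gdp_before * 0.8    # a hypothetical 20% contraction

print(f"debt/GDP before contraction: {debt / gdp_before:.0%}")  # ~130%
print(f"debt/GDP after contraction:  {debt / gdp_after:.0%}")   # ~163%
```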
vnr2j
|
Is it possible to kill a house fly with a static charge that you have accumulated?
|
[
{
"answer": "Well you can accumulate a voltage of between [1000 and 10000 volts](_URL_0_) with static electricity and most electric fly swatters have a voltage of 1500 volts or less, so I would assume it is safe to say that a fly could be killed with a static charge.",
"provenance": null
},
{
"answer": "You're not gonna zap the insect by rubbing your feet on a carpet and touching it in the air, because the fly isn't grounded.",
"provenance": null
},
{
"answer": "It's amperage that kills. I use to do electrostatic painting that used 900,000 volts at micro amperage's. It put out plasma streamers like a plasma ball just without the glass.You could play it on your skin just don't have a good ground when you do. [Here's the Buck Rodgers worthy gun, note the 90KV in the description.](_URL_0_) Yes it is fun as hell to do.",
"provenance": null
},
{
"answer": "Can someone also explain the fastest way to generate static electricity in the normal home?",
"provenance": null
},
{
"answer": "Because the fly is in the air and not grounded, getting current to pass through the fly by a one finger touch is impossible. If you could store the charge in a capicitor and insert electrodes into the fly and touch them to the ends of the capicitor, yes, you could provide a closed circuit current path that would pass through the fly. Whether or not this would be fatal to the fly I am uncertain. While electrostatic discharge can rival stun guns in voltage (100,000 range), the current, or amperage is negligible. Current is what kills, not voltage, though they are related in Ohm's law. Voltage = Current x Resistance. Fun fact- once the human bodies largest resistor is overcome (the skin, people with psoriasis are sky high) by say inputting electrodes through the skin directly into the blood, it only takes 0.1Amps of current to arrest the heart if the current path is across it. If you were to stick sowing needles deep into a finger on each hand and then touch those needles to the ends of a AA battery (~0.13Amps) it would be enough to kill you. It would be an agonizing, slow painful death- DO NOT DO THIS TO ANYONE INCLUDING YOURSELF. ",
"provenance": null
},
{
"answer": "What I'm gathering from the informed discussion so far is that killing a fly with static charge is improbable, correct?",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "18661020",
"title": "Aphomia sociella",
"section": "Section::::Protective behavior.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 302,
"text": "If perturbed or threatened, an adult bee moth will fall to the ground and pretend to be dead by lying on their backs in the exact form that they landed. This is beneficial when infiltrating a host wasp or bumblebee nest as the host will be less likely to attack when it believes that the moth is dead.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2354552",
"title": "Fly-killing device",
"section": "Section::::Electric flyswatter.\n",
"start_paragraph_id": 34,
"start_character": 0,
"end_paragraph_id": 34,
"end_character": 513,
"text": "BULLET::::- a limit on the charge stored in the capacitor: a discharge of less than 45 microcoulombs (µC) is considered safe, even in the unlikely scenario that the current from a flyswatter would be flowing from one arm to the other arm, partly through the heart. This means that the capacitor of a 1000 V flyswatter should be less than 45 nanofarads (nF). Due to this precaution for humans, the initial shock is usually inadequate to kill flies, but will stun them for long enough that they can be disposed of.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2222635",
"title": "Atmospheric electricity",
"section": "Section::::Description.:Electrical system grounding.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 536,
"text": "Atmospheric charges can cause undesirable, dangerous, and potentially lethal charge potential buildup in suspended electric wire power distribution systems. Bare wires suspended in the air spanning many kilometers and isolated from the ground can collect very large stored charges at high voltage, even when there is no thunderstorm or lightning occurring. This charge will seek to discharge itself through the path of least insulation, which can occur when a person reaches out to activate a power switch or to use an electric device.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1355057",
"title": "Overhead power line",
"section": "Section::::Aviation accidents.\n",
"start_paragraph_id": 74,
"start_character": 0,
"end_paragraph_id": 74,
"end_character": 485,
"text": "General aviation, hang gliding, paragliding, skydiving, balloon, and kite flying must avoid accidental contact with power lines. Nearly every kite product warns users to stay away from power lines. Deaths occur when aircraft crash into power lines. Some power lines are marked with obstruction makers, especially near air strips or over waterways that may support floatplane operations. The placement of power lines sometimes use up sites that would otherwise be used by hang gliders.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4092733",
"title": "Dust collector",
"section": "Section::::Electrostatic precipitators (ESP).\n",
"start_paragraph_id": 102,
"start_character": 0,
"end_paragraph_id": 102,
"end_character": 219,
"text": "The airborne particles receive a negative charge as they pass through the ionized field between the electrodes. These charged particles are then attracted to a grounded or positively charged electrode and adhere to it.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "51560627",
"title": "Cumulonimbus and aviation",
"section": "Section::::Flight inside cumulonimbus.:Soaring.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 419,
"text": "A skydiver or paraglider pilot under a cumulonimbus is exposed to a potentially deadly risk of being rapidly sucked up to the top the cloud and being suffocated, struck by lightning, or frozen. If he survives, he may suffer irreversible brain damage due to lack of oxygen or an amputation as a consequence of frostbite. German paraglider pilot Ewa Wiśnierska barely survived a climb of more than inside a cumulonimbus.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "238908",
"title": "Water balloon",
"section": "Section::::Environmental impact.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 714,
"text": "Water balloons, like air balloons, are generally made from latex, which naturally decomposes. While there still could be some environmental impact if burst water balloons are left behind in the wild where animals might ingest them, that impact would be low. However, some air balloons are made from mylar, which does not decompose (or only extremely slowly). If mylar balloons are used as water bombs, then littering or leaving behind mylar balloons will have a much bigger environmental impact. The use of mylar balloons might be less problematic in closed controlled (indoor) environments where the material is subsequently collected and recycled, which is possible with mylar. (Latex balloons are compostable.)\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
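The answers in this row lean on two pieces of arithmetic: the charge a flyswatter capacitor can deliver (Q = C x V, matching the 45 nF / 1000 V / 45 µC figures in the Wikipedia extract) and Ohm's law (I = V / R) for the "current is what kills" point. The sketch below just reruns those numbers; the 100 kΩ body-resistance value is an assumed illustrative figure, not something stated in the thread.

```python
# Back-of-the-envelope check of the figures quoted above.

def charge_coulombs(capacitance_f: float, voltage_v: float) -> float:
    """Charge stored on a capacitor: Q = C * V."""
    return capacitance_f * voltage_v

def current_amps(voltage_v: float, resistance_ohm: float) -> float:
    """Current through a resistance (Ohm's law): I = V / R."""
    return voltage_v / resistance_ohm

# Flyswatter safety limit from the extract: a 45 nF capacitor at 1000 V.
q = charge_coulombs(45e-9, 1_000)
print(f"45 nF at 1000 V stores {q * 1e6:.0f} microcoulombs")  # 45 uC, the stated limit

# Static shock: ~10 kV across an assumed ~100 kOhm of dry skin (illustrative value).
i = current_amps(10_000, 100_000)
print(f"10 kV across 100 kOhm drives {i * 1e3:.0f} mA, but only for the instant "
      f"it takes the tiny stored charge to drain away")
```

The second print is why a static zap stings without doing the damage a sustained 100 mA would: the stored charge, and therefore the energy, is minuscule, so the current collapses almost immediately.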
83vszh
|
Nearly half of the people who died from the Spanish Flu of 1918 were 20-to-40 year olds, a normally resistant population. Do we know why? What steps were taken to curb the outbreak (which killed more people than the Great War)? What sort of advances had we made by 1998 to prevent a recurrence?
|
[
{
"answer": "There have been some studies that have shown that people who were exposed to the Russian flu pandemic of 1889-90 were the most likely to die if they contracted the Spanish flu. \n\nExaminations of the virus structure of the Russian flu and Spanish flu have shown that they had vast differences in the structure of their respective viruses. When individuals who had previously contracted the Russian flu contracted the Spanish flu, their immune response was simultaneously delayed and wrong. The individuals' immune systems started producing antibodies based on the Russian flu, but since the virus structure was so different, these antibodies had little to no effect on the Spanish flue. By the time the immune systems realized that the antibodies it was producing was not effective, it was too late. \n\nSource: [Age-specific mortality during the 1918 influenza pandemic: unravelling the mystery of high young adult mortality.](_URL_0_)",
"provenance": null
},
{
"answer": "A common theory related to the Spanish Flu and the generally younger people who succumbed to it is the idea of a \"cytokine storm\" which is to say that an overactive immune system could cause younger people to be more at risk than those with weaker immune systems. \n\nCytokines are small proteins that are used for intracelluar signalling, and in the case of immune response, they are released by immune cells to trigger inflammation. \n\nIn turn, inflammation is a signal to generate more immune cells to fight the issue at the site of the infection and throughout the body. The new immune cells thus release more cytokines and the process continues, but at some point, there is a regulation process which prevents the loop from spiraling out of control. \n\nNevertheless, while cytokines are critical to the immune response, they are also susceptible to becoming unregulated in certain circumstances and this disregulation can cause them to start doing damage. \n\nInflammation is helpful for fighting of disease, but it should be noted that some of the immune cells release toxins that are just as capable of killing normal cells as they are for killing invading cells. Maintaining the immune response for too long, and in too many organs can cause damage which might lead to organ failure and death.\n\nIt is unclear how this disregulation process would specifically work in the case of the flu, and there are other theories, such as the potential for those younger people to be in areas where birds infected with H1N1 were located. Obviously, the war would have concentrated younger people and also caused their movement over long distances to join the war. \n\nWe have had considerable advances that should blunt future influenza outbreaks. \n\nThere is significantly better observation and reporting of influenza outbreaks. \n\nThere are vaccination programs in place. They are not perfect, since you need to get vaccinated against specific strains, but it can be effective.\n\nMedical personnel are also much better trained in general prevention of the transmission of infectious agents. \n\nI'd say that simple reporting, communication, and good practices by professionals would be the most important methods for preventing a pandemic like the 1918 Spanish Flu (which probably started in the United States, BTW). There is some discussion of anti-viral drugs as well, but I don't think anyone is expecting to be counting on those.\n\nLink from NIH that talks about much of the above in somewhat more detail:\n\n_URL_0_\n\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "29802394",
"title": "Social history of viruses",
"section": "Section::::20th and 21st centuries.:Influenza.\n",
"start_paragraph_id": 65,
"start_character": 0,
"end_paragraph_id": 65,
"end_character": 608,
"text": "A new strain of the virus emerged in 1918, and the subsequent pandemic of Spanish flu was one of the worst natural disasters in history. The death toll was enormous; throughout the world around 50 million people died from the infection. There were 550,000 reported deaths caused by the disease in the US, ten times the country's losses during the First World War, and 228,000 deaths in the UK. In India there were more than 20 million deaths, and in Western Samoa 22 per cent of the population died. Although cases of influenza occurred every winter, there were only two other pandemics in the 20th century.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5606945",
"title": "History of the Royal Australian Navy",
"section": "Section::::The 1918–19 influenza pandemic.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 307,
"text": "Between April 1918 and May 1919, the Spanish flu killed approximately 25 million people worldwide, far more than had been killed in four years of war. A rigorous quarantine policy was implemented in Australia; although this reduced the immediate impact of the flu, the nation's death toll surpassed 11,500.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5511624",
"title": "History of Torquay",
"section": "Section::::World War I.:1918.\n",
"start_paragraph_id": 78,
"start_character": 0,
"end_paragraph_id": 78,
"end_character": 288,
"text": "September 1918 saw a serious outbreak of the Spanish flu which was ravaging the world at the time, over 100 American servicemen died at the Oldway Hospital in a fortnight from the disease; they were buried in Paignton cemetery, but were later exhumed and taken back to the United States.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24255",
"title": "Pandemic",
"section": "Section::::Pandemics and notable epidemics throughout history.:Influenza.\n",
"start_paragraph_id": 39,
"start_character": 0,
"end_paragraph_id": 39,
"end_character": 820,
"text": "BULLET::::- The \"Spanish flu\", 1918–1919. First identified early in March 1918 in US troops training at Camp Funston, Kansas. By October 1918, it had spread to become a worldwide pandemic on all continents, and eventually infected about one-third of the world's population (or ≈500 million persons). Unusually deadly and virulent, it ended nearly as quickly as it began, vanishing completely within 18 months. In six months, some 50 million were dead; some estimates put the total of those killed worldwide at over twice that number. About 17 million died in India, 675,000 in the United States and 200,000 in the UK. The virus was recently reconstructed by scientists at the CDC studying remains preserved by the Alaskan permafrost. The H1N1 virus has a small, but crucial structure that is similar to the Spanish flu.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "35001680",
"title": "Home front during World War I",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 238,
"text": "About 10 million combatants and seven million civilians died during the entire war, including many weakened by years of malnutrition; they fell in the worldwide Spanish Flu pandemic, which struck late in 1918, just as the war was ending.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "198796",
"title": "Spanish flu",
"section": "Section::::Mortality.:Patterns of fatality.\n",
"start_paragraph_id": 35,
"start_character": 0,
"end_paragraph_id": 35,
"end_character": 648,
"text": "The pandemic mostly killed young adults. In 1918–1919, 99% of pandemic influenza deaths in the U.S. occurred in people under 65, and nearly half in young adults 20 to 40 years old. In 1920, the mortality rate among people under 65 had decreased sixfold to half the mortality rate of people over 65, but still 92% of deaths occurred in people under 65. This is unusual, since influenza is normally most deadly to weak individuals, such as infants (under age two), the very old (over age 70), and the immunocompromised. In 1918, older adults may have had partial protection caused by exposure to the 1889–1890 flu pandemic, known as the Russian flu.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "37220",
"title": "Infection",
"section": "Section::::Epidemiology.:Historic pandemics.\n",
"start_paragraph_id": 137,
"start_character": 0,
"end_paragraph_id": 137,
"end_character": 207,
"text": "BULLET::::- The Influenza Pandemic of 1918 (or the Spanish flu) killed 25–50 million people (about 2% of world population of 1.7 billion). Today Influenza kills about 250,000 to 500,000 worldwide each year.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
7vo96p
|
why do our eyes/brains struggle to figure out how many numbers/letters are in something when one repeats itself vs when all are different (12333332 vs 60292813)
|
[
{
"answer": "This is because it is easier for the human brain to count different numbers since you can know at which number you are looking at (The criterion is that the next number is visibly different than the previous).\nWhen you have to deal with a repeatitive number, you \"have\" doubts whether you skipped or count twice a number , so you instictively start take more time to make sure you read the number correct.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "11707911",
"title": "Human multitasking",
"section": "Section::::Research.:The brain's role.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 801,
"text": "People have a limited ability to retain information, which worsens when the amount of information increases. For this reason, people alter information to make it more memorable, such as separating a ten-digit phone number into three smaller groups or dividing the alphabet into sets of three to five letters. George Miller, former psychologist at Harvard University, believes the limits to the human brain’s capacity centers around “the number seven, plus or minus two.” An illustrative example of this is a test in which a person must repeat numbers read aloud. While two or three numbers are easily repeated, fifteen numbers become more difficult. The person would, on average, repeat seven correctly. Brains are only capable of storing a limited amount of information in their short-term memories.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "533237",
"title": "Dyscalculia",
"section": "Section::::Signs and symptoms.:Common symptoms.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 213,
"text": "BULLET::::- Difficulty recalling the names of numbers, or thinking that certain different numbers \"feel\" the same (e.g. frequently interchanging the same two numbers for each other when reading or recalling them)\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "35005736",
"title": "Memory and decision-making",
"section": "Section::::Preferences-as-memory approach.:Impact of memory on decisions.:Inhibition and memory reactivity.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 730,
"text": "The memory system suffers from inhibition. This is why it is difficult to hold two different phone numbers in working memory at the same time. Although it may seem that inhibition impedes our memory system, it allows humans to focus on the relevant details and ignore irrelevant ones when required to make quick decisions. Earlier queries can establish preferences that inhibit responses to later queries. A person who is presented with two items and asked to choose between the two is more likely to be choose the item that was presented first. Order matters because inhibition influences both our memory and preferences so that new information competes with older information in regards to memory space and memory associations.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12322554",
"title": "Spatial-numerical association of response codes",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 712,
"text": "Even for tasks in which magnitude is irrelevant, like parity judgement or phoneme detection, larger numbers are faster responded to with the right response key while smaller numbers are faster responded to with the left. This also occurs when the hands are crossed, with the right hand activating the left response key and vice versa. The explanation given by Dehaene and colleagues is that the magnitude of a number on an oriented mental number line is automatically activated. The mental number line is assumed to be oriented from left to right in populations with a left-to-right writing system (e.g. English), and oriented from right to left in populations with a right-to-left writing system (e.g. Iranian)\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18597245",
"title": "Context of computational complexity",
"section": "Section::::Definitions of variables.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 712,
"text": "Because many different contexts use the same letters for their variables, confusion can arise. For example, the complexity of primality tests and multiplication algorithms can be measured in two different ways: one in terms of the integers being tested or multiplied, and one in terms of the number of binary digits (bits) in those integers. For example, if \"n\" is the integer being tested for primality, trial division can test it in Θ(n) arithmetic operations; but if \"n\" is the number of bits in the integer being tested for primality, it requires Θ(2) time. In the fields of cryptography and computational number theory, it is more typical to define the variable as the number of bits in the input integers.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10240549",
"title": "American Sign Language grammar",
"section": "Section::::Morphology.:Derivation.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 330,
"text": "Certain types of signs, for example those relating to time and age, may incorporate numbers by assimilating their handshape. For example, the word WEEK has handshape /B/ with the weak hand and /1/ with the active hand; the active hand's handshape may be changed to the handshape of any number up to 9 to indicate that many weeks.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24488251",
"title": "Rapid automatized naming",
"section": "Section::::Theories.:Reading Ability.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 595,
"text": "Another viewpoint is that rapid automatized naming directly relates to differences in reading competence. Supporting this is the fact that the ability to rapidly name digits and letters predicts reading better than rapidly naming colors and objects. This suggests a difference due to differences in experience with letters. However, rapid automatized naming of colors, objects, numbers and letters measured in children before they learn to read predicts later differences in reading skill, while early differences in reading ability do not predict later differences in rapid automatized naming.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
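The answer in this row is about why a run of identical digits forces deliberate counting while distinct digits do not. A small, purely illustrative sketch that makes the run structure of the two example numbers from the question explicit using `itertools.groupby`:

```python
# Run-length view of the two example numbers from the question. In "60292813"
# every neighbouring digit differs, so each digit is its own landmark; in
# "12333332" the five consecutive '3's form one run that has to be counted.

from itertools import groupby

def run_lengths(s: str) -> list[tuple[str, int]]:
    """Collapse a string into (character, run length) pairs."""
    return [(ch, len(list(group))) for ch, group in groupby(s)]

for number in ("12333332", "60292813"):
    runs = run_lengths(number)
    longest = max(length for _, length in runs)
    print(f"{number}: {runs} (longest run: {longest})")

# 12333332 prints [('1', 1), ('2', 1), ('3', 5), ('2', 1)]  (longest run: 5)
# 60292813 prints eight (digit, 1) pairs                    (longest run: 1)
```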
1wu3cc
|
Why did China not discover Australia/ the Pacific Islands?
|
[
{
"answer": "hi! there's always room for more info on this topic, but FYI, there have been several posts asking about non-European discovery/settlement of Australia. Catch up on previous responses here:\n\nChina\n\n[Why did the British/Europeans discover Australia and not the Chinese?](_URL_5_)\n\n[Why did the Chinese or Japanese apparently never try to colonize Australia or New Zealand? They're right there.](_URL_2_)\n\n[What were some reasons that China turned inwards and neglected maritime exploration after Admiral Zheng He and his missions.](_URL_8_)\n\n[Why were Zheng He's voyages considered wasteful?](_URL_10_)\n\n[How reliable are the accounts for the Chinese explorer, Zheng He.](_URL_12_)\n\nSE Asia\n\n[Are there any evidences for pre-European contact of the Australias?](_URL_0_)\n\n[To what extent did Asian know about the Island of Australia? Are there documents showing the pass of this knowledge to Europeans?](_URL_1_)\n\n[Why did no Asian cultures ever find Australia?](_URL_3_)\n\n[I just read about the Bugis people, the Vikings of Southeast Asia because they discovered Australia and New Guinea long before the European Age of Discovery. What other maritime cultures had a golden era of exploration in the Middle Ages?](_URL_7_)\n\nSouth Pacific\n\n[Why are the aboriginal peoples of Australia and New Zealand so different? Was there much interaction between the two prior to the arrival of Europeans?](_URL_9_)\n\n[Why did the Maori not conquer aboriginal Australia?](_URL_11_)\n\n[Why didn't the Polynesians colonize Australia?](_URL_4_)\n\n[Why did Polynesians stop expanding? Also, why did they never settle Australia?](_URL_6_)",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "29020730",
"title": "Senkaku Islands dispute",
"section": "Section::::Territorial dispute.:People's Republic of China and Republic of China positions.:Post-1970s position.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 1052,
"text": "Although Chinese authorities did not assert claims to the islands while they were under US administration, formal claims were announced in 1971 when the US was preparing to end its administration. A 1968 academic survey undertaken by United Nations Economic Council for Asia and the Far East found possible oil reserves in the area which many consider explains the emergence of Chinese claims, a suggestion confirmed by statements made on the diplomatic records of the Japan-China Summit Meeting by Premier Zhou Enlai in 1972. However, supporters of China's claim that the sovereignty dispute is a legacy of Japanese imperialism and that China's failure to secure the territory following Japan's military defeat in 1945 was due to the complexities of the Chinese Civil War in which the Kuomintang (KMT) were forced off the mainland to Taiwan in 1949 by the Chinese Communist Party. Both the People's Republic of China (PRC) and the Republic of China (ROC) respectively separately claim sovereignty based on arguments that include the following points:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "40459273",
"title": "History of Chinese Australians",
"section": "Section::::Earliest Chinese contact with Australia: pre-1848.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 1464,
"text": "Some historians have theorised that Northern Indigenous Australians may even have had dealings with Chinese traders or come across Chinese goods particularly through trepanging. Sir Joseph Banks was of the opinion that any British colony in Australia could be populated by 'useful inhabitants from China'. The first official undisputed link between China and Australia comes from the very beginning of the colony of New South Wales. The First Fleet ships, Scarborough, Charlotte and Lady Penrhyn, after dropping off their convict load, sailed for Canton to buy tea and other goods to sell on their return to England. The Bigge Report attributed the high level of tea drinking to 'the existence of an intercourse with China from the foundation of the Colony …' Many British East India Company ships used Australia as a port of call on their trips to and from buying tea from China. That the ships carrying such cargo had Chinese crew members is likely and that some of the crew and possibly passengers embarked at the port of Sydney is probable. Certainly by 1818, Mak Sai Ying (also known as John Shying) had arrived and after a period of farming became, in 1829, the publican of \"The Lion\" in Parramatta. John Macarthur, a prominent pastoralist, employed three Chinese people on his properties in the 1820s and records may well have neglected others. Another way ethnic Chinese made it to Australia was from the new British possessions of Malaysia and Singapore.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23410",
"title": "Paracel Islands",
"section": "Section::::Territorial disputes and their historical background.\n",
"start_paragraph_id": 83,
"start_character": 0,
"end_paragraph_id": 83,
"end_character": 1205,
"text": "China first asserted sovereignty in the modern sense to the South China Sea’s islands when it formally objected to France’s efforts to incorporate them into French Indochina during the 1884–1885 Sino-French war. After the war, France recognized the Paracel and Spratly islands as Chinese territories, in exchange for Chinese recognition of Vietnam as a French territory. Chinese maps since then have consistently shown China’s claims, first as a solid and then as a dotted line. Between 1881 and 1883 the German navy surveyed the islands continuously for three months each year without seeking the permission of either France or China. No protest was issued by either government and the German government published the results of the survey in 1885. In 1932, France nonetheless formally claimed both the Paracel and Spratly Islands. China and Japan both protested. In 1933, France seized the Paracels and Spratlys, announced their annexation, formally included them in French Indochina, and built a couple of weather stations on them, but did not disturb the numerous Chinese fishermen it found there. In 1941, the Japanese Empire made the Paracel and Spratly islands part of Taiwan, then under its rule.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "28304178",
"title": "Baijini",
"section": "Section::::Theories.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 284,
"text": "The following year. the sinologist C.P. FitzGerald mentioned the possibility of pre-European Chinese visits to Australia in an article, which conjecture a possible early Chinese presence in northern Australia, by mentioning a Chinese statue which had been dug up in 1879 near Darwin.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "27112883",
"title": "Tongues of Serpents",
"section": "Section::::Historical context.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 481,
"text": "Due to a non-historical Chinese trading port, the British are unable to establish their claim over the continent, as the Chinese and Larrakia people entered into trade agreements. In the alternate history, a British attempt to seize Australia by force is foiled by dragons led by the Chinese and Larrakia, which in turn leads to a second, successful rebellion by Macarthur and the overthrow of Macquarie. When the narrative leaves Australia, Macarthur is still the First Minister.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "37113618",
"title": "Australia–Taiwan relations",
"section": "Section::::History.:Before 1972.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 275,
"text": "Prior to 1941, relations between the Republic of China and Australia were described as 'episodic.' One reason for this was Australia's reliance on Britain, as it was only in 1923 that Britain had granted its dominions permission to conclude treaties with foreign countries. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11659591",
"title": "Australia–China relations",
"section": "Section::::Political relations.:Turnbull Government.\n",
"start_paragraph_id": 76,
"start_character": 0,
"end_paragraph_id": 76,
"end_character": 432,
"text": "Australia has been among the firmest opponents of China's territorial claims to the South China Sea. In July 2016, following the ruling by an international tribunal which held that China holds \"no historical rights\" to the South China Sea based on the \"nine-dash line\" map, Australia issued a joint statement with Japan and the United States calling for China to abide by the ruling, as \"final and legally binding on both parties.\"\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
4n8gv5
|
Why were D-Day landing craft designed the way they were? Opening the large ramp from the front seems almost suicidal to me.
|
[
{
"answer": "The D-Day craft to which you are referring (the LCVP) was based heavily on the a boat designed by Andrew Higgins in the prewar years, ostensibly for \"oil workers\" but probably a smugglin' swamp boat during Prohibition. The Marine Corps was not happy with the contemporary Navy options for landing troops on beaches, and so Higgins' design was shoved into production. It underwent a few different versions, the last of which (the LCVP) had modifications based on the earlier Japanese [Daihatsu-class landing vehicle](_URL_0_). The ramp opens front so that troops and vehicles can (ideally) exit onto the shallow part of the beach while the propeller stays in deeper water, enabling the craft to back up and return to the big ships afterwards to pick up another load. If the ramp were to open to the rear, you've got troops and jeeps exiting into 8 feet of water. Yes, when one of these opened into the face of a German MG-42, it was bad, but for every one of those, there were 20 more landing (relatively) safely and unloading cargo and men efficiently.",
"provenance": null
},
{
"answer": "Well, they were designed to function in very shallow water (as aside from being flat-bottomed, ballast could be pumped out as it came into shore so the LCVP sat lighter in the water) so opening the front would cause negligible flooding. [This](_URL_0_) gives you a good idea of how they should function in an ideal situation - with the LCVP's bow-ramp opening above the water. This is aided, of course, by the fact that beaches are generally angling upward!\n\nThe bow-ramp is a good example of Higgins' masterfully economic design: the LCVP's hull was plywood with a steel plate for protection, but the ramp itself was all steel - tough enough to take a pounding from rough seas, hard-packed sand, pebbles or coral, and the love-taps of incoming fire. \n\nObviously conditions could be less than ideal (and there's accounts of the occasional ramp being opened prematurely, jack-knifing the LCVP and flooding it) - the water was choppy and craft were drenched with spray, but as with any modern boat the pumps voided the excess water. Primary sources recall men helping bail out their landing craft with their helmets when the pumps were pushed to capacity (buckets were also available).\n\n**Sources:** *US World War II Amphibious Tactics: Mediterranean & European Theaters* by Gordon L. Rottman and *D-Day, June 6, 1944: The Climactic Battle of World War II* by Stephen E. Ambrose",
"provenance": null
},
{
"answer": "You seem to be treating the single scene from \"Saving Private Ryan\" as evidence that this was how a large number of soldiers died during landings. The scene was put there primarily for effect and not to represent the typical fate of the landing squad. For landing craft to stop directly in front of the gun was just bad luck. \n\nThe idea was to place machine guns in such positions that they would have interlocking fire zones for *sweeping fire* at effective ranges. Sweeping fire was meant to stop the landing forces on the beach and pin them to the ground while they would receive mortar and artillery fire rather than getting killed outright because as deadly as machine guns can be they are not the greatest threat infantry faces in open spaces. That would be HE and fragmentation shells.\n\nThe idea behind an amphibious assault is also that you claim the beachhead by breaking out from the beach and establishing a defensive perimeter. When that happens the beachhead is being loaded up with supplies, vehicles and is the staging ground for further reinforcements, medical facilities etc. As long as you are stuck on the beach forming a beachhead is not possible and you are holding up the landing space for the next wave of troops and vehicles and later supplies. If the troops are kept on the beach then every next wave of landing troops is making the beach a better target for the artillery. \n\nIt is the difficulty in negotiating obstacles, advancing across the beach and the dunes/banks under machine gun and artillery fire rather than getting out of the boats that is the greatest challenge for the landing wave. The greatest threat for the landing force is either getting stuck on the beach and pounded or disembarking too far from the shore and getting hit while still in the water or drowning, losing weapons and supplies etc. Also you might appreciate how important that is when you take into account how difficult running on sand is, and how comparatively harder running in the water is. Then add to it the necessity of running out through an opening along with your platoon without tripping over, falling, losing anything and then sprinting through water, sand, dunes, barbed wire and whatnot towards the nearest cover.\n\nFrom the standpoint of the assault as an amphibious operation it is delivering the troops *as close to shore as possible* and getting back to the main transport ship for more troops *as quickly as possible* that is the main challenge. It is also not as easy as it seems when you have hundreds of ships maneuvering between obstacles.\nThe goal - again - is to get the troops as quickly as possible, brute-force the fortifications and establish a perimeter so that reinforcements, logistical base *on the shore* and preferably light artillery and tanks can get there before a counter attack arrives. For that purpose the bow ramp is actually the optimal solution because save for that rare instance where it opens directly in front of a machine gun nest it allows the LCVP to unload its troops quickly and along the shortest route and then be gone for more.\n\nAll those things considered together the Higgins boat was a great design which proved so successful that nobody really bothered to address the \"death trap\" of bow ramp.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "539716",
"title": "Landing craft",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 522,
"text": "Because of the need to run up onto a suitable beach, World War II landing craft were flat-bottomed, and many designs had a flat front, often with a lowerable ramp, rather than a normal bow. This made them difficult to control and very uncomfortable in rough seas. The control point (too rudimentary to call a bridge on LCA and similar craft) was normally at the extreme rear of the vessel, as were the engines. In all cases, they were known by an abbreviation derived from the official name rather than by the full title.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "252854",
"title": "Normandy landings",
"section": "Section::::Beach landings.:Tanks.\n",
"start_paragraph_id": 119,
"start_character": 0,
"end_paragraph_id": 119,
"end_character": 360,
"text": "Some of the landing craft had been modified to provide close support fire, and self-propelled amphibious Duplex-Drive tanks (DD tanks), specially designed for the Normandy landings, were to land shortly before the infantry to provide covering fire. However, few arrived in advance of the infantry, and many sank before reaching the shore, especially at Omaha.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "796411",
"title": "Landing craft tank",
"section": "Section::::Conversions and modifications.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 409,
"text": "Several special purpose versions were created for use during the Normandy landings. The British created the Landing craft tank (rocket) (LCT(R)) modified to fire salvoes of three-inch RP-3 rockets, while the landing craft guns (large) (LCG(L)) was armed with two QF 4.7 inch guns, eight Oerlikon 20 mm AA guns and two 2-pounder pom-poms. These ships did not beach; their mission was close-in gunfire support.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2171887",
"title": "Terrapin (amphibious vehicle)",
"section": "Section::::Development.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 215,
"text": "Due to a shortage of US-manufactured DUKWs, the British Ministry of Supply commissioned Thornycroft to design an amphibious vehicle capable of ferrying supplies and troops from ship to shore for the D-Day landings.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "36266561",
"title": "LCM (2)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 693,
"text": "The Landing Craft, Mechanized Mark 2 or LCM (2) was a landing craft used for amphibious landings early in the United States' involvement in the Second World War. Though its primary purpose was to transport light tanks from ships to enemy-held shores, it was also used to carry guns and stores. The craft was designed by the Navy's Bureau of Construction and Repair and the initial production contract was let to the American Car & Foundry Company. A total of 147 were built by this company and Higgins Industries. Because of its light load capacity and the rapid production of the superseding LCM (3), the LCM (2) quickly fell out of use following the Allied invasion of North Africa in 1942.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2953967",
"title": "The 37's",
"section": "Section::::Production.:Landing the USS \"Voyager\".:Effects.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 913,
"text": "Because the initial description of the ship described its landing capability, four small hatches on the ventral hull were included on both the ship miniature and model. However the legs that were to emerge from those hatches had not yet been designed by \"The 37's\". In the allowed by the design of the ship, Rick Sternbach had difficulty designing \"an articulated set of legs and footpads\" that would fold out and support some of the ship's . Shots of the unfolding \"landing struts\" were CGI because motorized versions were not installed in the physical model; visual effects producer Dan Curry later said that installing such motorized elements in the model was impossible due to the size. For filming the landed \"Voyager\", miniature feet were made; however, because the producers felt the feet looked inappropriately sized for the rest of the ship, they were partially obscured by landscape in post-production.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4469076",
"title": "50th (Northumbrian) Infantry Division",
"section": "Section::::North-West Europe.:D-Day.:The Assault.\n",
"start_paragraph_id": 75,
"start_character": 0,
"end_paragraph_id": 75,
"end_character": 1107,
"text": "The landing craft were deployed from the beach, a shorter run than the Americans (), still due to the weather many of the troops were sea-sick. Rather than risk the DD-tanks with their limited free-board in the rough seas, they were landed directly onto the beaches with or slightly behind the assault infantry. Prior to this the beach group engineers had landed (280th Company for the 69th Brigade and 73rd Company for the 231st, both with supporting armour) and had begun to reduce the beach obstacles and defences. The assault battalions of the 69th brigade landed either side of La Rivière, the East Yorkshires blown to the east of their intended landing, attacking La Rivière from the rear by 10:00. To the west the Green Howards were initially caught in enfilade fire from La Rivière, but by 10:00 were inland on the Meuvaines ridge. During this advance Company Sergeant-Major Stanley Hollis of the 6th Green Howards was in the first of the actions that were to win him the VC, the only one to be won on D-Day. The 7th Green Howards, landing at H+45 minutes, captured the bridge at Creuilly by 15:00.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3e0op2
|
how nearsightedness and farsightedness work
|
[
{
"answer": "It has to do with a problem with the lens in your eye. Normally, your lens focuses the light onto the fovea of the retina, which is an area that has a high density of light sensing cells (called photoreceptors). When you are farsighted, light from close up gets focused onto a different part of the retina (not the fovea) and this causes the image to appear blurry. In nearsightedness, light from far away gets focused away from the fovea. \n\nGlasses (and contacts) work by adjusting the light so that it focuses back onto the fovea and the image isn't blurry any more. ",
"provenance": null
},
{
"answer": "With nearsightedness, the eyeball is too long for the lens, so that the image often focuses *in front* of the retina. With farsightedness, the eyeball is too short for the lens, so that the image (if we pretend the retina is transparent) would focus behind the retina. \n\nThe result is that people who are nearsighted can still see things close up, while people who are farsighted can see things far away but not close up.\n\nFarsightedness, which is uncommon, is often confused with presbyopia, which happens with age and is believed to be caused by the lens in the eye getting less elastic, and thus having less focal range. The symptoms are pretty similar, the inability to focus on near things. ",
"provenance": null
},
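The geometry described in the two answers above can be made concrete with the thin-lens equation. The following is a minimal, hypothetical sketch: the 20 mm focal length and the eyeball lengths are made-up illustrative numbers, not a physiological model of the eye. It only shows that a distant object's image lands short of the retina in a too-long eyeball and past it in a too-short one.

```python
# Toy thin-lens sketch of near/farsightedness (assumed, simplified numbers).

def image_distance_mm(focal_mm, object_mm):
    # Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for the image distance d_i.
    return 1.0 / (1.0 / focal_mm - 1.0 / object_mm)

FOCAL_RELAXED = 20.0      # assumed focal length of the relaxed optics (mm)
DISTANT_OBJECT = 1e7      # an object 10 km away, effectively "at infinity"

d_i = image_distance_mm(FOCAL_RELAXED, DISTANT_OBJECT)   # ~20.0 mm

for label, retina_at in [("normal", 20.0),
                         ("nearsighted (eyeball too long)", 21.0),
                         ("farsighted (eyeball too short)", 19.0)]:
    if abs(d_i - retina_at) < 0.05:
        where = "on the retina (sharp)"
    elif d_i < retina_at:
        where = "in front of the retina (blurry)"
    else:
        where = "behind the retina (blurry)"
    print(f"{label:32s}: image at {d_i:.2f} mm, retina at {retina_at:.1f} mm -> {where}")
```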
{
"answer": null,
"provenance": [
{
"wikipedia_id": "1508301",
"title": "Futures studies",
"section": "Section::::Applications of foresight and specific fields.:Risk analysis and management.\n",
"start_paragraph_id": 145,
"start_character": 0,
"end_paragraph_id": 145,
"end_character": 239,
"text": "Foresight is a framework or lens which could be used in risk analysis and management in a medium- to long-term time range. A typical formal foresight project would identify key drivers and uncertainties relevant to the scope of analysis. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "27643621",
"title": "Futurism (Judaism)",
"section": "Section::::Israeli futurists.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 504,
"text": "A particular place on this list should be reserved for the practitioners of foresight. Foresight is a tool for developing \"visions\", understood as possible future states of affairs that actions today can help bring about (or avoid). The practice of foresight is widespread in European strategic thinking, and to a much lesser level in Canada or United States. In Israel, foresight projects are developed at the Interdisciplinary Center for Technology Assessment and Forecasting from Tel Aviv University.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22409628",
"title": "Participative decision-making",
"section": "Section::::Concepts and methods.:Foresight.\n",
"start_paragraph_id": 82,
"start_character": 0,
"end_paragraph_id": 82,
"end_character": 234,
"text": "BULLET::::- Foresight is often still a voluntary or peripheral job (i.e. few people make foresight their core business), which demands great efforts of organizations and individuals. This may be done once, but not at a regular basis.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30305432",
"title": "Foresight (psychology)",
"section": "Section::::In management.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 240,
"text": "Foresight has been classified as a behaviour (covert and/or overt) in management, a review, analysis, and synthesis of past definitions and usages of the foresight concept into a generic definition, in order to make the concept measurable.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30305432",
"title": "Foresight (psychology)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 802,
"text": "Foresight is the ability to predict, or the action of predicting, what will happen or what is needed in the future. Studies suggest that much of human daily thought is directed towards potential future events. Because of this and its role in human control on the planet, the nature and evolution of foresight is an important topic in psychology. Recent neuroscientific, developmental, and cognitive studies have identified many commonalities to the human ability to recall past episodes. \"Science\" magazine selected new evidence for such commonalities one of the top ten scientific breakthroughs of 2007. However, there are fundamental differences between mentally travelling through time into the future (i.e., foresight) versus mentally travelling through time into the past (i.e., episodic memory).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22409628",
"title": "Participative decision-making",
"section": "Section::::Concepts and methods.:Foresight.\n",
"start_paragraph_id": 81,
"start_character": 0,
"end_paragraph_id": 81,
"end_character": 207,
"text": "BULLET::::- Foresight is a personal skill and so repetition should involve the same individuals (not institutions), which is not compatible with the people (rapidly) moving within and between organizations.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3045792",
"title": "Foresight (futures studies)",
"section": "",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 338,
"text": "Many of the methods that are commonly associated with Foresight - Delphi surveys, scenario workshops, etc. - derive from the futures field. The flowchart to the right provides an over of some of the techniques as they relate to the scenario as defined in the intuitive logics tradition. So does the fact that Foresight is concerned with:\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1oebvs
|
why are savings account yields in australia 4%/year, and in the u.s. they are .025%/year?
|
[
{
"answer": "Because the Australian Federal Reserve bank has a cash target rate of 2.5% - banks charge more than that for lending (I pay 4.9% on my mortgage) and also more than that for certain high interest online bank accounts (the 4% your sister is getting). _URL_0_. Australia has typically had higher interest rates but at the moment we are one of the strongest economies in the world, and if our interest rates were cut much more inflation would be triggered. \n\nThis is distinct from the US interest rate that is 0.25% right now (_URL_1_). \n\nI have Canadian family who talk about sending money here to make better interest. However, the interest earned in an Australian bank account is pre-tax and is counted as income for the purposes of income tax - so if you sent $1,000,000 to your sister (for example) she would have to declare the $40,000 as income subject to tax. So the effective rate is a bit lower. You would also be exposed to the risk that the Australian dollar will sink lower once the US interest rates start to rise again.\n\nI wish I could borrow from the US or Canada on a 30 year fixed rate mortgage, put in a nice futures contract for the foreign exchange risk because I'm pretty sure that would still end up well, well ahead.\n\nTL:DR; Australian economy good, US bad, better economies have higher interest rates all other things being equal.",
"provenance": null
},
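As a rough worked example of the point about pre-tax interest in the answer above, here is a small sketch. The 37% marginal tax rate is an assumed figure (not advice), and currency-exchange risk is ignored entirely.

```python
# Back-of-the-envelope: a headline 4% rate is before tax, so the effective
# yield depends on the saver's marginal tax rate (37% is an assumption here).

principal = 1_000_000          # the hypothetical amount from the answer (AUD)
gross_rate = 0.04              # advertised savings-account rate
marginal_tax_rate = 0.37       # assumed marginal income-tax rate

gross_interest = principal * gross_rate            # the $40,000 counted as income
tax = gross_interest * marginal_tax_rate
net_interest = gross_interest - tax
effective_rate = net_interest / principal

print(f"Gross interest: ${gross_interest:,.0f}")
print(f"Tax owed:       ${tax:,.0f}")
print(f"Net interest:   ${net_interest:,.0f}  (effective rate {effective_rate:.2%})")
```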
{
"answer": null,
"provenance": [
{
"wikipedia_id": "1472206",
"title": "Economy of India",
"section": "Section::::Services.:Banking and financial services.\n",
"start_paragraph_id": 92,
"start_character": 0,
"end_paragraph_id": 92,
"end_character": 708,
"text": "India's gross domestic savings in 2006–07 as a percentage of GDP stood at a high 32.8%. More than half of personal savings are invested in physical assets such as land, houses, cattle, and gold. The government-owned public-sector banks hold over 75% of total assets of the banking industry, with the private and foreign banks holding 18.2% and 6.5% respectively. Since liberalisation, the government has approved significant banking reforms. While some of these relate to nationalised banks – such as reforms encouraging mergers, reducing government interference and increasing profitability and competitiveness – other reforms have opened the banking and insurance sectors to private and foreign companies.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17823849",
"title": "List of U.S. states by savings rate",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 293,
"text": "This article includes a list of U.S. states that have highest portion of savings (i.e. pensions, investment products, 401(k)); regular savings account, certificate of deposit, or Individual Retirement Account. The increase in people has also increased the Nest Egg index within a given year. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "69268",
"title": "Individual Savings Account",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 738,
"text": "An Individual Savings Account (ISA; ) is a class of retail investment arrangements available to residents of the United Kingdom. It qualifies for a favourable tax status. Payments into the account are made from after-tax income. The account is exempt from income tax and capital gains tax on the investment returns, and no tax is payable on money withdrawn from the scheme either. Cash and a broad range of investments can be held within the arrangement, and there is no restriction on when or how much money can be withdrawn. Funds cannot be used as security for a loan. Until the Lifetime ISA was introduced in 2017 it was not a specific retirement product, but any type can be a useful tool for retirement planning alongside pensions.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22533511",
"title": "2009 United Kingdom budget",
"section": "Section::::Details.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 307,
"text": "For savers, limits in Individual Savings Account (ISA) accounts were increased in two phases to a total of £10,200, including an additional £1,500 to the previous upper limit of £3,600 in a cash ISA. The first phase is for those over age 50 years, who can contribute additional amounts from 6 October 2009.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "464429",
"title": "Comparison of Canadian and American economies",
"section": "Section::::Government spending.:General revenue (Canada).\n",
"start_paragraph_id": 54,
"start_character": 0,
"end_paragraph_id": 54,
"end_character": 791,
"text": "In FY2017 the Canadian federal government spent $311 billion. Elderly benefits, which \"cost $48.1 billion, or 15 cents of every tax dollar\"—which include the Old Age Security (OAS) and Guaranteed Income Supplement (GIS)—represented the \"biggest single expense\". Unlike the Canada Pension Plan (CPP), the \"OAS and GIS are funded through general revenues—they not independently funded\". Other expenses included \"All other departments and agencies\" $51 billion, Other transfer payments 41.5 billion, Canada Health Transfer 36 billion, National Defence 25 billion, Public Debt Charges 24.15 billion, Children's Benefits 22 billion, Employment Insurance 20.7 billion, Fiscal Arrangements 17.1 billion, Canada Social Transfer 4.3 billion, Crown Corporations 8 billion, and Gas Tax Fund 2 billion.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "43344078",
"title": "Balance sheet recession",
"section": "Section::::Historical examples.:United States 2007–2009.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 621,
"text": "While U.S. savings rose significantly during the 2007–2009 recession, both residential and non-residential investment fell significantly, approximately $560 billion between Q1 2008 and Q4 2009. This moved the private sector financial balance (gross private savings minus gross private domestic investment) from an approximately $200 billion deficit in Q4 2007 to a surplus of $1.4 trillion by Q3 2009. This surplus remained elevated at $720 billion in Q1 2014. This illustrates the core issue in a balance sheet recession, that an enormous amount of savings was tied up in the banking system, rather than being invested.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18879321",
"title": "American Dream Demonstration",
"section": "Section::::Private partners.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 432,
"text": "Over 80% of accountholders approved of the withdrawal restrictions, and grew their IDA’s five times as large in savings than they would have in any other liquid bank account and savings they had before opening their IDA. Together, accountholders saved a total of $1,248,678—an average net savings of $19.07 a month over a span of two years. Average and total gross savings were much higher—$40 per month for a total of $2,530,538. \n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
5opvys
|
Limits of the Internet. What is the current limit of possible addresses?
|
[
{
"answer": "The most commonly used version of the Internet Protocol (IP), IPv4 uses 32-bit addresses (often represented as 4 numbers between 0 and 255 separated by dots, such as 192.168.1.1). Of these, a maximum of about 4 billion addresses are possible. But due to the way these addresses are assigned, the actual number is much lower. Large parts of the address space are intended for internal networks only and in the early days of the internet, companies were allocated far more addresses than they really needed, which may leave many IPv4 addresses unused.\n\nThere are clearly not enough IPv4 addresses to give a unique address to every internet-capable device. One very common workaround is to use Network Address Translation, which is a technique that allows a number of devices that are on the same network to use a single internet-facing IP address. When you connect to a website on your laptop, that website will probably register the same IP address as when you connect to it on your desktop, tablet or phone that are part of the same home-network.\n\nClever bookkeeping by the router ensures that all traffic coming from the internet into your network is delivered to the correct device, even though multiple devices share the same public IP address.\n\nA more permanent solution is a successor to IPv4, IPv6. In IPv6, the address space is 128 bits, which means there are about 3 * 10^38 possible addresses. This means that every device can be assigned thousands or millions of unique addresses and we'd still not get anywhere close to reaching the limit.\n\nIPv6 is being deployed across the globe, but its rollout is rather slow and we're still having to rely on IPv4 and the workarounds such as NAT.",
"provenance": null
},
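To put numbers on the address-space sizes and the private (RFC 1918) ranges mentioned above, here is a short illustrative snippet using Python's standard ipaddress module; nothing in it is specific to any real network.

```python
# Address-space arithmetic for IPv4 vs IPv6, plus the private ranges NAT hides behind.
import ipaddress

ipv4_total = 2 ** 32
ipv6_total = 2 ** 128
print(f"IPv4 address space: {ipv4_total:,}")                 # ~4.29 billion
print(f"IPv6 address space: {float(ipv6_total):.3e}")        # ~3.4e38

# The RFC 1918 private ranges; together roughly 18 million addresses.
nets = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
for n in nets:
    print(f"{str(n):>16}: {n.num_addresses:,} addresses, private={n.is_private}")
print(f"Total RFC 1918 space: {sum(n.num_addresses for n in nets):,}")
```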
{
"answer": null,
"provenance": [
{
"wikipedia_id": "260914",
"title": "Orders of magnitude (numbers)",
"section": "Section::::10.\n",
"start_paragraph_id": 242,
"start_character": 0,
"end_paragraph_id": 242,
"end_character": 402,
"text": "BULLET::::- \"Computing:\" 2 = 340,282,366,920,938,463,463,374,607,431,768,211,456 (≈3.40282367), the theoretical maximum number of Internet addresses that can be allocated under the IPv6 addressing system, one more than the largest value that can be represented by a single-precision IEEE floating-point value, the total number of different Universally Unique Identifiers (UUIDs) that can be generated.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "15317",
"title": "IPv4",
"section": "Section::::Addressing.:Special-use addresses.:Private networks.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 415,
"text": "Of the approximately four billion addresses defined in IPv4, about 18 million addresses in three ranges are reserved for use in private networks. Packets addresses in these ranges are not routable in the public Internet; they are ignored by all public routers. Therefore, private hosts cannot directly communicate with public networks, but require network address translation at a routing gateway for this purpose.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "38140",
"title": "Wide area network",
"section": "Section::::Private networks.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 415,
"text": "Of the approximately four billion addresses defined in IPv4, about 18 million addresses in three ranges are reserved for use in private networks. Packets addresses in these ranges are not routable in the public Internet; they are ignored by all public routers. Therefore, private hosts cannot directly communicate with public networks, but require network address translation at a routing gateway for this purpose.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24504672",
"title": "IPv6 address",
"section": "Section::::Address space.:Reserved anycast addresses.\n",
"start_paragraph_id": 96,
"start_character": 0,
"end_paragraph_id": 96,
"end_character": 694,
"text": "The 128 highest addresses within each subnet prefix are reserved to be used as anycast addresses. These addresses usually have the 57 first bits of the interface identifier set to 1, followed by the 7-bit anycast ID. Prefixes for the network, including subnets, are required to have a length of 64 bits, in which case the universal/local bit must be set to 0 to indicate the address is not globally unique. The address with value 0x7e in the 7 least-significant bits is defined as a mobile IPv6 home agents anycast address. The address with value 0x7f (all bits 1) is reserved and may not be used. No more assignments from this range are made, so values 0x00 through 0x7d are reserved as well.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5097334",
"title": "Solaris Containers",
"section": "Section::::Resources needed.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 615,
"text": "Currently a maximum of 8191 non-global zones can be created within a single operating system instance. \"Sparse Zones\", in which most filesystem content is shared with the global zone, can take as little as 50 MB of disk space. \"Whole Root Zones\", in which each zone has its own copy of its operating system files, may occupy anywhere from several hundred megabytes to several gigabytes, depending on installed software. The 8191 limits arises from the limit of 8,192 loopback connections per Solaris instance. Each zone needs a loopback connection. The global zone gets one, leaving 8,191 for the non-global zones.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6852935",
"title": "IPv4 address exhaustion",
"section": "Section::::Post-exhaustion mitigation.:Reclamation of unused IPv4 space.\n",
"start_paragraph_id": 57,
"start_character": 0,
"end_paragraph_id": 57,
"end_character": 568,
"text": "Some address space previously reserved by IANA has been added to the available pool. There have been proposals to use the class E network range of IPv4 addresses (which would add 268.4 million IP addresses to the available pool) but many computer and router operating systems and firmware do not allow the use of these addresses. For this reason, the proposals have sought not to designate the class E space for public assignment, but instead propose to permit its private use for networks that require more address space than is currently available through RFC 1918.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "172047",
"title": "Classful network",
"section": "Section::::Classful addressing definition.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 393,
"text": "The number of addresses usable for addressing specific hosts in each network is always , where N is the number of rest field bits, and the subtraction of 2 adjusts for the use of the all-bits-zero host portion for network address and the all-bits-one host portion as a broadcast address. Thus, for a Class C address with 8 bits available in the host field, the maximum number of hosts is 254.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2gsmmr
|
why does it often say when trying to download something "download should start soon, if it does not press this button" instead of triggering the function of the always working button.
|
[
{
"answer": "They're trying to load balance server requests so that no one server gets nailed with all of the download bandwidth. They give you the link so that if the fancy load balancer doesn't work you can still get the file from the main server. \n\nAlso ads.",
"provenance": null
},
{
"answer": "Usually the file is mirrored, located on several servers. When you click the initial \"download\"-button, the web-server will query all of its mirrors - asking who is doing the least amount of work at that moment. It will then pick the mirror that can offer you the highest download speed. When you click the \"if the download doesn't start\", it can pick the first mirror, a random one or a separate server.\n\n\nOther times, when you're presented with a countdown. \"Your download will start in 3,2,1 seconds\", they just want ad revenues.",
"provenance": null
},
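Below is a toy sketch of the "pick the least-busy mirror, with a manual fallback link" idea described in the answers above. The mirror hostnames, the load figures and the choose_mirror helper are all hypothetical; real download load balancers are considerably more sophisticated.

```python
# Toy mirror selection with a manual fallback (illustrative only).

FALLBACK_URL = "https://downloads.example.com/file.iso"   # the "press this button" link

def choose_mirror(mirror_loads: dict) -> str:
    """Return the URL of the mirror reporting the lowest current load,
    or the primary fallback if no mirror responded."""
    if not mirror_loads:
        return FALLBACK_URL
    best = min(mirror_loads, key=mirror_loads.get)
    return f"https://{best}/file.iso"

# Loads as reported by each mirror (e.g. fraction of capacity in use); lower is better.
reported = {"mirror-eu.example.com": 0.72,
            "mirror-us.example.com": 0.31,
            "mirror-ap.example.com": 0.55}

print(choose_mirror(reported))   # -> https://mirror-us.example.com/file.iso
print(choose_mirror({}))         # no mirrors answered -> manual fallback link
```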
{
"answer": null,
"provenance": [
{
"wikipedia_id": "2008939",
"title": "Drive-by download",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 818,
"text": "Drive-by downloads may happen when visiting a website, opening an e-mail attachment or clicking a link, or clicking on a deceptive pop-up window: by clicking on the window in the mistaken belief that, for example, an error report from the computer's operating system itself is being acknowledged or a seemingly innocuous advertisement pop-up is being dismissed. In such cases, the \"supplier\" may claim that the user \"consented\" to the download, although the user was in fact unaware of having started an unwanted or malicious software download. Similarly if a person is visiting a site with malicious content, the person may become victim to a drive-by download attack. That is, the malicious content may be able to exploit vulnerabilities in the browser or plugins to run malicious code without the user’s knowledge.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "466320",
"title": "Pop-up ad",
"section": "Section::::Pop-up blocking.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 396,
"text": "Web development and design technologies allow an author to associate any item on a pop-up with any action, including with a cancel or innocent looking button. Because of bad experiences and apprehensive of possible damage that they may cause, some users do not click on or interact with any item inside a pop-up window whatsoever, and may leave the site that generated them or block all pop-ups.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "27902003",
"title": "DasBoot",
"section": "Section::::Creating a DasBoot device.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 317,
"text": "Once the user has selected the device they'd like to make bootable, selected the bootable disk to copy the required libraries and information from, and chosen the programs to include on the DasBoot device, clicking a single button starts the process of building the required information and copying it to the device.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "947178",
"title": "Button (computing)",
"section": "Section::::Overview.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 446,
"text": "Depending on the circumstance, buttons may be designated to be pushed only once and execute a command, while others may be used to receive instant feed back and may require the user to click more than once to receive the desired result. Other buttons are designed to toggle behavior on and off like a check box. These buttons will show a graphical clue (such as staying depressed after the mouse is released) to indicate the state of the option.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14440976",
"title": "Things (software)",
"section": "Section::::Features.:Additional features.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 252,
"text": "BULLET::::- Quick Entry is an extension on the Mac that allows the user to create to-dos while working in other apps. Activated by a global keyboard shortcut, it invokes a small pop-up window which can automatically include links to files or websites.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16161443",
"title": "IOS",
"section": "Section::::Features.:Multitasking.:Task completion.\n",
"start_paragraph_id": 57,
"start_character": 0,
"end_paragraph_id": 57,
"end_character": 344,
"text": "Task completion allows apps to continue a certain task after the app has been suspended. As of iOS 4.0, apps can request up to ten minutes to complete a task in the background. This doesn't extend to background up- and downloads though (e.g. if you start a download in one application, it won't finish if you switch away from the application).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33367993",
"title": "Google Drive",
"section": "Section::::Features.:Quick Access.\n",
"start_paragraph_id": 77,
"start_character": 0,
"end_paragraph_id": 77,
"end_character": 298,
"text": "Introduced in the Android app in September 2016, Quick Access uses machine learning to \"intelligently predict the files you need before you've even typed anything\". The feature was announced to be expanded to iOS and the web in March 2017, though the website interface received the feature in May.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
236vfg
|
What has the sleep schedule of the US President looked like historically?
|
[
{
"answer": "During his presidency Coolidge supposedly would sleep around 11 hours a day. When writer Dorothy Parker was told in 1933 that Coolidge had died she replied, \"How can they tell?\" Source: The American Age: US Foreign Policy at Home and Abroad since 1750, Walter LaFeber",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "31232903",
"title": "President's Bedroom",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 459,
"text": "The President's Bedroom is a second floor bedroom in the White House. The bedroom makes up the White House master suite along with the adjacent sitting room and the smaller dressing room, all located in the southwest corner. Prior to the Ford Administration it was common for the President and First Lady to have separate bedrooms. Until then this room was used mostly as the First Lady's bedroom; however, it was the sleeping quarters for President Lincoln.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1038417",
"title": "List of Presidents of the United States by time in office",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 272,
"text": "This is a list of Presidents of the United States by time in office. The basis of the list is the difference between \"dates\"; if counted by number of \"calendar days\" all the figures would be one greater, with the exception of Grover Cleveland, who would receive two days.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "240687",
"title": "List of Saturday Night Live commercial parodies",
"section": "Section::::H.\n",
"start_paragraph_id": 223,
"start_character": 0,
"end_paragraph_id": 223,
"end_character": 385,
"text": "BULLET::::- HuckaPM — How does White House Press Secretary Sarah Huckabee Sanders (Aidy Bryant), sleep at night after a long day of making outlandish statements in defense of the Trump administration? With this sleep aid that combines Melatonin, extra strength quaaludes, and the \"One and Dones\" prescribed to Michael Jackson by his doctor. One tablet and Sanders is instantly asleep.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19425459",
"title": "Gallet & Co.",
"section": "Section::::Time line.\n",
"start_paragraph_id": 67,
"start_character": 0,
"end_paragraph_id": 67,
"end_character": 380,
"text": "BULLET::::- 2008: Gallet & Co co-sponsors \"Time in Office\" at the National Watch and Clock Museum, an exhibition of timepieces worn by America's presidents extending back to the pocket watches of George Washington. One of the featured items in the exhibit is the Gallet Flight Officer chronograph worn by Harry S Truman during his years in office as the 33rd president of the US.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41931193",
"title": "Public Papers of the Presidents",
"section": "Section::::History.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 1096,
"text": "The \"Public Papers of the Presidents\" series was begun in 1957 in response to a recommendation of the National Historical Publications Commission. An extensive compilation of messages and papers of the Presidents covering the period 1789 to 1897 was assembled by James D. Richardson and published under congressional authority between 1896 and 1899. Since then, various private compilations have been issued, but there was no uniform publication comparable to the \"Congressional Record or the United States Supreme Court Reports\". Many Presidential papers could be found only in the form of mimeographed White House releases or as reported in the press. The Commission therefore recommended the establishment of an official series in which Presidential writings, addresses, and remarks of a public nature could be made available. The Commission’s recommendation was incorporated in regulations of the Administrative Committee of the Federal Register, issued under section 6 of the Federal Register Act (44 U.S.C. 1506), which may be found in title 1, part 10, of the Code of Federal Regulations.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "36313934",
"title": "Jessa Gamble",
"section": "",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 217,
"text": "At TED Global 2011 in Oxford, England, Gamble spoke about the natural sleep cycle of humans, which includes a two-hour waking period in the middle of the night. , the talk had more than two and a half million views. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2026548",
"title": "List of Vice Presidents of the United States by time in office",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 209,
"text": "This is a list of Vice Presidents of the United States by time in office. The basis of the list is the difference between \"dates\"; if counted by number of \"calendar days\" all the figures would be one greater.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1ieb04
|
What causes pianos to go out of tune, and why do they go out of tune faster if unplayed or in high humidity?
|
[
{
"answer": "I can answer some of these. A piano goes out of tune when the various components change shape and deform over time. For example, the strings are under quite a lot of tension and will eventually begin to stretch or slip ever so slightly. It doesn't take much of this at all for a good musician to hear the changes. Humidity largely effects the wood of the piano, especially the sound board. Extra humidity can cause the wood to swell which slightly changes the shape of the piano and causes tuning changes. I can't find, nor can I think of why not playing a piano would account for more tuning changes. If I find something I will update you. ",
"provenance": null
},
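To quantify "it doesn't take much": for an idealized string, pitch scales with the square root of tension, so even a small tension change produces an audible shift. Here is a small worked sketch; the 1-5% tension drops are illustrative assumptions, not measured piano data.

```python
# Pitch shift (in cents) produced by a small change in string tension,
# using the ideal-string relation f proportional to sqrt(T).

import math

def cents_shift(tension_ratio: float) -> float:
    """Pitch change in cents when string tension is multiplied by tension_ratio."""
    freq_ratio = math.sqrt(tension_ratio)        # f ~ sqrt(T) for an ideal string
    return 1200 * math.log2(freq_ratio)

for change in (0.99, 0.98, 0.95):                # 1%, 2%, 5% drops in tension
    print(f"tension x{change:.2f} -> {cents_shift(change):+.1f} cents")
# About -8.7 cents for a 1% drop; trained ears notice shifts of just a few cents.
```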
{
"answer": null,
"provenance": [
{
"wikipedia_id": "1473512",
"title": "Piano maintenance",
"section": "Section::::Care by technician.:Tuning.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 413,
"text": "Pianos go out of tune primarily because of changes in humidity. Tuning can be made more secure by installing special equipment to regulate humidity, inside or underneath the piano. There is no evidence that being out-of-tune permanently harms the piano itself. However, a long-term low-humidity/high humidity environment will eventually cause the soundboard to crack and the keys and other wooden parts to warp. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "981422",
"title": "Piano tuning",
"section": "Section::::Background.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 806,
"text": "Many factors cause pianos to go out of tune, particularly atmospheric changes. For instance, changes in humidity will affect the pitch of a piano; high humidity causes the sound board to swell, stretching the strings and causing the pitch to go sharp, while low humidity has the opposite effect. Changes in temperature can also affect the overall pitch of a piano. In newer pianos the strings gradually stretch and wooden parts compress, causing the piano to go flat, while in older pianos the tuning pins (that hold the strings in tune) can become loose and don't hold the piano in tune as well. Frequent and hard playing can also cause a piano to go out of tune. For these reasons, many piano manufacturers recommend that new pianos be tuned four times during the first year and twice a year thereafter.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1473512",
"title": "Piano maintenance",
"section": "Section::::Care by owner.:Humidity.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 569,
"text": "Much of a piano is made of wood, and is therefore extremely sensitive to fluctuations in humidity. The piano's wooden soundboard is designed to have an arch, or \"crown\". The crown increases or decreases with changes of humidity, changing the tension on the strings and throwing the instrument out of tune. Larger fluctuations in humidity can affect regulation, and even cause parts to crack. If humidity changes are extreme, the soundboard can warp so much to the point that it can collapse and lose its crown, which may require rebuilding or replacing the instrument.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1473512",
"title": "Piano maintenance",
"section": "Section::::Care by owner.:Moving a piano.\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 384,
"text": "Contrary to popular legend, proper piano moving does not affect tuning. Tuning is affected by changes in humidity. If a piano is properly covered during the move, it will not experience the environmental changes such as going from indoors to outdoors and back indoors again. The piano could go out of tune if exposed to a climate change such as going from a dry home to a humid home.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1473512",
"title": "Piano maintenance",
"section": "Section::::Care by technician.:Tuning.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 1259,
"text": "Pianos that are prized by their owners are tuned regularly, usually once every six months for domestic pianos, and always just before a performance in concert halls. The longer a piano remains out of tune, the more time and effort it will take for a technician to restore it to proper pitch. When a piano is only slightly out of tune, it loses the glowing tonal quality characteristic of a freshly tuned piano, especially because each note in the middle and upper range is sounded by more than one string, and these may get slightly out of tune with each other. Pianos that are more than slightly out of tune tend to be unpleasant to play and listen to, to an extent that varies with the ear of the listener. A tuning hammer and tuning mutes are the main tools that piano technicians use. Some tuners use pure aural techniques while some tuners use electronic tuning devices. Formally trained and experienced tuners find that the use of electronic tuning devices is unnecessary, important elements associated with trained aural tuners are often left out by those relying on electronic tuning devices. Consistent errors have been known as a result. These devices often attract the untrained operator in an endeavour to circumvent the need for formal training.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1473512",
"title": "Piano maintenance",
"section": "Section::::Care by technician.:Regulation.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 517,
"text": "Over time, the performance of a piano action tends to decline, due to the compression of felt, warping of wood, and other types of wear. A skilled technician can restore it to optimal precision, in a process called regulation, which involves adjustments ranging from turning a small screw to sanding down a wood surface. Many new pianos are not perfectly regulated when released from the factory, or quickly lose their regulation when moved to their new home, and benefit from regulation in the store or in the home.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "292196",
"title": "Inharmonicity",
"section": "Section::::Pianos.:Inharmonicity leads to stretched tuning.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 678,
"text": "When pianos are tuned by piano tuners, the technician sometimes listens for the sound of \"beating\" when two notes are played together, and tunes to the point that minimizes roughness between tones. Piano tuners must deal with the inharmonicity of piano strings, which is present in different amounts in all of the ranges of the instrument, but especially in the bass and high treble registers. The result is that octaves are tuned slightly wider than the harmonic 2:1 ratio. The exact amount octaves are stretched in a piano tuning varies from piano to piano and even from register to register within a single piano—depending on the exact inharmonicity of the strings involved.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2if79t
|
Has there ever been an attempt to create a SI unit of time?
|
[
{
"answer": "The second is based on something physical - it's defined as \"the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom\".\n\nIt may seem somewhat arbitrary, because it isn't based on anything fundamental to the universe, but it's no more so than, for example, the kilogram - which is the mass of a cylinder of platinum and iridium held in a vault in Paris. The meter is actually defined in terms of the second, and doesn't have it's own reference point.",
"provenance": null
},
{
"answer": "It is an SI unit, as you say, and it is clearly defined.\n\nThere was a push to decimalize time (use powers of ten) during the French Revolution- when the metric system was introduced. The idea was to divide each day into 10 hours, each hour into 100 minutes, and each minute into 1000 seconds. And I believe the second was broken up as well- if my math is right, it would have been 11.5 times longer than a \"normal\" second. There were also proposals for 10 day weeks and a 10 month year.\n\nIn any case, this never caught on. I imagine the exact reasoning is very complicated. The redefinition of a second (and other units of time) certainly would have taken a transition, though this didn't seem to be a problem for *other* forms of measurement. I suppose that, of all forms of measurement, time is perhaps the most commonly used. \n\nThe other problem with time is that it is already measured for us, in a way. For length of temperature, we can define it in way we like. But we get no say on the length of a day or a year. Considering a year is made up of an awkward number of days, there's no system that can capture time in a fully decimal way. You can use the length of a solar day as the standard and derive seconds from there. But no matter what, you're going to get a disconnect somewhere between days and years. ",
"provenance": null
},
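For reference, the arithmetic behind the decimal-time scheme discussed above works out as follows (a simple worked example, using the 10 x 100 x 100 division of the day):

```python
# Comparing French Revolutionary decimal time units to ordinary SI seconds.

SI_SECONDS_PER_DAY = 24 * 60 * 60            # 86,400
DECIMAL_SECONDS_PER_DAY = 10 * 100 * 100     # 100,000

decimal_second = SI_SECONDS_PER_DAY / DECIMAL_SECONDS_PER_DAY   # 0.864 SI seconds
decimal_minute = SI_SECONDS_PER_DAY / (10 * 100)                # 86.4 SI seconds
decimal_hour = SI_SECONDS_PER_DAY / 10                          # 8,640 SI seconds

print(f"1 decimal second = {decimal_second:.3f} SI seconds")
print(f"1 decimal minute = {decimal_minute:.1f} SI seconds")
print(f"1 decimal hour   = {decimal_hour / 3600:.1f} ordinary hours")
```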
{
"answer": null,
"provenance": [
{
"wikipedia_id": "19595664",
"title": "Time in physics",
"section": "Section::::The unit of measurement of time: the second.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 496,
"text": "In the International System of Units (SI), the unit of time is the second (symbol: formula_1). It is a SI base unit, and it has been defined since 1967 as \"the duration of periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom\". This definition is based on the operation of a caesium atomic clock. These clocks became practical for use as primary reference standards after about 1955 and have been in use ever since.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26873",
"title": "Second",
"section": "Section::::History of definition.:Fraction of solar day.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 590,
"text": "In 1832, Gauss proposed using the second as the base unit of time in his millimeter-milligram-second system of units. The British Association for the Advancement of Science (BAAS) in 1862 stated that \"All men of science are agreed to use the second of mean solar time as the unit of time.\" BAAS formally proposed the CGS system in 1874, although this system was gradually replaced over the next 70 years by MKS units. Both the CGS and MKS systems used the same second as their base unit of time. MKS was adopted internationally during the 1940s, defining the second as of a mean solar day.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "146006",
"title": "International Meridian Conference",
"section": "Section::::Background.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 932,
"text": "The first proposal for a consistent treatment of time worldwide was a memoir entitled \"\"Terrestrial Time\"\" by Sandford Fleming, at the time the chief engineer of the Canadian Pacific Railway, presented to the Canadian Institute in 1876. This envisaged clocks showing 24-hour universal time with an extra dial having a local time rounded to the nearest hour. He also pointed out that many of the corrections for local mean time were greater than those involved in abandoning solar time. In 1878/9, he produced modified proposals using the Greenwich meridian. Fleming's two papers were considered so important that in June 1879 the British Government forwarded copies to eighteen foreign countries and to various scientific bodies in England. At the same time the \"American Metrological Society\" produced a \"Report on Standard Time\" by Cleveland Abbe, chief of the United States Weather Service proposing essentially the same scheme.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20069953",
"title": "Special relativity (alternative formulations)",
"section": "Section::::Taiji relativity.\n",
"start_paragraph_id": 39,
"start_character": 0,
"end_paragraph_id": 39,
"end_character": 621,
"text": "This section is based on the work of Jong-Ping Hsu and Leonardo Hsu. They decided to use the word \"Taiji\" which is a Chinese word meaning the ultimate principles that existed before the creation of the world. In SI units, time is measured in seconds, but taiji time is measured in units of metres — the same units used to measure space. Their arguments about choosing what units to measure time in, lead them to say that they can develop a theory of relativity which is experimentally indistinguishable from special relativity, but without using the second postulate in their derivation. Their claims have been disputed.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3627968",
"title": "Unit of time",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 396,
"text": "A unit of time or midst unit is any particular time interval, used as a standard way of measuring or expressing duration. The base unit of time in the International System of Units (SI), and by extension most of the Western world, is the second, defined as about 9 billion oscillations of the caesium atom. The exact modern definition, from the National Institute of Standards and Technology is:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1165244",
"title": "Chronon",
"section": "Section::::Early work.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 297,
"text": "While time is a continuous quantity in both standard quantum mechanics and general relativity, many physicists have suggested that a discrete model of time might work, especially when considering the combination of quantum mechanics with general relativity to produce a theory of quantum gravity.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8376",
"title": "Day",
"section": "Section::::International System of Units (SI).:Decimal and metric time.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 597,
"text": "In the 19th century, an idea circulated to make a decimal fraction ( or ) of an astronomical day the base unit of time. This was an afterglow of the short-lived movement toward a decimalisation of timekeeping and the calendar, which had been given up already due to its difficulty in transitioning from traditional, more familiar units. The most successful alternative is the \"centiday\", equal to 14.4 minutes (864 seconds), being not only a shorter multiple of an hour (0.24 vs 2.4) but also closer to the SI multiple \"kilosecond\" (1 000 seconds) and equal to the traditional Chinese unit, \"kè\".\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
95ylhp
|
how did manual telephone switchboards work?
|
[
{
"answer": "Your phone (and all phones) were wired to the switching center, where a person would have to manually complete the circuit between the two phones and then manually disconnect when you were done.\n\nLong distance calls required switching at multiple locations, hence the added cost back in the day.",
"provenance": null
},
{
"answer": "In your local switchboard, everyone in the locality who had a house phone had a connection on the switchboard. If you lifted the receiver at your end, at home, a light would illuminate on your connection at the switchboard. \n\nOn the desk in front of the operator, there were two rows of jack connectors sticking out of the desk, plug up [like this](_URL_0_). Each pair one above the other were connected together. That's all they were, like giant aux cables. \n\nAs well as that, for each of those pairs, there was a switch which could be in one of three positions. The middle position just treated the cable like an aux cable, as above, one position let the operator speak to that connection, and the third position made that connection ring. \n\nThe switchboard operator would then plug one of the rear plugs into your lit up socket on the board in front of them to connect them to you, and flip the switch to let them talk to you. They'd ask you where you'd like to be connected to. \n\nIf it was a local number, you'd give them the local number (which would probably be something like \"185\", and then the operator would ask you to wait while they connected you. The operator would then take the other plug sticking out the desk, plug it into the number you'd requested, and then flip the switch to the Ring position. At your end, you'd hear the ring signal, at their end the phone would ring. The operator would wait for the light to illuminate on the connection you're trying to ring, which would tell them they'd picked the phone up, and then the operator would flip the switch back to normal mode, (or if they felt like eavesdropping they could leave the switch in talk and sit there silently listening to the phone call) and you could then talk.\n\nThe operator would know when you'd hung up, because the lights on both connections would go out, at which point she would pull both plugs out and they would reel back into the desk ready for use again. \n\nIf you were calling long distance, there would be some other connections that could be used, called 'trunk lines' which would connect to another switchboard perhaps in another city. Then the operator would connect to one of those, and talk to that operator, to establish an eventual connection between you and whoever you wanted to talk to long distance via these trunk connections. Because this would take time, they would probably tell you to hang up while they established the connection and then ring you back when the call was ready. \n\nAs you can probably begin to imagine, if you were in a busy city, it was quite a fraught job. \n\nAlso, if you ever wondered, the classic old movie trope of rattling the hook switch and shouting \"Operator operator!\" down the phone was actually a thing. Of course, tapping your hook switch would cause your light at the exchange to flash, perhaps increasing the likelihood of the operator connecting to you faster. \n",
"provenance": null
},
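As a rough illustration of the sequence described in the answer above, here is a toy sketch that narrates the cord-circuit procedure in code. The Key enum, the connect_call helper and the subscriber numbers are all hypothetical; this only illustrates the order of operations, not the real electrical hardware.

```python
# Toy walkthrough of a manual switchboard call: each cord pair has a
# three-position key (RING / NEUTRAL / TALK), and each subscriber jack
# has a lamp that lights while that subscriber's receiver is off-hook.

from enum import Enum

class Key(Enum):
    RING = "ring"        # sends ringing current to whichever jack the plug is in
    NEUTRAL = "neutral"  # cord acts as a plain wire between the two plugs
    TALK = "talk"        # operator's headset is bridged onto the cord

def connect_call(caller: str, callee: str, log=print) -> None:
    log(f"{caller} lifts the receiver; their lamp lights at the board")
    log(f"operator plugs the rear cord into {caller}'s jack, key -> {Key.TALK.value}")
    log("operator: 'Number, please?'")
    log(f"operator plugs the front cord into {callee}'s jack, key -> {Key.RING.value}")
    log(f"{callee}'s phone rings; their lamp lights when they pick up")
    log(f"key -> {Key.NEUTRAL.value}; the two subscribers talk over the cord")
    log("both lamps go dark when the receivers are hung up; cords are pulled down")

connect_call("subscriber 185", "subscriber 212")
```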
{
"answer": null,
"provenance": [
{
"wikipedia_id": "1610496",
"title": "Economic history of the United States",
"section": "Section::::Late 19th century.:Commerce, industry and agriculture.:Communications.\n",
"start_paragraph_id": 269,
"start_character": 0,
"end_paragraph_id": 269,
"end_character": 213,
"text": "Automatic telephone switching, which eliminated the need for telephone operators to manually connect local calls on a switchboard, was introduced in 1892; however it did not become widespread for several decades.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "28427",
"title": "Telephone switchboard",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 434,
"text": "A telephone switchboard is a telecommunications system used in the public switched telephone network or in enterprises to interconnect circuits of telephones to establish telephone calls between the subscribers or users, or between other exchanges. The switchboard was an essential component of a manual telephone exchange, and was operated by switchboard operators who used electrical cords or switches to establish the connections.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "28427",
"title": "Telephone switchboard",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 313,
"text": "The electromechanical automatic telephone exchange, invented by Almon Strowger in 1888, gradually replaced manual switchboards in central telephone exchanges around the world. In 1919, the Bell System in Canada also adopted automatic switching as its future technology, after years of reliance on manual systems.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "173354",
"title": "Automation",
"section": "Section::::History.:Significant applications.\n",
"start_paragraph_id": 53,
"start_character": 0,
"end_paragraph_id": 53,
"end_character": 459,
"text": "The automatic telephone switchboard was introduced in 1892 along with dial telephones. By 1929, 31.9% of the Bell system was automatic. Automatic telephone switching originally used vacuum tube amplifiers and electro-mechanical switches, which consumed a large amount of electricity. Call volume eventually grew so fast that it was feared the telephone system would consume all electricity production, prompting Bell Labs to begin research on the transistor.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "629590",
"title": "Switchboard operator",
"section": "Section::::Description.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 980,
"text": "A typical telephone switchboard has a vertical panel containing an array of jacks with a desk in front. The desk has a row of switches and two rows of plugs attached to cables that retract into the desk when not in use. Each pair of plugs was part of a cord circuit with a switch associated that let the operator participate in the call or ring the circuit for an incoming call. Each jack had a light above it that lit when the customer's telephone receiver was lifted (the earliest systems required the customer to hand-crank a magneto to alert the central office and, later, to \"ring off\" the completed call). Lines from the central office were usually arranged along the bottom row. Before the advent of operator distance dialing and customer direct dial (DDD) calling, switchboard operators would work with their counterparts in the distant central office to complete long distance calls. Switchboard operators are typically required to have very strong communication skills.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "50350318",
"title": "Panel switch",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 382,
"text": "The Panel Machine Switching System is an early type of automatic telephone exchange for urban service, introduced in the Bell System in the 1920s. It was developed by Western Electric Laboratories, the forerunner of Bell Labs, in the U.S., in parallel with the Rotary system at International Western Electric in Belgium before World War I. Both systems had many features in common.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41101",
"title": "Electronic switching system",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 448,
"text": "The generations of telephone switches before the advent of electronic switching in the 1950s used purely electro-mechanical relay systems and analog voice paths. These early machines typically utilized the step-by-step technique. The first generation of electronic switching systems in the 1960s were not entirely digital in nature, but used reed relay-operated metallic paths or crossbar switches operated by stored program control (SPC) systems.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1yuhj0
|
The movie Zeitgeist
|
[
{
"answer": "You may be interested in [this section of the FAQ](_URL_1_).\n\nWith Zeitgeist specifically, the answer generally is that no, there's no truth to it. [This](_URL_0_) lists some of them. A few illustrative examples:\n\n1. Horus wasn't born of a virgin, as the movie states, but by Isis impregnating herself with Osiris' penis\n* Horus didn't die, and wasn't resurrected\n* Horus didn't have 12 disciples\n* The film connects Jesus being the 'son' with 'sun' gods, but those two words don't even sound similar in the relevant languages",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "34270831",
"title": "Zeitgeist (film series)",
"section": "Section::::\"Zeitgeist: The Movie\".\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 497,
"text": "\"Zeitgeist: The Movie\" is a 2007 film by Peter Joseph presenting a number of conspiracy theories. The film assembles archival footage, animations and narration. Released online on June 18, 2007, it soon received tens of millions of views on Google Video, YouTube, and Vimeo. According to Peter Joseph, the original \"Zeitgeist\" was not presented in a film format, but was a \"performance piece consisting of a vaudevillian, multimedia style event using recorded music, live instruments, and video\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34270831",
"title": "Zeitgeist (film series)",
"section": "Section::::\"Zeitgeist: The Movie\".:Reception.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 292,
"text": "The newspaper \"The Arizona Republic\" described \"Zeitgeist: The Movie\" as \"a bramble of conspiracy theories involving Sept. 11, the international monetary system, and Christianity\" saying also that the movie trailer states that \"there are people guiding your life and you don't even know it\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34270831",
"title": "Zeitgeist (film series)",
"section": "Section::::\"Zeitgeist: Addendum\".\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 235,
"text": "\"Zeitgeist: Addendum\" is a 2008 film produced and directed by Peter Joseph, and is a sequel to the 2007 film, \"Zeitgeist: The Movie\". It premiered at the 5th Annual Artivist Film Festival in Los Angeles, California on October 2, 2008.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1035528",
"title": "Astrological age",
"section": "Section::::Popular culture.\n",
"start_paragraph_id": 223,
"start_character": 0,
"end_paragraph_id": 223,
"end_character": 436,
"text": "BULLET::::- The first section of the film \"Zeitgeist\" presents a theory of astrological ages that proposes that many events in world religions, such as Moses' condemnation of the Golden Calf and Jesus' ministry, are merely allegories used to describe astrological events. The narrator of the film implies that Biblical characters, such as Jesus, never existed as real human beings, but are rather metaphors for constellations and ages.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34270831",
"title": "Zeitgeist (film series)",
"section": "Section::::\"Zeitgeist: The Movie\".:Reception.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 238,
"text": "Alex Jones, American radio host, conspiracy theorist and executive producer of \"Loose Change\", stated that film segments of \"Zeitgeist\" are taken directly from his documentary \"Terrorstorm\", and that he supports \"90 percent\" of the film.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34270831",
"title": "Zeitgeist (film series)",
"section": "Section::::\"Zeitgeist: Moving Forward\".\n",
"start_paragraph_id": 40,
"start_character": 0,
"end_paragraph_id": 40,
"end_character": 404,
"text": "\"Zeitgeist: Moving Forward\" is the third installment in Peter Joseph's \"Zeitgeist\" film series. The film premiered at the JACC Theater in Los Angeles on January 15, 2011 at the Artivist Film Festival, was released in theaters and online. As of November 2014, the film had over 23 million views on YouTube. The film is arranged in four parts, each containing interviews, narration and animated sequences.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34270831",
"title": "Zeitgeist (film series)",
"section": "Section::::\"Zeitgeist: Moving Forward\".:Reception.\n",
"start_paragraph_id": 69,
"start_character": 0,
"end_paragraph_id": 69,
"end_character": 292,
"text": "In an article, in \"Tablet\" magazine, Michelle Goldberg described the film as \"silly enough that at times [she] suspected it was [a] sly satire about new-age techno-utopianism instead of an example of it\". She describes the 3 Zeitgeist movies as \"a series of 3 apocalyptic cult documentaries.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2jupix
|
Did the original Boy Scouts (British) have a tendency to wander into the army as they got older? What kind of effect did the early Scouts have on patriotism in Britain?
|
[
{
"answer": "The early Boy Scout movement emerged hand in hand with patriotism in Britain at the beginning of the 20th century. The Boy Scouts creator, Robert Baden-Powell, noted the general despair amongst soldiers during the Boer war in South Africa because of the significant number of casualties in the army. Upon his return to London, Baden-Powell was convinced that reforms in the education of youth were needed. He envisioned a stronger British empire, and one that could rely on physically and mentally strong men. He later wrote in his memoirs that: \n\n*“we had to remedy some of the shortcomings in [soldiers’] character and to fill in the omissions left in their education by developing in them the various attributes needed for making them reliable men. We had to inculcate a good many qualities not enunciated in text-books, such as individual pluck, intelligence, initiative, and spirit of adventure.”*\n\nBy 1917, there were 194,331 Boy Scouts in Great Britain and this number increased to 443,455 over the next twenty years.\n\n**Physical strength** became an early and important feature of the new Boy Scout association. The invasion scare of 1906 and the growing perceived military threat from Germany resulted in an evident shift from social issues to problems and concerns for Britain’s patriotism and citizenship. Physical fitness and good health became national goals. One guidebook maintained that *“to the boy Scout the importance of physical training is very great, for besides being very necessary for his well being, it is also the foundation of the object of that grand Brotherhood to which he belongs.\"*\n\nA strong rhetoric of **imperialism and British strength** over other peoples‘ can be found in Boy Scout guides. In Scouting for Boys, Baden-Powell proclaims that “power at sea has enable us of late years to put a stop to the awful slave trade which used to go on the coast of Africa; it has enabled us to discover new lands for our Empire, and to bring civilization to savages in farthest corners of the world.” History was also rewritten to strengthen this idea. While talking about the Crusades, Baden-Powell asserts that “*scouts cannot do better than follow the example of your forefathers, the Knights, who made the tiny British nation into one of the best and greatest that the world has ever seen.*\" Most historiography on the crusades argues the contrary: economic reasons (knights acquiring land and riches) amongst others fostered the crusades.\n\nScouting evolved into a youth movement that offered a romantic program of outdoor adventures and activities to **remedy the division between classes** and the often disrupted and poor lifestyles caused by industrialization and urbanization. Many British sociologists in the early twentieth century, such as Brian Wilson and William Morrison, saw the increase in violence at the time as the result of the erosion of traditional authority and community control and by the development of adverse living conditions. The Boy Scout Movement acted as a movement in which these divisions could be severed, much like the armed forces in WWI and WWII. Point 4 of the Scout Law states that “A Scout is a Friend to All, and A Brother to Every Other Scout, no Matter to what Social Class the Other Belongs”. By breaking down social barriers, Boy Scouts facilitated the growth of ideas of fairness and parity amongst the British youth. Early on, the movement also established any kind of spiritual commitment (Catholicism, Protestantism, Judaism, etc.) 
as one of its cornerstones. A reinforced sense of a uniform White racial identity was therefore maintained - one that crossed class and religious lines.\n\nWhen I researched this topic, I wasn't able to find numbers of Boy Scouts who became soldiers - I don't know if any survey by the British Armed Forces was done on that subject. *One can only assume that there is a strong correlation*. In any event, the primary and secondary sources I looked at strongly convey the following: the early Boy Scout movement acted as a strong engine to reinforce the ideas of British strength, unity and patriotism.\n\nSources (Primary):\n\nAdams, Morley. *What a Scout Should Know* (London: Henry Frowde, 1915)\n\nLord Baden-Powell of Gilwell. *Lessons From the Varsity of Life* (London: C. Arthur Pearson, 1933)\n\n_____. *Scouting for Boys* (London: C. Arthur Pearson, 1937)\n\n_____. *The Wolf Cub’s Handbook.* Eighth Edition. (London: C. Arthur Pearson, 1931)\n\nSources (Secondary):\n\nJacobson, Sven. *British and American Scouting and Guiding Terminology* (Stockholm: Stockholm University Press, 1985)\n\nJeal, Tim. *Baden-Powell* (London: Pimlico, 1991)\n\nMacDonald, Robert H. *Reproducing the Middle-class Boy: From Purity to Patriotism in the Boys’ Magazines, 1892-1914*. Journal of Contemporary History. Volume 24, No. 3 (July 1989), pp. 519-539\n\nParsons, Timothy H. *Race, Resistance, and the Boy Scout Movement in British Colonial Africa* (Athens: Ohio University Press, 2004)\n\nProctor, Tammy M. *(Uni)Forming Youth: Girl Guides and Boy Scouts in Britain, 1908-1939*. History Workshop Journal. No. 45 (Spring 1998), pp. 103-134\n\nPryke, Sam. *The Popularity of Nationalism in the Early British Boy Scout Movement*. Social History. Volume 23, No. 3 (October 1998), pp. 309-324\n\nReynolds, E. E. *The Scout Movement* (London: Oxford University Press, 1950)\n\nRothschild, Mary Aickin. *To Scout or to Guide? The Girl Scout-Boy Scout Controversy, 1912-1941*. Frontiers: A Journal of Women Studies. Volume 6, No. 3 (Autumn 1981), pp. 115-121\n\nWarren, Allen. *Sir Robert Baden-Powell, the Scout Movement and Citizen Training in Britain, 1900-1920.* The English Historical Review. Volume 101, No. 399 (April 1986), pp. 376-398\n\nWilkinson, Paul. *English Youth Movements, 1908-1930.* Journal of Contemporary History. Volume 4, No. 2 (April 1969), pp. 3-23\n\nZweiniger-Bargielowska, Ina. *Building a British Superman: Physical Culture in Interwar Britain.* Journal of Contemporary History. Volume 41, No. 4 (October 2006), pp. 595-610",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "42720",
"title": "Second Boer War",
"section": "Section::::Third phase: Guerrilla war (September 1900 – May 1902).:British response.\n",
"start_paragraph_id": 102,
"start_character": 0,
"end_paragraph_id": 102,
"end_character": 300,
"text": "The British Army also made use of Boer auxiliaries who had been persuaded to change sides and enlist as \"National Scouts\". Serving under the command of General Andries Cronjé, the National Scouts were despised as \"joiners\" but came to number a fifth of the fighting Afrikaners by the end of the War.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6125637",
"title": "History of the Boy Scouts of America",
"section": "Section::::Exploring and Venturing.\n",
"start_paragraph_id": 91,
"start_character": 0,
"end_paragraph_id": 91,
"end_character": 811,
"text": "Shortly after Boy Scouting was founded in the United States, its creators encountered a problem with older boys. Some grew bored with the program, usually around 14–15, while others didn't want to leave their troops after reaching the age of 18. To alleviate this problem, a number of new programs were created for older boys over time, including the Sea Scouts (1912), Senior Scouts and Explorer Scouts (1935), Rover Scouts (c. 1938), and Air Scouts (1942). Around 1935, most of these were brought together under the overall Senior Scout Division. In 1949, these programs were reworked into Exploring, which included Sea Explorers and Air Explorers. In 1958, these were further re-worked and condensed into a unified Exploring program with Air Explorers and Sea Explorers as relatively independent sub-groups.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "433377",
"title": "The Scout Association",
"section": "Section::::History.:1910 to 1920: growth.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 578,
"text": "Scouting spread throughout the British Empire and wider world. On 4 January 1912, The Boy Scouts Association was incorporated throughout the British Empire by Royal charter for \"the purpose of instructing boys of all classes in the principles of discipline loyalty and good citizenship\". During the First World War, more than 50,000 Scouts participated in some form of war work on the home front. Scout buglers sounded the \"all clear\" after air raids, others helped in hospitals and made up aid parcels; Sea Scouts assisted the Coastguard in watching the vulnerable East coast.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8554806",
"title": "Scouting controversy and conflict",
"section": "Section::::Militarism in early Scouting movement.\n",
"start_paragraph_id": 62,
"start_character": 0,
"end_paragraph_id": 62,
"end_character": 334,
"text": "Before the start of Scouting there was criticism about a possible military goal of Scouting. It culminated in a schism where Sir Francis Vane and Battersea Scout District formed the British Boy Scouts in 1909, partly due to a suspicion of a too close involvement with military organizations. Baden-Powell always strongly denied this.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11178933",
"title": "Square knot insignia",
"section": "Section::::History.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 341,
"text": "In the earliest days of the Scouting Movement military veterans were urged into service as Scoutmasters. The first Scout uniforms therefore resembled military uniforms. It was common for these veterans to wear their military decorations on their modified Boy Scout uniform — a national uniform was not to be developed until the early 1920s.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2145324",
"title": "Uniform and insignia of the Boy Scouts of America",
"section": "Section::::History.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 1259,
"text": "Early Boy Scout uniforms were copies of the U.S. Army uniforms of the time. Scouts generally wore knickers with leggings, a button-down choke-collar coat and the campaign hat. Adults wore a Norfolk jacket with knickers or trousers. In 1916, Congress banned civilians from wearing uniforms that were similar in appearance to those of the U.S. armed forces with the exception of the BSA. The uniform was redesigned in 1923—the coat and leggings were dropped and the neckerchief standardized. In the 1930s, shorts replaced knickers and their wear was encouraged by the BSA. The garrison (flat) cap was introduced in 1943. In 1965, the uniform's material was changed from wool and cotton to permanent press cloth, although the older material uniforms continued to be sold and used through the late 1960s. The Improved Scouting Program in 1972 included a major overhaul of badges and other insignia, replacing many two color patches with multicolor versions. Also introduced was a red beret and a dark green shirt for \"Leadership Corps\" members (ages 14–15) in a Scout troop. This was done to relate those older Boy Scouts to Explorers, which wore the same uniform shirt, but by the early 1980s, the red beret and the Leadership Corps concept had been discarded. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2812343",
"title": "Rover Scout",
"section": "Section::::Rovers in the United States.:Early days.\n",
"start_paragraph_id": 68,
"start_character": 0,
"end_paragraph_id": 68,
"end_character": 921,
"text": "In the United States, glimmerings of Rovering emerged as local councils, Scout leaders, and Scouts worked together to deal with the \"older boy\" problem—that is, to find some way for Scouting to continue into young adulthood. As early as 1928 there were known to be crews in Seattle, Detroit, Toledo and elsewhere. The program particularly flourished in New England around 1929, through the efforts of Robert Hale, who produced an early Rover Scout booklet. By 1932, there were 36 official experimental crews, with 27 of them in 15 New England councils. Finally, in May 1933 the National Executive Board approved the program, and starting plans for development of literature and helps to leaders (Brown, 2002). A bimonthly newsletter, the \"Rover Record,\" was inaugurated in 1935 as a means of communicating directly with Rover Scouts and Leaders. A number of regional Rover Moots also were implemented during this period.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1xffbb
|
Regarding the Large Hadron Collider, what else in terms of the standard model are we still looking out for? Has finding the Higgs made any significant impact?
|
[
{
"answer": "Not much in terms of the standard model. There is still some work going on to determine the spin/parity of the Higgs, and there are always lots of measurements that we can make (production rate\\*, interaction rate\\*, etc).\n\nSM measurements (Higgs coupling, different cross sections, branching ratios, etc) are still incredibly important though, as they will be inputs to searches for new physics beyond the SM (BSM). Also as the LHC upgrades its beam energy, and the amount of uninteresting collisions (called pileup) increases, algorithms need to be tuned and new ones need to be developed. The Higgs will be extremely useful as a benchmark for making sure all our algorithms still work with the higher beam energy. (There will be an entire program for studying Higgs physics at the upgraded LHC.)\n\nNow that we have discovered the Higgs, scientists can really focus on casting a wide net, to eliminate (or validate) as many BSM models as possible. There are way too many topics for me to list them all, but search for things like SUSY, gravitons, dark matter, heavy neutrinos, W' and Z' bosons will certainly be looked at.\n\n*\"Cross section\" is the correct term here. In layman's terms it describes the rate/probability of something happening.",
"provenance": null
},
{
"answer": "There are still some things in the Standard Model that are incomplete or aren't fully understood, such as the strong CP problem or the origin of neutrino mass, but the Higgs was the \"last major piece,\" and the remaining issues are relatively minor in comparison. In other words, with the discovery of the Higgs, all of the broad strokes of the Standard Model have been confirmed to be correct. It is worth pausing for a moment to reflect on how amazing it is that we have literally *correctly predicted* the existence of a *fundamental particle* before it had been observed. In fact, before the Higgs, we knew we were surely on the right track, having already predicted the W+, W-, and Z particles (and their masses) before their discovery. So the Standard Model is clearly a very good model of reality. And now, having confirmed that the Standard Model is broadly correct, the main thing left to do is to simply measure the parameters of the Standard Model to better and better accuracy. We may find that at a certain point some of the predicted parameters deviate slightly from the measured parameters, and this would indicate that the Standard Model is only an approximate model. This is assumed anyways by most people, and indeed another thing that will continued to be searched for are various *extensions* to the Standard Model, such as supersymmetry, that may only be relevant at higher energies than we can currently probe. Thus there is still a desire among physicists to build ever larger particle accelerators like the LHC. There may be more particles/forces out there at higher energies, and we may never know about their existence without being able to produce those energies in the lab.",
"provenance": null
},
{
"answer": "Studying the Higgs that was found at The LHC might give scientists a jumping off point into new physics. WIMPs would interact with the Higgs in some theories, so The LHC could help discover if that is the answer to dark matter. In various supersymmetry theories there are multiple Higgs bosons, and so continuing to explore the properties of the Higgs that was found and further energies could uncover evidence for or against those theories, which could then help with problems like the hierarchy problem or the vaccuum energy problem, which could be related to cosmological expansion ie the dark energy issue. \n\nEdit: I know much of this is past the standard model, but some of the problems like the hiearchy problem are not explained by the standard model from what I understand, so I think the physics community is in agreement that the standard model will get extended or replaced at some point.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "16814127",
"title": "Worldwide LHC Computing Grid",
"section": "Section::::Background.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 550,
"text": "The Large Hadron Collider at CERN was designed to test the existence of the Higgs boson, an important but elusive piece of knowledge that had been sought by particle physicists for over 40 years. A very powerful particle accelerator was needed, because Higgs bosons might not be seen in lower energy experiments, and because vast numbers of collisions would need to be studied. Such a collider would also produce unprecedented quantities of collision data requiring analysis. Therefore, advanced computing facilities were needed to process the data.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20556903",
"title": "Higgs boson",
"section": "Section::::History.:Experimental search.:Search before 4 July 2012.\n",
"start_paragraph_id": 71,
"start_character": 0,
"end_paragraph_id": 71,
"end_character": 827,
"text": "The Large Hadron Collider at CERN in Switzerland, was designed specifically to be able to either confirm or exclude the existence of the Higgs boson. Built in a 27 km tunnel under the ground near Geneva originally inhabited by LEP, it was designed to collide two beams of protons, initially at energies of per beam (7 TeV total), or almost 3.6 times that of the Tevatron, and upgradeable to (14 TeV total) in future. Theory suggested if the Higgs boson existed, collisions at these energy levels should be able to reveal it. As one of the most complicated scientific instruments ever built, its operational readiness was delayed for 14 months by a magnet quench event nine days after its inaugural tests, caused by a faulty electrical connection that damaged over 50 superconducting magnets and contaminated the vacuum system.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "47278268",
"title": "Future Circular Collider",
"section": "Section::::Accelerators.:FCC-hh (proton/proton and ion/ion).\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 225,
"text": "A hadron collider will also extend the study of Higgs and gauge boson interactions to energies well above the TeV scale, providing a way to analyse in detail the mechanism underlying the breaking of the electroweak symmetry.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "47278268",
"title": "Future Circular Collider",
"section": "Section::::Background.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 525,
"text": "The discovery of the Higgs boson at the LHC, together with the absence so far of any phenomena beyond the Standard Model in collisions at centre of mass energies up to 8 TeV, has triggered an interest in future colliders to push the energy and precision frontiers. A future “energy frontier” collider at 100 TeV is a “discovery machine”, reaching out to so far unknown territories. \"New physics\" seen at such a machine could explain observations such as the prevalence of matter over antimatter and non-zero neutrino masses.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2520305",
"title": "Holger Bech Nielsen",
"section": "Section::::Work.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 662,
"text": "In a series of recent (2009) papers uploaded on the arXiv.org web site, Nielsen and fellow physicist Masao Ninomiya proposed a radical theory to explain the seemingly improbable series of failures preventing the Large Hadron Collider (LHC) from becoming operational. The collider was intended to be used to find evidence of the hypothetical Higgs boson particle. They suggested that the particle might be so abhorrent to nature that its creation would ripple backward through time and stop the collider before it could create one, in a fashion similar to the time travel Grandfather paradox. Subsequently LHC claimed the discovery of Higgs boson on 4 July 2012.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20556903",
"title": "Higgs boson",
"section": "Section::::History.:Experimental search.\n",
"start_paragraph_id": 67,
"start_character": 0,
"end_paragraph_id": 67,
"end_character": 842,
"text": "To find the Higgs boson, a powerful particle accelerator was needed, because Higgs bosons might not be seen in lower-energy experiments. The collider needed to have a high luminosity in order to ensure enough collisions were seen for conclusions to be drawn. Finally, advanced computing facilities were needed to process the vast amount of data (25 petabytes per year as of 2012) produced by the collisions. For the announcement of 4 July 2012, a new collider known as the Large Hadron Collider was constructed at CERN with a planned eventual collision energy of 14 TeV over seven times any previous collider and over 300 trillion (3×10) LHC proton–proton collisions were analysed by the LHC Computing Grid, the world's largest computing grid (as of 2012), comprising over 170 computing facilities in a worldwide network across 36 countries.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "36772327",
"title": "Search for the Higgs boson",
"section": "Section::::Experimental search and discovery of unknown boson.:Superconducting Super Collider.\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 675,
"text": "In preparation for this machine, extensive phenomenological studies were produced for the production of Higgs bosons in hadron colliders. The big downside of hadron colliders for search for the Higgs is that they collide composite particles, and as a consequence produce many more background events and provide less information about the initial state of the collision. On the other hand, they provide a much higher centre-of-mass energy than lepton colliders (such as LEP) of a similar technological level. However, hadron colliders also provide another way producing a Higgs boson through the collision of two gluons mediated by a triangle of heavy (top or bottom) quarks.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
efmziw
|
Was the father of philosophy Thales of Miletus Greek or Phoenician?
|
[
{
"answer": "The short answer is: we're not sure, but it's likely Thales came from a Phoenician family that migrated to Miletus. As far as I know, Herodotus is the earliest source we have that talks about Thales. And even this source is written about a hundred years after Thales had already died. So we have very little to base a clear answer to this question.\n\nNow for the more interesting long answer. First it must be mentioned that the Milesians in the time of Thales did not call or consider themselves first and foremost as 'Greek', they would call themselves Milesians first, and Ionians second. At the same time, the people we now call Phoenicians, also never called themselves that. Second, it's important to understand that ethnic categories such as Greek, Phoenician, Carian were in reality not as clear-cut as they seem. The area around Miletus was inhabited since the neolithic, thousands of years before there were Greeks or Phoenicians. Already during the Bronze Age and later Archaic Age, Miletus was a powerful regional city with a rich history. The people who lived there were probably a changing mixture of people we would now call Lydians, Myceneans, Minoans, Carians, Phoenicians, Greeks, ...\n\nSo how can we determine if someone was Greek or Phoenician (keeping in mind that these are categories of a later date and not used by the historical people we're talking about)?\n\nFirst off, language. Miletus was part of the Ionian League. This was a defensive/religious alliance between twelve independent city-states on the western coast of what is now Turkey. They (or at least the elites of these cities) spoke the Ionian dialect of Greek. But even between the cities of the Ionian League there were many differences in dialect. Ionian Greek was also spoken in Athens, and many Ionians had the notion that they were the descendants of Athenians that migrated across the Aegean Sea. But this idea is not to be taken too literal and mostly a result of Athenian expansion in the centuries after Thales. The Athenians tried to expand their power and a semi-legendary common origin was a successful way of making alliances.\n\nThat brings us to population and migrations. The Ionian migration into the region probably occurred around the 11th century BCE. These people presumably came mostly from Attica and Boeotia. The way in which they mingled and lived together with the existing populations is uncertain and most likely happened in different ways each time according to the circumstances. In other instances of Greek migrations we see that they could marry into the local families, they could live together on relatively equal footing, they could go to war and chase off or enslave the people, and everything in between. In whatever way it happened, in these twelve cities, the Greek culture became the dominant culture during the following centuries. \n\nMiletus, like the other city-states of the Ionian League, had a political organisation based on different tribes. It seems there were six tribes. Four of them would have their origin in Greece and two would be local. How large these groups were, or what differences in social standing they had is unfortunately not clearly known. 
Nevertheless, this shows us that, even though Milesians would later consider themselves Greek (or at least Ionian), their origins are very murky and any idea of ethnic homogeneity was mostly an ideal not at all reflected in reality.\n\nAbout a century before the birth of Thales, Greek pottery entered a phase called the Orientalizing Period. During this time many cultural influences from the eastern Mediterranean entered the Greek-speaking communities. It is very likely that a major driving force behind these cultural developments was the traders and craftsmen who originated in Phoenician city-states such as Tyre, Sidon, and Byblos and had contacts with Greek cities and towns. At first, Phoenician traders would visit and conduct business in these towns (and Miletus was a very important crossroads of the Mediterranean trade). Later, these traders would settle down and create small industries and trade hubs. There's no question these traders and craftsmen were at first looked upon as foreigners, but they could amass considerable wealth and were probably at times able to secure citizenship and an important place in their new homes.\n\nSo where does that leave us with Thales? If we look at the language, Thales was Greek. For most Greeks, language was the first and most important thing that distinguished Greeks from barbarians. Since Thales lived in a city dominated and inhabited mostly by people who spoke Ionian Greek, he would have spoken this language as well.\n\nThat Thales was of Phoenician origin, as Herodotus and others mention, is also possible. Lots of Phoenicians spread out across the Mediterranean in the centuries before Thales. As traders and craftsmen, some of them were able to become quite rich and secure citizenship and prominent places in the societies they migrated to. That Thales was a descendant of Phoenician migrants is therefore perfectly imaginable.\n\nIn conclusion: was Thales Greek or Phoenician? The answer is most likely: both, with countless caveats about the complexity of ancient ethnicity and identity.\n\nMain sources:\n\nRoebuck, C. 'Tribal Organization in Ionia' (1961).\n\nGreaves, A. M. 'The Land of Ionia' (2010).\n\nAnd of course the Histories of Herodotus.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "30072",
"title": "Thales of Miletus",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 449,
"text": "Thales of Miletus (; , \"Thalēs\", or ; 624/623 – c. 548/545 BC) was a pre-Socratic philosopher, mathematician and astronomer from Miletus in Ionia, Asia Minor. He was one of the Seven Sages of Greece; many, most notably Aristotle, regarded him as the first philosopher in the Greek tradition, and he is otherwise historically recognized as the first individual in Western civilization known to have entertained and engaged in scientific philosophy. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30072",
"title": "Thales of Miletus",
"section": "Section::::Reliability of sources.\n",
"start_paragraph_id": 103,
"start_character": 0,
"end_paragraph_id": 103,
"end_character": 376,
"text": "The earliest sources on Thales (living before 320 BC) are often the same for the other Milesian philosophers (Anaximander, and Anaximenes). These sources were either roughly contemporaneous (such as Herodotus) or lived within a few hundred years of his passing. Moreover, they were writing from an oral tradition that was widespread and well known in the Greece of their day.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19152036",
"title": "Philiscus of Aegina",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 670,
"text": "Philiscus of Aegina (; fl. 4th century BC) was a Cynic philosopher from Aegina who lived in the latter half of the 4th century BC. He was the son of Onesicritus who sent Philiscus and his younger brother, Androsthenes, to Athens where they were so charmed by the philosophy of Diogenes of Sinope that Onesicritus also came to Athens and became his disciple. According to Hermippus of Smyrna, Philiscus was the pupil of Stilpo. He is also described as an associate of Phocion. The \"Suda\" claims that he was a teacher of Alexander the Great, but no other ancient writer mentions this. Aelian, though, has preserved a short exhortation by Philiscus addressed to Alexander:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13792",
"title": "Heraclitus",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 533,
"text": "Heraclitus of Ephesus (; ; ) was a pre-Socratic Greek philosopher, and a native of the city of Ephesus, then part of the Persian Empire. He was of distinguished parentage. Little is known about his early life and education, but he regarded himself as self-taught and a pioneer of wisdom. From the lonely life he led, and still more from the apparently riddled and allegedly paradoxical nature of his philosophy and his stress upon the heedless unconsciousness of humankind, he was called \"The Obscure\" and the \"Weeping Philosopher\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2625475",
"title": "Philodemus",
"section": "Section::::Life.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 738,
"text": "Philodemus was born c. 110 BC, in Gadara, Coele-Syria (in present-day Jordan). He studied under the Epicurean Phoenician philosopher, Zeno of Sidon, the head (scholarch) of the Epicurean school, in Athens, before settling in Rome about 80 BC. He was a follower of Zeno, but an innovative thinker in the area of aesthetics, in which conservative Epicureans had little to contribute. He was a friend of Lucius Calpurnius Piso Caesoninus, and was implicated in Piso's profligacy by Cicero, who, however, praises Philodemus warmly for his philosophic views and for the \"elegans lascivia\" of his poems. Philodemus was an influence on Horace's \"Ars Poetica\". The Greek anthology contains thirty-four of his epigrams - most of them, love poems.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1265182",
"title": "Alciphron",
"section": "Section::::Works.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 507,
"text": "It has been said that Alciphron was an imitator of Lucian; but besides the style, and, in a few instances, the subject matter, there is no resemblance between the two writers: the spirit in which the two treat their subjects is totally different. Both derived their materials from the same sources, and in style both aimed at the greatest perfection of the genuine Attic Greek. Classical scholar Stephan Bergler has remarked that Alciphron stands in the same relation to Menander as Lucian to Aristophanes.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "177521",
"title": "Aenesidemus",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 641,
"text": "Aenesidemus ( or ) was a Greek Pyrrhonist philosopher, born in Knossos on the island of Crete. He lived in the 1st century BC, taught in Alexandria and flourished shortly after the life of Cicero. Photius says he was a member of Plato's Academy, but he came to dispute their theories, adopting Pyrrhonism instead. Diogenes Laërtius claims an unbroken lineage of teachers of Pyrrhonism through Aenesidemus, with his teacher being Heraclides. However, little is known about the names between Timon of Phlius and Aenesidemus, so this lineage is suspect. Whether Aenesidemus re-founded the Pyrrhonist school or merely revitalized it is unknown.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3fg0tv
|
What am I missing in Einstein's theory of relativity?
|
[
{
"answer": "If that is the example in the biography used to explain how simultaneity of events is relative, then it is not only a terrible example, but it just entirely misses the point of relativity.\n\nIn the example, the two observers are not moving with respect to each other. So they are in the same inertial frame. This means that the same event gets the *same* temporal coordinate from both of them. In other words, they should both say that the two lightning strikes hit at the same time. The author of the example is confusing two very different concepts: coordinates of events and human perception of events. It is certainly true that the individual standing closer to one of the strikes literally sees with his eyes the arrival of one flash before the other. But that is not what is meant by relative simultaneity.\n\nThis confusion is actually more common than I would hope, simply because we typically use words like \"observe\" and \"see\" to talk about spacetime coordinates, and not to imply anything about actual human perception. Human vision is not based on spacetime coordinates, but rather the simultaneous arrival of photons at our eyes. \n\nThe relativity of simultaneity only occurs when we talk about observers in *different inertial frames*. That is, the two individuals should be moving at constant velocity with respect to each other. Before relativity was discovered, simultaneity was still absolute for all inertial observers. Every observer assigned the same temporal coordinate to same event, regardless of whether they were in motion with respect to each other. In relativity, that simply does not happen. Observers moving with respect to each other will assign different temporal coordinates to the same event, and this is very non-intuitive given our typical (human) perception of the world.\n\nYour example of the three men shooting guns is correct. All three men are not moving with respect to each other. So they should all give the same time for the two shootings.",
"provenance": null
},
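As an added illustration of the point made in the answer above (not part of the original reply): in special relativity, the relativity of simultaneity follows directly from the standard Lorentz transformation of the time coordinate. For a second observer moving at speed v relative to the first, two events separated by a time interval and a spatial interval transform as:

```latex
% Standard Lorentz transformation of the interval between two events,
% shown only to illustrate the answer above.
\Delta t' = \gamma\left(\Delta t - \frac{v\,\Delta x}{c^{2}}\right),
\qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
```

Two events that are simultaneous in one frame ($\Delta t = 0$) but spatially separated ($\Delta x \neq 0$) get $\Delta t' \neq 0$ in the moving frame, which is exactly the disagreement described above; observers at rest relative to each other have $v = 0$ and therefore agree.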
{
"answer": "That example isn't very good for reasons someone else already answered, but I'll reply to this part:\n\n > My confusion is that even if they perceive things differently, that doesn't change the fact that one of the lightning strikes did indeed happen first, right?\n\n\nConsider the difference between these two statements: \"The car is to the east of the house\" and \"The car is to the right of the house\". In the first case, you can grab a compass and check. In the second one, there is no possible experiment you can do to say whether or not it's really true. What would it even mean to be really true?\n\nEveryone disagreeing doesn't imply that it's like the second case where there's no objective truth, but it certainly suggests the possibility.",
"provenance": null
},
{
"answer": "I think you may have misremembered the example, since it would have likely placed one observer on a moving train and one on the platform, as so:\n\n_URL_0_\n\nAs the video shows, it's not about \"I saw this happen first, so it happened first.\" It's about \"I saw this happen first, and I know the speed of light, so I can do math to figure out exactly when that event happened.\" \n\nSo for your gun shooting example, everyone would agree on the timing of events because the observer knows how long it takes sound to travel 1000 meters and can do the math to figure out when the gun fired.\n\nWhen you put people in different reference frames, however, they will no longer agree on the exact timing of events. They may even disagree on the order of events. Again, they are doing the math to figure out when events happen, and not just saying \"I saw it first so it happened first\" without regard to their distance from the event.",
"provenance": null
},
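A small worked version of the \"do the math\" step mentioned in the answers above (an illustration added here, assuming a speed of sound of roughly 343 m/s, the usual sea-level figure): an observer 1000 m from the gun corrects for the travel time of the sound before assigning a time to the shot.

```latex
% Correcting an observation for signal travel time (illustrative numbers only)
t_{\text{fired}} = t_{\text{heard}} - \frac{d}{v_{\text{sound}}}
                 = t_{\text{heard}} - \frac{1000\ \text{m}}{343\ \text{m/s}}
                 \approx t_{\text{heard}} - 2.9\ \text{s}
```

Because all three men are at rest relative to one another, they all recover the same $t_{\text{fired}}$ after this correction, even though they hear the shots at different moments.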
{
"answer": null,
"provenance": [
{
"wikipedia_id": "19048",
"title": "Mass",
"section": "Section::::Definitions.:Inertial vs. gravitational mass.\n",
"start_paragraph_id": 37,
"start_character": 0,
"end_paragraph_id": 37,
"end_character": 282,
"text": "Albert Einstein developed his general theory of relativity starting with the assumption of the intentionality of correspondence between inertial and passive gravitational mass, and that no experiment will ever detect a difference between them, in essence the equivalence principle.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26962",
"title": "Special relativity",
"section": "",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 922,
"text": "Special relativity was originally proposed by Albert Einstein in a paper published on 26 September 1905 titled \"On the Electrodynamics of Moving Bodies\". The inconsistency of Newtonian mechanics with Maxwell's equations of electromagnetism and, experimentally, the Michelson-Morley null result (and subsequent similar experiments) demonstrated that the historically hypothesized luminiferous aether did not exist. This led to Einstein's development of special relativity, which corrects mechanics to handle situations involving all motions and especially those at a speed close to that of light (known as \"\"). Today, special relativity is proven to be the most accurate model of motion at any speed when gravitational effects are negligible. Even so, the Newtonian model is still valid as a simple and accurate approximation at low velocities (relative to the speed of light), for example, the everyday motions on Earth. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "57264039",
"title": "Einstein's thought experiments",
"section": "Section::::Special relativity.:Trains, embankments, and lightning flashes.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 496,
"text": "The routine analyses of the Fizeau experiment and of stellar aberration, that treat light as Newtonian corpuscles, do not require relativity. But problems arise if one considers light as waves traveling through an aether, which are resolved by applying the relativity of simultaneity. It is entirely possible, therefore, that Einstein arrived at special relativity through a different path than that commonly assumed, through Einstein's examination of Fizeau's experiment and stellar aberration.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "736",
"title": "Albert Einstein",
"section": "Section::::Scientific career.:General relativity.:Hole argument and Entwurf theory.\n",
"start_paragraph_id": 117,
"start_character": 0,
"end_paragraph_id": 117,
"end_character": 372,
"text": "While developing general relativity, Einstein became confused about the gauge invariance in the theory. He formulated an argument that led him to conclude that a general relativistic field theory is impossible. He gave up looking for fully generally covariant tensor equations, and searched for equations that would be invariant under general linear transformations only.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "91100",
"title": "Michelson–Morley experiment",
"section": "Section::::Light path analysis and consequences.:Special relativity.\n",
"start_paragraph_id": 49,
"start_character": 0,
"end_paragraph_id": 49,
"end_character": 703,
"text": "Albert Einstein formulated the theory of special relativity by 1905, deriving the Lorentz transformation and thus length contraction and time dilation from the relativity postulate and the constancy of the speed of light, thus removing the \"ad hoc\" character from the contraction hypothesis. Einstein emphasized the kinematic foundation of the theory and the modification of the notion of space and time, with the stationary aether no longer playing any role in his theory. He also pointed out the group character of the transformation. Einstein was motivated by Maxwell's theory of electromagnetism (in the form as it was given by Lorentz in 1895) and the lack of evidence for the luminiferous aether.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30001",
"title": "Theory of relativity",
"section": "Section::::Experimental evidence.:Tests of special relativity.\n",
"start_paragraph_id": 34,
"start_character": 0,
"end_paragraph_id": 34,
"end_character": 715,
"text": "Relativity is a falsifiable theory: It makes predictions that can be tested by experiment. In the case of special relativity, these include the principle of relativity, the constancy of the speed of light, and time dilation. The predictions of special relativity have been confirmed in numerous tests since Einstein published his paper in 1905, but three experiments conducted between 1881 and 1938 were critical to its validation. These are the Michelson–Morley experiment, the Kennedy–Thorndike experiment, and the Ives–Stilwell experiment. Einstein derived the Lorentz transformations from first principles in 1905, but these three experiments allow the transformations to be induced from experimental evidence.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "171377",
"title": "The Unreasonable Effectiveness of Mathematics in the Natural Sciences",
"section": "Section::::Responses to Wigner's original paper.:Richard Hamming.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 619,
"text": "BULLET::::- Hamming argues that Albert Einstein's pioneering work on special relativity was largely \"scholastic\" in its approach. He knew from the outset what the theory should look like (although he only knew this because of the Michelson–Morley experiment), and explored candidate theories with mathematical tools, not actual experiments. Hamming alleges that Einstein was so confident that his relativity theories were correct that the outcomes of observations designed to test them did not much interest him. If the observations were inconsistent with his theories, it would be the observations that were at fault.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
eky3te
|
What were the reasons the U.S. attempted to pull off a coup of the Iranian government in the 50s and eventually imposed the Shah?
|
[
{
"answer": "This is a complex topic but I'll give it my best stab!\n\n > I thought the Iranian public had a very positive view of the U.S. at the time and this started off a chain of events that led the very hostile relationship we currently have\n\nFirstly, you're correct that the Iranian public had a relatively positive view of the US but I'd be careful of suggesting that the coup \"started off a chain of events that led the very hostile relationship we currently have\". This can present an overly inevitable view of history; people are quick to link the coup to the Islamic Revolution but do bear in mind that they're more than 25 years apart and plenty could have gone differently in that time.\n\nBut to get to your main question, the UK and US both have several important motivations which different historians give different weight. I will separate them roughly into economy, strategy, ideology.\n\n**Economy**\n\nIt is hard to overstate the value of the Anglo-Iranian Oil Company for Britain here; it is Britain's single largest overseas asset at a time where the country is trying to rebuild itself from World War 2. It is a vital source of dollars in a very literal sense given Britain's balance of payments woes at this time.\n\nBeyond this, they were not the only player that stood to gain economically from the coup. The US had been frustrated by Britain's restrictive control on Iranian oil which denied US oil companies market access; following the coup this arrangement was clearly unsustainable and US companies were able to enter the market much to their benefit.\n\n**Strategy**\n\nAs the British Empire was increasingly called into question, British strategists increasingly turned towards Africa and the Middle East as a solution to secure Britain's global position. The Anglo-Iranian Oil Company is not only a key resource: it's a key aspect of British strategic influence in the region (along with the Suez canal). For an easy example, just consider the importance of a secure oil supply through the major wars that had just passed.\n\nBritain's strategic position is also a concern for the US. The US and the UK have just come out of WW2 where they fought as allies. They didn't always see perfectly eye to eye, but nonetheless they had an important strategic relationship. This especially true as the Cold War Era commences and the US is increasingly concerned with the spread of communism.\n\n**Ideology**\n\nThe Iranian Oil Crisis is fascinating for the way which it highlights the balance between different ideological paradigms: imperialism, nationalism, and \"cold-war\"ism.\n\nYou've asked why the UK pursued the coup even though Iran was acting within its rights as an independent nation. This reflects a modern conception of nationalism -- and one which Iranian nationalists were quick to uphold -- but which wasn't necessarily that convincing to the British imperialist mindset. At least not when vital resources were on the line. \n\nFor the US, the ideological confrontation with the USSR -- and the possible spread of communism -- was a growing concern. Mosaddeq had wide popular support and something of a socialist platform. He also unfortunately also played to US fears of Iran falling to communism in an attempt to gain aid from the US; some later American sources further suggest that the British deliberately played on this fear to push the US into action. 
It's difficult to know exactly how much weight to give the fear of communism, since naturally it's a nicer justification for the Americans involved than oil money. One thing that I would highlight here is the distinction between the Truman and Eisenhower administrations. Under Truman the US takes a generally conciliatory approach, with significant efforts towards a negotiated settlement. Eisenhower's administration (which is generally further into the paranoia of the Cold War) takes office and the coup follows shortly after. \n\nRegarding why the UK asked the US for help, on top of the close economic and strategic relationship above, there is also an important practical factor: the UK's ability to orchestrate a coup is hampered after Mosaddeq expelled Britain's diplomatic mission in 1952, and working with the CIA helps them to overcome this obstacle.\n\n**Closing thoughts**\n\nFirstly, I have separated various factors out, but please don't read them in isolation. For example, Iran's strategic importance should *also* be read in terms of the post-war geopolitical orientation towards Cold-War competition between the US and USSR, and the American desire to open the Iranian oil market has an ideological undercurrent as well as an economic rationale. \n\nLastly, I do want to reiterate that this is a really fascinating question which remains debated in the historical community. Over-emphasising the \"fear of communism\" interpretation risks giving too much weight to post-hoc explanations given by Americans and arguably verges on apologia. (This is also complicated by the fact that detailed American sources were more readily available than others). At the same time, over-emphasising the \"it's all about money and power\" interpretation risks boiling complex ideological and personal factors down to simplistic realpolitik. An interesting question to ask yourself as you delve into the topic is \"why does the US behave differently around the Suez Crisis only three years later?\"\n\nMain sources:\n\nGasiorowski and Byrne (Ed.), *Mohammad Mosaddeq and the 1953 Coup in Iran* \nKatouzian, *Musaddiq and the Struggle for Power in Iran* \nLouis, *The British Empire in the Middle East, 1945-1951* \nBill and Louis (Ed.), *Musaddiq, Iranian Nationalism, and Oil* \nGalpern, *Money, Oil, and Empire in the Middle East: Sterling and Postwar Imperialism, 1944–1971*\n\nFor more accessible reading, I recommend Gasiorowski, ['Coup d'etat of 1953'](_URL_0_) in the *Encyclopaedia Iranica*, which is an incredible peer-reviewed online resource for Iranian history.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "310477",
"title": "Iran–United States relations",
"section": "Section::::Reign of the last Shah of Iran.:Prime Minister Mossadeq and his overthrow.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 349,
"text": "In 1953, the government of prime minister Mohammed Mossadeq was overthrown in a coup organized by the governments of the U.S. and the UK. Many Iranians argue that the coup and the subsequent U.S. support for the shah proved largely responsible for the shah's arbitrary rule, which led to the \"deeply anti-American character\" of the 1979 revolution.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20065598",
"title": "Mohammad Reza Pahlavi",
"section": "Section::::Early reign.:Oil nationalisation and the 1953 coup.\n",
"start_paragraph_id": 51,
"start_character": 0,
"end_paragraph_id": 51,
"end_character": 414,
"text": "In 1953 the United States played a significant role in orchestrating the overthrow of Iran's popular prime minister, Mohammad Mosaddegh. The Eisenhower Administration believed its actions were justified for strategic reasons; but the coup was clearly a setback for Iran's political development. And it is easy to see now why many Iranians continue to resent this intervention by America in their internal affairs.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "15575338",
"title": "Dual containment",
"section": "Section::::Policy vision and implementation.:Iran.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 701,
"text": "Overthrow was not a viable policy option given the lack of organized opposition and American intelligence assets on the ground. Positive inducement to behavioral changes was also dismissed due to the Iranian regime's deep distrust of the U.S. Finally, punitive military action was ruled out on the grounds that Iran's retaliatory capabilities were considered too great, and the benefits of these strikes were too uncertain. Thus, it was decided to continue American efforts to prevent Iran's acquisition of ballistic missiles and access to international finance. This approach, known as \"active containment,\" was designed to convince the Iranian elite to pursue rapprochement with the West over time.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "415767",
"title": "1953 Iranian coup d'état",
"section": "Section::::Aftermath.:Blowback.\n",
"start_paragraph_id": 112,
"start_character": 0,
"end_paragraph_id": 112,
"end_character": 305,
"text": "\"For many Iranians, the coup demonstrated duplicity by the United States, which presented itself as a defender of freedom but did not hesitate to use underhanded methods to overthrow a democratically elected government to suit its own economic and strategic interests\", the Agence France-Presse reported.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14653",
"title": "Iran",
"section": "Section::::History.:Contemporary era.\n",
"start_paragraph_id": 60,
"start_character": 0,
"end_paragraph_id": 60,
"end_character": 415,
"text": "After the coup, the Shah became increasingly autocratic and sultanistic, and Iran entered a phase of decades-long controversial close relations with the United States and some other foreign governments. While the Shah increasingly modernized Iran and claimed to retain it as a fully secular state, arbitrary arrests and torture by his secret police, the SAVAK, were used to crush all forms of political opposition.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "415767",
"title": "1953 Iranian coup d'état",
"section": "Section::::Aftermath.:Blowback.\n",
"start_paragraph_id": 111,
"start_character": 0,
"end_paragraph_id": 111,
"end_character": 548,
"text": "The administration of Dwight D. Eisenhower considered the coup a success, but, given its blowback, that opinion is no longer generally held, because of its \"haunting and terrible legacy\". In 2000, Madeleine Albright, U.S. Secretary of State, said that intervention by the U.S. in the internal affairs of Iran was a setback for democratic government. The coup is widely believed to have significantly contributed to the 1979 Iranian Revolution, which deposed the \"pro-Western\" Shah and replaced the monarchy with an \"anti-Western\" Islamic republic.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30875988",
"title": "Farah Pahlavi",
"section": "Section::::After leaving Iran.\n",
"start_paragraph_id": 47,
"start_character": 0,
"end_paragraph_id": 47,
"end_character": 600,
"text": "Due to the political situation unfolding in Iran, many governments, including those which had been on friendly terms with the Iranian Monarchy prior to the revolution, saw the Shah's presence within their borders as a liability. The Revolutionary Government in Iran had ordered the arrest (and later death) of both the Shah and the Shahbanu. The new Iranian Government would go on to vehemently demand their extradition a number of times but the extent to which it would act in pressuring foreign powers for the deposed monarch's return (and presumably that of the Empress) was at that time unknown.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1dypz7
|
If someone was hemorrhaging uncontrollably, could you keep them alive by transfusing blood in at the same rate they were losing it?
|
[
{
"answer": "People can certainly have more than their entire blood volume replaced and survive. Normal blood volume is around 70ml/kg, so about 5L on average. One unit of blood (packed red cells) is around 250ml, so once someone has received more than 20 units of blood they are close to having replaced their circulating volume. Transfusions of this amount are certainly not uncommon, and I have seen people who have received 50 or even 100 units of blood survive.\n\nThere are significant problems with giving this amount of blood though. You mentioned whole blood in your comment, and that would certainly be ideal. Most blood banks don't store whole blood though; donated blood is separated into its components parts, such as red cells and plasma. Blood is normally given as packed red cells which doesn't have the plasma component. This is important as its the plasma which contains the proteins you need to allow blood to clot.\n\nSo, as you transfuse large amounts of packed red cells into a bleeding patient the coagulation factor are lost (this is called dilutional coaguloapthy) and the patient won't stop bleeding. Its therefore important to transfuse plasma and concentrated clotting factors as well.\n\nOther problems from massive transfusions are hypothermia, as these products have been stored in freezers, so are cold, and metabolic problems from the additives in the blood packs, Citrate, for example is used to prevent the blood from clotting in the pack, but in large amounts can lower calcium levels and cause alkalosis.\n\nLarge volumes of blood can also cause fluid overload and can specifically damage the lungs, a condition called TRALI (Transfusion Related Acute Lung Injury)\n\nCurrent research into this topic is looking at how much plasma you should give per unit of blood (it looks like we haven't been giving enough in the past) and whether the age of the blood matters; it seems like the longer the blood has been stored for, the less beneficial it is.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "10163776",
"title": "Hemolysin",
"section": "Section::::Role during infection.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 468,
"text": "The main consequence of hemolysis is hemolytic anemia, condition that involves the destruction of erythrocytes and their later removal from the bloodstream, earlier than expected in a normal situation. As the bone marrow cannot make erythrocytes fast enough to meet the body’s needs, oxygen does not arrive to body tissues properly. As a consequence, some symptoms may appear, such as fatigue, pain, arrhythmias, an enlarged heart or even heart failure, among others.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "625404",
"title": "Stroke",
"section": "Section::::Management.:Hemorrhagic stroke.\n",
"start_paragraph_id": 159,
"start_character": 0,
"end_paragraph_id": 159,
"end_character": 685,
"text": "People with intracerebral hemorrhage require supportive care, including blood pressure control if required. People are monitored for changes in the level of consciousness, and their blood sugar and oxygenation are kept at optimum levels. Anticoagulants and antithrombotics can make bleeding worse and are generally discontinued (and reversed if possible). A proportion may benefit from neurosurgical intervention to remove the blood and treat the underlying cause, but this depends on the location and the size of the hemorrhage as well as patient-related factors, and ongoing research is being conducted into the question as to which people with intracerebral hemorrhage may benefit.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "234329",
"title": "Richard Lower (physician)",
"section": "Section::::Life.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 790,
"text": "Lower showed it was possible for blood to be transfused from animal to animal and from animal to man intravenously, a xenotransfusion. In November 1667, he worked with Edmund King, another student of Willis, to transfuse sheep's blood into a man who was mentally ill. Lower was interested in advancing science but also believed the man could be helped, either by the infusion of fresh blood or by the removal of old blood. It was difficult to find people who would agree to be transfused, but an eccentric scholar, Arthur Coga, consented and the procedure was carried out by Lower and King before the Royal Society on 23 November 1667. Transfusion gathered some popularity in France and Italy, but medical and theological debates arose, resulting in transfusion being prohibited in France.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16689223",
"title": "Volume expander",
"section": "Section::::Physiology.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 499,
"text": "When blood is lost, the greatest immediate need is to stop further blood loss. The second greatest need is replacing the lost volume. This way remaining red blood cells can still oxygenate body tissue. Normal human blood has a significant excess oxygen transport capability, only used in cases of great physical exertion. Provided blood volume is maintained by volume expanders, a rested patient can safely tolerate very low hemoglobin levels, less than 1/3 that of a healthy person. see:Hematocrit\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "501973",
"title": "Hemostasis",
"section": "Section::::Disorders.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 451,
"text": "The body's hemostasis system requires careful regulation in order to work properly. If the blood does not clot sufficiently, it may be due to bleeding disorders such as hemophilia or immune thrombocytopenia; this requires careful investigation. Over-active clotting can also cause problems; thrombosis, where blood clots form abnormally, can potentially cause embolisms, where blood clots break off and subsequently become lodged in a vein or artery.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "540173",
"title": "Hematemesis",
"section": "Section::::Management.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 450,
"text": "Blood transfusion is required in such conditions if the body loses more than 20 percent of body blood volume. Severe loss makes it impossible for the heart to pump a sufficient amount of blood to the body. In such conditions unmaintained blood volume could lead to Hypovolemic Shock (hypovolemic shock could lead to damage of body organs eg. kidney, brain, or gangrene of arms or legs). Note that an untreated patient could suffer cerebral atrophy. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2166658",
"title": "Hemothorax",
"section": "Section::::Mechanisms.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 985,
"text": "When a hemothorax occurs, blood enters the pleural cavity. The blood loss has several effects. Firstly, as blood builds up within the pleural cavity, it begins to interfere with the normal movement of the lungs, preventing one or both lungs from fully expanding and thereby interfering with the normal transfer of oxygen and carbon dioxide to and from the blood. Secondly, blood that has been lost into the pleural cavity can no longer be circulated. Hemothoraces can lead to very significant blood loss - each half of the thorax can hold more than 1500 milliliters of blood, representing more than 25% of an average adult's total blood volume. The body may struggle to cope with this blood loss, and in order to compensate tries to maintain blood pressure by forcing the heart to pump harder and faster, and by squeezing or constricting small blood vessels in the arms and legs. These compensatory mechanisms can be recognised by a rapid resting heart rate and cool fingers and toes.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
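A quick back-of-the-envelope sketch (in Python) of the arithmetic in the transfusion answer above. The roughly 70 ml/kg blood volume and roughly 250 ml per unit of packed red cells come from that answer; the 70 kg body weight is an assumed example value, not part of the original.

```python
# Rough arithmetic from the answer above: estimated blood volume and the
# number of transfused units that would approximate replacing it.
# The 70 ml/kg and 250 ml/unit figures come from the answer; 70 kg is an
# assumed example body weight.

ML_BLOOD_PER_KG = 70   # approximate blood volume per kg of body weight
ML_PER_UNIT = 250      # approximate volume of one unit of packed red cells

def units_to_replace_volume(body_weight_kg: float) -> float:
    """Return roughly how many units of packed red cells equal total blood volume."""
    total_blood_ml = body_weight_kg * ML_BLOOD_PER_KG
    return total_blood_ml / ML_PER_UNIT

if __name__ == "__main__":
    weight = 70  # kg, illustrative only
    print(f"~{weight * ML_BLOOD_PER_KG / 1000:.1f} L of blood, "
          f"~{units_to_replace_volume(weight):.0f} units to replace it")
```

For a 70 kg person this comes out to roughly 5 L and about 20 units, matching the figures quoted in the answer.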
445wzb
|
Do all plants metabolize (convert CO2 to O2) at the same rate or do some plants generate O2 more efficiently than others?
|
[
{
"answer": "There's variation among plants generally, but specifically an alternative carbon fixation pathway called Crassulacean Acid Metabolism (CAM) used in plants that are found in arid climates that is less efficient, but with the benefit that it allows the plant to shutdown respiration during the day when heat and dry air pose a threat of substantial water loss.",
"provenance": null
},
{
"answer": "I did a little extra digging and might have answered my own question. So if anyone else is wondering, here is a [wiki article](_URL_0_) about genetically modifying RuBisCo enzymes to improve photosynthetic abilities.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "3322454",
"title": "Glyoxylate cycle",
"section": "Section::::Function in organisms.:Plants.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 742,
"text": "The glyoxylate cycle can also provide plants with another aspect of metabolic diversity. This cycle allows plants to take in acetate both as a carbon source and as a source of energy. Acetate is converted to Acetyl CoA (similar to the TCA cycle). This Acetyl CoA can proceed through the glyoxylate cycle, and some succinate is released during the cycle. The four carbon succinate molecule can be transformed into a variety of carbohydrates through combinations of other metabolic processes; the plant can synthesize molecules using acetate as a source for carbon. The Acetyl CoA can also react with glyoxylate to produce some NADPH from NADP+, which is used to drive energy synthesis in the form of ATP later in the Electron Transport Chain.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "38034479",
"title": "Enzyme promiscuity",
"section": "Section::::Degree of promiscuity.:Plant secondary metabolism.\n",
"start_paragraph_id": 34,
"start_character": 0,
"end_paragraph_id": 34,
"end_character": 476,
"text": "Plants produce a large number of secondary metabolites thanks to enzymes that, unlike those involved in primary metabolism, are less catalytically efficient but have a larger mechanistic elasticity (reaction types) and broader specificities. The liberal drift threshold (caused by the low selective pressure due the small population size) allows the fitness gain endowed by one of the products to maintain the other activities even though they may be physiologically useless.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "53293378",
"title": "Position-specific isotope analysis",
"section": "Section::::Principle.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 550,
"text": "For example, C plants are known to assimilate preferentially CO compared with C plants, leading to different C isotope composition of the biomass (Figure 2, X-axis). Recent results on ethanol show that the relative abundances of isotopomers are not equal, and that the amplitude and sign of the deviation is dependent on the plant. These variations are thought to arise from biosynthetic isotope effects and different metabolic routes, especially those associated with the biosynthesis of sugars which are converted into ethanol during fermentation.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31336624",
"title": "Plant secondary metabolism",
"section": "Section::::History.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 1895,
"text": "Research into secondary plant metabolism primarily took off in the later half of the 19th century, however, there was still much confusion over what the exact function and usefulness of these compounds were. All that was known was that secondary plant metabolites were \"by-products\" of the primary metabolism and were not crucial to the plant's survival. Early research only succeeded as far as categorizing the secondary plant metabolites but did not give real insight into the actual function of the secondary plant metabolites. The study of plant metabolites is thought to have started in the early 1800s when Friedrich Willhelm Serturner isolated morphine from opium poppy, and after that new discoveries were made rapidly. In the early half of the 1900s, the main research around secondary plant metabolism was dedicated to the formation of secondary metabolites in plants, and this research was compounded by the use of tracer techniques which made deducing metabolic pathways much easier. However, there was still not much research being conducted into the functions of secondary plant metabolites until around the 1980s. Before then, secondary plant metabolites were thought of as simply waste products. In the 1970s, however, new research showed that secondary plant metabolites play an indispensable role in the survival of the plant in its environment. One of the most ground breaking ideas of this time argued that plant secondary metabolites evolved in relation to environmental conditions, and this indicated the high gene plasticity of secondary metabolites, but this theory was ignored for about half a century before gaining acceptance. Recently, the research around secondary plant metabolites is focused around the gene level and the genetic diversity of plant metabolites. Biologists are now trying to trace back genes to their origin and re-construct evolutionary pathways.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6818",
"title": "Citric acid cycle",
"section": "Section::::Steps.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 359,
"text": "Mitochondria in animals, including humans, possess two succinyl-CoA synthetases: one that produces GTP from GDP, and another that produces ATP from ADP. Plants have the type that produces ATP (ADP-forming succinyl-CoA synthetase). Several of the enzymes in the cycle may be loosely associated in a multienzyme protein complex within the mitochondrial matrix.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "45079602",
"title": "Carbonic anhydrase",
"section": "Section::::Structure and function.\n",
"start_paragraph_id": 40,
"start_character": 0,
"end_paragraph_id": 40,
"end_character": 581,
"text": "There are at least 14 different isoforms in mammals. Plants contain a different form called \"β-carbonic anhydrase\", which, from an evolutionary standpoint, is a distinct enzyme, but participates in the same reaction and also uses a zinc ion in its active site. In plants, carbonic anhydrase helps raise the concentration of CO within the chloroplast in order to increase the carboxylation rate of the enzyme RuBisCO. This is the reaction that integrates CO into organic carbon sugars during photosynthesis, and can use only the CO form of carbon, not carbonic acid or bicarbonate.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1028552",
"title": "Nelumbo",
"section": "Section::::Characteristics.:Thermoregulation.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 583,
"text": "The typical pathway in plant mitochondria involves cytochrome complexes. The pathway used to generate heat in \"Nelumbo\" involves cyanide-resistant alternative oxidase, which is a different electron acceptor than the usual cytochrome complexes. The plant also reduces ubiquitin concentrations while in thermogenesis, which allows the AOX in the plant to function without degradation Thermogenesis is restricted to the receptacle, stamen, and petals of the flower, but each of these parts produce heat independently without relying on the heat production in other parts of the flower.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
acj8mw
|
scientifically speaking, is there a hypothetical cure for every disease?
|
[
{
"answer": "the short answer is that we don't know.\n\nWe've cured a lot of diseases and accomplished amazing things with science. at times it seems that everything is possible. But unless we cure ever disease, we won't really know. \n\nMy guess would be yes. But we can really only guess.",
"provenance": null
},
{
"answer": "_Hypothetically_, yes. All a disease is is the malfunctioning of a biological process. If a process works correctly, then something happens to cause it to malfunction, it is theoretically possible to correct the malfunction and restore the process to working order.\n\nHowever, _how_ one does that is the tricky part. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "29291454",
"title": "Georges Mathé",
"section": "Section::::Biography.:Later career.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 344,
"text": "Dr. Brian Bolwell, chief of hematology at the Cleveland Clinic noted that Dr. Mathé had proved an important principle: \"You can cure an incurable leukemia patient.\", and had developed both a technique and an important term, \"adoptive immunotherapy,\" to describe how a person’s own immune system can be used to combat cancer and other diseases.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "327995",
"title": "Orthomolecular medicine",
"section": "Section::::Medical and scientific reception.:Methodology.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 1323,
"text": "Orthomolecular therapies have been criticized as lacking a sufficient evidence base for clinical use: their scientific foundations are too weak, the studies that have been performed are too few and too open to interpretation, and reported positive findings in observational studies are contradicted by the results of more rigorous clinical trials. Accordingly, \"there is no evidence that orthomolecular medicine is effective\". Proponents of orthomolecular medicine strongly dispute this statement by citing studies demonstrating the effectiveness of treatments involving vitamins, though this ignores the belief that a normal diet will provide adequate nutrients to avoid deficiencies, and that orthomolecular treatments are not actually related to vitamin deficiency. The lack of scientifically rigorous testing of orthomolecular medicine has led to its practices being classed with other forms of alternative medicine and regarded as unscientific. It has been described as food faddism and quackery, with critics arguing that it is based upon an \"exaggerated belief in the effects of nutrition upon health and disease.\" Orthomolecular practitioners will often use dubious diagnostic methods to define what substances are \"correct\"; one example is hair analysis, which produces spurious results when used in this fashion.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "42616472",
"title": "Big Pharma conspiracy theory",
"section": "Section::::Manifestations.:Alternative treatments.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 533,
"text": "In \"Natural Cures \"They\" Don't Want You to Know About\", Kevin Trudeau proposes that there are all-natural cures for serious illnesses including cancer, herpes, arthritis, AIDS, acid reflux disease, various phobias, depression, obesity, diabetes, multiple sclerosis, lupus, chronic fatigue syndrome, attention deficit disorder, muscular dystrophy, and that these are all being deliberately hidden and suppressed from the public by the Food and Drug Administration, the Federal Trade Commission, and the major food and drug companies.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "327995",
"title": "Orthomolecular medicine",
"section": "Section::::Medical and scientific reception.:Methodology.\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 472,
"text": "Proponents of orthomolecular medicine contend that, unlike some other forms of alternative medicine such as homeopathy, their ideas are at least biologically based, do not involve magical thinking, and are capable of generating testable hypotheses. \"Orthomolecular\" is not a standard medical term, and clinical use of specific nutrients is considered a form of chemoprevention (to prevent or delay development of disease) or chemotherapy (to treat an existing condition).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9543863",
"title": "History of cardiopulmonary resuscitation",
"section": "Section::::Modern resuscitation.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 729,
"text": "Several key discoveries and understandings were required to treat the problem, which would take decades to work out, and even now is not 'solved'. Doctors speak of the natural history of diseases as a way to understand how therapy alters the usual progression of a disease. For example, the natural history of breast cancer may be measured in months but treated with surgery or chemotherapy the disease can be measured in years or even cured. Sudden cardiac arrest is a disease with an extremely rapid natural history, measured in minutes, with an inexorable outcome. But when treated with CPR the course of death can be extended (CPR will delay the dying process) and if treated with timely defibrillation death can be aborted.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "212698",
"title": "Quackery",
"section": "Section::::Persons accused of quackery.:Deceased.\n",
"start_paragraph_id": 60,
"start_character": 0,
"end_paragraph_id": 60,
"end_character": 225,
"text": "BULLET::::- Hulda Regehr Clark (1928–2009), was a controversial naturopath, author, and practitioner of alternative medicine who claimed to be able to cure all diseases and advocated methods that have no scientific validity.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17544371",
"title": "Osteomyology",
"section": "Section::::Efficacy.:Effectiveness.\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 277,
"text": "In conclusion, we have found no convincing evidence from systematic reviews to suggest that SM is a recommendable treatment option for any medical condition. In several areas, where there is a paucity of primary data, more rigorous clinical trials could advance our knowledge.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
6hmn3m
|
what does the 'end task' command do differently than normally exiting out of a program?
|
[
{
"answer": "Imagine you are in a restaurant. And then you're asked to leave. You pick up your shit and leave. That's closing a program normally. Now when you end task. They bring in some bouncers and kick your ass out before you get to pick up any of your shit.",
"provenance": null
},
{
"answer": "Programs usually have various operations to do before they shutdown. If a program is frozen, it is unable to perform/finish these operations, and does not shutdown. 'End Task' closes it regardless.",
"provenance": null
},
{
"answer": "There are three main ways to stop a program. \n\nYou can quit from inside the program. The program does whatever it's programmed to do when you click quit, saving data and closing files and such normally.\n\nYou can use End Task (on windows), this is the operating system sending a signal to the program that tells the program \"Time to quit, finish what you're doing and then exit.\". The program (hopefully) responds to the signal, finishes up what it's doing, saves data and such, then quits.\n\nYou can use End Process (also Windows). Windows just ends the program, and frees up any memory associated with it. There's no communication with the program.\n\nIn terms of the other guy's restaurant metaphor: Quitting is finishing your meal then paying up and leaving, end task is being told to pack your shit and leave, and end process is being thrown out. ",
"provenance": null
},
{
"answer": "One of the main parts of a desktop application is called the \"message loop.\" It is code that continually runs, checking for new messages from the operating system or other programs. Messages include things like user input (you clicked a mouse or pressed a key) as well as other notifications that an application is supposed to respond to.\n\nOne of the messages that you can receive is the Quit message. This is how the operating system tells an application that it should shut down. The application should respond to this message by trying to exit in as graceful a manner as possible - for example, giving the user a chance to save any unsaved work. This is generally the same flow as normally exiting out of the application.\n\nSelecting \"End Task\" from the Task Manager in Windows sends a quit message to the application and then relies on the application shutting itself down. Because it's up to the application to handle this, there is a valid response which is \"No.\" For example, if you have an unsaved document the application might ask for confirmation if you want to quit and you could click no. So the OS does not actually enforce that the quit message results in the application terminating.\n\nHowever, if the application is in a bad state this message might not ever be received, or the application could still fail to shut itself down. In that case, the operating system can terminate the program by simply not running its code any more and unloading all of its code and data from memory, as well as cleaning up any shared resources it was using such as files or network ports. This means any saved work will be lost so it is the method of last resort. The OS will generally ask you if you want to terminate a process in this way if the message loop stops running for an extended period of time.\n\n",
"provenance": null
},
{
"answer": "I've always heard it explained this way. Closing a program normally is akin to being in your car driving down the interstate @ 70MPH, taking the off ramp, braking slowly, coming to a complete stop, shutting the engine off and exiting the vehicle. \nEnding the task is like having someone throw a cinder block through the windshield while driving @ 70MPH. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "42674300",
"title": "State machine (LabVIEW programming)",
"section": "Section::::State machines in LabVIEW.:Simple vending-machine example.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 359,
"text": "The \"end\" case is a very simple case that works to simply delay the program to allow the user enough time to check that they have received their change and picked up their item. After 5000 milliseconds (5 seconds) the wait timer is used, up and the program continues back to the start page to wait for another user to come by to begin the process over again.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6539754",
"title": "Exit (system call)",
"section": "Section::::How it works.:Clean up.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 512,
"text": "The exit operation typically performs clean-up operations within the process space before returning control back to the operating system. Some systems and programming languages allow user subroutines to be registered so that they are invoked at program termination before the process actually terminates for good. As the final step of termination, a primitive system exit call is invoked, informing the operating system that the process has terminated and allows it to reclaim the resources used by the process.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "88823",
"title": "AmigaDOS",
"section": "Section::::Syntax of AmigaDOS commands.:Breaking commands and pausing console output.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 338,
"text": "A user can terminate a program by invoking the key combination or . Pressing or any printing character on the keyboard suspends the console output. Output may be resumed by pressing the key (to delete all of the input) or by pressing (which will cause the input to be processed as a command as soon as the current command stops running).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5256173",
"title": "Job control (Unix)",
"section": "Section::::Implementation.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 525,
"text": "A stopped job can be resumed as a background job with the codice_6 builtin, or as the foreground job with codice_7. In either case, the shell redirects I/O appropriately, and sends the SIGCONT signal to the process, which causes the operating system to resume its execution. In Bash, a program can be started as a background job by appending an ampersand (codice_8) to the command line; its output is directed to the terminal (potentially interleaved with other programs' output), but it cannot read from the terminal input.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1883536",
"title": "Exit (command)",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 360,
"text": "The command causes the shell or program to terminate. If performed within an interactive command shell, the user is logged out of their current session, and/or user's current console or terminal connection is disconnected. Typically an optional exit code can be specified, which is typically a simple integer value that is then returned to the parent process.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1117392",
"title": "Exit status",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 264,
"text": "The exit status of a process in computer programming is a small number passed from a child process (or callee) to a parent process (or caller) when it has finished executing a specific procedure or delegated task. In DOS, this may be referred to as an errorlevel.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1569732",
"title": "Entry point",
"section": "Section::::Exit point.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 503,
"text": "Usually, there is not a single exit point specified in a program. However, in other cases runtimes ensure that programs always terminate in a structured way via a single exit point, which is guaranteed unless the runtime itself crashes; this allows cleanup code to be run, such as codice_13 handlers. This can be done by either requiring that programs terminate by returning from the main function, by calling a specific exit function, or by the runtime catching exceptions or operating system signals.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
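The End Task answers above describe a two-step model: a polite request that the program can handle (and clean up after), versus a forced kill with no chance to respond. Below is a minimal, hedged sketch of that distinction using POSIX signals in Python; it is an analogy rather than the actual Win32 close/quit message mechanism, and the forced case (SIGKILL) cannot be caught at all.

```python
# A rough POSIX analogy (in Python) to the distinction described above:
# a polite "please quit" request the program can handle and clean up after
# (like End Task sending a quit/close message), versus a forced kill that
# gives it no chance to respond (like End Process).
import signal
import sys
import time

def handle_term(signum, frame):
    # A real program would save unsaved work, close files, etc. here.
    print("Received SIGTERM: saving state and exiting cleanly")
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_term)  # the graceful request can be caught
# SIGKILL cannot be caught or ignored -- that is the "no communication" case.

print("Running; send SIGTERM for a clean shutdown, SIGKILL to force it")
while True:
    time.sleep(1)
```

Sending `kill <pid>` (SIGTERM) to this process triggers the handler and a clean exit; `kill -9 <pid>` (SIGKILL) ends it immediately with no cleanup, mirroring the "bouncer"/"cinder block" analogies above.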
6fp08l
|
how someone can have a big belly but is relatively skinny/normal?
|
[
{
"answer": "Fat deposits vary from person to person and from source to source. A few of the hormones racing through your body affect the location of fat (cortisol directs it to the abdomen, for example) as well as your sex. (Women tend to have a 'donut' or bigger legs, men tend to have bigger bellies.) I don't know enough on the subject to give you a specific answer, sadly. On the bright side; as long as it's hanging in front or on the side of you, the dangers are relatively low. Fat between the organs or 'hard fat' is where you need to be scared.",
"provenance": null
},
{
"answer": "In some cases this could be a sign of an underlying illness, such as celiac disease (which can cause a bloated belly on a skinny person). As to other cases, I can't say.",
"provenance": null
},
{
"answer": "A swollen abdomen is also a sign of malnutrition. The boy needs to make proteins to circulate in the blood or osmosis pulls the water out of it, typically expanding the abdomen as it isn't constrained by bones like the chest or head.",
"provenance": null
},
{
"answer": "Sometimes it can be caused by alcoholism causing a swollen liver although that looks a little different because the \"belly\" might seem a little high and off center.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "39256445",
"title": "Uncle Grandpa",
"section": "Section::::Characters.:Human children and adults.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 205,
"text": "BULLET::::- Belly Kid (voiced by Zachary Gordon) – A kid who has a big belly. He was first ashamed of it, but Uncle Grandpa taught him the best features of having a big belly. He appeared in \"Belly Bros\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12627364",
"title": "List of Mr. Men",
"section": "Section::::S.:Mr. Skinny.\n",
"start_paragraph_id": 158,
"start_character": 0,
"end_paragraph_id": 158,
"end_character": 472,
"text": "Mr. Skinny is the 35th book in the \"Mr. Men\" series by Roger Hargreaves. Mr. Skinny lives in Fatland, where everything and everyone is big except for him. He has a small appetite, and sees Dr. Plump, who has him visit Mr. Greedy help increase Mr. Skinny's appetite for a month. Mr. Skinny gains a belly. Mr. Skinny appears under the titles Monsieur Maigre (French), 苗條先生 (Taiwan), 빼빼씨 (Korean), Ο Κύριος Κοκαλιάρης (Greek), Unser Herr Dünn (German), Fætter Pind (Danish).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11803048",
"title": "It's Superman!",
"section": "Section::::Other characters.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 404,
"text": "Skinny Simon is a friend of Willi and Lois, and is ironically nicknamed Skinny because of her voluptuous figure. She works at a hospital in Manhattan and is the first to tell Lois when Willi is shot. Later, she and Willi meet in Hollywood where she is almost murdered by her husband. Despite these ordeals, she perseveres and eventually finds herself back in New York. She eventually marries Ben Jaeger.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "668035",
"title": "Waist",
"section": "Section::::Structure.:Waist measurement.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 394,
"text": "The size of a person's waist or waist circumference, indicates abdominal obesity. Excess abdominal fat is a risk factor for developing heart disease and other obesity related diseases. The National Heart, Lung, and Blood Institute (NHLBI) classifies the risk of obesity-related diseases as high if men have a waist circumference greater than and women have a waist circumference greater than .\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17234250",
"title": "Peascod belly",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 275,
"text": "A peascod belly is a type of exaggeratedly padded stomach that was very popular in men's dress in the late 16th and early 17th centuries. The term is thought to have come from \"peacock,\" or from the form of contemporary plate armour. Sometimes it was called a 'goose belly.'\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31967887",
"title": "The Summer I Turned Pretty (trilogy)",
"section": "Section::::Characters.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 1173,
"text": "BULLET::::- Taylor Jewel: Belly's best friend despite being her polar opposite. Unlike Belly, she is boy crazy and shallow, though by \"We'll Always Have Summer\", she grows up and has something wise to say for once. In the first book of the series, as seen in flashbacks, she is considered something of a slut. She goes for all three boys (Steven, Conrad and Jeremiah) almost at once, determined to hook up with one of them. She is seen desperately trying to pair Belly with boys, even though her friend endlessly protests. She and Belly have a falling out towards the climax of \"It's Not Summer Without You\" after Taylor accuses Belly of being \"a crappy friend\" when Belly does not want her to come to a party at the beach house. By \"We'll Always Have Summer\", they make peace, and Taylor can be seen throughout the course of the book supporting and helping Belly with her wedding. She confronts Conrad after suspecting he said something to Belly to upset her and warns him to leave her alone. Although she admits that Belly told her a part of her will always love Conrad, and knows he loves Belly too. She asks him to \"be the good guy Belly says he is\" by letting her go.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33375094",
"title": "Effects of advertising on teen body image",
"section": "Section::::Bad effect.:Effects on young women.\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 314,
"text": "Unfortunately thin-idealized bodies are attributed with self control, success and discipline, and therefore proclaimed as being desirable and socially valued. “Being slim means resisting the temptations that surround consumers in countries of overabundance and wealth” (Thompson et al 1995: Halliwell et al 2004).\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
4i0a08
|
why does windows 10 take over 2 gb of ram to sit there doing nothing, while windows 95 needed less than 0.004 gb?
|
[
{
"answer": "1. It's not sitting there doing nothing. There's tons of stuff running in the background- updaters, anti-virus, Cortana, and more.\n\n2. It doesn't actually need a full 2GB. But RAM that's not being used is just wasted, so it will load extra things into memory to speed up the computer if you have more RAM than you need.",
"provenance": null
},
{
"answer": "If you have RAM that you aren't using then Windows 10 will 'preload' things that you use a lot. That way when you actually go to use them it doesn't have to waste time to load it. It tends to make the OS more responsive for things that you do on a regular basis. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "34064",
"title": "Windows 95",
"section": "Section::::System requirements.\n",
"start_paragraph_id": 46,
"start_character": 0,
"end_paragraph_id": 46,
"end_character": 227,
"text": "Windows 95 may fail to boot on computers with more than approximately 480 MB of memory. In such a case, reducing the file cache size or the size of video memory can help. The theoretical maximum according to Microsoft is 2 GB.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8996352",
"title": "Windows Vista editions",
"section": "Section::::Editions for personal computers.:64-bit versions.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 255,
"text": "All 32-bit editions of Windows Vista, excluding the Starter edition, support up to 4 GB of RAM. The 64-bit edition of Home Basic supports 8 GB of RAM, Home Premium supports 16 GB, and the Business, Enterprise, and Ultimate editions support 128 GB of RAM.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31546068",
"title": "List of RAM drive software",
"section": "Section::::Microsoft Windows.:Proprietary.:SoftPerfect RAM Disk.\n",
"start_paragraph_id": 54,
"start_character": 0,
"end_paragraph_id": 54,
"end_character": 725,
"text": "Available for Windows XP, 2003, 2008, Vista, 7, 8 and 10. Can only access memory available to Windows (i.e. the RAM disk is limited to the same ca. 3.25 GB as the Windows 32-bit system). To use physical memory above 4 GB you must use a 64-bit system. Multiple RAM disks can be created, and these can be 'persisted' i.e. saved to, and restored from, a hard disk image. Note: Works well except for the special \"Harddisk emulation\" part tends to crash or become unstable when used with the updated windows 10 anniversary edition. Home use licence is $29 (Before 11/05/2016 it was free for non-commercial use. Last free version was 3.4.8). A commercial use license starts at $49, and discounts are offered for quantities over 5.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "795680",
"title": "Windows NT 4.0",
"section": "Section::::Comparison with Windows 95.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 267,
"text": "The maximum amount of supported physical random-access memory (RAM) in Windows NT 4.0 is 4 GB, which is the maximum possible for a purely 32-bit x86 operating system. By comparison, Windows 95 fails to boot on computers with more than approximately 480 MB of memory.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "75433",
"title": "Windows 98",
"section": "Section::::Limitations.\n",
"start_paragraph_id": 85,
"start_character": 0,
"end_paragraph_id": 85,
"end_character": 622,
"text": "Both Windows 98 and Windows 98 Second Edition have problems running on hard drives bigger than 32 GB and certain Phoenix BIOS settings. A software update fixed this shortcoming. In addition, until Windows XP with Service Pack 1, Windows was unable to handle hard drives that are over 137 GB in size with the default drivers, because of missing 48-bit Logical Block Addressing support. While Microsoft never officially fixed this issue, unofficial patches are available to fix this shortcoming in Windows 9x, although the author stated that data corruption is possible and did not guarantee that it would work as expected.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17786592",
"title": "Acer Aspire One",
"section": "Section::::Acer Aspire One D270.\n",
"start_paragraph_id": 86,
"start_character": 0,
"end_paragraph_id": 86,
"end_character": 235,
"text": "Although Intel specifies the maximum RAM capability of the N2600 as 2 GB, numerous users have reported a 4 GB SODIMM works well in the D270, with 2.99 GB reported usable by Windows 7 Home Premium 32 bit (after upgrading from Starter).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2187893",
"title": "Windows Server Essentials",
"section": "Section::::Design and licensing considerations.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 224,
"text": "BULLET::::- All Windows Small Business Server versions up to SBS 2003 are limited to no more than 4 GB of RAM. 2008 requires a minimum of 4GB for installation, it needs more for performance. 2008 supports a maximum of 32GB.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
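The answers above about Windows 10's memory use hinge on the idea that RAM used for caching and preloading is still effectively available to applications. Below is a small sketch of how one might observe this, assuming the third-party `psutil` package is installed; it simply prints the "total", "available", and "used" figures that `psutil.virtual_memory()` reports.

```python
# A small sketch (requires the third-party `psutil` package) that prints
# total vs. available memory. Memory the OS is using for caches/preloading
# still counts as "available", because it can be reclaimed the moment an
# application needs it -- the point the answers above are making.
import psutil

mem = psutil.virtual_memory()
gib = 1024 ** 3
print(f"total:     {mem.total / gib:.1f} GiB")
print(f"available: {mem.available / gib:.1f} GiB")
print(f"used:      {mem.used / gib:.1f} GiB ({mem.percent:.0f}%)")
```

On a typical idle machine the "available" figure stays high even when Task Manager shows several GB "in use", which is the preloading/caching behaviour described above.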
33iigz
|
does the united states government heavily regulate media outlets?
|
[
{
"answer": "No. The concept of [prior restraint](_URL_0_) is almost completely foreign to the US legal system.\n\nNow, the government can always *ask* an outlet not to run a story, or at least to delay it, and sometimes the network or newspaper will oblige. But it's almost impossible to legally prevent a US newspaper or television network from releasing any information at all.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "21389",
"title": "Telecommunications in Nigeria",
"section": "Section::::Radio and television.:Media control and press freedom.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 451,
"text": "Although the government censors the electronic media through the National Broadcasting Commission (NBC), which is responsible for monitoring and regulating broadcast media, there's no established proof towards Government's control of the media. Radio stations remain susceptible to attacks by political groups. For example, in January 2012 some media figures alleged the NBC warned radio stations not to broadcast stories about fuel subsidy protests.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11417233",
"title": "Media freedom in Russia",
"section": "Section::::Government ownership and control of media outlets.\n",
"start_paragraph_id": 76,
"start_character": 0,
"end_paragraph_id": 76,
"end_character": 331,
"text": "The government has been using direct ownership, or ownership by large private companies with government links, to control or influence major national media and regional media outlets, especially television. There were reports of self-censorship in the television and print media, particularly on issues critical of the government.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10980740",
"title": "Censorship in Belarus",
"section": "Section::::State control over broadcast media.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 494,
"text": "The state maintains a virtual monopoly on domestic broadcast media, only the state media broadcasts nationwide, and the content of smaller television and radio stations is tightly restricted. The government has banned most independent and opposition newspapers from being distributed by the state-owned postal and kiosk systems, forcing the papers to sell directly from their newsrooms and use volunteers to deliver copies, but authorities sometimes harass and arrest the private distributors.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18934488",
"title": "Telecommunications in Guyana",
"section": "Section::::Television.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 426,
"text": "BULLET::::- Censorship: No government-imposed restrictions on television stations or suspensions of broadcasts in 2012. The government largely directs advertising to media houses aligned with the governing party. The government continues to exert heavy control over the content of the National Communications Network (TV), giving government spokespersons extended coverage, while limiting participation of opposition figures.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1132045",
"title": "Media democracy",
"section": "Section::::Media ownership concentration.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 493,
"text": "The concentration of media outlets has been encouraged by government deregulation and neoliberal trade policies. In the United States, the Telecommunications Act of 1996 removed most of the media ownership rules that were previously put in place. This led to a massive consolidation of the telecommunications industry. Over 4,000 radio stations were bought out, and minority ownership in TV stations dropped to its lowest point since 1990, when the federal government began tracking the data.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "52559609",
"title": "Lucas A. Powe Jr.",
"section": "Section::::Teaching and scholarship.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 629,
"text": "In Powe's 1987 book, \"American Broadcasting and the First Amendment\", he urged deregulation of broadcast media, in contrast to the theory in favor of television regulation put forth by Lee C. Bollinger. A key difference between newspapers, on the one hand, and television stations, on the other, is that broadcasters are licensed by the Federal Communications Commission. No matter what politically sensitive stories a newspaper prints, the government cannot take away its presses. But a television station that offends the government can have its license revoked, a paradigm that can contribute to self-censorship, Powe argued.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "52051805",
"title": "Media ownership in Colombia",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 288,
"text": "Needless to say, this means that the most popular media outlets in the country, in which the audience is concentrated, is privately owned. There are three state-owned television stations, but two private networks, Caracol and RCN, dominate viewership. Print media is all privately owned.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
e64kn2
|
what is a name server, what is a network domain, and how are the two related?
|
[
{
"answer": "So a name server is like a telephone book, it takes the name of a website and converts that information into a usable IP address which is an internet protocol series of numbers like 8.8.8.8 which is used in this case to connect to _URL_0_",
"provenance": null
},
{
"answer": "Sorry if this gets rambly; I'm at the stage where I intuitively know what these things are, but explaining them in ways that make some amount of sense is challenging.\n\nDomains are an abstraction that tells us what groups of computers have some degree of connectivity to each other. The form of this that most people will be familiar with is your standard corporate or education network; on a hardware level you have one or more main lines out to the outside world, and these have routers in front of them; then the routers connect to special servers called *Domain Controllers* that handle functions like IP address assignments and authentication and holding an authoritative directory of computers and users on the network. If you're hired by a new place and they give you some form of a \"corporate login\" then they're probably adding a user account for you on some sort of domain controller.\n\nThe domain generally has a name, and some networks can have multiple domains that may or may not be able to see each other, but (importantly) the IP assignments and authentication from other domains won't work there. \n\nAnother place we don't think about domains, are the sites we use daily. _URL_2_ is what's called the *domain name* for a site that is open to the entire internet, and every server in the Reddit domain is under this hierarchy.\n\nName servers, or Domain Name Servers, have a particular function that is most evident when talking about domains like _URL_0_: making your computer know where you want to go when you type \"_URL_0_\" into your browser. At the most basic level it's a huge table with one side being the network address of the computer, and the right side being the \"friendly\" name that humans can remember. This happened because, funnily enough, most people didn't want to keep a list of IP addresses for their favorite websites, and companies who wanted to use websites for marketing purposes found that it was much easier getting people to visit a site like \"_URL_1_\" than \"66.248.19.154\" for instance (note: no idea where that IP leads, investigate at your own risk)\n\nBack to your work domain, this gets used if your work has specific sites like a work intranet page with internal tools for your job, or even just an online company newsletter; in this case anyone on the domain for, say, widgets inc, can put \"widgetnet\" into a browser and get to the internal widgetnet page, or say \"payroll\" for HR to be taken to the server that hosts the payroll software; these are very customizable and (importantly) not routable from the internet in general; you generally have to be inside that domain to access it.\n\nHopefully that's a somewhat decent explanation.",
"provenance": null
},
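The name-server answers above boil down to a "telephone book" lookup: a hostname goes in, one or more IP addresses come out. Below is a minimal sketch of that translation using Python's standard `socket` module, which delegates the query to whatever name servers the system is configured to use; the hostname "example.com" is only an illustrative choice.

```python
# A minimal sketch of the "telephone book" lookup described above: asking
# the system's configured name servers to translate a hostname into the
# IP addresses it maps to. "example.com" is just an illustrative name.
import socket

def resolve(hostname: str) -> list[str]:
    """Return the unique IP addresses the hostname resolves to."""
    infos = socket.getaddrinfo(hostname, None)
    # Each entry is (family, type, proto, canonname, sockaddr); the address
    # string is the first element of sockaddr for both IPv4 and IPv6.
    return sorted({info[4][0] for info in infos})

if __name__ == "__main__":
    print(resolve("example.com"))
```

This uses the operating system's resolver (and its cache), which is the same path a browser takes before it can open a connection to the numeric address.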
{
"answer": null,
"provenance": [
{
"wikipedia_id": "39241",
"title": "Name server",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 457,
"text": "An example of a name server is the server component of the Domain Name System (DNS), one of the two principal namespaces of the Internet. The most important function of DNS servers is the translation (resolution) of human-memorable domain names and hostnames into the corresponding numeric Internet Protocol (IP) addresses, the second principal name space of the Internet which is used to identify and locate computer systems and resources on the Internet.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8339",
"title": "Domain Name System",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 421,
"text": "The Domain Name System delegates the responsibility of assigning domain names and mapping those names to Internet resources by designating authoritative name servers for each domain. Network administrators may delegate authority over sub-domains of their allocated name space to other name servers. This mechanism provides distributed and fault-tolerant service and was designed to avoid a single large central database.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39241",
"title": "Name server",
"section": "Section::::Domain Name Server.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 858,
"text": "The Internet maintains two principal namespaces: the domain name hierarchy and the IP address system. The Domain Name System maintains the domain namespace and provides translation services between these two namespaces. Internet name servers implement the Domain Name System. The top hierarchy of the Domain Name System is served by the root name servers maintained by delegation by the Internet Corporation for Assigned Names and Numbers (ICANN). Below the root, Internet resources are organized into a hierarchy of domains, administered by the respective registrars and domain name holders. A DNS name server is a server that stores the DNS records, such as address (A, AAAA) records, name server (NS) records, and mail exchanger (MX) records for a domain name (see also List of DNS record types) and responds with answers to queries against its database.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39878",
"title": "Domain name",
"section": "Section::::Purpose.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 667,
"text": "Domain names serve to identify Internet resources, such as computers, networks, and services, with a text-based label that is easier to memorize than the numerical addresses used in the Internet protocols. A domain name may represent entire collections of such resources or individual instances. Individual Internet host computers use domain names as host identifiers, also called \"host names\". The term \"host name\" is also used for the leaf labels in the domain name system, usually without further subordinate domain name space. Host names appear as a component in Uniform Resource Locators (URLs) for Internet resources such as web sites (e.g., en.wikipedia.org).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39241",
"title": "Name server",
"section": "Section::::Domain Name Server.:Authoritative name server.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 416,
"text": "An authoritative name server is a name server that gives answers in response to questions asked about names in a zone. An authoritative-only name server returns answers only to queries about domain names that have been specifically configured by the administrator. Name servers can also be configured to give authoritative answers to queries in some zones, while acting as a caching name server for all other zones.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39241",
"title": "Name server",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 362,
"text": "A name server is a computer application that implements a network service for providing responses to queries against a directory service. It translates an often humanly meaningful, text-based identifier to a system-internal, often numeric identification or addressing component. This service is performed by the server in response to a service protocol request.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1066382",
"title": "Computer network naming scheme",
"section": "Section::::Naming schemes in computing.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 247,
"text": "Server names may be named by their role or follow a common theme such as colors, countries, cities, planets, chemical element, scientists, etc. If servers are in multiple different geographical locations they may be named by closest airport code.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
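The answers above describe DNS resolution as a lookup from human-memorable names to numeric IP addresses. As a purely illustrative aside (not part of the original answers), the sketch below uses Python's standard library to ask the operating system's configured resolver, and therefore its name servers, to perform that translation; the hostname "example.com" is an assumed placeholder.

```python
# Purely illustrative: a minimal sketch of the "telephone book" step the
# answers above describe. It relies on the operating system's configured
# resolver (and therefore its name servers); "example.com" is an assumed
# placeholder, not a host mentioned in the original answers.
import socket


def resolve(hostname: str) -> list[str]:
    """Return the IP addresses the system's resolver finds for a hostname."""
    entries = socket.getaddrinfo(hostname, None)
    # Each entry is (family, type, proto, canonname, sockaddr); the address
    # string is the first element of sockaddr.
    return sorted({entry[4][0] for entry in entries})


if __name__ == "__main__":
    print(resolve("example.com"))  # prints whatever addresses the resolver returns
```

Running it simply prints whatever addresses the local resolver returns for that name, which is the name-to-address translation the first answer compares to a telephone book.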
1r9av2
|
how can long water fasting periods be healthy?
|
[
{
"answer": "That is insanely unhealthy. I had a friend one time do a 40 day fast, did it more for some personal factors, not weight loss. He discussed it with his doctor and took the proper precautions and he did it.\n\nFasting should **never** be a method of weight loss. His obesity will simply complicate things... Terrible idea. ",
"provenance": null
},
{
"answer": "It's not healthy at all. In fact, it's often lethal. ",
"provenance": null
},
{
"answer": "First off, fasting is an absolutely terrible way to lose weight. Your body actually takes the lack of food as a signal of danger and makes your body burn less in order to sustain itself longer.\n\nWater fasting may seem to make someone lose weight, but they're just losing water weight and nothing else.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "187886",
"title": "Fasting",
"section": "Section::::In alternative medicine.\n",
"start_paragraph_id": 207,
"start_character": 0,
"end_paragraph_id": 207,
"end_character": 214,
"text": "There is no scientific evidence that prolonged fasting provides any significant health benefits. Negative health complications from long term fasting include arthritis, abdominal cramp and orthostatic hypotension.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20104879",
"title": "Intermittent fasting",
"section": "Section::::Research.:Adverse effects.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 473,
"text": "Understanding the potential adverse effects of intermittent fasting is limited by an inadequate number of rigorous clinical trials. One 2015 review of preliminary clinical studies found that short-term intermittent fasting may produce minor adverse effects, such as continuous feelings of weakness and hunger, headaches, fainting, or dehydration. Long-term, periodic fasting may cause eating disorders or malnutrition, with increased susceptibility to infectious diseases.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20104879",
"title": "Intermittent fasting",
"section": "Section::::Research.:Weight loss.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 534,
"text": "A 2018 review of intermittent fasting in obese people showed that reducing calorie intake one to six days per week over at least 12 weeks was effective for reducing body weight on an average of ; the results were not different from a simple calorie restricted diet, and the clinical trials reviewed were run mostly on middle-aged women from the US and the UK, limiting interpretation of the results. Intermittent fasting has not been studied in children, the elderly, or underweight people, and could be harmful in these populations.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22299625",
"title": "Orthopathy",
"section": "Section::::Criticism.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 377,
"text": "Medical experts consider natural hygiene practices such as anti-vaccination, fasting and food combining to be quackery. There is no scientific evidence that prolonged fasting provides any significant health benefits. A prolonged fast may cause \"anemia, impairment of liver function, kidney stones, postural hypotension, mineral imbalances, and other undesirable side effects.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20104879",
"title": "Intermittent fasting",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 355,
"text": "Intermittent fasting (intermittent energy restriction or intermittent calorie restriction) is an umbrella term for various eating diet plans that cycle between a period of fasting and non-fasting over a defined period. Intermittent fasting is under preliminary research to assess if it can produce weight loss comparable to long-term calorie restriction.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "49132",
"title": "Marathon",
"section": "Section::::Running.:After a marathon.\n",
"start_paragraph_id": 111,
"start_character": 0,
"end_paragraph_id": 111,
"end_character": 318,
"text": "After long training runs and the marathon itself, consuming carbohydrates to replace glycogen stores and protein to aid muscle recovery is commonly recommended. In addition, soaking the lower half of the body for approximately 20 minutes in cold or ice water may force blood through the leg muscles to speed recovery.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26434412",
"title": "Mark Mattson",
"section": "Section::::Contributions to research.:Intermittent Fasting and Hormesis.\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 2029,
"text": "Animal studies performed in Mattson’s Laboratory showed that intermittent fasting has profound beneficial effects on the body and brain including: 1) Improved glucose regulation; 2) Loss of abdominal fat with maintenance of muscle mass; 3) Reduced blood pressure and heart rate, and increased heart rate variability (similar to what occurs in trained endurance athletes; 4) Improved learning and memory and motor function; 5) Protection of neurons in the brain against dysfunction and degeneration in animal models of Alzheimer's disease, Parkinson's disease, stroke and Huntington's disease. He further discovered that intermittent fasting is beneficial for health because it imposes a challenge to cells, and those cells respond adaptively by enhancing their ability to cope with stress and resist disease. This general mechanism whereby cells and organisms respond to a mild challenge or stress by improving their ability to resist more severe stress and diseases is called hormesis. Intermittent fasting imposes a mild energetic challenge on cells of the body and brain with the result being that cells produce a range of stress resistance proteins including antioxidant enzymes, protein chaperones and growth factors. Importantly, intermittent fasting also stimulates autophagy, a process by which the cells eliminate damage molecules and dysfunctional mitochondria. Mattson further established that a key factor in many of the health benefits of intermittent fasting is the depletion of liver energy (glycogen) stores and the production of ketone bodies from fat cells. The ketones are not only a fuel for neurons, but they may also stimulate the production of brain-derived neurotrophic factor which stimulates the formation of synapses between neurons and also protects the neurons against stress. The \"metabolic switching\" to ketone production occurs within 10-14 hours after the onset of fasting, and so daily fasting periods of 16-20 hours is sufficient to achieve many of the health benefits of intermittent fasting.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1c5doq
|
How dependent is oceanic life on phytoplankton? Not just fish, but mammals and all life forms that live off or in the ocean?
|
[
{
"answer": "[Extremeophiles](_URL_0_) would be okay.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "50557",
"title": "Phytoplankton",
"section": "Section::::Ecology.:Food web.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 460,
"text": "Phytoplankton serve as the base of the aquatic food web, providing an essential ecological function for all aquatic life. Under future conditions of anthropogenic warming and ocean acidification, changes in phytoplankton mortality may be significant. One of the many food chains in the ocean – remarkable due to the small number of links – is that of phytoplankton sustaining krill (a crustacean similar to a tiny shrimp), which in turn sustain baleen whales.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1281745",
"title": "SeaWiFS",
"section": "Section::::Applications.\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 999,
"text": "Phytoplankton is a key component in the base of the oceanic food chain and oceanographers have hypothesized a link between oceanic chlorophyll and fisheries production for some time. The degree to which phytoplankton relates to marine fish production depends on the number of trophic links in the food chain, and how efficient each link is. Estimates of the number of trophic links and trophic efficiencies from phytoplankton to commercial fisheries have been widely debated, though they have been little substantiated. More recent research suggests that positive relationships between and fisheries production can be modeled and can be very highly correlated when examined on the proper scale. For example, Ware and Thomson (2005) found an r of 0.87 between resident fish yield (metric tons km-2) and mean annual concentrations (mg m-3). Others have found the Pacific's Transition Zone Chlorophyll Front (chlorophyll density of 0.2 mg m-3) to be defining feature in loggerhead turtle distribution.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "58756962",
"title": "Viral shunt",
"section": "Section::::Effect on the marine food web.:Important Organisms.\n",
"start_paragraph_id": 36,
"start_character": 0,
"end_paragraph_id": 36,
"end_character": 346,
"text": "There are many different microbes present in aquatic communities. Phytoplankton, specifically \"Picoplankton,\" are the most important organism in this microbial loop. They provide a foundation as primary producers; they are responsible for the majority of primary production in the ocean and around 50% of primary production of the entire planet.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2801560",
"title": "Ocean acidification",
"section": "Section::::Possible impacts.:Impacts on oceanic calcifying organisms.\n",
"start_paragraph_id": 38,
"start_character": 0,
"end_paragraph_id": 38,
"end_character": 921,
"text": "Although the natural absorption of by the world's oceans helps mitigate the climatic effects of anthropogenic emissions of , it is believed that the resulting decrease in pH will have negative consequences, primarily for oceanic calcifying organisms. These span the food chain from autotrophs to heterotrophs and include organisms such as coccolithophores, corals, foraminifera, echinoderms, crustaceans and molluscs. As described above, under normal conditions, calcite and aragonite are stable in surface waters since the carbonate ion is at supersaturating concentrations. However, as ocean pH falls, the concentration of carbonate ions required for saturation to occur increases, and when carbonate becomes undersaturated, structures made of calcium carbonate are vulnerable to dissolution. Therefore, even if there is no change in the rate of calcification, the rate of dissolution of calcareous material increases.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20441078",
"title": "Forage fish",
"section": "Section::::In the oceans.:Ocean food webs.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 314,
"text": "The most important groups of phytoplankton include the diatoms and dinoflagellates. Diatoms are especially important in oceans, where they are estimated to contribute up to 45% of the total ocean's primary production. Diatoms are usually microscopic, although some species can reach up to 2 millimetres in length.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "43269",
"title": "Antarctic krill",
"section": "Section::::Food.:Biological pump and carbon sequestration.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 372,
"text": "If the phytoplankton is consumed by other components of the pelagic ecosystem, most of the carbon remains in the upper layers of the ocean. There is speculation that this process is one of the largest biofeedback mechanisms of the planet, maybe the most sizable of all, driven by a gigantic biomass. Still more research is needed to quantify the Southern Ocean ecosystem.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1281745",
"title": "SeaWiFS",
"section": "Section::::Applications.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 1004,
"text": "Estimating the amount of global or regional chlorophyll, and therefore phytoplankton, has large implications for climate change and fisheries production. Phytoplankton play a huge role in the uptake of the world's carbon dioxide, a primary contributor to climate change. A percentage of these phytoplankton sink to ocean floor, effectively taking carbon dioxide out of the atmosphere and sequestering it in the deep ocean for at least a thousand years. Therefore, the degree of primary production from the ocean could play a large role in slowing climate change. Or, if primary production slows, climate change could be accelerated. Some have proposed fertilizing the ocean with iron in order to promote phytoplankton blooms and remove carbon dioxide from the atmosphere. Whether these experiments are undertaken or not, estimating chlorophyll concentrations in the world's oceans and their role in the ocean's biological pump could play a key role in our ability to foresee and adapt to climate change.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
5qufpa
|
why us telecos still use cdma technology, when majority of the world uses gsm for the communication?
|
[
{
"answer": "The main reasons are a matter of timing, corporate greed and legacy.\n\nBack when the US networks where starting to form, the switch from analogue to digital cellular technology was also happening and CDMA had some interesting advantages over GSM.\n\nOne of the most appealing features at the time (And still continues to be) was that it is easier to lock a CDMA user into the network that provides the phone than it is with GSM technology whose spec demands that they be interoperable between networks. CDMA makes it harder for a user to leave a network for another one and take the phone with them (In some cases it's impossible).\n\nThere where other benefits to CDMA as well such as greater capacity on the network, a questionable theory that call quality was better and so forth but GSM caught up very quickly and eventually leapfrogged CDMA in the quality and feature departments.\n\nNow of course, some of those network operators have folded into the big players you see today and frankly switching from CDMA to GSM is a BIG commitment those network operators don't really wish to undertake.\n\nCDMA as a technology outside of the USA and small parts of Russia is dead with the advent of 4G. GSM has been taken up by most of the world, mostly driven by Europe's mass uptake of it. Though 3G briefly was based on a variance of CDMA, 4G uses a technology called LTE which is a further refinement of GSM technology.",
"provenance": null
},
{
"answer": "The selection of CDMA over GSM was mostly based on the distances and number of users supported by an antenna. CDMA was initially superior to GSM on both, therefore cellular networks could have better coverage with fewer macro cells (towers). However with the adoption of LTE, as well as refinements to 4G over GSM (contrary to popular belief, modifications to both CDMA and GSM were allowed to call themselves 4G without supporting LTE), and subsequent future migration to 5G, the differences have become moot.\n\nBut as the US was an early adopter of cellular technology, and Qualcomm was the leading provider of CDMA technology to both Verizon's predecessors and cell phone manufacturers.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "201952",
"title": "CdmaOne",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 606,
"text": "CDMA or \"code division multiple access\" is a digital radio system that transmits streams of bits (PN codes). CDMA permits several radios to share the same frequencies. Unlike TDMA \"time division multiple access\", a competing system used in 2G GSM, all radios can be active all the time, because network capacity does not directly limit the number of active radios. Since larger numbers of phones can be served by smaller numbers of cell-sites, CDMA-based standards have a significant economic advantage over TDMA-based standards, or the oldest cellular standards that used frequency-division multiplexing.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12808",
"title": "GSM",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 422,
"text": "2G networks developed as a replacement for first generation (1G) analog cellular networks. The GSM standard originally described a digital, circuit-switched network optimized for full duplex voice telephony. This expanded over time to include data communications, first by circuit-switched transport, then by packet data transport via General Packet Radio Service (GPRS), and Enhanced Data Rates for GSM Evolution (EDGE).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "145436",
"title": "Network Rail",
"section": "Section::::Assets.:Telecoms assets.\n",
"start_paragraph_id": 67,
"start_character": 0,
"end_paragraph_id": 67,
"end_character": 295,
"text": "GSM-R radio systems are being introduced across Europe under EU legislation for interoperability. In the UK, as of March 2014, Network Rail is well underway in the UK implementation of GSM-R to replace its legacy National Radio Network (NRN) and Cab Secure Radio (CSR) systems currently in use.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20072368",
"title": "British Rail Telecommunications",
"section": "Section::::GSM-R.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 332,
"text": "GSM-R radio systems are being introduced across Europe under EU legislation for interoperability. In the UK, Network Rail has established a stakeholder's board with cross industry representation to drive the UK implementation of GSM-R to replace the National Radio Network (NRN) and Cab Secure Radio (CSR) systems currently in use.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4385443",
"title": "Generic Access Network",
"section": "Section::::Similar technologies.\n",
"start_paragraph_id": 57,
"start_character": 0,
"end_paragraph_id": 57,
"end_character": 595,
"text": "GAN/UMA is not the first system to allow the use of unlicensed spectrum to connect handsets to a GSM network. The GIP/IWP standard for DECT provides similar functionality, but requires a more direct connection to the GSM network from the base station. While dual-mode DECT/GSM phones have appeared, these have generally been functionally cordless phones with a GSM handset built-in (or vice versa, depending on your point of view), rather than phones implementing DECT/GIP, due to the lack of suitable infrastructure to hook DECT base-stations supporting GIP to GSM networks on an ad-hoc basis.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2079785",
"title": "Discontinuous transmission",
"section": "Section::::Misconception.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 714,
"text": "A common misconception is that DTX improves capacity by freeing up TDMA time slots for use by other conversations. In practice, the unpredictable availability of time slots makes this difficult to implement. However, reducing interference is a significant component in how GSM and other TDMA based mobile phone systems make better use of the available spectrum compared to older analog systems such as AMPS and NMT. While older network types theoretically allocated two 25–30 kHz channels per conversation, in practice some radios would cause interference on neighbouring channels making them unusable, and a single radio may broadcast too strong an oval signal pattern to let nearby cells reuse the same channel.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "483713",
"title": "3rd Generation Partnership Project 2",
"section": "",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 267,
"text": "GSM/GPRS/EDGE/W-CDMA is the most widespread wireless standard in the world. A few countries (such as China, the United States, Canada, Ukraine, Trinidad and Tobago, India, South Korea and Japan) use both sets of standards, but most countries use only the GSM family.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
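The provenance above notes that in CDMA all radios can transmit at the same time because each user is distinguished by a spreading code rather than a time slot. Purely as an illustration (not taken from the original answers, and not a model of any real cellular standard), here is a toy Python sketch of that idea using orthogonal Walsh codes; the two-user setup and the bit values are assumptions for the example.

```python
# Toy illustration of code-division multiple access (CDMA), assuming
# two users sharing the channel simultaneously. The codes, users, and
# bits below are made up for the example.

# Orthogonal Walsh codes (chip sequences), one per user.
CODES = {
    "user_a": [1, 1, 1, 1],
    "user_b": [1, -1, 1, -1],
}


def spread(bit: int, code: list[int]) -> list[int]:
    """Spread a single bit (+1 or -1) across the user's chip sequence."""
    return [bit * chip for chip in code]


def despread(signal: list[int], code: list[int]) -> int:
    """Correlate the combined signal with one user's code to recover its bit."""
    corr = sum(s * c for s, c in zip(signal, code)) / len(code)
    return 1 if corr > 0 else -1


# Both users transmit at the same time; the channel just adds the chips.
bits = {"user_a": 1, "user_b": -1}
channel = [sum(chips) for chips in zip(*(spread(bits[u], CODES[u]) for u in CODES))]

# Each receiver recovers its own bit despite the simultaneous transmissions,
# because the codes are orthogonal (their correlation is zero).
for user, code in CODES.items():
    assert despread(channel, code) == bits[user]
print("recovered:", {u: despread(channel, c) for u, c in CODES.items()})
```

The point of the sketch is the one the cited paragraph makes: capacity is not limited by exclusive time slots, since overlapping transmissions can be separated again by correlating against each user's code.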
xaiu9
|
Jesus Christ and John the Baptist - Bibical Scholars wanted
|
[
{
"answer": "The association of Jesus with John is more or less universally accepted. There are a number of clues.\n\nFirst is the baptism. John offered baptism for the remission of sin, a fact that caused the last three evangelists apparent embarrassment. Matthew and Luke have John denigrate himself at the event. John goes a step farther and eliminated the baptism entirely. Why make it up if it causes problems?\n\nAgainst this view we should bear in mind that the earliest known recension, that of Mark, shows no such shame.\n\nThe second is that all four evangelists are careful to have John either implicitly (such as his emissaries from prison) or explicitly acknowledge Jesus as the Messiah. This seems to indicate that such an endorsement was important to the early Christian movement.\n\nAdditionally, the idea of baptism for remission of sin is unattested in judaism prior to John. Both the capacity of immersion to serve in this way and the idea that such an act could be performed by a third party are novelties, shared by the Baptist and the Christian movement.",
"provenance": null
},
{
"answer": "So far as I know, one of the the most well-known versions of this theory comes from E. P. Sanders:\n\n > Two of the things which are most securely known about Jesus are the beginning and the outcome of his career, and these are also two illuminating facts. Jesus began his public work, as far as we have any information at all about it, in close connection with John the Baptist, probably as a disciple.\n\n[E. P. Sanders, *Jesus and Judaism* \\(Minneapolis: Fortress, 1985), 91.](_URL_1_)\n\nSanders is not the only scholar to hold this opinion, nor does every scholar in the field agree with Sanders. See [Max Aplin's Ph.D. dissertation, \"Was Jesus Ever a Disciple of John the Baptist? A Historical Study,\"](_URL_0_) pp. 39-42, for a list of scholars on all sides of the argument.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "17006172",
"title": "G. B. Caird",
"section": "Section::::Significance.:Historical Jesus.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 567,
"text": "Consequently, his work has a refreshing lack of negative presuppositions. As with his teacher C. H. Dodd, he was adamant that the gospels were reliable witnesses not only to the theology of the early church but to the theology of Jesus himself. His claim in particular that Jesus's friction with the Pharisees reflected a legitimate, contemporary, first-century Palestinian debate about \"what it means for the nation of Israel to be the holy people of God in a world overrun by gentiles,\" and that this is profoundly \"political,\" is fundamental to his work on Jesus.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6207563",
"title": "Greg Boyd (theologian)",
"section": "Section::::Thought.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 729,
"text": "He is also a notable figure in New Testament scholarship and the Quest for the Historical Jesus. He is critical of liberal scholarship as typified by the Jesus Seminar as well as the individual work of scholars like John Dominic Crossan and Burton Mack. He has participated in numerous public debates, most notably with friend Robert M. Price and Dan Barker on the historicity of the New Testament and related matters. His first book in this area was \"Cynic Sage or Son of God?\" (1995). More recently, his book (co-authored with Paul Rhodes Eddy), \"The Jesus Legend: A Case for the Historical Reliability of the Synoptic Jesus Tradition\" (2007) won the 2008 Christianity Today Book of the Year Award (Biblical Studies category).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1566247",
"title": "John Dominic Crossan",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 408,
"text": "John Dominic Crossan (born February 17, 1934) is an Irish-American New Testament scholar, historian of early Christianity, and former Catholic priest who was a prominent member of The Jesus Seminar. His research has focused on the historical Jesus, on the cultural anthropology of the Ancient Mediterranean and New Testament worlds and on the application of postmodern hermeneutical approaches to the Bible.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "158627",
"title": "Jesus Seminar",
"section": "Section::::Criticism from scholars.:Composition of the Seminar and qualifications of the members.\n",
"start_paragraph_id": 83,
"start_character": 0,
"end_paragraph_id": 83,
"end_character": 410,
"text": "Of the 74 [scholars] listed in their publication \"The Five Gospels\", only 14 would be leading figures in the field of New Testament studies. More than half are basically unknowns, who have published only two or three articles. Eighteen of the fellows have published nothing at all in New Testament studies. Most have relatively undistinguished academic positions, for example, teaching at a community college.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24105572",
"title": "Francis Schüssler Fiorenza",
"section": "Section::::Biography.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 250,
"text": "\"Foundational Theology: Jesus and the Church\" is one of his earliest and best-known books. He has published widely, with more than 150 essays in the areas of fundamental theology, hermeneutics, and political theology, as well as several other books.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17719068",
"title": "Hans-Josef Klauck",
"section": "Section::::Professional career.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 342,
"text": "Hans-Josef Klauck is one of the most prolific New Testament scholars today, and has worked extensively on topics such as the parables of Jesus, Paul’s Corinthian correspondence, and the Johannine letters. He has also specialized in the religious and social history of the Greco-Roman world as a necessary background to New Testament studies.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2107601",
"title": "John Van Seters",
"section": "Section::::Research and Publication.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 502,
"text": "Most student handbooks on Pentateuchal studies are committed to a particular methodological approach or school of thought and largely ignore alternative theories of the Bible’s compositional history. Van Seters’ introduction, \"The Pentateuch: A Social-Science Commentary\" (1999) attempts to summarize the complex state of Pentateuchal research at the end of the 20th century and to locate his own method of Pentateuchal criticism, which is socio-historical and literary, within this scholarly context.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
5rk0bm
|
what is the difference between relative humidity and dew point?
|
[
{
"answer": "They are related in that they are both measures of the amount of water in the air.\n\nRelative humidity compares how much water vapor is in the air to how much water vapor the air could possibly hold at the current temperature. It is a measure of how saturated the air is compared to how saturated it could be.\n\nDew point is the temperature at which the current amount of water vapor in the air would be the maximum amount. Since as the air temperature cools, it can hold less water vapor, there is a temperature where the air can no longer hold the water vapor it currently has. That's the dew point.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "54912",
"title": "Dew point",
"section": "Section::::Humidity.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 369,
"text": "A high relative humidity implies that the dew point is closer to the current air temperature. A relative humidity of 100% indicates the dew point is equal to the current temperature and that the air is maximally saturated with water. When the moisture content remains constant and temperature increases, relative humidity decreases, but the dew point remains constant.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "223970",
"title": "Relative humidity",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 363,
"text": "Relative humidity (RH) is the ratio of the partial pressure of water vapor to the equilibrium vapor pressure of water at a given temperature. Relative humidity depends on temperature and the pressure of the system of interest. The same amount of water vapor results in higher relative humidity in cool air than warm air. A related parameter is that of dew point.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3916626",
"title": "Wet-bulb temperature",
"section": "Section::::General.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 237,
"text": "By contrast, the dew point is the temperature to which the ambient air must be cooled to reach 100% relative humidity assuming there is no further evaporation into the air; it is the point where condensation (dew) and clouds would form.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "54912",
"title": "Dew point",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 568,
"text": "The dew point is the temperature to which air must be cooled to become saturated with water vapor. When further cooled, the airborne water vapor will condense to form liquid water (dew). When air cools to its dew point through contact with a surface that is colder than the air, water will condense on the surface. When the temperature is below the freezing point of water, the dew point is called the frost point, as frost is formed rather than dew. The measurement of the dew point is related to humidity. A higher dew point means there is more moisture in the air.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "54912",
"title": "Dew point",
"section": "Section::::Calculating the dew point.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 205,
"text": "A well-known approximation used to calculate the dew point, \"T\", given just the actual (\"dry bulb\") air temperature, \"T\" (in degrees Celsius) and relative humidity (in percent), RH, is the Magnus formula:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "54912",
"title": "Dew point",
"section": "Section::::Calculating the dew point.:Simple approximation.\n",
"start_paragraph_id": 37,
"start_character": 0,
"end_paragraph_id": 37,
"end_character": 219,
"text": "There is also a very simple approximation that allows conversion between the dew point, temperature, and relative humidity. This approach is accurate to within about ±1 °C as long as the relative humidity is above 50%:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "54912",
"title": "Dew point",
"section": "Section::::Humidity.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 330,
"text": "If all the other factors influencing humidity remain constant, at ground level the relative humidity rises as the temperature falls. This is because less vapor is needed to saturate the air. In normal conditions, the dew point temperature will not be greater than the air temperature because relative humidity cannot exceed 100%.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
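The provenance above cites the Magnus formula and a simpler rule of thumb for converting between air temperature, relative humidity, and dew point. As an illustrative sketch only: the exact Magnus constants vary slightly between sources, so the values below are one commonly used set and should be treated as an assumption.

```python
import math

# Magnus-formula constants; several slightly different sets exist in the
# literature, so treat these particular values as an assumption.
B, C = 17.625, 243.04  # B is dimensionless, C is in degrees Celsius


def dew_point_magnus(temp_c: float, rh_percent: float) -> float:
    """Dew point (deg C) from temperature and relative humidity via the Magnus formula."""
    gamma = math.log(rh_percent / 100.0) + B * temp_c / (C + temp_c)
    return C * gamma / (B - gamma)


def dew_point_simple(temp_c: float, rh_percent: float) -> float:
    """Rule-of-thumb approximation, roughly +/-1 deg C when RH is above 50%."""
    return temp_c - (100.0 - rh_percent) / 5.0

# Example: warm, fairly humid air.
print(round(dew_point_magnus(25.0, 60.0), 1))  # approx. 16.7
print(round(dew_point_simple(25.0, 60.0), 1))  # 17.0
```

Note that at 100% relative humidity the logarithm term vanishes and the Magnus expression returns the air temperature itself, matching the cited statement that the dew point then equals the current temperature.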
4jd20l
|
How did the Roman aristocracy treat/view plebeians? Were they treated differently at different points in the republic?
|
[
{
"answer": "Much of the historical narrative that we have for the early Republic is dominated by the so-called struggle of the orders. This was sort of a civil rights campaign by the plebeians for equal rights. At this stage the patricians were a handful of established aristocratic families, and the plebeians were everyone else. At first, the patricians monopolised all the major political and religious offices, but the plebeians gradually won the right to hold these positions. One of the most important victories for the plebeian cause came in 367 BC, when a law was passed guaranteeing at least one plebeian consul; subsequently more and more offices were opened up to plebeians. (It should be noted that the literary sources we have for this period are much later, and are therefore open to question. But we don't have anything better to go on, so most historians tend to assume that the surviving narratives are based on a factual core even if many of the details are invented or distorted.)\n\nBy the late Republic, the patrician-plebeian distinction was largely redundant. There were still patrician families, but these were not synonymous with the office-holding nobility, as they had once been. Being a patrician could even be seen as a disadvantage: one patrician, the populist politician Publius Claudius Pulcher (wow, alliteration), had to be adopted into a plebeian family in order to stand for the office of tribune (he became Publius *Clodius* Pulcher in 59 BC as a result). In late Republican parlance, \"plebs\" became a more general term to refer to the common people, meaning anyone who didn't belong to the senatorial or equestrian classes, except in technical cases like that of Clodius.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "25793360",
"title": "Social class in Italy",
"section": "Section::::Ancient Rome.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 951,
"text": "Roman society is largely viewed as hierarchical, with slaves (\"servi\") at the bottom, freedmen (\"liberti\") above them, and free-born citizens (\"cives\") at the top. Free citizens were themselves also divided by class. The broadest, and earliest, division was between the patricians, who could trace their ancestry to one of the 100 Patriarchs at the founding of the city, and the plebeians, who could not. This became less important in the later Republic, as some plebeian families became wealthy and entered politics, and some patrician families fell on hard times. Anyone, patrician or plebeian, who could count a consul as his ancestor was a noble (\"nobilis\"); a man who was the first of his family to hold the consulship, such as Marius or Cicero, was known as a \"novus homo\" (\"new man\") and ennobled his descendants. Patrician ancestry, however, still conferred considerable prestige, and many religious offices remained restricted to patricians.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "521555",
"title": "Ancient Rome",
"section": "Section::::Society.:Class structure.\n",
"start_paragraph_id": 114,
"start_character": 0,
"end_paragraph_id": 114,
"end_character": 939,
"text": "Roman society is largely viewed as hierarchical, with slaves (\"servi\") at the bottom, freedmen (\"liberti\") above them, and free-born citizens (\"cives\") at the top. Free citizens were also divided by class. The broadest, and earliest, division was between the patricians, who could trace their ancestry to one of the 100 Patriarchs at the founding of the city, and the plebeians, who could not. This became less important in the later Republic, as some plebeian families became wealthy and entered politics, and some patrician families fell economically. Anyone, patrician or plebeian, who could count a consul as his ancestor was a noble (\"nobilis\"); a man who was the first of his family to hold the consulship, such as Marius or Cicero, was known as a \"novus homo\" (\"new man\") and ennobled his descendants. Patrician ancestry, however, still conferred considerable prestige, and many religious offices remained restricted to patricians.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3453921",
"title": "Social class in ancient Rome",
"section": "Section::::Patricians and plebeians.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 918,
"text": "The distinction between patricians and plebeians in Ancient Rome was based purely on birth. Although modern writers often portray patricians as rich and powerful families who managed to secure power over the less-fortunate plebeian families, plebeians and patricians among the senatorial class were equally wealthy. As civil rights for plebeians increased during the middle and late Roman Republic, many plebeian families had attained wealth and power while some traditionally patrician families had fallen into poverty and obscurity. The first Roman Emperor, Augustus, was of plebeian origin, as were many of his successors. By the Late Empire, few members of the Senate were from the original patrician families, most of which had died out. Rome continued to have a hierarchical class system, but it was now dominated by economic differences, rather than the hereditary distinction between Patricians and Plebeians.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3453921",
"title": "Social class in ancient Rome",
"section": "Section::::Patricians and plebeians.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 648,
"text": "In the Roman Kingdom and the early Roman Republic the most important division in Roman society was between the patricians and the plebeians. The patricians were a small elite whose ancestry was traced to the first Senate established by Romulus, who monopolised political power. The plebeians comprised the majority of Roman citizens (see below). Adult males who were not Roman citizens, whether free or slave, fall outside this division. Women and children were also not citizens, but took the social status of their father or husband, which granted them various rights and protections not available to the women and children of men of lower rank.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2421126",
"title": "Lex Licinia Sextia",
"section": "Section::::The laws.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 387,
"text": "Indebtedness was a major problem among the plebeians, particularly among small peasant farmers, and this led to conflicts with the patricians, who were the aristocracy, the owners of large landed estates and the creditors. Several laws regulating credit or the interest rates of credit to provide some relief for the helpless debtors were passed during the period of the Roman Republic.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "244404",
"title": "Plebs",
"section": "Section::::In ancient Rome.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 528,
"text": "The 19th-century historian Barthold Georg Niebuhr held that plebeians began to appear at Rome during the reign of Ancus Marcius and were possibly foreigners settling in Rome as naturalized citizens. In any case, at the outset of the Roman Republic, the patricians had a near monopoly on political and social institutions. Plebeians were excluded from magistracies and religious colleges, and they were not permitted to know the laws by which they were governed. Plebeians served in the army, but rarely became military leaders.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "181397",
"title": "Patrician (ancient Rome)",
"section": "Section::::Roman Republic and Empire.:Patricians vs. plebeians.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 535,
"text": "The distinction between patricians and plebeians in Ancient Rome was based purely on birth. Although modern writers often portray patricians as rich and powerful families who managed to secure power over the less-fortunate plebeian families, plebeians and patricians among the senatorial class were equally wealthy. As civil rights for plebeians increased during the middle and late Roman Republic, many plebeian families had attained wealth and power while some traditionally patrician families had fallen into poverty and obscurity.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3sgu4l
|
why are governments post-actively banning the filming of slaughter house cruelty instead of pro-actively following animal abuse laws?
|
[
{
"answer": "I don't know where the government is specifically making it illegal to film that - it already is based on normal privacy and employment law. If you trespass onto a private property to film, that's illegal. And if you gain employment there to film secretly, you've committed fraud - and you've probably violated a clause in the employment contract you signed. \n\nA USDA Inspector has to be present at all times a slaughterhouse is operating. However, the agency is massively underfunded given the size of the industry, and they have a shortage of inspectors. If the abuse happens where the inspector doesn't see it, and it doesn't affect the meat after slaughter, it's not going to be noticed. And those videos, horrific as they may be, are illegally obtained by private individuals with no way of verifying their authenticity, so they wouldn't be admissible in any type of legal action.\n\nAlso, if a violation was observed, I believe it would be handled as a regulatory issue, unless it became a repeat or widespread violation, or seriously affected the safety of the meat. The company would be issued with a finding and be given a time period to show that they had corrected. (I'm more familiar with FDA procedure than USDA, but I assume it would be similar.) So, if there are cases where violations were found and corrected, you'd be unlikely to hear about them.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "683092",
"title": "Cruelty to animals",
"section": "Section::::Forms.:Industrial animal farming.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 648,
"text": "Animal cruelty such as soring, which is illegal, sometimes occurs on farms and ranches, as does lawful but cruel treatment such as livestock branding. Since Ag-gag laws prohibit video or photographic documentation of farm activities, these practices have been documented by secret photography taken by whistleblowers or undercover operatives from such organizations as Mercy for Animals and the Humane Society of the United States posing as employees. Agricultural organizations such as the American Farm Bureau Federation have successfully advocated for laws that tightly restrict secret photography or concealing information from farm employers.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13560962",
"title": "National Farmers Organization",
"section": "Section::::The Holding Action of 1967.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 266,
"text": "Many public slaughters were held in the late 1960s and early 1970s. Farmers would kill their own animals in front of media representatives. However, this effort backfired because it angered television audiences to see animals being needlessly and wastefully killed.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1021652",
"title": "Animal Aid",
"section": "Section::::Campaigns.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 419,
"text": "BULLET::::- \"Slaughter\" Animal Aid uses hidden cameras to film in UK slaughterhouses. It has found illegal cruelty in thirteen out of fourteen slaughterhouses visited so far. Since the launch of the campaign, all the major supermarket chains have agreed to insist that their suppliers fit CCTV cameras in their slaughterhouses. Animal Aid campaigns for mandatory independently monitored CCTV in all UK slaughterhouses.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "158158",
"title": "Intensive pig farming",
"section": "Section::::Regulation.:United States.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 352,
"text": "The federal Humane Slaughter Act requires pigs to be stunned before slaughter, although compliance and enforcement is questioned. There is concern from animal liberation/welfare groups that the laws have not resulted in a prevention of animal suffering and that there are \"repeated violations of the Humane Slaughter Act at dozens of slaughterhouses\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11519255",
"title": "Agriprocessors",
"section": "Section::::Controversies.:Animal abuse.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 545,
"text": "Another PETA undercover video, reportedly taken on August 13, 2008, showed violations of the Humane Methods of Livestock Slaughter Act, including the use of saw-like, multiple, hacking cuts in the necks of still-conscious animals. Dr. Grandin said the second cuts would “definitely cause the animal pain.” The episode led Grandin to state that slaughterhouse visits were useless for determining proper animal treatment. Grandin suggested that Agriprocessors install internet video cams on the killing floor for constant, independent, oversight.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3958570",
"title": "Sim Lake",
"section": "Section::::Notable cases.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 337,
"text": "In April 2013, Judge Lake ruled that videos showing cruelty to animals are protected by the First Amendment to the United States Constitution despite laws against cruelty to animals and evidence that cruelty to animals can be a precursor to cruelty to human beings as well as murder. A petition has been launched to reverse this ruling.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2600328",
"title": "Animal rights movement",
"section": "Section::::Gender, class, and other factors.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 495,
"text": "Another factor feeding the animal rights movement was revulsion to televised slaughters. In the United States, many public protest slaughters were held in the late 1960s and early 1970s by the National Farmers Organization. Protesting low prices for meat, farmers would kill their own animals in front of media representatives. The carcasses were wasted and not eaten. However, this effort backfired because it angered television audiences to see animals being needlessly and wastefully killed.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1qfqto
|
Can anyone recommend any books or articles on ancient cultures and sharks?
|
[
{
"answer": "This article might not quite fit the bill -- it's not specifically situated in an ancient period -- but it does do a good job of exploring the place of sharks in Hawaiian religion and culture.\n\nGoldberg-Hiller, Jonathan, and Noenoe K. Silva. “Sharks and Pigs: Animating Hawaiian Sovereignty against the Anthropological Machine.” South Atlantic Quarterly 110, no. 2 (Spring 2011).\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "43617",
"title": "Shark",
"section": "Section::::Evolutionary history.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 841,
"text": "Among the most ancient and primitive sharks is \"Cladoselache\", from about 370 million years ago, which has been found within Paleozoic strata in Ohio, Kentucky, and Tennessee. At that point in Earth's history these rocks made up the soft bottom sediments of a large, shallow ocean, which stretched across much of North America. \"Cladoselache\" was only about long with stiff triangular fins and slender jaws. Its teeth had several pointed cusps, which wore down from use. From the small number of teeth found together, it is most likely that \"Cladoselache\" did not replace its teeth as regularly as modern sharks. Its caudal fins had a similar shape to the great white sharks and the pelagic shortfin and longfin makos. The presence of whole fish arranged tail-first in their stomachs suggest that they were fast swimmers with great agility.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "38298186",
"title": "The Sharks (novel)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 1187,
"text": "The Sharks () is a novel written by Norwegian author Jens Bjørneboe between 1973 and 1974 and originally published by Gyldendal Norsk Forlag in 1974. It is an allegorical sea story and was his last work. The external action takes place aboard the bark \"Neptune,\" which is on its way from Manila to Marseille via Rio de Janeiro. The year is 1899, and it is also a voyage into the new century. The story is told through the voice of Norwegian \"Peder Jensen,\" the ship's second mate, who resembles Bjørneboe himself. This book has a very strong symbolism; the ship is always pursued by sharks and on board, there are peoples of all nationalities. The ship is a symbol of the world, where the captain symbolizes those who have the power and the crew the oppressed. The sharks that pursue the ship symbolize the demons in man. The novel ends with a mutiny and a shipwreck, before the crew and officers are stranded on a deserted island where they live in peaceful anarchy before being picked up by a passing ship. \"The Sharks\" is considered one of Bjørneboe's most important novels and has been translated into multiple languages. Two years after its publication Bjørneboe committed suicide.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17987564",
"title": "Sharks and Little Fish",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 331,
"text": "Sharks and Little Fish is a novel written by German author Wolfgang Ott. First published in 1954, it is based on the author's own experiences as a young submariner. The story centers on a sailor called Teichmann, a cynical young man, thrown at the age of seventeen into the horror and cruelty of submarine warfare in World War II.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24644173",
"title": "Shark sanctuary",
"section": "Section::::Drivers of the shark trade.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 268,
"text": "Sharks are a common seafood in many places around the world, including China (shark-fin soup), Japan, Australia (fish and chips under the name flake), in India (under the name sora in Tamil language and Telugu language), and Icelanders eat Greenland sharks as hákarl.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "43617",
"title": "Shark",
"section": "Section::::Relationship with humans.:In culture.:In popular culture.\n",
"start_paragraph_id": 128,
"start_character": 0,
"end_paragraph_id": 128,
"end_character": 515,
"text": "In contrast to the complex portrayals by Hawaiians and other Pacific Islanders, the European and Western view of sharks has historically been mostly of fear and malevolence. Sharks are used in popular culture commonly as eating machines, notably in the \"Jaws\" novel and the film of the same name, along with its sequels. Sharks are threats in other films such as \"Deep Blue Sea\", \"The Reef\", and others, although they are sometimes used for comedic effect such as in \"Finding Nemo\" and the \"Austin Powers\" series. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "46694272",
"title": "The Devil's Teeth",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 299,
"text": "The Devil's Teeth: A True Story of Obsession and Survival Among America's Great White Sharks is a non-fiction book about great white sharks by American journalist Susan Casey. The text was initially published by Henry Holt and Company on June 7, 2005. The book became a widely acclaimed bestseller.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14306",
"title": "Hammerhead shark",
"section": "Section::::Relationship with humans.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 1095,
"text": "In native Hawaiian culture, sharks are considered to be gods of the sea, protectors of humans, and cleaners of excessive ocean life. Some of these sharks are believed to be family members who died and have been reincarnated into shark form. However, some sharks are considered man-eaters, also known as \"niuhi\". These sharks include great white sharks, tiger sharks, and bull sharks. The hammerhead shark, also known as \"mano kihikihi\", is not considered a man-eater or \"niuhi\"; it is considered to be one of the most respected sharks of the ocean, an \"aumakua\". Many Hawaiian families believe that they have an \"aumakua\" watching over them and protecting them from the \"niuhi\". The hammerhead shark is thought to be the birth animal of some children. Hawaiian children who are born with the hammerhead shark as an animal sign are believed to be warriors and are meant to sail the oceans. Hammerhead sharks rarely pass through the waters of Maui, but many Maui natives believe that their swimming by is a sign that the gods are watching over the families, and the oceans are clean and balanced.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
77y8tw
|
how do recycling plants process liquids?
|
[
{
"answer": "Yay! Something I can finally answer! \nI work as a chemist at a waste disposal facility. At our plant we sort liquids into various catergories depending on what we think is in the containers.\n\nThese containers are then shipped off to a machine which essentially crushes all the liquid out of the containers making cubes of plastic or metal. \n\nThe liquid itself is gathered in large 1000L drums and shipped off to another facility to be chemically treated to be safe for release into the environment.\n\nIf this can't be achieved the liquid gets incinerated in small batches. \n\nHope this helped! ",
"provenance": null
},
{
"answer": "I think your question has been answered, but there's also cool machines that use lasers to estimate the amount of liquid and determine what type of material each container is, then blasts air jets to sort each type off the conveyor belt into different streams. \n\n[Here's a crappy video I found of something like this.](_URL_0_)",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "8514203",
"title": "History of the petroleum industry in Canada (natural gas liquids)",
"section": "Section::::Amoco/Dome Synergies.:Recycling plants.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 1112,
"text": "Recycling plants such as those at Kaybob, West Whitecourt and Crossfield produced liquids-rich gas from \"retrograde condensation\" reservoirs. They stripped condensate and natural gas liquids and sulfer (which they alternately stored in blocks or sold, depending on demand and price), then re-injected the dry gas to cycle the reservoir to capture more liquids. Usually these plants needed make-up gas to replace the volume of the liquids stripped which came from other reservoirs. In the case of West Whitecourt, they also processed dry but sour gas from the Pine Creek field (near Edson) as a source of make-up gas. In the case of Crossfield, the liquids-rich gas came from the Wabamun D-1 zone and the make-up gas from the uphole Elkton zone. Most of these plants were built in the days of 16 cent long-term contracts from TransCanada PipeLine when the National Energy Board required 25 years of reserves in the ground in order to gain an export permit (from Canada). What drove the economics of this procedure was not gas production, but the liquids that could be recovered and sold as part of the crude mix.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "70157",
"title": "Recycling",
"section": "Section::::Recycling consumer waste.:Sorting.\n",
"start_paragraph_id": 71,
"start_character": 0,
"end_paragraph_id": 71,
"end_character": 619,
"text": "Once commingled recyclates are collected and delivered to a central collection facility, the different types of materials must be sorted. This is done in a series of stages, many of which involve automated processes such that a truckload of material can be fully sorted in less than an hour. Some plants can now sort the materials automatically, known as single-stream recycling. In plants, a variety of materials is sorted such as paper, different types of plastics, glass, metals, food scraps, and most types of batteries. A 30 percent increase in recycling rates has been seen in the areas where these plants exist.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10399667",
"title": "Aquarium filter",
"section": "Section::::Overview.:Mechanical and chemical filtration.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 410,
"text": "Dissolved wastes are more difficult to remove from the water. Several techniques, collectively known as chemical filtration, are used for the removal of dissolved wastes, the most popular being the use of activated carbon and foam fractionation. To a certain extent, healthy plants extract dissolved chemical wastes from water when they grow, so plants can serve a role in the containment of dissolved wastes.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "45327132",
"title": "Circulating water plant",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 586,
"text": "A circulating water plant or circulating water system is an arrangement of flow of water in fossil-fuel power station, chemical plants and in oil refineries. The system is required because various industrial process plants uses heat exchanger, and also for active fire protection measures. In chemical plants, for example in caustic soda production, water is needed in bulk quantity for preparation of brine. The circulating water system in any plant consists of a circulator pump, which develops an appropriate hydraulic head, and pipelines to circulate the water in the entire plant.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "37808027",
"title": "Concrete (perfumery)",
"section": "Section::::Production.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 254,
"text": "Fresh plant material is extracted with nonpolar solvents (e. g., benzene, toluene, hexane, petroleum ether). On evaporation of the solvent, a semi-solid residue of essential oils, waxes, resins and other lipophilic (oil-soluble) plant chemicals remains.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4916038",
"title": "Mechanical biological treatment",
"section": "Section::::Process.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 249,
"text": "The sorting component of the plants typically resemble a materials recovery facility. This component is either configured to recover the individual elements of the waste or produce a refuse-derived fuel that can be used for the generation of power.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14041641",
"title": "The Environmental Institute",
"section": "Section::::Integrated Biosystem Highlights.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 365,
"text": "Most conventional wastewater treatment plants try to clean water mechanically and chemically then release it into waterways. Such systems are expensive, produce limited economic benefits, and can themselves pollute. By contrast, integrated biosystems treat water by recycling it for agricultural use, producing numerous economic, health and environmental benefits.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
6vuxyw
|
why do we feel the weird banging in our body when listening to loud live music
|
[
{
"answer": "Sound is pressure waves moving through the air that vibrate your eardrums.\n\nYour ribcage doesn't have much that is solid behind it to stop it vibrating to large, low frquency pressure waves.",
"provenance": null
},
{
"answer": "Sound is just waves of pressure or vibrations moving through a medium like air. What you're feeling is sound, the same sound that you are hearing. Your ears just have structures that turn those pressure waves into the thing we know as sound.\n\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "6846175",
"title": "Listener fatigue",
"section": "Section::::Causes.:Sensory overload.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 357,
"text": "When exposed to a multitude of sounds from several different sources, sensory overload may occur. This overstimulation can result in general fatigue and loss of sensation in the ear. The associated mechanisms are explained in further detail down below. Sensory overload usually occurs with environmental stimuli and not noise induced by listening to music.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4358939",
"title": "Botellón",
"section": "Section::::Controversy.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 288,
"text": "BULLET::::1. Noise: Because participants gather in the streets and other public areas, the noise can disturb surrounding residents and citizens. Also, loud music contributes to the amount of noise, which is one reason why participants have begun moving to less populated areas in cities.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6846175",
"title": "Listener fatigue",
"section": "Section::::Potential risk factors.:Physical activity.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 408,
"text": "When combining exercise with exposure to loud noises, humans have been observed to experience a long temporary threshold shift as well. Physical activity also results in an increase in metabolic activity, which has already been increased as a result of the vibrations of loud sounds. This factor is particularly interesting due to the fact that a large population of people listen to music while exercising.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "15774067",
"title": "Synaptic noise",
"section": "Section::::Causes.:Background activity.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 372,
"text": "Another cause of noise is due to the exocytosis of neurotransmitters from the synaptic terminals that provide input to a given neuron. This occurrence happens in the background while a cell is at resting membrane potential. Since it is happening in the background, the release is not due to a signal, but is random. This unpredictability adds to the synaptic noise level.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9343928",
"title": "Auditory hallucination",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 656,
"text": "Other types of auditory hallucination include exploding head syndrome and musical ear syndrome. In the latter, people will hear music playing in their mind, usually songs they are familiar with. This can be caused by: lesions on the brain stem (often resulting from a stroke); also, sleep disorders such as narcolepsy, tumors, encephalitis, or abscesses. This should be distinguished from the commonly experienced phenomenon of getting a song stuck in one's head. Reports have also mentioned that it is also possible to get musical hallucinations from listening to music for long periods of time. Other reasons include hearing loss and epileptic activity.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16807646",
"title": "Environmental issues in India",
"section": "Section::::Major issues.:Noise pollution.\n",
"start_paragraph_id": 45,
"start_character": 0,
"end_paragraph_id": 45,
"end_character": 212,
"text": "Indoor noise can be caused by machines, building activities, and music performances, especially in some workplaces. Noise-induced hearing loss can be caused by outside (e.g. trains) or inside (e.g. music) noise.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31591527",
"title": "Saint-Paul Asylum, Saint-Rémy (Van Gogh series)",
"section": "Section::::In Saint-Paul Hospital.:The corridor.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 482,
"text": "In a letter to Theo in May 1889 he explains the sounds that travel through the quiet-seeming halls, \"There is someone here who has been shouting and talking like me all the time for a fortnight. He thinks he hears voices and words in the echoes of the corridors, probably because the auditory nerve is diseased and over-sensitive, and in my case it was both sight and hearing at the same time, which is usual at the onset of epilepsy, according to what Dr. Félix Rey said one day.\"\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
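Both answers above come down to the same point: loud, low-frequency sound is a large pressure swing acting on the whole body, not just the eardrum. As a rough illustration of how large, here is a minimal sketch (my own addition, not part of the thread): it converts sound pressure level in dB SPL back into pascals. The 20 µPa reference and the dB-to-pressure formula are standard acoustics; the two example levels are assumed ballpark figures for conversation and a loud concert.

```python
P_REF = 20e-6  # pascals; standard reference pressure for dB SPL (threshold of hearing)

def pressure_pa(db_spl: float) -> float:
    """RMS sound pressure in pascals for a given level in dB SPL."""
    return P_REF * 10 ** (db_spl / 20)

for label, level in [("quiet conversation", 60), ("loud concert, near the stage", 110)]:
    print(f"{label} (~{level} dB SPL): about {pressure_pa(level):.2f} Pa RMS")

# Roughly 0.02 Pa for conversation vs. roughly 6.3 Pa at concert level -- a few
# hundred times more pressure swing, which low-frequency notes deliver over the
# whole chest rather than just the eardrum.
```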
2nx0kq
|
Kingdom of Sardinia's place in Italian Unification in the nineteenth century
|
[
{
"answer": "Well, there is no clean cut explanation.\n\nHistorically the Duchy of Savoia (mainly centered in Piedmont, capital Turin), was the more expansionist Italian state from the late seventeenth century onward.\n\nThe dukes played an active role in all the European wars of the period, shifting their alliance between France and Austria, expanding their territory eastward toward Milan and acquiring Sicily and the title o King (Sicily was later exchanged with Sardinia).\n\nSo it was already their long time goal to acquire the duchy of Milan.\n\nThere were also other factors: it was the more independent minded Italian state, and also the more open to external influences; it certainly had the more advanced and powerful military at the time (not counting the Austrian garrisons). Milan was maybe a more advanced city, but as it was under Austrian rule (along with Lombardy and Veneto), it could not be a fulcrum for independence (there was a revolt in 1848 where the city expelled the Austrian garrison, but it was short lived, and anyway it immediately asked for military support from Piedmont).\n\nRegarding the other states the duchy of Parma, Modena and Tuscany were closely aligned with Austria; the Pope was not interested in territorial expansion, and this also limited any ambitions from the Kingdom of Naples and Sicily (who could not well invade the territory of the Pope).\n\nAt the time a complete takeover of Italy by Piedmont was not a given. There were many other hypothesis, like a federation of independent states, with the expulsion of the Austrians, the duke of Piedmont as the Military commander and the Pope as the president of the federation.\n\nAfter the war of 1859 actually there was no impetus for further expansion from the duke of Savoia, Vittorio Emanuele II; he had gained the duchy of Milan from the Austrians and modern Emilia-Romagna and Tuscany by plebiscite, and he was pretty satisfied. There were proposals for a federation of three states, north Italy under the Savoia in the north, the Pope in the center and the Bourbon in the south.\n\nBut then Garibaldi mounted its expedition and conquered the Kingdom of Naples and Sicily with an army of volunteers. At this point Piedmont intervened, as it had no wish for the potential establishment of a republic in the south, and the kingdom of Italy was formed.\n\nSo there was more than a factor in play:\n\n- the dynastic impetus of the Savoia to expand, playing the French and the Austrians one against the other\n\n- a strong desire of a large part of the elites of the various Italian states to expel the Austrians and to form some sort of united entity in Italy, with the awareness that only the Savoia had in practice the inclination and the capability to push for this same objective\n\n- the relative fragility of the governments of the other states once the protection of the status quo by the Austrians was removed",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "679153",
"title": "Grand Duchy of Tuscany",
"section": "Section::::House of Habsburg-Lorraine.:Tuscany restored and its final demise.\n",
"start_paragraph_id": 43,
"start_character": 0,
"end_paragraph_id": 43,
"end_character": 459,
"text": "In December 1859, the Grand Duchy was joined to the Duchies of Modena and Parma to form the United Provinces of Central Italy, which were annexed by the Kingdom of Sardinia a few months later. On 22 March 1860, after a referendum that voted overwhelmingly (95%) in favour of a union with Sardinia; Tuscany was formally annexed to Sardinia. Italy was unified in 1870, when the remains of the Papal States were annexed in that September, deposing Pope Pius IX.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6783155",
"title": "San Massimo",
"section": "Section::::History.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 281,
"text": "In 1815 the merger of the Kingdom of Naples, which included San Massimo, and that of Sicily led to the formation of the short lived Kingdom of Two Sicilies. In 1860, the Sardinian forces of Giuseppe Garibaldi conquered the region, leading to the formation of the Kingdom of Italy.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2622058",
"title": "Political union",
"section": "Section::::Mixed unions.\n",
"start_paragraph_id": 45,
"start_character": 0,
"end_paragraph_id": 45,
"end_character": 505,
"text": "The unification of Italy involved a mixture of unions. The kingdom consolidated around the Kingdom of Sardinia, with which several states voluntarily united to form the Kingdom of Italy. Others polities, such as the Kingdom of the Two Sicilies and the Papal States, were conquered and annexed. Formally, the union in each territory was sanctioned by a popular referendum where people were formally asked if they agreed to have as their new ruler Vittorio Emanuele II of Sardinia and his legitimate heirs.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29376",
"title": "Sardinia",
"section": "Section::::History.:Savoyard period.\n",
"start_paragraph_id": 73,
"start_character": 0,
"end_paragraph_id": 73,
"end_character": 833,
"text": "With the Perfect fusion in 1848, the confederation of states powered by the Savoyard kings of Sardinia became a unitary and constitutional state and moved to the Italian Wars of Independence for the Unification of Italy, that were led for thirteen years. In 1861, being Italy united by a debated war campaign, the parliament of the Kingdom of Sardinia decided by law to change its name and the title of its king to Kingdom of Italy and King of Italy. Most Sardinian forests were cut down at this time, in order to provide the Piedmontese with raw materials, like wood, used to make railway sleepers on the mainland. The extension of the primary natural forests, praised by every traveller visiting Sardinia, would in fact be reduced to 1/5 of their original number, being little more than 100.000 hectares at the end of the century.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21486771",
"title": "Kingdom of Sardinia",
"section": "Section::::Early history.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 1446,
"text": "The Kingdom of Sardinia and Corsica (later, just the \"Kingdom of Sardinia\" from 1460) was a state whose king was the King of Aragon, who started to conquer it in 1324, gained full control in 1410, and directly ruled it until 1460. In that year it was incorporated into a sort of confederation of states, each with its own institutions, called the Crown of Aragon, and united only in the person of the king. The Crown of Aragon was made by a council of representatives of the various states and grew in importance for the main purpose of separating the legacy of Ferdinand II of Aragon from that of Isabella I of Castile when they married in 1469. The idea of the kingdom was created in 1297 by Pope Boniface VIII, as a hypothetical entity created for James II of Aragon under a secret clause in the Treaty of Anagni. This was an inducement to join in the effort to restore Sicily, then under the rule of James's brother Frederick III of Sicily, to the Angevin dynasty over the oppositions of the Sicilians. The two islands proposed for this new kingdom were occupied by other states and fiefs at the time. In Sardinia, three of the four states that had succeeded Byzantine imperial rule in the 9th century had passed through marriage and partition under the direct or indirect control of Pisa and Genoa in the 40 years preceding the Anagni treaty. Genoa had also ruled Corsica since conquering the island nearly two centuries before (\"c\". 1133).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2155325",
"title": "Northern Italy",
"section": "Section::::History.:Modern history.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 378,
"text": "In the congress of Vienna, the Kingdom of Sardinia was restored, and furthermore enlarged by annexing the Republic of Genoa to strengthen it as a barrier against France. The rest of Northern Italy was under Austrian rule, either direct like in the Lombardo-Venetian Kingdom or indirect like in the Duchies of Parma and Modena. Bologna and Romagna were given to the Papal State.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "53254",
"title": "History of Sardinia",
"section": "Section::::United Italy.:Kingdom of Italy.\n",
"start_paragraph_id": 61,
"start_character": 0,
"end_paragraph_id": 61,
"end_character": 362,
"text": "With the Unification of Italy in 1861, the Kingdom of Sardinia became the Kingdom of Italy. Since 1855 the national hero Giuseppe Garibaldi bought most of the island of Caprera in the Maddalena archipelago, where he moved because of the loss of his home town of Nice. His house, farm and tomb are now the most visited Sardinian museum (\"Compendio Garibaldino\").\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1k8ye7
|
Do we know of any chants that galley sailors would sing while they rowed?
|
[
{
"answer": "It's unlikely that they sung - that would not be conducive to rowing, which is physically tiring. It has been theorised that they may have hummed, which is less tiring, but there is no substantial proof of this. [A reconstructed trireme was tested using various means of synchronisation - humming was reportedly effective.](_URL_2_)\n\n[You may find this source useful, though I do not know how you might best access a full version of it](_URL_1_); 'The Athenian Trireme: The History and Reconstruction of an Ancient Greek Warship,\n By J. S. Morrison, J. F. Coates, N. B. Rankov'.\n\nEdit: See also [The Trireme](_URL_0_), by Prof Boris Rankov (Royal Holloway), also a rower.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "311643",
"title": "Sea shanty",
"section": "Section::::Nature of the songs.:Lyrical content.\n",
"start_paragraph_id": 107,
"start_character": 0,
"end_paragraph_id": 107,
"end_character": 347,
"text": "As a rule, the chantey in its entirety possesses neither rhyme nor reason; nevertheless, it is admirably fitted for sailors' work. Each of these sea-songs has a few stock verses or phrases to begin with, but after these are sung, the soloist must improvise, and it is principally his skill in this direction that marks the successful chantey-man.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "382326",
"title": "Work song",
"section": "Section::::Sea shanties.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 751,
"text": "Work songs sung by sailors between the eighteenth and twentieth centuries are known as sea shanties. These songs were typically performed while adjusting the rigging, raising anchor, and other tasks where men would need to pull in rhythm. These songs usually have a very punctuated rhythm precisely for this reason, along with a \"call-and-answer\" format. Well before the nineteenth century, sea songs were common on rowing vessels. Such songs were also very rhythmic in order to keep the rowers together. Because many cultures used slaves to row, some of these songs might also be considered slave songs. Improvised verses sung by sailors spoke of ills with work conditions and captains. These songs were performed with and without the aid of a drum.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "311643",
"title": "Sea shanty",
"section": "Section::::Word.:Etymology.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 242,
"text": "The \"chants\", as may be supposed, have more of rhyme than reason in them. The tunes are generally plaintive and monotonous, as are most of the capstan tunes of sailors, but resounding over the still waters of the Bay, they had a fine effect.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2294022",
"title": "Bugis, Singapore",
"section": "Section::::History.:1950s–1980s.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 351,
"text": "One of the \"hallowed traditions\" bestowed upon the area by sojourning sailors (usually from Britain, Australia and New Zealand), was the ritualistic \"Dance of the Flaming Arseholes\" on top of the infamous toilet's roof. Compatriots on the ground would chant the signature \"Haul 'em down you Zulu Warrior\" song whilst the matelots performed their act.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "311643",
"title": "Sea shanty",
"section": "Section::::History and development.:Emergence.:Early Anglo-British and American sailor work songs.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 613,
"text": "A step up in sophistication from the sing-outs was represented by the first widely established sailors' work song of the 19th century, \"Cheer'ly Man\". Although other work-chants were evidently too variable, non-descript, or incidental to receive titles, \"Cheer'ly Man\" appears referred to by name several times in the early part of the century, and it lived on alongside later-styled shanties to be remembered even by sailors recorded by James Madison Carpenter in the 1920s. \"Cheer'ly Man\" makes notable appearances in the work of both Dana (sea experience 1834–36) and Herman Melville (sea experience 1841–42).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20653261",
"title": "Poor Paddy Works on the Railway",
"section": "Section::::History.:As a chanty.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 282,
"text": "Several versions of this chanty were audio-recorded from the singing of veteran sailors in the 1920s-40s by folklorists like R.W. Gordon, J.M. Carpenter, and William Main Doerflinger. Capt. Mark Page, whose sea experience spanned 1849-1879, sang it for Carpenter in the late 1920s.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12084618",
"title": "The Sailor's Hornpipe",
"section": "Section::::History.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 315,
"text": "Samuel Pepys referred to it in his diary as \"The Jig of the Ship\" and Captain Cook, who took a piper on at least one voyage, is noted to have ordered his men to dance the hornpipe in order to keep them in good health. The dance on-ship became less common when fiddlers ceased to be included in ships' crew members.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
umi4r
|
Why do certain musical scales sound happy, scary , eerie, etc?
|
[
{
"answer": "Im not aware of a ton of work in this area, but one guy who is sorta studying this is Gilden at UT Austin. Though he mostly focuses on the nature of musical \"groove\". It's a bit of a new line for him, but he talks briefly about it on his site (_URL_0_). The guy is crazy smart though, so if you're interested keep up with him. Used to be an astrophysicist trained by a nobel laureate before switching to psychology. As for pop science, you might check out an Oliver Sacks book called Musicophilia if you haven't already. Also, there's this scientific american article from a while back, ( _URL_1_ ). I dunno how particular an answer you're looking for, but hopefully something there will interest you more than being called a dumb fuck.",
"provenance": null
},
{
"answer": "Some of it has to do with [consonance and dissonance](_URL_0_).",
"provenance": null
},
{
"answer": "i am not a scientist, but a reasonably educated musician.\n\nthe associations with scales is largely cultural. minor scales are not sad in all cultures. however, minor scales, because of how the notes compare to the harmonic series, tend to resolve downward to structural pitches rather than upward, which accounts for a lot of the difference.\n\nthere are also modes of the major scale. a mode is the same pitch relationship starting on a different pitch. natural minor is the 6th mode of the major scale, meaning you start on the 6th degree and play all the notes in the octave. lydian (major, aka ionian with a raised 4) is the brightest mode, and you can hear how bright and \"up\" it is in for example the simpsons theme song or in the 3rd movement of [beethoven's op 132, (starting at 19:24)](_URL_0_)\n\n(EDIT: and for the record, that string quartet is one of the finest chamber works ever, in my opinion. the third movement is the high point of the work, but it's worth listening to the whole thing. there was such a stir about it, that schubert requested to hear it his deathbed, and his response was \"after this, what is left for us to compose?\" AND beethoven was stone deaf for years before he wrote it. impressive guy.)\n\ni'm afraid the ability to scientifically determine what's going on once and for all is rather limited at this time, because in addition to physics/acoustics, we have to deal with psychoacoustics (how our brains process sounds, deleting and adding content from different combinations of pitches and harmonics), cultural training, and personal associations.\n\nEDIT: thanks to z3ugma for the youtube link that takes you to the right spot in the video.",
"provenance": null
},
{
"answer": "I'm not sure how well versed in music you are, but the primary difference between a major and minor scale is the 3rd note of that scale. Most of the others stay the same (the rules changing depending on the type of minor scale, but that is more of a music theory question than a psychology question). \n\nSo let us focus a second on the third note of the scale. Basic chords are made up of the root note, the third (be it major or minor), and the 5th. In a well tuned instrument, the 5th has a pitch ratio of 3:2, meaning that for every 3 vibrations of the upper note, the lower note will vibrate two. This creates a generally pleasing effect as the waves that make up these notes restart at the same place every 6 cycles. \n\nNow we look at the major 3rd, which has a pitch ratio of 5:4, which is also pleasing as we hear a sync with the root every 20 vibrations. The minor 3rd, which has a pitch ratio of 6:5, is somewhat less pleasing. \n\nSo you may be asking what this has to do with speech. In normal conversation, the notes our voice makes are rarely larger than an octave. In fact, most speech is within a half octave range (I don't have a source for this, sorry). That means that in order to convey meta-information, we must listen to the subtleties of voice inflection. One who is sad is less likely to add emphasis to certain non-monosyllabic words, thus dropping the pitch, raising the pitch ratio, etc. etc.\n\nThink of Eeyore from Winnie the Pooh, and the way he says his name. I'm sure if you say it his way, and then say it as if you were happy to be saying the name, you'd be dropping a major 3rd rather than a minor 3rd. \n\nWe have become quite adept at picking out these subtleties. Here's a paper on how good we actually are: _URL_0_\n\nSorry if this is a bunch of disjoint ideas. Hopefully it helps!",
"provenance": null
},
{
"answer": "I highly recommend Dr. Daniel J. Levithin's *This is your Brain on Music.* ",
"provenance": null
},
{
"answer": "A *lot* of this is cultural, but some of it is related to physics of sound. \n\nBrushing aside a ton of stuff and zeroing in on western equal-tempered stuff, and then over-simplifying to boot...\n\nA real-world \"note\" produced by an instrument or voice has an infinite sequence of harmonic overtones (you can think of them as \"higher notes\" simultaneously produced by fractional vibrations of the air or wood or string or whatever). Smaller \"fractions\" are the most prominent ones (1/2, 1/3, 1/4, 1/5, etc...)\n\nCertain intervals in the 12-tone scale correlate \"perfectly\" (or at least very closely) with prominent harmonics of the root note (perfect fifths and fourths, octaves). These are \"neutral\" and sound neither major nor minor, they just sound sort of consonant and \"reinforcing\" of the sonic texture of the root note.\n\nOther intervals do not correlate to any of the prominent harmonics and sound obviously \"dissonant\" (flatted 2nd, tritone, etc). These again do not sound obviously major nor minor without context, just dissonant and \"unnatural\" and jarring. \n\nNow, there are other intervals (especially thirds) that are close to but not quite right on top of prominent harmonics. The major third is slightly sharp of the \"perfectly\" consonant interval that an untrained ear \"expects\", and the minor third is slightly more flat of the same \"blue note\" or \"perfect third\" that doesn't quite exist on the scale, but that does in nature (sort of). \n\nAs a result, a major chord or passage with a major third suggests a rising pitch, which is a sonic effect we associate with approaching things, excited speech, eagerness, and rising volume. \n\nOTOH, a minor chord, with it's \"flat\" interval, suggests receding sound, decaying sound, and quiet or somber speech. \n\nPart of these associations are due to things like doppler effects and the way that frequency perception changes with volume and distance, and part is related to how speech patterns reflect emotion (which might in turn be related to the former). \n\nFar more importantly, music creates its own impressions and expectations. Progressions and intervals might suggest certain physical phenomena or speech patterns, but they also suggest other songs and melodies you have heard or known, and the associations you have with them. \n\nFor an interesting example of how these kinds of associations and sonic effects interact with the emotional content of a piece of music, try playing [\"Happy Birthday to You\" in minor](_URL_0_), it sounds like a dirge, or something sinister and fatal. ",
"provenance": null
},
{
"answer": "i actually did a paper on this for one of my college courses. It was very interesting to see that while some is obviously based off of culture and where you are in the world how you are conditioned to react to certain types of sounds (example: jaws music putting you on edge) a lot of seems to be for lack of better words \"pre-programmed\". There were extensive studies done with with babies reacting to certain sounds like perfect fifths positively so i think there's something in that",
"provenance": null
},
{
"answer": "Part of it may have to do with the fact that we are constantly processing scales. Not musical scales, but all sorts of scales. What is a scale? It associates, etymologically, with climbing, and has a general meaning of a kind of traversing and measurement. We are constantly scaling: we view the face of another and \"scale up and down\" the person, their body, their face. We scale stairs: starting, we make our way up or down. We constantly measure, and that measure has a \"scale\" to it: a sense of things across, up and down, etc. We even \"measure\" situations in various ways. We scale our speech, step it up or down, etc. The issue is to draw the connection between the musical scale as such, which will be mentioned in light of your question and is not hard to see, and the scaling we do all the time. \n\nSo take the sense of \"scale\" in a kind of expanded sense that includes a few basic features: a measurement and span, roughly. So just how much of this \"scaling\" do we do? The question is more like: when *aren't* we in an \"scale\" of some kind? Look at any situation you're in and ask yourself where there is a \"traversing measurment\" involved. Whether it's walking to the coke machine or ambling slowly to someone you need to say something uncomfortable to, we're always scoping out and being in some degree of placement, commencement, passage through, etc., various \"things\", all sorts of things. Any \"thing\" in the physical sense has a \"scale\" in it: looking across the thing from left to right, or up and down, etc. Little moments and broad passages. A week is a kind of scale: seven days, one to the next, with a sense of middle, then TGIF, then the weekend, you name it, there's a \"scale\", a line, a measurement in it, a traversing or possible traversing. \n\nSo we have a constant cognitive mechanism of engagement with scales. So when we hear scales, we have a big serious of operations going on that get sparked and engaged. It's on this basis that musical scales have meaning for us, I think. Also, it seems important to include in this internal scales and balances: so we're constantly in scales within ourselves, in our emotions/feelings, though how \"scaling\" as such occurs in this seems a little harder in some was to see, in other ways not. Some comments have mentioned the \"up and down\" of the voice, the natural range of speaking, and how we traverse that range and have predilections for parts of that range, how we are primed to meaning on the basis of placement in that range. \n\nSo you can go on about how the musical scale has signal points, such as the median or 3, which can be high or low, with implications. And that's all true enough, but it seems quite important to realize that we are involved in scales all the time, as I suggested and not just when hearing music. Is there a do-re-mi of the face? Kind of, yes. And of every sentence in this comment, in a way. \n\nSo then the question is: What happens when a musical scale comes into contact with the \"scaling human\". It sets off all kinds of associations. Or can.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "321868",
"title": "Bohlen–Pierce scale",
"section": "Section::::Music and composition.\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 434,
"text": "What does music using a Bohlen–Pierce scale sound like, aesthetically? Dave Benson suggests it helps to use only sounds with only odd harmonics, including clarinets or synthesized tones, but argues that because \"some of the intervals sound a bit like intervals in [the more familiar] twelve-tone scale, but badly out of tune,\" the average listener will continually feel \"that something isn't quite right,\" due to social conditioning.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3970341",
"title": "Wow and flutter measurement",
"section": "Section::::Audible effects.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 1467,
"text": "Wow and flutter are particularly audible on music with oboe, string, guitar, flute, brass, or piano solo playing. While wow is perceived clearly as pitch variation, flutter can alter the sound of the music differently, making it sound ‘cracked’ or ‘ugly’. A recorded 1 kHz tone with a small amount of flutter (around 0.1%) can sound fine in a ‘dead’ listening room, but in a reverberant room constant fluctuations will often be clearly heard. These are the result of the current tone ‘beating’ with its echo, which since it originated slightly earlier, has a slightly different pitch. What is heard is quite pronounced amplitude variation, which the ear is very sensitive to. This probably explains why piano notes sound ‘cracked’. Because they start loud and then gradually tail off, piano notes leave an echo that can be as loud as the dying note that it beats with, resulting in a level that varies from complete cancellation to double-amplitude at a rate of a few Hz: instead of a smoothly dying note we hear a heavily modulated one. Oboe notes may be particularly affected because of their harmonic structure. Another way that flutter manifests is as a truncation of reverb tails. This may be due to the persistence of memory with regard to spatial location based on early reflections and comparison of Doppler effects over time. The auditory system may become distracted by pitch shifts in the reverberation of a signal that should be of fixed and solid pitch.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "40651",
"title": "Scale (music)",
"section": "Section::::Background.:Types of scale.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 334,
"text": "\"The number of the notes that make up a scale as well as the quality of the intervals between successive notes of the scale help to give the music of a culture area its peculiar sound quality.\" \"The pitch distances or intervals among the notes of a scale tell us more about the sound of the music than does the mere number of tones.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "40651",
"title": "Scale (music)",
"section": "Section::::Non-Western scales.:Other.\n",
"start_paragraph_id": 64,
"start_character": 0,
"end_paragraph_id": 64,
"end_character": 450,
"text": "Many other musical traditions use scales that include other intervals. These scales originate within the derivation of the harmonic series. Musical intervals are complementary values of the harmonic overtones series. Many musical scales in the world are based on this system, except most of the musical scales from Indonesia and the Indochina Peninsulae, which are based on inharmonic resonance of the dominant metalophone and xylophone instruments.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11807980",
"title": "Enigmatic scale",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 268,
"text": "The enigmatic scale (\"scala enigmatica\") is an unusual musical scale, with elements of both major and minor scales, as well as the whole-tone scale. It was originally published in a Milan journal as a musical challenge, with an invitation to harmonize it in some way.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5303229",
"title": "Yo scale",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 401,
"text": "The \"yo\" scale, which does not contain minor notes, according to a traditional theory is a pentatonic scale used in much Japanese music including gagaku and shomyo. The \"yo\" scale is used specifically in folk songs and early popular songs and is contrasted with the \"in\" scale which does contain minor notes. The \"in\" scale is described as 'dark' while the yo scale is described as 'bright' sounding.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "113040",
"title": "Whole tone scale",
"section": "",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 521,
"text": "The whole tone scale has no leading tone and because all tones are the same distance apart, \"no single tone stands out, [and] the scale creates a blurred, indistinct effect\". This effect is especially emphasised by the fact that triads built on such scale tones are all augmented triads. Indeed, all six tones of a whole tone scale can be played simply with two augmented triads whose roots are a major second apart. Since they are symmetrical, whole tone scales do not give a strong impression of the tonic or tonality.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
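Several of the answers above lean on the same arithmetic: simple frequency ratios (3:2 for the fifth, 5:4 and 6:5 for the major and minor thirds) and the claim that equal-tempered thirds sit slightly sharp or flat of those pure ratios. The sketch below is my own addition, not drawn from the thread or the cited articles; it just checks that arithmetic. The cent measure (1200 times the base-2 log of a ratio) and the 2**(n/12) equal-temperament formula are standard, and the printed values are approximate.

```python
import math

def cents(ratio: float) -> float:
    """Size of a frequency ratio in cents (1200 cents = one octave)."""
    return 1200 * math.log2(ratio)

# Pure ("just") ratios mentioned in the answers, paired with their 12-tone
# equal-tempered counterparts (n semitones -> ratio 2 ** (n / 12)).
intervals = {
    "perfect fifth": (3 / 2, 2 ** (7 / 12)),
    "major third":   (5 / 4, 2 ** (4 / 12)),
    "minor third":   (6 / 5, 2 ** (3 / 12)),
}

for name, (just, equal_tempered) in intervals.items():
    diff = cents(equal_tempered) - cents(just)
    print(f"{name}: equal temperament is {diff:+.1f} cents vs. the pure ratio")

# Approximate output:
#   perfect fifth: equal temperament is -2.0 cents vs. the pure ratio
#   major third:   equal temperament is +13.7 cents vs. the pure ratio
#   minor third:   equal temperament is -15.6 cents vs. the pure ratio
# i.e. the fifth is nearly pure, the tempered major third is a bit sharp of 5:4,
# and the tempered minor third is a bit flat of 6:5, as the answers describe.
```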
rjisw
|
Is depression more frequent amongst people in developed countries?
|
[
{
"answer": "I do not believe we have strong data about depression amongst underdeveloped nations, so this comparison may not be possible.",
"provenance": null
},
{
"answer": "_URL_0_ \n(Bokmal) _URL_1_\n\nI don't know about depression in general, but Scandinavia has a very high rate of seasonal affective disorder thanks to its latitude.",
"provenance": null
},
{
"answer": "A [2011 study](_URL_1_) reported:\n > On average, the estimated lifetime prevalence [of depression] was **higher in high-income (14.6%) than low- to middle-income (11.1%) countries** (t = 5.7, P < 0.001). Indeed, the four lowest lifetime prevalence estimates ( < 10%) were in low- to middle-income countries (India, Mexico, China, South Africa). Conversely, with the exception of Brazil, the highest rates ( > 18%) were in four high-income countries (France, the Netherlands, New Zealand, the USA).\n\n...and generally that:\n > Consistent with previous cross-national reports, the WMH MDE [World Mental Health major depressive episodes] prevalence estimates varied considerably between countries, with the highest prevalence estimates found in some of the wealthiest countries in the world. \n\nThe researchers provided several possible explanations for these results (including the suggestion that [\"depression is to some extent an illness of affluence\"](_URL_0_)), but also **acknowledged several limitations and that their findings might be due to recall error**. They concluded **more work needed to be done**. \n\nEdit: more bold for clarity.\n\nEdit 2: **Social context is indeed a known issue**, in addition to many other factors. Please refer to Epilepep's remarks, which have unfortunately become buried. \n\nAlso, **please (at least) read the methods** of the paper before commenting about potential errors in data collection. This study may not be completely culturally sensitive, but efforts were made to conduct the face-to-face interviews as objectively as possible. For instance, the \"interview translation, back-translation and harmonization protocol required culturally competent bilingual clinicians in the participating countries to review, modify and approve the key phrases used to describe symptoms of all disorders assessed in the survey\". \n\nThe researchers explicitly noted that \"no attempt was made to go beyond the DSM-IV criteria\", but stated that \"as noted in the introduction, previous research has shown that the latent structure of the symptoms of major depression is consistent across countries, providing a principled basis for focusing on this criterion set in our analysis\".\n\n**Again, the authors of this paper made a very cautious conclusion**: \n > MDE is a significant public-health concern across all regions of the world and is strongly linked to social conditions. Future research is needed to investigate the combination of demographic risk factors that are most strongly associated with MDE in the specific countries included in the WMH.",
"provenance": null
},
{
"answer": "Depression is a funny thing. In different cultures it is defined differently. In the 1960s the World Health Organisation conducted a study to determine exactly this. I can't find a page linking to an example of the study, but they found something along the lines of nobody in Africa is actually depressed. This was because the WHO was organised and developed in westernised countries and the cultures in Africa did not lean to the definition of depression we had then. \n\nThis also translates to collectivist cultures in countries such as Japan and China which only recently developed an increase in the number of cases of minor depression. This is largely believed to be because of the exposure of western individualist cultures as opposed to their usual collectivist nature and the influx of pharmaceutical companies marketing anti-depressants. An article on the Japanese influence [here](_URL_0_).\n\nIt's difficult to find the World health organisation study on this. Can anyone help me? I am unsure about the facts I stated because I read this a while ago.",
"provenance": null
},
{
"answer": "A passing observation would be the amount of psychiatrists/psychologists per 100 square miles would be related with diagnoses of mental illness in the area -- that would be a lengthy study though.",
"provenance": null
},
{
"answer": "[Statistics I have seen](_URL_0_) show Scandinavian countries as some of the happiest so I am confused how one can be depressed overall, yet one of the happiest nations? Maybe everyone is depressed?",
"provenance": null
},
{
"answer": "There are some significant differences between organic mental illness and situational emotional response. But they are hard to separate out without real effort, because they share many symptoms in common.\n\nThere is no reason to believe that the incidence of various organic mental illnesses in general populations (not counting small isolated gene pools) has varied much around the world, nor throughout history.",
"provenance": null
},
{
"answer": "i heard depression is particularly common in scandinavian countries and those towards north pole(iceland, northern russia etc). apparently its probably linked with the long nights and short days and perpetual coldness over the winter months.\n\ni guess the problem would be the same approaching the south pole if any significant population actually lived down that way...",
"provenance": null
},
{
"answer": "I see very little here about how depression can be genetic.",
"provenance": null
},
{
"answer": "Higher income societies are more materialistically based. Which has a significant bias towards a few individuals who can afford to have sought after things. The majority that cannot afford those things, and have the intelligence to realize that they can perhaps never get those things, become depressed because social influences dictate that they are lesser people for not having those \"things\". Societies that have a less materialistically based culture will have less depression.\n\nIt is a phenomena that would take some time to describe, but that is the general idea behind it. I would also mention that collectivistic cultures will also have significantly lower depression rates than individualistic cultures. ",
"provenance": null
},
{
"answer": "[I'll just leave this here](_URL_0_)",
"provenance": null
},
{
"answer": "TL;DR Suicide rates skyrocketed when people lost specific roles in life. Going from a farming oriented ife, where everyone was born into specific lifelong roles, to a privilaged city life with a multitude of free choices lead to stark increase in suicides. Which could mean that people in developed countries indeed are prone to more depression.\n\n\nOne of the fathers of sociology studied suicide rates extensively.\n\n\n_URL_0_\n\n\nI don't believe he directly addressed whether or not development had an affect, but I believe his findings are very relevant to this discussion.\n\nPlease correct me if I over simplified it.",
"provenance": null
},
{
"answer": "No doubt depression is diagnosed more in developed countries because more people have access to diagnosticians, so I think this will be a hard question to answer.",
"provenance": null
},
{
"answer": "Depression (or at least the diagnostic criteria for it in the ICD-10 and the DSM-IV) is at least partly a Western phenomenon. Culture does have a significant part to play in how people describe what's going on with them psychologically, and some cultures, particularly non-Western ones, may describe what we would think of as \"depression\" as something entirely different. (This is part of why the psych program I graduated from put so much emphasis on multicultural approaches to treatment.)",
"provenance": null
},
{
"answer": "I think the word you're looking for is *ennui*",
"provenance": null
},
{
"answer": "I'd recommend looking at the theory of anomie. Durkheim mentions some great ideas in his theory of anomic suicide, which occurs mostly to people in cities and other developed areas. I'm on a phone so all I can do is give you this link: \n\n_URL_0_",
"provenance": null
},
{
"answer": "developement leads to- education leads to- intelligence leads to- knowledge leads to- wisdom leads to- realization leads to- depression. It's an old concept.",
"provenance": null
},
{
"answer": "I highly recommend checking out the book \"The Spirit Level\"\nIt uses statistical data to show a significant positive trend between depression and inequality, as opposed to total income level",
"provenance": null
},
{
"answer": "Someone did a TED talk that convinced me of why this is. [Here's the link.](_URL_0_)",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "35500653",
"title": "Epidemiology of depression",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 447,
"text": "Depression is a major cause of morbidity worldwide, as the epidemiology has shown. Lifetime prevalence estimates vary widely, from 3% in Japan to 17% in the US. Epidemiological data shows higher rates of depression in the Middle East, North Africa, South Asia and America than in other countries. Among the 10 countries studied, the number of people who would suffer from depression during their lives falls within an 8–12% range in most of them.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "54301172",
"title": "Well-being contributing factors",
"section": "Section::::Other factors.:Modernity.\n",
"start_paragraph_id": 336,
"start_character": 0,
"end_paragraph_id": 336,
"end_character": 492,
"text": "Much research has pointed at the rising rates of depression, leading people to speculate that modernization may be a factor in the growing percentage of depressed people. One study found that women in urban America were much more likely to experience depression than those in rural Nigeria. Other studies have found a positive correlation between a country's GDP per capita, as quantitative measure of modernization, and lifetime risk of a mood disorder trended toward significance (p=0.06).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "42730418",
"title": "Depression and culture",
"section": "Section::::Causes.:Gender.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 314,
"text": "As is true in Western societies, depression is more prevalent in women than in men in collective cultures. Some have hypothesized that this is due to their inferior positions in the culture, in which they may experience domestic violence, poverty, and blatant inequality that can greatly contribute to depression.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1810614",
"title": "Health equity",
"section": "Section::::Poor health and economic inequality.\n",
"start_paragraph_id": 87,
"start_character": 0,
"end_paragraph_id": 87,
"end_character": 645,
"text": "Poor health outcomes appear to be an effect of economic inequality across a population. Nations and regions with greater economic inequality show poorer outcomes in life expectancy, mental health, drug abuse, obesity, educational performance, teenage birthrates, and ill health due to violence. On an international level, there is a positive correlation between developed countries with high economic equality and longevity. This is unrelated to average income per capita in wealthy nations. Economic gain only impacts life expectancy to a great degree in countries in which the mean per capita annual income is less than approximately $25,000.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1713306",
"title": "Primitive culture",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 319,
"text": "Further research published in JAMA has found very high rates of clinical depression in impoverished nations, such as Zimbabwe, and that depression wasn't a Western disease but a human one, and in fact glossing over primitive culture as being leisure filled and stress-free was entirely opposite of scientific findings.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8389",
"title": "Major depressive disorder",
"section": "Section::::Prognosis.\n",
"start_paragraph_id": 111,
"start_character": 0,
"end_paragraph_id": 111,
"end_character": 498,
"text": "Depression is often associated with unemployment and poverty. Major depression is currently the leading cause of disease burden in North America and other high-income countries, and the fourth-leading cause worldwide. In the year 2030, it is predicted to be the second-leading cause of disease burden worldwide after HIV, according to the WHO. Delay or failure in seeking treatment after relapse and the failure of health professionals to provide treatment are two barriers to reducing disability.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4599275",
"title": "Human overpopulation",
"section": "Section::::Dangers and effects.:Poverty, and infant and child mortality.\n",
"start_paragraph_id": 50,
"start_character": 0,
"end_paragraph_id": 50,
"end_character": 915,
"text": "The UN Human Development Report of 1997 states: \"During the last 15–20 years, more than 100 developing countries, and several Eastern European countries, have suffered from disastrous growth failures. The reductions in standard of living have been deeper and more long-lasting than what was seen in the industrialised countries during the depression in the 1930s. As a result, the income for more than one billion people has fallen below the level that was reached 10, 20 or 30 years ago\". Similarly, although the proportion of \"starving\" people in sub-Saharan Africa has decreased, the absolute number of starving people has increased due to population growth. The percentage dropped from 38% in 1970 to 33% in 1996 and was expected to be 30% by 2010. But the region's population roughly doubled between 1970 and 1996. To keep the numbers of starving constant, the percentage would have dropped by more than half.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
9zdrzx
|
what is more dangerous for the human body. high ac or dc and why?
|
[
{
"answer": "The biggest advantage to AC is that it fluctuates the level, which gives you more of a chance of disconnecting from it. DC will lock your muscles and keep you from letting go.",
"provenance": null
},
{
"answer": "Low-frequency AC is more likely to disrupt your heart rhythm than DC, but DC can still do that. High-frequency AC is extremely unlikely to stimulate nerves in a way that causes damage, which is why there are surgical tools that apply HF currents to body parts.",
"provenance": null
},
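A minimal back-of-the-envelope sketch, in Python, of why the thresholds quoted in the electrical-injury passage cited below matter: it applies Ohm's law to a 120 V contact and compares the resulting body current against the roughly 1 mA perception and 10 mA let-go figures for 60 Hz AC. The 100 kΩ dry-skin and 1 kΩ wet-skin resistances are illustrative assumptions, not measured values.

```python
# Rough sketch (not a safety calculation): how skin condition changes the current
# a 120 V source can drive through the body, compared against the AC thresholds
# quoted in the cited electrical-injury passage (~1 mA perception, ~10 mA let-go).
# The resistance figures are illustrative assumptions.

PERCEPTION_AC_MA = 1.0   # approximate 60 Hz perception threshold (mA)
LET_GO_AC_MA = 10.0      # approximate 60 Hz let-go threshold (mA)

def body_current_ma(voltage_v: float, resistance_ohm: float) -> float:
    """Ohm's law estimate of current through the body, in milliamps."""
    return voltage_v / resistance_ohm * 1000.0

scenarios = {
    "dry skin (~100 kohm, assumed)": 100_000.0,
    "wet skin (~1 kohm, assumed)": 1_000.0,
}

for label, resistance in scenarios.items():
    i_ma = body_current_ma(120.0, resistance)
    status = (
        "below perception" if i_ma < PERCEPTION_AC_MA
        else "above let-go threshold" if i_ma >= LET_GO_AC_MA
        else "perceptible, below let-go"
    )
    print(f"120 V across {label}: ~{i_ma:.1f} mA -> {status}")
```

Under these assumptions, dry skin keeps a 120 V contact near the perception threshold, while wet skin puts the current far past the let-go threshold — the scenario the cited text flags as especially dangerous.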
{
"answer": "Well high AC will make you less likely be hit and avoid damage at all. But high DC is also hard to beat, especially if your saving throw is low and the failure can be critical.\n\n & #x200B;\n\n\\#Dndthings",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "211889",
"title": "Electrical injury",
"section": "Section::::Pathophysiology.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 487,
"text": "The minimum current a human can feel depends on the current type (AC or DC) as well as frequency for AC. A person can feel at least 1 mA (rms) of AC at 60 Hz, while at least 5 mA for DC. At around 10 mA, AC current passing through the arm of a human can cause powerful muscle contractions; the victim is unable to voluntarily control muscles and cannot release an electrified object. This is known as the \"let go threshold\" and is a criterion for shock hazard in electrical regulations.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "211889",
"title": "Electrical injury",
"section": "Section::::Pathophysiology.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 1129,
"text": "The current may, if it is high enough and is delivered at sufficient voltage, cause tissue damage or fibrillation which can cause cardiac arrest; of AC (rms, 60 Hz) or of DC at high voltage can cause fibrillation. A sustained electric shock from AC at 120 V, 60 Hz is an especially dangerous source of ventricular fibrillation because it usually exceeds the let-go threshold, while not delivering enough initial energy to propel the person away from the source. However, the potential seriousness of the shock depends on paths through the body that the currents take. If the voltage is less than 200 V, then the human skin, more precisely the stratum corneum, is the main contributor to the impedance of the body in the case of a macroshock—the passing of current between two contact points on the skin. The characteristics of the skin are non-linear however. If the voltage is above 450–600 V, then dielectric breakdown of the skin occurs. The protection offered by the skin is lowered by perspiration, and this is accelerated if electricity causes muscles to contract above the let-go threshold for a sustained period of time.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4027782",
"title": "1,1-Dichloroethene",
"section": "Section::::Safety.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 420,
"text": "The health effects from exposure to 1,1-DCE are primarily on the central nervous system, including symptoms of sedation, inebriation, convulsions, spasms, and unconsciousness at high concentrations. 1,1-DCE is considered a potential occupational carcinogen by the National Institute for Occupational Safety and Health . It is also listed as a chemical known to the state of California to cause cancer and birth defects.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10809046",
"title": "Para-Methoxy-N-ethylamphetamine",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 550,
"text": "\"para\"-Methoxyethylamphetamine (PMEA), is a stimulant drug related to PMA. PMEA reputedly produces similar effects to PMA, but is considerably less potent and seems to have slightly less tendency to produce severe hyperthermia, at least at low doses. At higher doses however the side effects and danger of death approach those of PMA itself, and PMEA should still be considered a potentially dangerous drug. Investigation of a drug-related death in Japan in 2005 showed PMEA to be present in the body and was thought to be responsible for the death.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1250286",
"title": "Pentachlorophenol",
"section": "Section::::Toxicity.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 286,
"text": "Short-term exposure to large amounts of PCP can cause harmful effects on the liver, kidneys, blood, lungs, nervous system, immune system, and gastrointestinal tract. Elevated temperature, profuse sweating, uncoordinated movement, muscle twitching, and coma are additional side effects.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "57038447",
"title": "Tolerogenic dendritic cell",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 341,
"text": "Tolerogenic DCs present a potential strategy for treatment of autoimmune diseases, allergic diseases and transplant rejections. Moreover, Ag-specific tolerance in humans can be induced \"in vivo\" via vaccination with Ag-pulsed \"ex vivo\" generated tolerogenic DCs. For that reason, tolerogenic DCs are an important promising therapeutic tool.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4498159",
"title": "Estramustine phosphate",
"section": "Section::::Side effects.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 1169,
"text": "Severe adverse effects of EMP are thromboembolic and cardiovascular complications including pulmonary embolism, deep vein thrombosis, stroke, thrombophlebitis, coronary artery disease (ischemic heart disease; e.g., myocardial infarction), thrombophlebitis, and congestive heart failure with fluid retention. EMP produces cardiovascular toxicity similarly to diethylstilbestrol, but to a lesser extent in comparison at low doses (e.g., 280 mg/day oral EMP vs. 1 mg/day oral diethylstilbestrol). The prostate cancer disease state also increases the risk of thromboembolism, and combination with docetaxel may exacerbate the risk of thromboembolism as well. Meta-analyses of clinical trials have found that the overall risk of thromboembolism with EMP is 4 to 7%, relative to 0.4% for chemotherapy regimens without EMP. Thromboembolism is the major toxicity-related cause of discontinuation of EMP. Anticoagulant therapy with medications such as aspirin, warfarin, unfractionated and low-molecular-weight heparin, and vitamin K antagonists can be useful for decreasing the risk of thromboembolism with EMP and other estrogens like diethylstilbestrol and ethinylestradiol.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
16b161
|
How do changes in Earth's global temperature caused by Milankovitch cyclicity compare to other climate change sources (anthropogenic and other)?
|
[
{
"answer": "[This page at skeptialscience](_URL_0_) discusses Milankovitch Cycles and cites the variation of solar forcing due to orbital eccentricity as ~0.45 W/m^2. Current estimates of anthropogenic alterations to the radiative balance ([see IPCC](_URL_1_)) are about 1.6 W/m^2. So variations in forcing due to Milankovitch Cycles were less than 1/3 as strong as the current anthropogenic perturbation.",
"provenance": null
},
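A one-line check of the arithmetic behind the "less than 1/3" conclusion above, using only the two forcing values quoted in the answer (about 0.45 W/m² for orbital eccentricity and about 1.6 W/m² for the current anthropogenic perturbation):

```latex
\frac{\Delta F_{\text{orbital}}}{\Delta F_{\text{anthropogenic}}}
  \approx \frac{0.45\ \mathrm{W\,m^{-2}}}{1.6\ \mathrm{W\,m^{-2}}}
  \approx 0.28 \;<\; \tfrac{1}{3}
```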
{
"answer": null,
"provenance": [
{
"wikipedia_id": "14534679",
"title": "Climate of the Arctic",
"section": "Section::::Changes in Arctic Climate.:Global warming.\n",
"start_paragraph_id": 105,
"start_character": 0,
"end_paragraph_id": 105,
"end_character": 771,
"text": "According to the Intergovernmental Panel on Climate Change (IPCC), \"warming of the climate system is unequivocal\", and the global-mean temperature has increased by over the last century. This report also states that \"most of the observed increase in global average temperatures since the mid-20th century is very likely [greater than 90% chance] due to the observed increase in anthropogenic greenhouse gas concentrations.\" The IPCC also indicate that, over the last 100 years, the annually averaged temperature in the Arctic has increased by almost twice as much as the global mean temperature has. In 2009, NASA reported that 45 percent or more of the observed warming in the Arctic since 1976 was likely a result of changes in tiny airborne particles called aerosols.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9470070",
"title": "Henrik Svensmark",
"section": "Section::::Debate and controversy.:Galactic Cosmic Rays vs Global Temperature.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 525,
"text": "Mike Lockwood of the UK's Rutherford Appleton Laboratory and Claus Froehlich of the World Radiation Center in Switzerland published a paper in 2007 which concluded that the increase in mean global temperature observed since 1985 correlates so poorly with solar variability that no type of causal mechanism may be ascribed to it, although they accept that there is \"considerable evidence\" for solar influence on Earth's pre-industrial climate and to some degree also for climate changes in the first half of the 20th century.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14090587",
"title": "Low-carbon power",
"section": "Section::::The outlook for, and requirements of, low carbon power.:Emissions.\n",
"start_paragraph_id": 49,
"start_character": 0,
"end_paragraph_id": 49,
"end_character": 306,
"text": "The Intergovernmental Panel on Climate Change stated in its first working group report that “most of the observed increase in globally averaged temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations, contribute to climate change.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21350772",
"title": "Greenhouse gas",
"section": "Section::::Natural and anthropogenic sources.\n",
"start_paragraph_id": 54,
"start_character": 0,
"end_paragraph_id": 54,
"end_character": 461,
"text": "The 2007 Fourth Assessment Report compiled by the IPCC (AR4) noted that \"changes in atmospheric concentrations of greenhouse gases and aerosols, land cover and solar radiation alter the energy balance of the climate system\", and concluded that \"increases in anthropogenic greenhouse gas concentrations is very likely to have caused most of the increases in global average temperatures since the mid-20th century\". In AR4, \"most of\" is defined as more than 50%.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41994159",
"title": "Global warming hiatus",
"section": "Section::::Reports by scientific bodies.:National Academy of Sciences-Royal Society Report.\n",
"start_paragraph_id": 55,
"start_character": 0,
"end_paragraph_id": 55,
"end_character": 656,
"text": "A joint report from the UK Royal Society and the US National Academy of Sciences in February 2014 said that there is no \"pause\" in climate change and that the temporary and short-term slowdown in the rate of increase in average global surface temperatures in the non-polar regions is likely to start accelerating again in the near future. \"Globally averaged surface temperature has slowed down. I wouldn’t say it's paused. It depends on the datasets you look at. If you look at datasets that include the Arctic, it is clear that global temperatures are still increasing,\" said Tim Palmer, a co-author of the report and a professor at University of Oxford.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22115620",
"title": "Regional effects of global warming",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 604,
"text": "Regional effects of global warming are long-term significant changes in the expected patterns of average weather of a specific region due to global warming. The world average temperature is rising due to the greenhouse effect caused by increasing levels of greenhouse gases, especially carbon dioxide. When the global temperature changes, the changes in climate are not expected to be uniform across the Earth. In particular, land areas change more quickly than oceans, and northern high latitudes change more quickly than the tropics, and the margins of biome regions change faster than do their cores.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9270031",
"title": "IPCC First Assessment Report",
"section": "Section::::Overview.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 252,
"text": "BULLET::::- There are many uncertainties in our predictions particularly with regard to the timing, magnitude and regional patterns of climate change, due to our incomplete understanding of: sources and sinks of GHGs; clouds; oceans; polar ice sheets.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3rl322
|
Washing clothing that was not color fast.
|
[
{
"answer": "To cover the basics first, dyes before modern dyes were color fast and dry cleaning did exist. One of the most popular dyes still today is indigo. Interestingly it doesn't dissolve in water nor adhere to fabric, so how does this work? You need an alkaline solution such as lye, ammonia, or urine in order to dissolve the indigo. Once set into the fabric, water won't wash it out. Other dyes that are water soluble will still be color fast because of the process of using a mordant. The New Pocket Cyclopaedia of 1813 lists common mordants as \"sulphate of alumine, oxide of tin, oxide of iron in combination with acids, oxide of arsenic, tan, & c.\" It goes on to say that the most permanent dyes are cochineal and gum-lac (scarlets), indigo and woad (blue), dyers weed (yellow), and madder (coarse reds, purples, blacks). Mordant can be added before, during, or after the dying process depending on the chemistry needed. It does change the final dye color. Iron oxide mordants are notorious for their deterioration of textiles, and are the reason so many black garments survive in terrible shape if at all. Basically mordants help the dye to bond to the textile and keep it much more permanent. Silks and wools do incredibly well with this, linen less so and it's harder to get a dark color set into it for that reason.\n\nWhen it comes to cleaning you are very correct about not having to wash exterior pieces often (if at all). Undergarments were white for this reason, being able to be bleached and boiled to make them clean. Outer garments can sometimes be washed and submerged in water, but more often are spot cleaned based on the exterior dirt/stain. Dry cleaning by definition is simply cleaning with something other than water. For example, if you got a grease stain on silk (due to the carriage or dinner), it gets sprinkled with fullers earth, covered in paper, and a light amount of heat applied. This will draw out the stain (in modern day use baby powder to much the same effect). To remove any remaining rings you can wash the area with soap and water, washing it out with gin, and then washing it out with water. If the entire garment must be wetted to keep from getting water rings you'll wrap it up in a towel immediately to dry. Cook books often have recipes in back for laundering. And if you don't want to deal with it professional laundering can be had as well.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "32288197",
"title": "Écouché in the Second World War",
"section": "Section::::Daily life.\n",
"start_paragraph_id": 34,
"start_character": 0,
"end_paragraph_id": 34,
"end_character": 226,
"text": "Washing clothes was not easy. Washing powder was eventually replaced by ashes or by using infusions of plants such as soapwort, which lathers like soap. Ivy was used for black garments, worn by the numerous women in mourning.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "172111",
"title": "Washing machine",
"section": "Section::::Washing by machine.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 386,
"text": "Clothes washer technology developed as a way to reduce the manual labor spent, providing an open basin or sealed container with paddles or fingers to automatically agitate the clothing. The earliest machines were hand-operated and constructed from wood, while later machines made of metal permitted a fire to burn below the washtub, keeping the water warm throughout the day's washing.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "147699",
"title": "Laundry",
"section": "Section::::Common problems.\n",
"start_paragraph_id": 42,
"start_character": 0,
"end_paragraph_id": 42,
"end_character": 334,
"text": "Another common problem is color bleeding. For example, washing a red shirt with white underwear can result in pink underwear. Often only similar colors are washed together to avoid this problem, which is lessened by cold water and repeated washings. Sometimes this blending of colors is seen as a selling point, as with madras cloth.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17599355",
"title": "White",
"section": "Section::::History and art.:Modern history.:18th and 19th centuries.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 346,
"text": "White was the universal color of both men and women's underwear and of sheets in the 18th and 19th centuries. It was unthinkable to have sheets or underwear of any other color. The reason was simple; the manner of washing linen in boiling water caused colors to fade. When linen was worn out, it was collected and turned into high-quality paper.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "60804518",
"title": "Laundry enzyme",
"section": "Section::::Merits.:A wider variety of clothes at one time.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 410,
"text": "As a consequential benefit, consumers can freely choose a larger range of clothes with diverse materials. Lower temperature laundry condition allows more delicate materials like wool and silk that are easily affected when placed into a high-temperature environment. Moreover, lower temperature also avoids fading jeans and denim which are usually dyed with dark colors. Thus there will be less color transfer.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "25816",
"title": "Roman Republic",
"section": "Section::::Culture.:Clothing.\n",
"start_paragraph_id": 183,
"start_character": 0,
"end_paragraph_id": 183,
"end_character": 1171,
"text": "For most Romans, even the simplest, cheapest linen or woolen clothing represented a major expense. Worn clothing was passed down the social scale until it fell to rags, and these in turn were used for patchwork. Wool and linen were the mainstays of Roman clothing, idealised by Roman moralists as simple and frugal. Landowners were advised that female slaves not otherwise occupied should be producing homespun woolen cloth, good enough for clothing the better class of slave or supervisor. Cato the Elder recommended that slaves be given a new cloak and tunic every two years; coarse rustic homespun would likely be \"too good\" for the lowest class of slave, but not good enough for their masters. For most women, the carding, combing, spinning and weaving of wool were part of daily housekeeping, either for family use or for sale. In traditionalist, wealthy households, the family's wool-baskets, spindles and looms were positioned in the semi-public reception area (\"atrium\"), where the mater familias and her familia could thus demonstrate their industry and frugality; a largely symbolic and moral activity for those of their class, rather than practical necessity.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "151916",
"title": "T-shirt",
"section": "Section::::Decoration.:Screen printing.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 476,
"text": "In the 1980s, thermochromatic dyes were used to produce T-shirts that changed color when subjected to heat. The Global Hypercolour brand of these was a common sight on the streets of the UK for a few years, but has since mostly disappeared. These were also very popular in the United States among teenagers in the late 1980s. A downside of color-change garments is that the dyes can easily be damaged, especially by washing in warm water, or dye other clothes during washing.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
yjenl
|
Suppose we had working fusion reactors with the output and specifications we're working towards? What implications would it realistically have for other energy sources and humanity in general in terms of the other problems we have?
|
[
{
"answer": "Engineering student here.\n\nRegarding size, mobility - large, more or less immobile. ITER is large - it's a technology demonstrator for a power station. The reactor itself, plus all the necessary hardware to actually produce power (steam generation, turbines, generators, heat dumps) would take up even more space. I figure it could fit into a large ship, but I haven't even done napkin math. \nAssuming ITER works, we'd still be a very long way from fitting one into, say, a car or truck.\n\nWould it obsolete existing generating sources? Probably not. Coal is cheap - at the end of the day, coal, oil, nuclear (fission and fusion), solar thermal, and (part of) combined cycle gas are just different ways of boiling water to turn a turbine - and coal is dirt cheap.\n\nSafety - I'd need someone else more qualified, but as I understand it, there's no risk like those with fission plants. There simply isn't enough mass available at any given moment to go runaway.\nWaste-wise there's no heavy metal, radioactive fission products. They would produce radioactive tritium gas, and anything nearby would become radioactive due to neutron bombardment, and would need to be disposed of carefully.",
"provenance": null
},
{
"answer": "Hi! I'm a fusion researcher (tokamaks, specifically). These are some interesting questions. Let's see:\n\n > I mean based on what size we predict them to be, amount of output per generator etc.\n\nSome of the other replies have already talked about ITER for a sense of scale. An actual power-plant tokamak (a concept called DEMO) would be somewhat larger than ITER based on an ITER-like design - ITER is more the proof of concept of scaling our existing fusion experiments up to power-plant sizes, rather than a power-plant prototype itself.\n\nThe trick is, tokamaks generally get substantially more efficient the bigger you make them, the tradeoff being that the capital cost to build the power plant becomes unfeasibly high. You could actually build a fusion power plant right now, using only reactor tech and plasma physics we've understood since the 1980's. The problem is, to compensate for the crappy plasma behavior you'd have to make the tokamak huge (major radius of around 20 meters, compared to 6 meters on ITER). Since the major cost-of-electricity from a tokamak would just be the cost of building it amortized over its lifetime (since operating costs would be very low), this puts the cost per kWh out of an economically competitive range. The idea, then, is to build the tokamak as small as possible while still having efficient output. Based on a number of advances in reactor tech (particularly superconducting-magnet design) and plasma physics, it could very well be possible to build a power plant *smaller* by a significant margin than ITER.\n\nAll that said, you're generally looking at a power plant facility (tokamak, fuel handling, and all the ancillary structures) with a footprint of comparable size to an existing fission power plant, putting out in the neighborhood of a GW electric. There's actually some concern that building a tokamak large enough to be efficient would actually produce *too much* power for a single point-of-generation on our existing grid; in such a case, some of the power would be diverted to other energy-intensive uses, like hydrogen fuel cell charging or pumped-water storage.\n\n > What it cast all form of existing electricity generation into obsolescence?\n\nNot entirely. So every form of power generation has strengths and weaknesses, even fusion. Trying to pick one and saying \"this is how we will power America\" very quickly becomes a round-pegs-in-square-holes type of problem. What fusion can do is this: large-scale (GW+), always-on baseload power without environmental pollution or risk of radiation, and with *extremely* plentiful fuel. This means it can entirely replace coal-fired power plants, for example, and many existing fission plants (though I'd forsee small modular fission reactors as another viable option for certain situations). There will also pretty much always be regions where wind, solar, or hydroelectric are more economical - remember, I said the limiting factor on fusion power is the capital investment needed to build the plant. Generation from wind or solar is geographically dependent, though - so a fusion plant can get (comparatively) compact power generation easily for high-draw areas, like near population centers.\n\n > How big would they need to be, including surrounding safety equipment? Would they be mobile?\n\nI think I've addressed this rolled into my responses above. To reiterate - a footprint for the entire power plant comparable to existing fission plants. 
However, thanks to the inherent safety of fusion plants (more on this later) the area around the compound would be far more usable, rather than the \"no-man's land\" typically found in the immediate vicinity of fission plants.\n\nDue to their size, it's unlikely that fusion plants would be mobile in the near future. However, the high power output at a single point of generation does lend itself to mobile energy forms like hydrogen cells.\n\n > Would they require as many safety precautions as existing fission plants or would the reaction simply be extinguished in the event of an equipment failure?\n\nThis is one of the biggest wins for a fusion plant - they are *extremely* safe. Even in the event of a catastrophic loss of confinement, the nature of the reaction is simply to burn itself out, rather than run away. Part of this is due to the nature of the fuel - fusion fuel would be gas continuously pumped into the reactor, rather than solid fuel rods stored in the reactor vessel. A fission plant contains a year's worth of fuel at once - this is a huge source of free energy in the case of a meltdown. In a fusion plant, the fueling cuts off as soon as you lose confinement. It's rather like the difference between turning off the ignition in your car, versus lighting the gas tank on fire. \n\nEven if you continued fueling, losing the confining magnetic fields and heating would cause the plasma to rapidly expand and cool, contacting the reactor walls. Though the plasma is very hot (~150 million degrees C), there is very little of it - ITER, for example, would contain less than a gram of fuel at a time. Any contact with the wall would rapidly cool the plasma, re-neutralizing it and burning the plasma out. This would cause serious (read: expensive) damage to the wall, but presents basically no safety risk. ITER's own safety plans (per their licensing with France's nuclear regulatory commission) mandate that even in pretty much the worst case, you don't need to evacuate outside the facility perimeter.\n\nAs for waste: fusion reactions produce *very* little radioactive waste, and what there is is relatively easy to handle. For one thing, half of the fuel (deuterium) is nonradioactive, and the other half (tritium) is short-lived, so there would be very little stored on-site (it would actually be manufactured in the reactor shielding itself!). The high-energy neutrons produced by fusion reactions would activate structural materials in the reactor, creating some radioactive waste. However, we can engineer these materials to be somewhat resistant to damage, and they largely retain their solid, chemically-inert form (compared to fission waste, which is a toxic, radioactive slurry in addition to being radioactive), so it's relatively easy to handle. All in all, the waste handling from a fusion plant would be more like that for a hospital radiology department, rather than like a fission plant.\n\n > Would it open many possibilities for projects whose energy demands would have made them perviously unfeasible due to their energy demands? E.g. CO2 sequestration, huge aircraft (similar to [1] The Valiant in Doctor Who) etc..\n\nYes, at least in stationary activities (desalination, water cracking for hydrogen and oxygen, hydrogen fuel cell charging). A large plane would be unlikely, given the weight of a full-power tokamak. \n\nAs for WarPhalange's concern about fuel - that would be less of an issue. 
Fusion fuel (DT, specifically) has around 10 times greater energy density per mass than fission fuel.\n\n*edit:* check out this [AMA](_URL_0_) several researchers from my lab did a few months back - I like to point to it, since we had a pretty broad discussion. May give you more ideas for questions!",
"provenance": null
},
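A rough order-of-magnitude sketch of the fuel figures mentioned in the answer above (a gram-scale plasma inventory feeding a GW-class plant). The 17.6 MeV released per D-T reaction and the deuteron/triton masses are standard physical values; the 2.5 GW thermal plant size (roughly 1 GW electric at ~40% conversion) is an illustrative assumption, not a figure from the answer.

```python
# Order-of-magnitude sketch of D-T fuel energy density and what ~1 g of fuel buys.
# Reaction energy and particle masses are standard values; the 2.5 GW thermal
# plant size is an assumption (roughly 1 GW electric at ~40% efficiency).

MEV_TO_J = 1.602e-13          # joules per MeV
U_TO_KG = 1.6605e-27          # kilograms per atomic mass unit

E_DT_MEV = 17.6               # energy released per D-T fusion reaction (MeV)
FUEL_MASS_U = 2.014 + 3.016   # deuteron + triton mass (atomic mass units)

energy_per_kg = E_DT_MEV * MEV_TO_J / (FUEL_MASS_U * U_TO_KG)   # J per kg of D-T
energy_per_gram = energy_per_kg * 1e-3

plant_thermal_w = 2.5e9       # assumed ~2.5 GW thermal (~1 GW electric)
seconds_per_gram = energy_per_gram / plant_thermal_w

print(f"D-T energy density: ~{energy_per_kg:.2e} J/kg")
print(f"1 g of D-T fuel: ~{energy_per_gram:.2e} J "
      f"(~{seconds_per_gram / 60:.1f} min of full-power operation)")
```

Under those assumptions a single gram of D-T fuel carries on the order of 3×10¹¹ J, i.e. a couple of minutes of full-power operation, which is why continuous fueling — and the ability to simply cut it off — matters so much for the safety argument in the answer.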
{
"answer": null,
"provenance": [
{
"wikipedia_id": "55017",
"title": "Fusion power",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 581,
"text": "Fusion reactors generally use hydrogen isotopes such as deuterium and tritium, which react more easily than hydrogen. The designs aim to heat their fuel to tens of millions of degrees using a wide variety of methods. The major challenge in realising fusion power is to engineer a system that can confine the plasma long enough at high enough temperature and density for many reactions to occur. A second issue that affects common reactions, is managing neutrons that are released during the reaction, which over time degrade many common materials used within the reaction chamber.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20766780",
"title": "Nuclear fusion–fission hybrid",
"section": "Section::::Hybrid concepts.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 662,
"text": "This is a key concept in the hybrid concept, known as \"fission multiplication\". For every fusion event, several fission events may occur, each of which gives off much more energy than the original fusion, about 11 times. This greatly increases the total power output of the reactor. This has been suggested as a way to produce practical fusion reactors in spite of the fact that no fusion reactor has yet reached break-even, by multiplying the power output using cheap fuel or waste. However, a number of studies have repeatedly demonstrated that this only becomes practical when the overall reactor is very large, 2 to 3 GWt, which makes it expensive to build.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22233981",
"title": "IGNITOR",
"section": "Section::::Development.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 722,
"text": "The goal to produce meaningful fusion reactors in a reasonable time leads to pursuing the achievement of ignition conditions in the near term in order to understand the plasma physical regimes needed for a net power producing reactor. In addition, an objective other than ignition that can be envisioned for the relatively near term is that of high flux neutron sources for material testing involving compact, high density fusion machines. This has been one of the incentives that have led the Ignitor Project to adopt magnesium diboride (MgB) superconducting cables in the machine design, a first in fusion research. Accordingly, the largest coils (about 5 m diameter) of the machine will be made entirely of MgB cables.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30519868",
"title": "Spherical tokamak",
"section": "Section::::Background.:Energy balance.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 580,
"text": "To achieve net power, a device must be built which optimizes this equation. Fusion research has traditionally focused on increasing the first \"P\" term: the fusion rate. This has led to a variety of machines that operate at ever higher temperatures and attempt to maintain the resulting plasma in a stable state long enough to meet the desired triple product. However, it is also essential to maximize the \"η\" for practical reasons, and in the case of a MFE reactor, that generally means increasing the efficiency of the confinement system, notably the energy used in the magnets.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13440300",
"title": "Too cheap to meter",
"section": "",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 518,
"text": "Strauss gave no public hint at the time that he was referring to fusion reactors, because of the classified nature of Project Sherwood, and the press naturally took his prediction regarding cheap electricity to apply to conventional fission reactors. However, the U.S. Atomic Energy Commission itself, in testimony to the U.S. Congress only months before, lowered the expectations for fission power, projecting only that the costs of reactors could be brought down to about the same as those for conventional sources.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "46293573",
"title": "Laser Inertial Fusion Energy",
"section": "Section::::LIFE.:Fusion–fission hybrid.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 492,
"text": "The economics of fission–fusion designs have always been questionable. The same basic effect can be created by replacing the central fusion reactor with a specially designed fission reactor, and using the surplus neutrons from the fission to breed fuel in the blanket. These fast breeder reactors have proven uneconomical in practice, and the greater expense of the fusion systems in the fission–fusion hybrid has always suggested they would be uneconomical unless built in very large units.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "36952575",
"title": "Plasma-facing material",
"section": "",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 355,
"text": "Currently, fusion reactor research focuses on improving efficiency and reliability in heat generation and capture and on raising the rate of transfer. Generating electricity from heat is beyond the scope of current research, due to existing efficient heat-transfer cycles, such as heating water to operate steam turbines that drive electrical generators.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2ciivm
|
How have norms against attacking civilian populations developed and been applied by Western countries post WW2?
|
[
{
"answer": " > The more specific question is: what instances after 1945 are there of Western militaries attacking civilian targets with the explicit or implicit aim of coercing the civilian population?\n\nWell, for one, the Korean War which began in 1950 saw the US use B-29 bombers to bomb Pyongyang and other cities in North Korea. However, as I'll explain later, the deliberate targeting civilians already begun to fell away as a strategy.\n\n > The broader one is: how have we got from there to here? When did Western militaries accept and start teaching that this was unacceptable? What resistance has there been to the changes? \n\nStudies after WW2 were conducted about the effectiveness of strategic bombing. It found that the deliberate targeting on civilian centers wasn't all that effective in terms of breaking the civilian morale. Not only did strategic bombing of German cities day and night not end the war any sooner (the Soviets taking Berlin and the Allies closing in from the west prompted their surrender), but the Allies suffered their own forms of bombing (England during the Blitz) and instead found their citizens more resolute in defending their own homes.\n\nWhere strategic bombing did become more useful was when it was targeted at infrastructure and other assets that their military would use. Bridges, rail centers, transportation hubs, etc. You see this in Korea where B-29s started dropping radio bombs to target bridges/railways, and in Vietnam even our major bombing campaigns with strategic bombers like the B-52 (e.g. Operation Linebacker II) were targeted at ports, railways, supply depots, etc.\n\nAnd finally, beyond the fact that military studies finding deliberate targeting of civilian centers being less effective, there was the fact that warfare had changed.\n\nToday it takes a single B-52 with a crew of 5 to drop the same amount of bombs that 16 B-17s with 160 total crew members took from London to Berlin. And oh yeah, that B-52 took off from Louisiana.\n\nBecause of this, we employ far fewer bombers - which also makes each bomber significantly more valuable. Losing a B-52 much less a B-2 today would be extremely costly - instead, we employ our bombers completely differently. We found, during Vietnam, that bombers like the B-52 could be very vulnerable to surface to air missile systems.\n\nHence, in the post-Vietnam era, the USAF focused on fast bombers that either flew extremely high (like the XB-70) or low (like the B-1) and then on stealth aircraft (the F-117 and then of course the B-2). The B-52 has become more tailored to carrying long range cruise missiles as a stand-off missile platform (though it can carpet bomb as it used to as well).\n\nSo to answer your question: the studies done after WW2 and the lessons learned in Korea and Vietnam have changed military doctrine regarding aerial bombardment. 
Not only that, but changes in air defense and in bombing technology have more or less ended the days where bombers fly in massive formations to indiscriminately carpet bomb large areas.\n\n > How was the discord between this norm and the doctrines of nuclear war managed?\n\nNuclear war has been treated as a separate entity from strategic bombing really ever since the Soviet Union developed their own nuclear weapons and the capability to deliver them.\n\nThe idea of mutually assured destruction has more or less relegated nuclear weapons into two categories: tactical and strategic, with strategic being more aligned with the idea of wiping out an entire civilization.\n\nBoth sides drew up numerous use cases for nuclear weapons. Some believed that tactical exchanges against enemy armored formations would be acceptable - indeed, it was suggested that if the enemy used nuclear weapons strictly on military targets only, we'd respond in kind.\n\nThe whole idea that \"if one nuke goes off, we wipe them completely out\" is a misconception a lot of people have about nuclear weapons. All that stuff is way above public discourse for obvious reasons, but using nuclear weapons to wipe out an entire country's populace is not a frequent reason for the use of nuclear weapons.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "3050114",
"title": "Targeting (warfare)",
"section": "",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 210,
"text": "Technologically advanced countries can generally select their targets in such a way as to minimize collateral damage and civilian casualties. This can fall by the wayside, however, during unrestricted warfare.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "36154031",
"title": "Nazism and the Wehrmacht",
"section": "Section::::Mechanisms of control.:Terror.\n",
"start_paragraph_id": 50,
"start_character": 0,
"end_paragraph_id": 50,
"end_character": 1163,
"text": "The exception towards the otherwise ferocious application of military justice was the widespread tolerance of war crimes against civilians and POWs, especially in Eastern Europe, provided that such actions took place in a \"disciplined\" and \"orderly\" way. So-called \"wild shootings\" and \"wild requisitions\" against civilians were always disapproved of, whereas massive violence against civilians provided that it took place in a context that was \"disciplined\" and pseudo-legal were considered to be acceptable. This was especially the case with Jews in the occupied areas of the Soviet Union, where it was official policy to generally not prosecute those soldiers who killed Soviet Jews, and even in those cases, where prosecutions did occur, claiming that one hated Jews and killed out of a desire for \"revenge\" for the November Revolution of 1918 was allowed as a defense (through in fact, the Soviet Jewish population had nothing to do with the November Revolution). German military courts always gave very light sentences to those soldiers who killed Soviet Jews, even in an \"undisciplined\" way, and even then, Hitler usually intervened to pardon the accused.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "42314",
"title": "Naturalization",
"section": "Section::::Denaturalization.:Between World Wars.\n",
"start_paragraph_id": 143,
"start_character": 0,
"end_paragraph_id": 143,
"end_character": 309,
"text": "Before World War I, only a small number of countries had laws governing denaturalization that could be enforced against citizens guilty of \"lacking patriotism\". Such denaturalized citizens became stateless persons. During and after the war, most European countries passed amendments to revoke naturalization.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "145072",
"title": "Failed state",
"section": "Section::::Promoting democracy and combating terrorism in failed states.:Transnational crime and terrorism.\n",
"start_paragraph_id": 95,
"start_character": 0,
"end_paragraph_id": 95,
"end_character": 433,
"text": "Research by James Piazza of the Pennsylvania State University finds evidence that nations affected by state failure experience and produce more terrorist attacks. Contemporary transnational crimes \"take advantage of globalization, trade liberalization and exploding new technologies to perpetrate diverse crimes and to move money, goods, services and people instantaneously for purposes of perpetrating violence for political ends\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "35682355",
"title": "Prisoners' rights in international law",
"section": "Section::::History.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 564,
"text": "The events of World War I and World War II had a profound effect on international law due to the widespread denial of civil rights and liberties on the basis of racial, religious, and political discrimination. The systematic use of violence, including murder and ultimately genocide, the use of slave labor, abuse and murder of prisoners of war, deportations, and confiscation of property forced changes to the status quo. Over the proceeding decades, large scale changes began to occur in all areas of international law, and prisoners’ rights were no exception. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31667740",
"title": "The World Development Report 2011",
"section": "Section::::Synopsis.:Part 3: Reducing the Risks of Violence—Directions for International Policy.:Chapter 9: New directions for international support.\n",
"start_paragraph_id": 41,
"start_character": 0,
"end_paragraph_id": 41,
"end_character": 655,
"text": "This chapter suggests new directions for international policy and institutions. The report notes how the trans national organisations set up after WWII achieved considerable success in reducing the number of wars, and that after the cold war ended new tools were developed which successfully reduced the number of civil wars. But comparable tools are not yet in place for dealing with the 21st century forms of mass violence, where some countries have suffered more deaths from organised criminal violence than they did while being ravaged by a traditional war. The final chapter discusses how this shortfall in international capability can be rectified.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3938470",
"title": "History of international law",
"section": "Section::::The League of Nations.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 993,
"text": "Following World War I, as after the Thirty Years' War, there was an outcry for rules of warfare to protect civilian populations, as well as a desire to curb invasions. The League of Nations, established after the war, attempted to curb invasions by enacting a treaty agreement providing for economic and military sanctions against member states that used \"external aggression\" to invade or conquer other member states. An international court was established, the Permanent Court of International Justice, to arbitrate disputes between nations without resorting to war. Meanwhile, many nations signed treaties agreeing to use international arbitration rather than warfare to settle differences. International crises, however, demonstrated that nations were not yet committed to the idea of giving external authorities a say in how nations conducted their affairs. Aggression on the part of Germany, Italy and Japan went unchecked by international law, and it took a Second World War to end it.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
244y1q
|
how do enzymes actually lower activation energy?
|
[
{
"answer": "The reactants don't immediately form products, they'll form a transition state first, and the energy needed to form it corresponds to the activation energy.\n\nA catalyst (including enzymes) will make a transition state available which is lower in energy than the one which is used without the catalyst. Therefore the activation energy corresponds to the new lower-energy transition state.",
"provenance": null
},
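As a hedged numerical sketch of the point above, the Arrhenius relation k = A·exp(−Ea/RT) quantifies how much faster a reaction runs once a catalyst provides a lower-energy transition state. The 75 → 55 kJ/mol barrier drop and the 298 K temperature below are illustrative assumptions, not data for any particular enzyme.

```python
# Illustrative sketch: the Arrhenius relation k = A * exp(-Ea / (R*T)) shows how
# much faster a reaction runs when a catalyst provides a transition state with a
# lower activation energy. Barrier heights and temperature are made-up examples.
import math

R = 8.314      # gas constant, J/(mol*K)
T = 298.0      # temperature, K (assumed)

def rate_constant(ea_j_per_mol: float, prefactor: float = 1.0) -> float:
    """Arrhenius rate constant for a given activation energy (same prefactor A)."""
    return prefactor * math.exp(-ea_j_per_mol / (R * T))

ea_uncatalysed = 75_000.0   # J/mol, assumed uncatalysed barrier
ea_catalysed = 55_000.0     # J/mol, assumed barrier via the enzyme-stabilised transition state

speedup = rate_constant(ea_catalysed) / rate_constant(ea_uncatalysed)
print(f"Lowering Ea by 20 kJ/mol at {T:.0f} K speeds the reaction ~{speedup:,.0f}-fold")
```

With the same prefactor A, shaving 20 kJ/mol off the barrier at room temperature speeds the reaction by roughly three-thousand-fold, which is the quantitative sense in which a lower-energy transition state "lowers the activation energy".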
{
"answer": "Basically they grab the substrate and hold it in the right configuration for the reaction to go.",
"provenance": null
},
{
"answer": "Enzymes alter the chemical structure of the substrate slightly to make it more likely to undergo change. \n\nSo, you have your enzyme and substrate which collide at the right angle and with sufficient energy to form an enzyme-substrate complex. The bonds between these two cause a small change in the structure of the substrate for the change to happen, then then the new molecule (substrate) is released by another chemical change. \n\nIn my Biochemistry I class we focused on the mechanism for chymotrypsin goes through. See the mechanism [here](_URL_0_).\n\nHope this helps! \n\nEdit: spelling.",
"provenance": null
},
{
"answer": "An answer I know! \n\nThere are several reasons that enzymes lower activation energy, the primary reasons being that:\n\n* an enzyme aligns two substrates together in such a way that make the substrates react better/faster. Think of two lego blocks being two substrates and our hands being the enzyme. Our hands align the legos in the appropriate way to stack together, or “react”. Otherwise, legos by themselves would fall on each other any such way, which won’t always make them stackable (imagine the top ends touching, for example). Less energy is needed for a reaction when the legos are arranged to bond more favorably, versus just being randomly mixed together any such way. **Enzymes align the substrates in such a way that make them easier to react.**\n\n* when an enzyme is bonded to a substrate to form a substrate-enzyme complex, the concentration of the substrate by itself is temporarily “decreased” around the enzyme, because the substrate alone becomes part of a substrate-enzyme complex. So this substrate-enzyme complex actually causes substrate concentration to readjust, bringing more substrate closer to the enzyme, to equalize concentration. In terms of equilibrium, substrate and substrate-enzyme complex concentrations are two different entities, but realistically, if forming a substrate-enzyme complex causes additional substrate to move closer to the enzyme, a reaction will more easily take place due to the greater availability of the substrate. **The concentration of substrate molecules naturally increase around an enzyme, due to the formation of substrate-enzyme complexes. With substrate molecules more readily available, reactions are more favorable.**\n\nWith both of these reasons combined, it is apparent why less energy would be needed to make a reaction occur with an enzyme. An enzyme arranges substrate in energetically favorable ways while also attracting additional substrate towards the enzyme. \n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "9257",
"title": "Enzyme",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 974,
"text": "Like all catalysts, enzymes increase the reaction rate by lowering its activation energy. Some enzymes can make their conversion of substrate to product occur many millions of times faster. An extreme example is orotidine 5'-phosphate decarboxylase, which allows a reaction that would otherwise take millions of years to occur in milliseconds. Chemically, enzymes are like any catalyst and are not consumed in chemical reactions, nor do they alter the equilibrium of a reaction. Enzymes differ from most other catalysts by being much more specific. Enzyme activity can be affected by other molecules: inhibitors are molecules that decrease enzyme activity, and activators are molecules that increase activity. Many therapeutic drugs and poisons are enzyme inhibitors. An enzyme's activity decreases markedly outside its optimal temperature and pH, and many enzymes are (permanently) denatured when exposed to excessive heat, losing their structure and catalytic properties.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3043886",
"title": "Enzyme kinetics",
"section": "Section::::Single-substrate reactions.:Michaelis–Menten kinetics.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 621,
"text": "Majority of the enzymes are proteins and they speeds up the rate of biochemical reactions by decreasing the activation energy. During the process enzymes combines with the substrate and convert it into product. Enzymes may have single or multiple substrate binding sites (catalytic site). As substrate concentration goes on increasing, catalytic sites may get filled progressively. In the initial period velocity of the enzyme catalyzed reaction is directly proportional to substrate concentration. Michaelis and Menten specifically focused on the initial period of the reaction (initial velocity) to develop this model.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2634856",
"title": "Enzyme assay",
"section": "Section::::Enzyme units.:Enzyme activity.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 219,
"text": "An increased amount of substrate will increase the rate of reaction with enzymes, however once past a certain point, the rate of reaction will level out because the amount of active sites available has stayed constant.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "38034479",
"title": "Enzyme promiscuity",
"section": "Section::::Introduction.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 399,
"text": "Enzymes are evolved to catalyse a particular reaction on a particular substrate with a high catalytic efficiency (\"k/K\", \"cf\". Michaelis–Menten kinetics). However, in addition to this main activity, they possess other activities that are generally several orders of magnitude lower, and that are not a result of evolutionary selection and therefore do not partake in the physiology of the organism.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3043886",
"title": "Enzyme kinetics",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 416,
"text": "Enzyme kinetics is the study of the chemical reactions that are catalysed by enzymes. In enzyme kinetics, the reaction rate is measured and the effects of varying the conditions of the reaction are investigated. Studying an enzyme's kinetics in this way can reveal the catalytic mechanism of this enzyme, its role in metabolism, how its activity is controlled, and how a drug or an agonist might inhibit the enzyme.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "540448",
"title": "Product (chemistry)",
"section": "Section::::Biochemistry.:Product inhibition.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 350,
"text": "Some enzymes are inhibited by the product of their reaction binds to the enzyme and reduces its activity. This can be important in the regulation of metabolism as a form of negative feedback controlling metabolic pathways. Product inhibition is also an important topic in biotechnology, as overcoming this effect can increase the yield of a product.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2160013",
"title": "Regulatory enzyme",
"section": "Section::::Covalently modulated enzymes.:Phosphorylation.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 252,
"text": "Phosphorylation or dephosphorylation make the enzyme be functional at the time when the cell needs the reaction to happen. The effects produced by the addition of phosphoryl groups that regulate the kinetics of a reaction can be divided in two groups:\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
57ss1w
|
why do some people believe ai (artificial intelligence) will take over the world
|
[
{
"answer": "One of the larger concerns with AI is that if not programmed correctly, it will pose various risks to humans by simply being too good at its job. This is illustrated pretty well with the \"paperclip \nmaximizer\" thought experiment: \n\n_URL_0_\n\nIn addition, popular media has done a pretty good job of hyping up malicious AI as a possible doomsday scenario.",
"provenance": null
},
{
"answer": "In theory, it might be possible for an AI to improve itself, becoming smarter faster than humans can keep up with it. It could become capable of doing anything with any computer system in the world (that isn't physically isolated), such as taking control of power grids, banks, dams, satellites, military hardware, etc These things are all vulnerable to human hackers but are defended by human means; an AI could overcome any possible defense.\n\nAn AI may be smarter than humans but lack morals, or have it's own alien morality system, for example it might see humans as the greatest threat to itself/the planet/the universe and so it may decide it should eliminate humans.\n\nAn AI controlling a toaster oven is not dangerous, but an AI that could both improve itself beyond it's design and take control of other systems would be incredibly dangerous.\n\nIt's just as likely that such an AI would seek to help humanity instead but that's less interesting as a literary conflict.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "54245",
"title": "Technological singularity",
"section": "Section::::Impact.:Existential risk.\n",
"start_paragraph_id": 76,
"start_character": 0,
"end_paragraph_id": 76,
"end_character": 392,
"text": "Anders Sandberg has also elaborated on this scenario, addressing various common counter-arguments. AI researcher Hugo de Garis suggests that artificial intelligences may simply eliminate the human race for access to scarce resources, and humans would be powerless to stop them. Alternatively, AIs developed under evolutionary pressure to promote their own survival could outcompete humanity.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "813176",
"title": "AI takeover",
"section": "Section::::Warnings.\n",
"start_paragraph_id": 55,
"start_character": 0,
"end_paragraph_id": 55,
"end_character": 1018,
"text": "Physicist Stephen Hawking, Microsoft founder Bill Gates and SpaceX founder Elon Musk have expressed concerns about the possibility that AI could develop to the point that humans could not control it, with Hawking theorizing that this could \"spell the end of the human race\". Stephen Hawking said in 2014 that \"Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.\" Hawking believed that in the coming decades, AI could offer \"incalculable benefits and risks\" such as \"technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand.\" In January 2015, Nick Bostrom joined Stephen Hawking, Max Tegmark, Elon Musk, Lord Martin Rees, Jaan Tallinn, and numerous AI researchers, in signing the Future of Life Institute's open letter speaking to the potential risks and benefits associated with artificial intelligence. The signatories \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1164",
"title": "Artificial intelligence",
"section": "Section::::Philosophy and ethics.:Potential harm.:Existential risk.\n",
"start_paragraph_id": 186,
"start_character": 0,
"end_paragraph_id": 186,
"end_character": 275,
"text": "Physicist Stephen Hawking, Microsoft founder Bill Gates, and SpaceX founder Elon Musk have expressed concerns about the possibility that AI could evolve to the point that humans could not control it, with Hawking theorizing that this could \"spell the end of the human race\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21666977",
"title": "AI effect",
"section": "Section::::Legacy of the AI winter.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 289,
"text": "Many AI researchers find that they can procure more funding and sell more software if they avoid the tarnished name of \"artificial intelligence\" and instead pretend their work has nothing to do with intelligence at all. This was especially true in the early 1990s, during the \"AI winter\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1164",
"title": "Artificial intelligence",
"section": "Section::::In fiction.\n",
"start_paragraph_id": 237,
"start_character": 0,
"end_paragraph_id": 237,
"end_character": 488,
"text": "Several works use AI to force us to confront the fundamental of question of what makes us human, showing us artificial beings that have the ability to feel, and thus to suffer. This appears in Karel Čapek's \"R.U.R.\", the films \"A.I. Artificial Intelligence\" and \"Ex Machina\", as well as the novel \"Do Androids Dream of Electric Sheep?\", by Philip K. Dick. Dick considers the idea that our understanding of human subjectivity is altered by technology created with artificial intelligence.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1164",
"title": "Artificial intelligence",
"section": "",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 562,
"text": "The field was founded on the claim that human intelligence \"can be so precisely described that a machine can be made to simulate it\". This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence which are issues that have been explored by myth, fiction and philosophy since antiquity. Some people also consider AI to be a danger to humanity if it progresses unabated. Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "51344878",
"title": "Thomas G. Dietterich",
"section": "Section::::Dangers of AI: an academic perspective.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 684,
"text": "It should be noted that most proponents of taking AI seriously as an existential risk do not believe that machines will become self-aware, either (see e.g. here). Additionally, there is nearly universal agreement among the people that advocate for taking the existential risk from AI serious, that advanced AI systems will not suddenly develop a negative intent (hate or anger) against humanity. and then decide to run amok. Instead, much of the work done in the AI safety community does indeed focus around accidents and design flaws. It is thus unclear in how far Dietterich is merely attacking a straw man version of the argument for existential risk from artificial intelligence.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2sfem0
|
how is normalizing relations with cuba going to affect the us economy?
|
[
{
"answer": "It will have some positive effect, but I think you're right that it's not going to be hugely impacting one way or the other.\n\nLifting the embargo will take quite some time because our laws will have to change. There's no clear agreement that lifting the embargo is a good thing, and our lawmakers seem to be having a hard time doing anything at all lately.",
"provenance": null
},
{
"answer": "From what I understand, this is a potentially great opportunity for many north american companies to start investing in a new market. If the embargo were to be lifted, and a while after Cuba restructures its local infrastructure and economy, we'll start to see companies such as GM or Ford shipping their merchandise over to Cuba. To keep this short, both the American and Cuban economies will be exposed to each others products which in turn will drive up the profits in a win win scenario for the U.S and Cuba.\n\nAs for negatives, the only that comes into mind in terms of economy, is that this lifting of the embargo could potentially affect Puerto Rico's economy. This is due to the fact that the U.S might shift market investments towards Cuba instead of P.R 's flailing economy. Then again Puerto Rico could benefit from the embargo if the countries leaders and businessmen decide to ship our local products and companies over to Cuba.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "18776573",
"title": "Pink tide",
"section": "Section::::History.:End of commodity boom and decline: 2010s.:Economy and social development.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 455,
"text": "Economic hardships occurred in countries such as Argentina, Brazil and Venezuela as oil and commodity prices declined and according to analysts because of their unsustainable policies. In regard to the economic situation, President of Inter-American Dialogue Michael Shifter stated: \"The United States–Cuban Thaw occurred with Cuba reapproaching the United States when Cuba's main international partner, Venezuela, began experiencing economic hardships\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "50206",
"title": "Fulgencio Batista",
"section": "Section::::Military coup and second presidency (1952–1959).:Support of U.S. business and government.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 943,
"text": "In a manner that antagonized the Cuban people, the U.S. government used its influence to advance the interests of and increase the profits of the private American companies, which \"dominated the island's economy\". By the late 1950s, U.S. financial interests owned 90% of Cuban mines, 80% of its public utilities, 50% of its railways, 40% of its sugar production and 25% of its bank deposits—some $1 billion in total. According to historian Louis Perez, author of the book \"On Becoming Cuban\", \"Daily life had developed into a relentless degradation, with the complicity of political leaders and public officials who operated at the behest of American interests.\" As a symbol of this relationship, ITT Corporation, an American-owned multinational telephone company, presented Batista with a Golden Telephone, as an \"expression of gratitude\" for the \"excessive telephone rate increase\" that Batista granted at the urging of the U.S. government.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22906446",
"title": "Cuban sugar economy",
"section": "Section::::Dependence on the Soviet Union.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 527,
"text": "Due to its historical dependence on sugar, the Cuban economy was tied to external markets and price fluctuations. Moreover, the United States remained the major source of capital and technology. After the Cuban Revolution, Fidel Castro's government sought to end the mono-production of sugar and shift the Cuban economy towards self-reliance through industrialization and economic diversification. However, the industrialization effort failed while sugar production decreased and Cuba was forced to return to sugar production.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4736325",
"title": "Tourism in Cuba",
"section": "Section::::Social impacts of tourism.:Tourist vs Cuban hotels.\n",
"start_paragraph_id": 39,
"start_character": 0,
"end_paragraph_id": 39,
"end_character": 518,
"text": "Cuba's tourism policies of the early 1990s, which were driven by the government's pressing need to earn hard currency, had a major impact on the underlying egalitarianism espoused by the Cuban revolution. Two parallel economies and societies quickly emerged, divided by their access to the newly legalized U.S. dollar. Those having access to dollars through contact with the lucrative tourist industry suddenly found themselves at a distinct financial advantage over professional, industrial and agricultural workers.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5231073",
"title": "Cuba–United States relations",
"section": "Section::::U.S. public opinion on Cuba–United States relations.\n",
"start_paragraph_id": 77,
"start_character": 0,
"end_paragraph_id": 77,
"end_character": 337,
"text": "Over time, the United States' laws and foreign policy regarding Cuba has changed drastically due to strained relationship. Beginning with opposition to the Castro led Independence Revolution in Cuba, the Spanish–American War, naval use of Guantanamo Bay, trade restrictions imposed by Nixon, and a trade embargo opened in the year 2000.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5042481",
"title": "Cuba",
"section": "Section::::History.:Revolution and Communist party rule (1959–present).\n",
"start_paragraph_id": 48,
"start_character": 0,
"end_paragraph_id": 48,
"end_character": 798,
"text": "The United States government initially reacted favorably to the Cuban revolution, seeing it as part of a movement to bring democracy to Latin America. Castro's legalization of the Communist party and the hundreds of executions of Batista agents, policemen and soldiers that followed caused a deterioration in the relationship between the two countries. The promulgation of the Agrarian Reform Law, expropriating thousands of acres of farmland (including from large U.S. landholders), further worsened relations. In response, between 1960 and 1964 the U.S. imposed a range of sanctions, eventually including a total ban on trade between the countries and a freeze on all Cuban-owned assets in the U.S. In February 1960, Castro signed a commercial agreement with Soviet Vice-Premier Anastas Mikoyan.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "38301",
"title": "Fidel Castro",
"section": "Section::::Premiership.:Economic stagnation and Third World politics: 1969–1974.\n",
"start_paragraph_id": 68,
"start_character": 0,
"end_paragraph_id": 68,
"end_character": 891,
"text": "Cuba's economy grew in 1974 as a result of high international sugar prices and new credits with Argentina, Canada, and parts of Western Europe. A number of Latin American states called for Cuba's re-admittance into the Organization of American States (OAS), with the U.S. finally conceding in 1975 on Henry Kissinger's advice. Cuba's government underwent a restructuring along Soviet lines, claiming that this would further democratization and decentralize power away from Castro. Officially announcing Cuba's identity as a socialist state, the first National Congress of the Cuban Communist Party was held, and a new constitution adopted that abolished the position of President and Prime Minister. Castro remained the dominant figure in governance, taking the presidency of the newly created Council of State and Council of Ministers, making him both head of state and head of government.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
ieekm
|
Is there a correlation between race and intelligence?
|
[
{
"answer": "It turns out that there is an arguably large cultural bias on many of these intelligence tests, combined with confounding factors like low socioeconomic status (poverty) and minority status in the US. For this one specific test, there may not be doubt that there are differences (on average) between ethnicity groups - the incorrect part of the argument is that the 'racial' group is the *reason* for the higher/lower score.\n\ntl;dr there's no difference in actual intelligence just based upon race - many other factors get in the way and make it look like there are differences.",
"provenance": null
},
{
"answer": "As with most social sciences, the difficulty here is separating your variables.\n\nWe know that race is correlated with socioeconomic status. We also know that nutrition, access to education, and culture biases play a role in the development of intelligence, and on performance on tests.\n\nSo even if you had a test definitively showing differences in IQ by race, it would be hard to link the result solely to race.\n\n*The Bell ~~Jar~~ Curve* is generally regarded as bad science, but it is very distressing how it was shouted down as a question science should not even ask. If we could isolate a physical mechanism to caused lower intelligence, and possibly cure it, we'd wind up with a lot of smarter people. ",
"provenance": null
},
{
"answer": "_URL_0_\n\nthis seems to suggest that there is.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "26494",
"title": "Race and intelligence",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 740,
"text": "The connection between race and intelligence has been a subject of debate in both popular science and academic research since the inception of IQ testing in the early 20th century. There remains some debate as to whether and to what extent differences in intelligence test scores reflect environmental factors as opposed to genetic ones, as well as to the definitions of what \"race\" and \"intelligence\" are, and whether they can be objectively defined. Currently, there is no non-circumstantial evidence that these differences in test scores have a genetic component, although some researchers believe that the existing circumstantial evidence makes it at least plausible that hard evidence for a genetic component will eventually be found.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26926596",
"title": "History of the race and intelligence controversy",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 619,
"text": "The history of the race and intelligence controversy concerns the historical development of a debate, concerning possible explanations of group differences encountered in the study of race and intelligence. Since the beginning of IQ testing around the time of World War I there have been observed differences between average scores of different population groups, but there has been no agreement about whether this is mainly due to environmental and cultural factors, or mainly due to some genetic factor, or even if the dichotomy between environmental and genetic factors is the most effectual approach to the debate.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26926596",
"title": "History of the race and intelligence controversy",
"section": "Section::::History.:1920–1960.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 370,
"text": "In 1929, Robert Woodworth, in his textbook \"Psychology: a study of mental life\" made no claims about innate differences in intelligence between races, pointing instead to environmental and cultural factors. He considered it advisable to \"suspend judgment and keep our eyes open from year to year for fresh and more conclusive evidence that will probably be discovered\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26494",
"title": "Race and intelligence",
"section": "Section::::Validity of race and IQ.:Race.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 1215,
"text": "Race in studies of human intelligence is almost always determined using self-reports, rather than based on analyses of the genetic characteristics of the tested individuals. According to psychologist David Rowe, self-report is the preferred method for racial classification in studies of racial differences because classification based on genetic markers alone ignore the \"cultural, behavioral, sociological, psychological, and epidemiological variables\" that distinguish racial groups. Hunt and Carlson write that \"Nevertheless, self-identification is a surprisingly reliable guide to genetic composition. applied mathematical clustering techniques to sort genomic markers for over 3,600 people in the United States and Taiwan into four groups. There was almost perfect agreement between cluster assignment and individuals' self-reports of racial/ethnic identification as white, black, East Asian, or Latino.\" Sternberg and Grigorenko disagree with Hunt and Carlson's interpretation of Tang, \"Tang et al.'s point was that ancient geographic ancestry rather than current residence is associated with self-identification and not that such self-identification provides evidence for the existence of biological race.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14892",
"title": "Intelligence quotient",
"section": "Section::::Group differences.\n",
"start_paragraph_id": 123,
"start_character": 0,
"end_paragraph_id": 123,
"end_character": 367,
"text": "Among the most controversial issues related to the study of intelligence is the observation that intelligence measures such as IQ scores vary between ethnic and racial groups and sexes. While there is little scholarly debate about the \"existence\" of some of these differences, their \"causes\" remain highly controversial both within academia and in the public sphere.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26494",
"title": "Race and intelligence",
"section": "Section::::Group differences.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 816,
"text": "Hunt and Carlson outlined four contemporary positions on differences in IQ based on race or ethnicity. The first is that these reflect real differences in average group intelligence, which is caused by a combination of environmental factors and heritable differences in brain function. A second position is that differences in average cognitive ability between races are caused entirely by social and/or environmental factors. A third position holds that differences in average cognitive ability between races do not exist, and that the differences in average test scores are the result of inappropriate use of the tests themselves. Finally, a fourth position is that either or both of the concepts of race and general intelligence are poorly constructed and therefore any comparisons between races are meaningless.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26494",
"title": "Race and intelligence",
"section": "Section::::Environmental influences on group differences in IQ.\n",
"start_paragraph_id": 47,
"start_character": 0,
"end_paragraph_id": 47,
"end_character": 1692,
"text": "The following environmental factors are some of those suggested as explaining a portion of the differences in average IQ between races. These factors are not mutually exclusive with one another, and some may, in fact, contribute directly to others. Furthermore, the relationship between genetics and environmental factors may be complicated. For example, the differences in socioeconomic environment for a child may be due to differences in genetic IQ for the parents, and the differences in average brain size between races could be the result of nutritional factors. All recent reviews agree that some environmental factors that are unequally distributed between racial groups have been shown to affect intelligence in ways that could contribute to the test score gap. However, currently, the question is whether these factors can account for the entire gap between white and black test scores, or only part of it. One group of scholars, including Richard E. Nisbett, James R. Flynn, Joshua Aronson, Diane Halpern, William Dickens, Eric Turkheimer (2012) have argued that the environmental factors so far demonstrated are sufficient to account for the entire gap. Nicholas Mackintosh (2011) considers this a reasonable argument, but argues that probably it is impossible to ever know for sure; another group including Earl B. Hunt (2010), Arthur Jensen, J. Philippe Rushton and Richard Lynn have argued that this is impossible. Jensen and Rushton consider that it may account for as little as 20% of the gap. Meanwhile, while Hunt considers this a vast overstatement, he nonetheless considers it likely that some portion of the gap will eventually be shown to be caused by genetic factors.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
6ostsb
|
why does baking soda help get rid if mouth ulcers?
|
[
{
"answer": "Baking soda is a base and often ulcers are irritated by the acidity of our saliva. So if you the baking soda with the saliva create a neutral environment for the sore to heal. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "22022246",
"title": "Minimal intervention dentistry",
"section": "Section::::Approach to restorative dentistry.:Treatment: controlling and curing.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 741,
"text": "Classical dietary and oral hygiene techniques of reducing sugar content and eating frequency, and removing plaque by effective brushing, are still very important practices for treatment as well as prevention. Also, biochemical techniques can be used to treat the bacterial infection directly. Agents such as chlorhexidine can help fight gum disease and thus reduce the amount of bacteria in the mouth that are responsible for tooth decay. After a wave of empirical studies on the efficacy of Xylitol (a sugar alcohol) a consensus report in the \"British Dental Journal\" considered it to give a reduction in the risk of caries. There is also increasing use of newer technologies such as \"photo-activated disinfection\" and treating with ozone.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1145905",
"title": "Tooth whitening",
"section": "Section::::Methods.:At Home.:Natural (alternative) methods.\n",
"start_paragraph_id": 68,
"start_character": 0,
"end_paragraph_id": 68,
"end_character": 604,
"text": "Baking Soda is a safe, low abrasive and effective stain removal and tooth whitening dentrifice. Dentrifices that have excessive abrasivity are harmful to dental tissue, therefore baking soda is a desirable alternative. To date, clinical studies on baking soda report that there have been no reported adverse effects. It also contains acid-buffering components that makes baking soda biologically antibacterial at high concentrations and capable of preventing growth of \"Streptococcus Mutans.\" Baking soda might be useful for caries prone patients as well as those who are after a tooth whitening effect.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9812712",
"title": "Tooth brushing",
"section": "Section::::Toothbrushing guidelines.:Timing.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 341,
"text": "One study found that brushing immediately after an acidic meal (diet soda) caused more damage to enamel and the dentin, compared to waiting 30 minutes. Flushing the acid away with water or dissolved baking soda could help reduce acid damage exacerbated by brushing. The same response was recommended for acid re-flux and other acidic meals.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "414350",
"title": "Tooth decay",
"section": "Section::::Prevention.:Dietary modification.\n",
"start_paragraph_id": 91,
"start_character": 0,
"end_paragraph_id": 91,
"end_character": 384,
"text": "In the presence of sugar and other carbohydrates, bacteria in the mouth produce acids that can demineralize enamel, dentin, and cementum. The more frequently teeth are exposed to this environment, the more likely dental caries is to occur. Therefore, minimizing snacking is recommended, since snacking creates a continuous supply of nutrition for acid-creating bacteria in the mouth.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1605200",
"title": "Salt",
"section": "Section::::Edible salt.:Fortified table salt.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 446,
"text": "A lack of fluorine in the diet is the cause of a greatly increased incidence of dental caries. Fluoride salts can be added to table salt with the goal of reducing tooth decay, especially in countries that have not benefited from fluoridated toothpastes and fluoridated water. The practice is more common in some European countries where water fluoridation is not carried out. In France, 35% of the table salt sold contains added sodium fluoride.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "53951",
"title": "Diarrhea",
"section": "Section::::Management.:Fluids.\n",
"start_paragraph_id": 112,
"start_character": 0,
"end_paragraph_id": 112,
"end_character": 1164,
"text": "Oral rehydration solution (ORS) (a slightly sweetened and salty water) can be used to prevent dehydration. Standard home solutions such as salted rice water, salted yogurt drinks, vegetable and chicken soups with salt can be given. Home solutions such as water in which cereal has been cooked, unsalted soup, green coconut water, weak tea (unsweetened), and unsweetened fresh fruit juices can have from half a teaspoon to full teaspoon of salt (from one-and-a-half to three grams) added per liter. Clean plain water can also be one of several fluids given. There are commercial solutions such as Pedialyte, and relief agencies such as UNICEF widely distribute packets of salts and sugar. A WHO publication for physicians recommends a homemade ORS consisting of one liter water with one teaspoon salt (3 grams) and two tablespoons sugar (18 grams) added (approximately the \"taste of tears\"). Rehydration Project recommends adding the same amount of sugar but only one-half a teaspoon of salt, stating that this more dilute approach is less risky with very little loss of effectiveness. Both agree that drinks with too much sugar or salt can make dehydration worse.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1232890",
"title": "Iodised salt",
"section": "Section::::No-additive salts for canning and pickling.\n",
"start_paragraph_id": 41,
"start_character": 0,
"end_paragraph_id": 41,
"end_character": 350,
"text": "In contrast to table salt, which often has iodide as well as anticaking ingredients, special canning and pickling salt is made for producing the brine to be used in pickling vegetables and other foodstuffs. This salt has no iodine added because the iodide can be oxidised by the foods and darken them—a harmless but aesthetically undesirable effect.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
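The first answer above argues by acid-base neutralization. A worked sketch of the chemistry it is appealing to, using the standard bicarbonate reaction (the species and stoichiometry are textbook assumptions, not taken from the sources above):

```latex
% Sodium bicarbonate (baking soda) consuming free acid, e.g. in saliva -- assumed textbook reaction
\[ \mathrm{NaHCO_3 + H^+ \longrightarrow Na^+ + H_2CO_3 \longrightarrow Na^+ + H_2O + CO_2\uparrow} \]
% Net effect: free H+ is consumed, so the local pH rises back toward neutral
```

The second arrow is carbonic acid decomposing into water and carbon dioxide, which is why the mixture fizzes on contact with acid; the neutralization step is the part the answer relies on.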
1u2f14
|
What was Lenin's real nature in comparison to Trotsky or Stalin?
|
[
{
"answer": "I would absolutely agree with the first part of his thesis: that Lenin was a brutal dictator and that the myth of Lenin being a good Bolshevik and Stalin being a bad Bolshevik is unsupported by evidence. Lenin clearly wanted to provoke a civil war in Russia, as documents uncovered since the opening of the Russian archives in 1991 have shown. This is because he knew that the Social Revolutionary and Menshevik forces in Russia would not support his violent overthrow of Alexander Kerensky's provisional government. Through both the Russian Civil War 1917-1921 and its accompanying Red Terror, Lenin clearly demonstrated the brutality and ruthlessness that characterized Stalin's regime. Lenin established his secret police, the Cheka, in December 1917, and formed the gulag prison camp system in the same month. Both of which were used to enact horrible terror and brutality on the Russian people. By 1923, over 500,000 people languished in gulag labor camps. In January 1918, Lenin had his Third Soviet Congress pass what was called the Loot the Looters Decree, whose intent was to annihilate Russia's middle and upper classes. Lenin also implemented his War Communism in 1918, an attempt to solve Russia's food crisis, but an attempt that ended in disaster and resulted in a horrible famine. Through all of this, it is clear that in many ways, Lenin displayed the same blindness to human suffering that Stalin did. From the Civil War and the Red Terror alone, I think one can reasonably claim that Lenin was indeed a ruthless and brutal dictator, just like Stalin. \n\nYet, I don't necessarily think that Lenin wanted Stalin to succeed him. I think one can claim that Stalin was in many ways a continuation of Lenin, as both used terror to achieve their goals. But Lenin was pragmatic, as his New Economic Policy of 1921 demonstrates. He was not completely blind to human suffering, as Stalin was. Not only that, but the postscript to Lenin's Testament of early 1923 shows that Lenin feared Stalin's place in the party because of Stalin's complete disregard for human suffering, such as the kind that Stalin displayed in the Georgian Affair of 1922. Not only that, but I think the relationship that Lenin and Trotsky had, especially through Trotsky's role as War Commissar in the Civil War, demonstrates that the two had a closer relationship that Stalin and Lenin.\n\nThus, to answer your first question, I think that to a major extent is the sympathetic image of Lenin incorrect. Lenin was clearly a brutal and ruthless dictator, who although pragmatic, was not afraid to use terror and civil war to accomplish his goals. And to answer your second point, I don't think that Lenin necessarily wanted Stalin to succeed him. Rather, he preferred Trotsky. Stalin's rule however does demonstrate that he in many ways continued what Lenin started. But I think that the poor relationship between Stalin and Lenin at the end of Lenin's life demonstrates that while Stalin did continue with the precedence that Lenin set, Lenin did not necessarily want this to occur.\n\nEven though Lenin did shift away from War Communism to NEP in the 1921, it's impossible to overlook the fact that Lenin was brutal and repressive and that the civil war and terror he used were very effective in guaranteeing his rule. Yet, I think this shift demonstrates Lenin's pragmatism, something that Stalin did not display. 
As such, I don't think that Lenin's late actions support the thesis that he wanted Stalin to succeed him.\n\nSources:\nA People's Tragedy: The Russian Revolution by Orlando Figes",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "43473028",
"title": "Trotsky: A Biography",
"section": "Section::::Background.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 743,
"text": "Lenin favoured Stalin until, too late, their fallout in 1923 really opened Lenin's eyes to the danger of a future with Stalin in power. Trotsky failed to form alliances and was socially inept and never fully accepted in the Bolshevik party leadership, which he had joined late. However, Stalin, contrary to his opponent, was a brilliant politician and political tactician, who was among the few who genuinely understood the consequences and means of political maneuvering in an environment in which appeals to the masses (where the other leaders were strong) had been systematically cut out of the equation by the means of the red-terror and prohibition of most means and vehicles of opposition that they had themselves promoted and embraced.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17888",
"title": "Leon Trotsky",
"section": "Section::::Russian Revolution and aftermath.:After Lenin's death (1924).\n",
"start_paragraph_id": 130,
"start_character": 0,
"end_paragraph_id": 130,
"end_character": 630,
"text": "There was little overt political disagreement within the Soviet leadership throughout most of 1924. On the surface, Trotsky remained the most prominent and popular Bolshevik leader, although his \"mistakes\" were often alluded to by \"troika\" partisans. Behind the scenes, he was completely cut off from the decision-making process. Politburo meetings were pure formalities since all key decisions were made ahead of time by the \"troika\" and its supporters. Trotsky's control over the military was undermined by reassigning his deputy, Ephraim Sklyansky, and appointing Mikhail Frunze, who was being groomed to take Trotsky's place.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39039502",
"title": "Collective leadership in the Soviet Union",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 1155,
"text": "Vladimir Lenin was, according to Soviet literature, the perfect example of a leader ruling in favour of the collective. Stalin's rule was characterised by one-man dominance, which was a deep breach of inner-party democracy and collective leadership; this made his leadership highly controversial in the Soviet Union following his death in 1953. At the 20th Party Congress, Stalin's reign was criticised as the \"cult of the individual\". Nikita Khrushchev, Stalin's successor, supported the ideal of collective leadership but only ruled in a collective fashion when it suited him. In 1964, Khrushchev was ousted due to his disregard of collective leadership and was replaced in his posts by Leonid Brezhnev as First Secretary and by Alexei Kosygin as Premier. Collective leadership was strengthened during the Brezhnev years and the later reigns of Yuri Andropov and Konstantin Chernenko. Mikhail Gorbachev's reforms helped spawn factionalism within the Soviet leadership, and members of Gorbachev's faction openly disagreed with him on key issues. The factions usually disagreed on how little or how much reform was needed to rejuvenate the Soviet system.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "53796628",
"title": "Stalin (Trotsky book)",
"section": "Section::::Summary.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 688,
"text": "Trotsky spends the next few chapters discussing Stalin's increasing role in revolutionary activities with the likes of Vladimir Lenin and Trotsky himself. Many of the revolutionary activities Stalin participated in during the early years of his life were against the Tsarist regime, who ruled Russia at the time. Trotsky is quick to point out the difference between Lenin and Stalin, saying of Lenin, \"The idea of making a fetish of the political machine was not only alien but repugnant to his nature.\" Trotsky contrasts this sentiment of Lenin with a critique of Stalin, saying of him, \"His thinking is too slow, his associations too single-tracked, his style too plodding and barren.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "46941733",
"title": "Government of Vladimir Lenin",
"section": "Section::::Consolidating power: 1917–1918.:Constitutional and Governmental Organization.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 797,
"text": "Lenin was the most significant figure in this governance structure; as well as being the Chairman of Sovnarkom and sitting on the Council of Labor and Defense, he was on the Central Committee and Politburo of the Communist Party. The only individual to have anywhere near this influence was Lenin's right-hand man, Yakov Sverdlov, although the latter died in March 1919 during a flu pandemic. However, in the Russian public imagination it would be Leon Trotsky who was usually seen as the second-in-command; although Lenin and Trotsky had had differences in the past, after 1918 Lenin came to admire Trotsky's skills as an organizer and his ruthlessness in dealing with the Bolsheviks' enemies. Within this Bolshevik inner circle, it was Zinoviev and Kamenev who were personally closest to Lenin.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "44379667",
"title": "Collective leadership",
"section": "Section::::Communist examples.:Soviet Union.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 1250,
"text": "Lenin was, according to Soviet literature, the perfect example of a leader ruling in favour of the collective. Stalin also embodied this style of ruling, with most major policy decisions involving lengthy discussion and debate in the politburo and/or central committee; after his death in 1953, Nikita Khrushchev accused Stalin of one-man dominance, leading to controversy surrounding the period of his rule. At the 20th Party Congress, Stalin's reign was criticized by Khrushchev as a \"personality cult\". As Stalin's successor, Khrushchev supported the ideal of collective leadership but increasingly ruled in an autocratic fashion, his anti-Stalin accusations followed by much the same behaviour which led to accusations of hypocrisy. In 1964, Khrushchev was ousted and replaced by Leonid Brezhnev as First Secretary and by Alexei Kosygin as Premier. Collective leadership was strengthened during the Brezhnev years and the later reigns of Yuri Andropov and Konstantin Chernenko. Mikhail Gorbachev's reforms helped spawn factionalism within the Soviet leadership, and members of Gorbachev's faction openly disagreed with him on key issues. The factions usually disagreed on how little or how much reform was needed to rejuvenate the Soviet system.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "62092",
"title": "Trotskyism",
"section": "Section::::History.:\"Legend of Trotskyism\".\n",
"start_paragraph_id": 51,
"start_character": 0,
"end_paragraph_id": 51,
"end_character": 717,
"text": "During 1922–1924, Lenin suffered a series of strokes and became increasingly incapacitated. Before his death in 1924, while describing Trotsky as \"distinguished not only by his exceptional abilities—personally he is, to be sure, the most able man in the present Central Committee\" and also maintaining that \"his non-Bolshevik past should not be held against him\", Lenin criticized him for \"showing excessive preoccupation with the purely administrative side of the work\" and also requested that Stalin be removed from his position of General Secretary, but his notes remained suppressed until 1956. Zinoviev and Kamenev broke with Stalin in 1925 and joined Trotsky in 1926 in what was known as the United Opposition.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2o1u9j
|
Tuesday Trivia | Never Done: Women’s Work in History
|
[
{
"answer": "Is today’s theme just an excuse for me to post a long winded ramble about my hobbies? Of course not.\n\nBut it’s time to talk about knitting and its fantastically understudied history. Knitting gets very little attention from “real” historians, its history and folklore was only passed orally for a long time, and today it is primarily discussed in pattern books, blogs, and forums. I know of a whopping TWO academic-ish history books about knitting. It’s a history you don’t think about unless you start doing it I think. \n\nI knit my share of junky modern stuff with fun-fur eyelash yarn and such, but I try to do some historic and vintage things because they make me feel a connection with women of the past that I really can’t get elsewhere, even from cooking historic recipes which I also like to do. (I suspect my foremothers would laugh at me feeling historical by making anything in my modern kitchen on my clean electric stove.) Knitting is kinda extra-historical because its largely unchanged from its murky invention, sure there are innovations, like the knitting machine, or those sweet rubber caps for the tips of needles so your stitches don’t slide off the ends, and the fun of superbulky yarns knit on needles [the size of Olive Garden breadsticks,](_URL_1_ ) but in the end hand-knitting is still just you making some fabric with 2 or more pointy sticks and some string. Just like people (especially women-people) did generations and generations before you. \n\nI’ll steal a quote from Franklin Habit who really sums up the appeal: \n\n > Whenever I work through an antique pattern, my thoughts inevitably turn to another knitter, long gone and utterly forgotten, who may have pursued the same course of knits and purls [...] Sometimes she’s an expectant mother puzzling over the Baby’s Hood, or a grandmother with a quiet afternoon turning out yet another Pence Jug. She may be called Ada or Isabel. She may live on the American frontier or in a London row house. She may be knitting under a tree, or beside a coal fire. She often, when confounded by the same vagueness in the pattern that confounds me, indulges in unladylike and possibly anachronistic vulgarities. (“Oh, @#$!% this @#%@^ nightcap,” said Aunt Ada.) ([source](_URL_0_), I’ve been meaning to make the Mrs. Roosevelt Mittens for like 5 years now.) \n\nAnother reason I am drawn to knitting is because [it was a skilled cottage industry for a lot of centuries, especially for Scottish women,](_URL_2_) who would be the bulk of my ancestors. Scotland in particular was known for socks and lacework. Knitting was a source of money people who otherwise couldn’t do any useful labor - it doesn’t require a lot of investment in materials (you just need needles and yarn), nor does it require physical strength, it could be done by people otherwise unemployable like the elderly or infirm, or just farming-people in winter. Another reason knitting is so lovely is because it’s a social or asocial activity as you see fit. You can knit in a group, you can knit with a friend, you can knit on the couch with your husband, or you can knit all by yourself. Many of the women who had to knit for money or to keep their family clothed [did that shit on the go](_URL_5_) between other work. \n\nMy favorite type of knitting is lace knitting, my goal is to someday make a Shetland lace [“wedding ring” shawl](_URL_6_), which is the pinnacle of a lace knitter’s art. But for the meantime I stick to simpler lacework. 
Like most knitters, I have a habit of buying yarn that I think looks really cool in the skein, getting it home, forgetting about it, and years later discovering some butt ugly yarn in my closet and then desperately trying to think of something to use it up on. [Here is my yarn of the moment](_URL_3_), a bulky weight dark purple mohair with a sparkle running through it, which I know must have seemed awesome at the time but now strikes me as compelling evidence that I have truly awful taste and should not be allowed to dress myself. It’s also itchy, and I had 8 goddamned skeins of this to somehow make into something acceptable. I decided on a lace wrap (as wraps/shawls don’t get too close to your body so the itchy wouldn’t be too bad). Mohair yarns’ fuzzy halo kinda “muffles” the visual impact of lace patterns, so I wanted a bolder, simpler lace that would still be visible through the eye-stunning fug of a sparkly mohair. I settled on [Old Shale Stitch,](_URL_7_) which is an old Shetland lace pattern. It’s actually really “Shell Stitch” because it looks like seashells but the early knitting pattern collectors didn’t speak Highland brogue so it got put down in the books as Shale. It’s the traditional edging on [hap shawls](_URL_8_) which are big wool shawls that would be everyday wear, so I thought I’d make my wrap something of an homage to those. It’s also an “unbalanced” lace (that doesn’t stay square as you knit up), so I thought the way it pulled itself into ripples was also kinda neat. \n\nSo, my husband was unexpectedly in the hospital for a few days 2 weeks ago, and when I rushed home after he was admitted I had a few minutes to gather some overnight stuff and then just anything to distract me, and I grabbed: a gay romance novel, a Tupperware container full of soynuts (I don’t remember what the thinking was on this one), and my knitting. And boy did that damn knitting just about save my life. I did not have 2 brains cells to rub together long enough to do any sort of reading so the book got left in my bag the entire time, but I think I knitted about 5 skeins in 48 hours. “I’m sorry about all the mohair!” I said to the cleaning staff as I shredded like a dog blowing coat, compulsively knitting in the guest chair. But that repetitive, productive movement of knitting gave me a comforting connection to countless women before me who had no doubt sat at many besides waiting to see what would become of their loved ones. Husbands have always gotten sick. Illness has always been fearful. And women have always worked through it. (And after all this I forgot to take a picture of the final product last night, I'll see if I can update with a photo when I get home.) \n\nAnyway he’s fine. After Christmas is done I think I’m going to make good on my threats and finally make him a historic [Scottish-pattern gansey](_URL_4_) and force him to wear it. \n\nAnyway, if you’d like to read about the history of knitting, here are the two books: \n\n* *A History of Hand Knitting* by Richard Rutt, from 1987 and EXTREMELY British \n\n* *Knitting by the Fireside and on the Hillside: A History of the Shetland Hand Knitting Industry c.1600-1950* by Linda G. Fryer, from 1994 and not so terribly British ",
"provenance": null
},
{
"answer": "Sorry this is a tad lazy, but I've written on Chinese immigrant prostitution a few times in the past on /r/askhistorians, so I'm going patch together my previous posts on the topic. Excuse the lack of context, these were all answers to distinct questions that were not 100% about prostitution.\n\n > One type of Chinese slavery that did occur with more frequency, even after the passage of the 13th amendment, was the sexual slavery of Chinese prostitutes. As with anything, the individual quality of life for these prostitutes varied, but was of course abysmal. They were often beaten, abused, etc. The prostitutes came from China (typically southern, Cantonese-speaking provinces like Guangdong). They were usually lured over--either with trickery or outright kidnapping, made to sign a pretty malicious contract, and worked for somewhere around 3-5 years, during which time they were completely the property of the brothel owner, or \"pimp\" to use the modern-day terminology.\n > I am not aware of any black prostitutes in California at this time (though I'm sure they existed), so I can't speak to that. I can tell you that Chinese prostitutes had a better time within the Chinese community than white prostitutes had in the white community. The Chinese prostitutes were usually prostitutes due to family necessity--their homes in China couldn't take care of them, the family needed money, etc. The Californian Chinese community knew this, and thus treated the Chinese prostitutes not as \"dirty whores\" the way some white prostitutes were looked at, but instead as disadvantaged women, trying to do what was right for the family.\n\n.\n\n > In the earlier days of Chinatown, the Chinese quarter had a significant seedy underbelly, which had things like opium dens. One of the businesses more frequently attended by whites were the brothels. Rumors that Chinese vaginas were shaped differently than American ones led to the popularity of the \"ten-cent lookee\"--providing a cheap sexual outlet for young white laborers, sailors, et al.\n\n.\n\n > Because of the under-the-table nature of prostitution, as well as Chinese presence in general, it's hard to estimate how many women served a prostitutes. If we look at the numbers and err on the side of more prostitution, we can get a figure as high as 85% for the peak percentage of women serving as prostitutes (this number is specific to San Francisco). This percentage declined over time, as more women came and started taking the roles of housewives or even laborers and wage-earners.\n\nExcuse me for offering such a dismal portion of \"women's work\"!",
"provenance": null
},
{
"answer": "While my post won't compare to Caffarelli's, my favorite time to study in history was WWII, and place would be Germany, the United States, and Great Britain. Most of us know how the role of Women changed in the United States and Great Britain with women moving into the jobs held by men because the men were all off fighting the war, as well as the women working industrial jobs due to the huge demand for heavy industrial products thanks to the war, but the change of roles in Nazi Germany for women is quite different, and very interesting. In Nazi Germany, women actually went away from the workforce and back to being housewives, a change rarely seen during wartime. Hitler saw no purpose or reason for women to work, and their main job was to take care of the house and MOST importantly, produce children. That is, only if they were of Aryan descent. Women deemed \"unpure\" that wouldn't be necessarily prosecuted under Nazi rule (so not jews, gypsys, etc.) but still deemed second class citizens because they were not of Aryan blood would sometimes undergo forced sterilization. To enforce this, all marriages had to be approved by local government officials, and if a member of the SS was seeking a bride, they potential bride had to undergo a very extensive background check to ensure they were \"worthy\" to marry and have children with Hitler's elite Aryan soldiers. This whole obsession over large families and motherhood started because in the 1920s Germany had the lowest birthrate in Europe, and this was viewed as a problem by Hitler because he thought a high birthrate meant a better chance at victory, not to mention he wanted future Aryans to continue conquest and to settle in what he envisioned as \"the thousand year Reich\" The Nazis declared mothers day a holiday and started giving gold cross awards to elderly mothers with lots of children. Although the award itself was not valuable and held little meaning, it was viewed as a prestigious honor and encouraged women to have children. This along with extensive state propaganda, and a plenitude of financial benefits to young women deemed worthy to have children made the German birthrate take off from it's extremely low point in the 1920s and early 30s. There was a sort of cult of motherhood in place, and it was very desirable and encouraged to have a large family. In fact, the term family was reserved for couples with four or more children. While this may seem off topic since the topic of this thread was Women's work and I discussed the birthrate and motherhood in Nazi Germany, it's really not as that WAS women's jobs in Nazi Germany. It was accepted that women and their fertility belonged to the state, and they owed it to the nation to have a large family. (If they met the standards of course). ",
"provenance": null
},
{
"answer": "OMG, so I recently got a book about propaganda posters during the Cultural Revolution in China, and in it, there's a section about women as subjects, called \"women hold up half the sky\". I'M SO EXCITED TO TALK ABOUT WOMEN IN MY AREA OF HISTORY OMG YAYYYYYYYYYYYYY.\n\n(I know, I was really informal and rather unbecoming in my tone, but I've been excited to write this post all day, or at least since I saw this thread at eight this morning. Please forgive me for the informality of the preceding paragraph. I'll try to contain my excitement a bit.)\n\n\"Women hold up half the sky\" was a propaganda slogan that came into prominence during the Cultural Revolution in China. This was yet another way in which women's roles had been changed, continuing a tradition of redefining a women's place stemming from discussions on how to make China a stronger country in the 19th century.\n\nSome background. Women and their role in society has been a part of national debate and intellectual discussion since the late 19th century. Their status was linked to the country's prosperity and health (of sorts). Intellectuals came to believe that the Chinese woman, with her bound feet and her restrictions encoded in Confucian ideology, was a sign of China's backward nature. As such, it was believed that reforming or revolutionizing her status would lead to a stronger, more modernized China. Reforms included abolishing the concubine system, banning the practice of foot binding, educating women, free choice marriage instead of the arranged marriage system, the right to divorce, and encouraging women to take part in the political -- and outer -- sphere. Most of these early reform efforts were led by men, although some women did take part (both on the Nationalist side and the Communist side).\n\n(I'm getting rather off topic, but if you're curious about this topic, I highly recommend the book *Engendering the Chinese Revolution, Radical Women, Communist Politics, and Mass Movements in the 1920s* by Christina Kelley Gilmartin. The focus is on the Communist Party's gender politics, but there's a chapter in there that does discuss how the Communists and Nationalists worked together to advance women's rights during the First United Front of the 1920s.)\n\nAnyways, so the Communists come into power, brought about various reforms such as the Marriage Law of the 1950s and the land reforms (which allowed women to own land), etc etc etc. Now we get to the topic of this post: propaganda posters!\n\nThe Cultural Revolution had new goals for women, new roles. Women were being encouraged to take on traditionally masculine professions, to stop wearing feminine dress, and to otherwise help build a better Communist nation. For a woman to act more masculine is to lead revolution; being concerned with things like beauty and fashion were seen as bourgeoisie and otherwise *bad*. Because of this, many of the women depicted in these posters have short hair and are dressed in clothes that don't emphasize curves or the feminine form. Furthermore, attention wasn't being drawn to her because she was a woman, but because she was accomplishing tasks that would further state goals and lead to revolution and another new China. Whether she was working in the capacity of an agricultural worker, a student, an air force pilot, an electrical worker, a chemist, or more, she was furthering revolution. It didn't matter what her sex was; what mattered was whether she could do these jobs. 
And as these propaganda posters show, the Chinese Communist Party believed that she *could*. She held up half the sky, remember?\n\nThis time period saw the introduction of the Iron Girl, a woman who was able to take on jobs in heavy industry, construction, and agriculture alongside men. She was able to do the same work that the men did, and she did it well. She was the epitome of what an ideal woman should be during this time period: more masculine, equal in status alongside men, and holding her half of the sky. These Iron Girls were the subject of many propaganda posters, serving as a prominent symbol of the Chinese Communist Party's gender ideology during this period.\n\nHowever, when the Cultural Revolution came to an end, the Iron Girl served as a symbol on what was wrong with Party ideology prior to economic reforms. Femininity became an important trait again, with publications emphasizing that men and women were inherently different from one another. Instead of being held up to the standards of men, people during the 1980s believed that a new standard should be made for women to live up to, in order to account for this \"inherent difference\" between the sexes. (See: *Personal Voices: Chinese Women in the 1980s* by Emily Honig and Gail Hershatter, which discusses this topic at length).\n\n[For the curious, here's a small sample of posters from the time period (with apologizes in advance for using an iPod camera instead of setting up my scanner/printer).](_URL_0_) ",
"provenance": null
},
{
"answer": "Once again I'll deal with some hockey history. This is about Marguerite Norris, the first woman to have her name engraved on the Stanley Cup.\n\nNorris' father was James Norris, Sr. Through a series of questionable business dealings that aren't relevant here, he had control of three of the six teams in the NHL when he died in 1952. To sort this out, his sons had to relinquish control over his initial, and favourite, team, the Detroit Red Wings. James' 24-year-old daughter Marguerite was named president of the team. Ostensibly this was done so the brothers, James, Jr. and Bruce, could exert control over the Red Wings; she was just supposed to be a figurehead.\n\nHowever that didn't work, and she took her job seriously, while ignoring her brothers. Already a strong team (they won the Stanley Cup in 1950 and 1952), the Red Wings won the Cup again in 1954 and 1955.\n\nBy this time the Norris brothers, and the GM of the Red Wings, Jack Adams, had had enough of a woman running the team, and worked to oust her. James, Jr., or Jimmy as he was known, sold his share of the Red Wings to Bruce in exchange for shares in the Chicago Black Hawks. This gave Bruce enough leverage in the Red Wings to appoint himself president of the team, demoting Marguerite to Vice-President, a largely meaningless role. With the Norris' still controlling three of the teams (and they would until the mid-1960s), the NHL was sometimes referred to as the \"Norris House League,\" and not in a positive manner (not that it helped; except for 1961, no Norris-controlled team would win the Cup again).\n\nMarguerite kept on with the team until 1957, when an abortive attempt by several players, including key members of the Red Wings, to form a players' union fell through. She opposed Bruce's reaction, which was to trade the offending players away, and resigned shortly after. Detroit would then endure decades of poor play and humiliation, and not win the Stanley Cup again until 1997 (the Norris family sold the team in the early 1980s).",
"provenance": null
},
{
"answer": "Well, I had done a little prepping for questions about the role of women in the Spanish Civil War, and then no one asked about it in the AMA! So while it doesn't 100 percent fit, here is a bit I had already done a rough write-up of.\n\nViews on gender roles between the Nationalists and Loyalists were quite stark in their differences. Spain had been way behind the rest of the west in terms of women's rights, but the rise of the Republic in 1931 set the country on a crash course of liberalization - one of the major criticism of the Popular Front from the right prior to the war. By the mid-30s, Spain was at the forefront of women's liberation!\n\nIn the case of the Loyalists, progressive attitudes saw women not only being encouraged to work outside the home in in formerly male dominated jobs such as factories, but in some factions (especially the Anarchist CNT-FAI), women fought side by side with men in the militias as they battled the Nationalists. This would become less common however with the consolidation of forces under PCE (Communist) control later in the war.\n\nWhile the Nationalists did encourage women to contribute to the war effort as well, it was much more constrained within framework of traditional domesticity. The Auxilio Social, for instance, was a humanitarian arm of the Falange’s “Seccion Femenina”, and provided nursing and relief work for both soldiers and civilians in Nationalist controlled areas. Such work was only for young, unmarried women, and leaders made clear that a woman’s most important role remained in the home with her family.\n\nWith the defeat of the Loyalists, what changes had happened were quickly reversed. Women were back to being seen as mothers and wives, and lost the equality that they had briefly enjoyed. Spain would again fall behind the rest of the west - only women who were heads of households would be allowed to vote until the 1970s.",
"provenance": null
},
{
"answer": "Well I'll shoot. Women had huge networks of seed exchange as part of the settlement of the west. Women would be all alone on these absurdly isolated farms and one of the ways they would keep materially in touch was exchanging seeds for various crops, vegetables, decorative plants, and flowers! It would be an incredibly difficult project, but someday when tenure is assured and/or I have tons of time, I'd love to travel the country digging through archives to compile a history of women's seed networks across the spreading US through the 19th to twentieth century!\nsighhhhh . . . . . . ",
"provenance": null
},
{
"answer": "So this is one of my favorite stories. It's about a woman named Barira, who was a slave in Arabia in the early 600s. Her example became an important legal precedent for early Islamic scholars. The first scholar to organize stories about Muhammad into a collection of precedents for use by jurists (the *Sahih al-Bukhari*, c.840s) included the story of [Barira](_URL_0_) in 33 different places, suggesting just how important he thought it was.\n\n > Aisha said:\n\n > \tBarira had come to her seek help with her emancipation contract. She had to pay five ounces (of gold) in five yearly installments. Aisha said to her, “Do you think that if I pay the whole sum at once, your masters will sell you to me? If so, then I will free you and your *wala’* (loyalty) will be for me.” Barira went to her masters and told them about the offer. They said that they would not agree to it unless her *wala’* would be for them.\n\n > \tAisha continued: I went to God’s Messenger and told him about it. God’s Messenger said to her, “Buy Barira and manumit her. The *wala’* will be for the liberator.” God’s Messenger then got up and said, “What about those people who stipulate conditions that are not present in God’s laws? If anybody stipulates a condition which is not in God’s laws, then what he stipulates is invalid. God’s conditions are the truth and are more solid.”\n\nThis story doesn't tell us much about Barira's day-to-day work. Other stories about the life and sayings of Muhammad make it clear that some slave women worked as cooks, fortunetellers, household managers, prostitutes, shepherds, tanners, and wet nurses. Not all this work was licit, but Barira's example shows that slave women could engage in these or other types of work to earn their own money, and eventually to buy their freedom.\n\nWhat I think is even more interesting about this story is the opportunities taken by both Barira and Aisha (one of Muhammad's wives). Barira, a slave woman, negotiates and enters into a contract with her masters (i.e. she's partially owned by several different people). Aisha herself has access to what seems to be a substantial sum of money. Between the two of them, they negotiate not only for Barira's freedom, but they also establish who will receive Barira's *wala’*. This is an important relationship, similar to the patron/client relationships of antiquity, and it guarantees that Barira will still have someone obligated to provide for her once she's free. In return Aisha will receive a client obligated to support her, including providing food, hospitality, and requested services. And the hadith ends with Muhammad standing on a pulpit, affirming that these two women had the right to enter into such an agreement, purchasing Barira for manumission and clientage, even on the false condition that her *wala’* will go to her previous owners.\n\nSo although the story of Barira doesn't tell us about the particular labor that women were doing, it does tell us a bit about their legal personhood, their abilities to enter into contracts, and their abilities to accumulate and discharge wealth. Altogether, I think it's a very surprising precedent for what women should be able to do, set by none other than a slave woman laboring at the birth of Islam.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "37775366",
"title": "Makers: Women Who Make America",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 551,
"text": "Makers: Women Who Make America is a 2013 documentary film about the struggle for women's equality in the United States during the last five decades of the 20th century. The film was narrated by Meryl Streep and distributed by the Public Broadcasting Service as a three-part, three-hour television documentary in February 2013. \"Makers\" features interviews with women from all social strata, from politicians like Hillary Clinton and television stars like Ellen DeGeneres and Oprah Winfrey, to flight attendants, coal miners and phone company workers.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "60900034",
"title": "A Woman of the Century",
"section": "Section::::Introduction.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 734,
"text": "The publication of \"A Woman Of The Century\" was undertaken to provide a biographical record of the 19th-century, woman's representation in that era having been recognized. The work was meant to meet the requirements demanded by a discriminating public. It was the most important undertaking of its kind attempted to date. It embraced biographical sketches of women prominently connected with that era of woman's activity—all women considered noteworthy in the church, at the bar, in literature and music, in art and the drama, in science and invention, in social and political reform, in commerce or in philanthropy. It includes women's achievements in various branches of human activity identified with American progress of the day.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14810184",
"title": "Woman in the Nineteenth Century",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 296,
"text": "Woman in the Nineteenth Century is a book by American journalist, editor, and women's rights advocate Margaret Fuller. Originally published in July 1843 in \"The Dial\" magazine as \"The Great Lawsuit. Man versus Men. Woman versus Women\", it was later expanded and republished in book form in 1845.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1271947",
"title": "First-wave feminism",
"section": "Section::::United States.\n",
"start_paragraph_id": 36,
"start_character": 0,
"end_paragraph_id": 36,
"end_character": 823,
"text": "\"Woman in the Nineteenth Century\" by Margaret Fuller has been considered the first major feminist work in the United States and is often compared to Wollstonecraft's \"A Vindication of the Rights of Woman\". Prominent leaders of the feminist movement in the United States include Lucretia Coffin Mott, Elizabeth Cady Stanton, Lucy Stone, and Susan B. Anthony; Anthony and other activists such as Victoria Woodhull and Matilda Joslyn Gage made attempts to cast votes prior to their legal entitlement to do so, for which many of them faced charges. Other important leaders included several women who dissented against the law in order to have their voices heard, (Sarah and Angelina Grimké), in addition to other activists such as Carrie Chapman Catt, Alice Paul, Sojourner Truth, Ida B. Wells, Margaret Sanger and Lucy Burns.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "48901865",
"title": "Historiography of the United Kingdom",
"section": "Section::::Since 1945.:New themes.:Women's history.\n",
"start_paragraph_id": 109,
"start_character": 0,
"end_paragraph_id": 109,
"end_character": 306,
"text": "Women's history started to emerge in the 1970s against the passive resistance of many established men who had long dismissed it as frivolous, trivial, and \"outside the boundaries of history.\" That sentiment persisted for decades in Oxbridge, but has largely faded in the red bricks and newer universities.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18706997",
"title": "Women's Progress Commemorative Commission",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 757,
"text": "The Women's Progress Commemorative Commission is a U.S. bipartisan commission established pursuant to the Women's Progress Commemoration Act (Public Law 105-341, 1998-10-31) under President Bill Clinton. The bill was introduced by Congresswoman Louise Slaughter and Senator Chris Dodd. The commission was tasked with identifying and preserving websites significant to American women's history. It was established in honor of the 150 year anniversary of the Seneca Falls Convention. The commission's first meeting was held 2000-07-12 in Seneca Falls, New York to develop a scope. Subsequent meetings, some sponsored by the National Park Service, included discussions regarding assistance from United State governors as well as problems with data collection.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "60168963",
"title": "Hazel MacKaye",
"section": "Section::::Career.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 584,
"text": "Her 1914 production \"The American Woman: Six Periods of American Life\", presented by the New York City Men's League for Women's Suffrage, \"used historical scenes to expose the specific economic, political, and social oppressions of American women\". It was not a popular success. Her 1915 production \"Susan B. Anthony\", presented at Convention Hall in Washington, D.C., was more successful, raising money for Paul's Congressional Union and celebrating the life of the great early leader of women's suffrage. These productions were huge enterprises, involving hundreds of participants.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2c55li
|
why do commercial airplanes board passengers using the seemingly inefficient "zone" system, rather than filling the seats chronologically?
|
[
{
"answer": "[This](_URL_0_) article from CNN implies that airlines are indeed doing some research in this area, maybe even testing. \n\nHere is a description of a [method](_URL_1_) modeled by Dr. Steffen, a physicist at Fermilab, which seems to cut boarding time roughly in half. ",
"provenance": null
},
{
"answer": "Generally, the zones are based on where the seat is on the aircraft -- they fill back to front, with two important exceptions:\n\n1 - First/Business class board first.\n2 - Preboarding. Generally designed for very frequent flyers in coach, as well as elderly or people with small children, many people do preboard. And there's really no enforcement -- if someone feels they need extra time to preboard, then they will preboard. As a result, when the back-to-front zoned boarding begins, there are already people seated all over the plane.",
"provenance": null
},
{
"answer": "There was research done into the fastest way to board a plane. The result? Completely random boarding.\n\nImagine passengers get in line to board the plane in perfect order from rear to front. The first person in line reaches the back of the plane and proceeds to load his bags in the bins. The passenger behind him must wait for him to finish because he is in the same row, same with the one behind him and so on. There are six people in the rear row but only one of them has reached the back of the plane. The entire line is now backed up. In this case *only one passenger* is actively in the process of boarding *the whole time*.\n\nNow imagine you line the passengers up randomly. Yes, you will get times where one person is blocking the line up at front, but now you will also get times where multiple people are boarding at a time because you have some mixes of people in the back and people in the front getting seated simultaneously. Make sense?\n\nEach zone has people from all over the plane to ensure a random mix of boarding throughout the plane as much as possible.\n\nI suppose if you could be perfectly ordered the best way to do it would be to line everyone with a left window seat rear to front, followed by everyone with a right window seat rear to front, followed by middle seats ect. Realistically the logistics of lining people up like this would just be impossible, especially because now you have to separate families and groups. Random is the easiest way to ensure somewhat efficient boarding.\n\nOne idea I have to slightly speed the process is to board all lone travelers with window seats first. Maybe that's just me being selfish :) though.\n\ntr; dr: Airlines researched the fastest way to board. Random (though it may seem counter intuitive at first) is the best answer.",
"provenance": null
},
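The blocking effect described in the answers above is easy to see in a toy model. Below is a minimal, hypothetical Python sketch (not any airline's or Dr. Steffen's actual model, and the plane size, stow time, and function names are all illustrative assumptions): one aisle, at most one standing passenger per aisle row, and a fixed number of ticks spent blocking the aisle while stowing bags.

```python
import random
import statistics

def board_time(queue, stow_ticks=3):
    """Discrete-time toy simulation of boarding a single-aisle plane.

    queue      : row number of each passenger, in boarding order (row 0 = front).
    stow_ticks : ticks a passenger blocks the aisle at their own row while stowing bags.
    Returns the number of ticks until every passenger is seated.
    """
    n = len(queue)
    pos = [None] * n          # aisle row each passenger currently stands at
    stow = [stow_ticks] * n   # stowing time left once the passenger reaches their row
    seated = [False] * n
    occupied = set()          # aisle rows blocked by a standing passenger
    next_up = 0               # next passenger waiting at the door
    t = 0

    while not all(seated):
        t += 1
        # Move passengers already in the aisle (queue order keeps the front-most first).
        for i in range(n):
            if seated[i] or pos[i] is None:
                continue
            if pos[i] == queue[i]:            # at own row: keep stowing, then sit down
                stow[i] -= 1
                if stow[i] <= 0:
                    seated[i] = True
                    occupied.remove(pos[i])
            elif pos[i] + 1 not in occupied:  # otherwise step one row forward if clear
                occupied.remove(pos[i])
                pos[i] += 1
                occupied.add(pos[i])
        # The next passenger in line steps onto the front of the aisle if it is clear.
        if next_up < n and 0 not in occupied:
            pos[next_up] = 0
            occupied.add(0)
            next_up += 1

    return t

rows, seats_per_row, trials = 30, 6, 200
back_to_front = [r for r in range(rows - 1, -1, -1) for _ in range(seats_per_row)]

random_times = []
for _ in range(trials):
    shuffled = back_to_front[:]
    random.shuffle(shuffled)
    random_times.append(board_time(shuffled))

print("strict back-to-front:", board_time(back_to_front), "ticks")
print("random order (mean): ", round(statistics.mean(random_times)), "ticks")
```

In this toy setup the strict back-to-front queue serializes almost every bag-stow (only one row is ever being loaded), while a shuffled queue lets several rows stow at once, which is exactly the point made above; the sketch deliberately ignores the within-row seat shuffling that the window-first idea addresses.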
{
"answer": null,
"provenance": [
{
"wikipedia_id": "156259",
"title": "Price discrimination",
"section": "Section::::Examples of price discrimination.:Travel industry.\n",
"start_paragraph_id": 52,
"start_character": 0,
"end_paragraph_id": 52,
"end_character": 490,
"text": "Since airlines often fly multi-leg flights, and since no-show rates vary by segment, competition for the seat has to take in the spatial dynamics of the product. Someone trying to fly A-B is competing with people trying to fly A-C through city B on the same aircraft. This is one reason airlines use yield management technology to determine how many seats to allot for A-B passengers, B-C passengers, and A-B-C passengers, at their varying fares and with varying demands and no-show rates.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11025622",
"title": "Airline reservations system",
"section": "Section::::Availability display and reservation (PNR).\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 1983,
"text": "Users access an airline’s inventory through an availability display. It contains all offered flights for a particular city-pair with their available seats in the different booking classes. This display contains flights which are operated by the airline itself as well as code share flights which are operated in co-operation with another airline. If the city pair is not one on which the airline offers service, it may display a connection using its own flights or display the flights of other airlines. The availability of seats of other airlines is updated through standard industry interfaces. Depending on the type of co-operation, it supports access to the last seat (last seat availability) in real-time. Reservations for individual passengers or groups are stored in a so-called passenger name record (PNR). Among other data, the PNR contains personal information such as name, contact information or special services requests (SSRs) e.g. for a vegetarian meal, as well as the flights (segments) and issued tickets. Some reservation systems also allow to store customer data in profiles to avoid data re-entry each time a new reservation is made for a known passenger. In addition, most systems have interfaces to CRM systems or customer loyalty applications (aka frequent traveler systems). Before a flight departs, the so-called passenger name list (PNL) is handed over to the departure control system that is used to check-in passengers and baggage. Reservation data such as the number of booked passengers and special service requests is also transferred to flight operations systems, crew management and catering systems. Once a flight has departed, the reservation system is updated with a list of the checked-in passengers (e.g. passengers who had a reservation but did not check in (no shows) and passengers who checked in, but did not have a reservation (go shows)). Finally, data needed for revenue accounting and reporting is handed over to administrative systems.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1942",
"title": "Airline",
"section": "Section::::Economic considerations.:Assets and financing.\n",
"start_paragraph_id": 119,
"start_character": 0,
"end_paragraph_id": 119,
"end_character": 470,
"text": "In view of the congestion apparent at many international airports, the ownership of slots at certain airports (the right to take-off or land an aircraft at a particular time of day or night) has become a significant tradable asset for many airlines. Clearly take-off slots at popular times of the day can be critical in attracting the more profitable business traveler to a given airline's flight and in establishing a competitive advantage against a competing airline.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "240584",
"title": "Instrument landing system",
"section": "Section::::ILS categories.:Special CAT II and CAT III operations.\n",
"start_paragraph_id": 39,
"start_character": 0,
"end_paragraph_id": 39,
"end_character": 780,
"text": "Some commercial aircraft are equipped with automatic landing systems that allow the aircraft to land without transitioning from instruments to visual conditions for a normal landing. Such autoland operations require specialized equipment, procedures and training, and involve the aircraft, airport, and the crew. Autoland is the only way some major airports such as Charles de Gaulle Airport remain operational every day of the year. Some modern aircraft are equipped with Enhanced flight vision systems based on infrared sensors, that provide a day-like visual environment and allow operations in conditions and at airports that would otherwise not be suitable for a landing. Commercial aircraft also frequently use such equipment for takeoffs when \"takeoff minima\" are not met.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23188969",
"title": "Aerial advertising",
"section": "Section::::Type of aircraft used.:Fixed-wing aircraft.\n",
"start_paragraph_id": 39,
"start_character": 0,
"end_paragraph_id": 39,
"end_character": 311,
"text": "Because of the relatively low speed and altitude ceiling of propeller aircraft, this type is generally favored for the deployment of mobile billboards when fixed-wing aircraft are used. All metropolitan areas in the U.S. can be serviced except New York City and Washington, D.C. which have restricted airspace.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2337070",
"title": "Freighting",
"section": "",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 440,
"text": "Air transportation- Airplanes are used by customers all over the world because of how fast and easy it is. Though because the shipment is in flight and needs an experienced pilot. Air transportation is used for small shipments that usually reach their destination in a week. Usually air transportation is managed by shipping companies such as Fedex, UPS, and or DHL. Air is usually used for its speed, and reliability, though it is costly.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7802776",
"title": "Ground support equipment",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 457,
"text": "Many airlines subcontract ground handling to an airport or a handling agent, or even to another airline. Ground handling addresses the many service requirements of a passenger aircraft between the time it arrives at a terminal gate and the time it departs for its next flight. Speed, efficiency, and accuracy are important in ground handling services in order to minimize the turnaround time (the time during which the aircraft remains parked at the gate).\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
10xcdl
|
If the Earth is shaped like a pear does that mean atmospheric pressure at sea-level differs depending on latitude?
|
[
{
"answer": "Where have you heard that the earth is shaped like a pear?",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "41822",
"title": "Troposphere",
"section": "Section::::Pressure and temperature structure.:Pressure.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 326,
"text": "The pressure of the atmosphere is maximum at sea level and decreases with altitude. This is because the atmosphere is very nearly in hydrostatic equilibrium so that the pressure is equal to the weight of air above a given point. The change in pressure with altitude can be equated to the density with the hydrostatic equation\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "47484",
"title": "Atmospheric pressure",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 838,
"text": "In most circumstances atmospheric pressure is closely approximated by the hydrostatic pressure caused by the weight of air above the measurement point. As elevation increases, there is less overlying atmospheric mass, so that atmospheric pressure decreases with increasing elevation. Pressure measures force per unit area, with SI units of Pascals (1 pascal = 1 newton per square metre, 1N/m). On average, a column of air with a cross-sectional area of 1 square centimetre (cm), measured from mean (average) sea level to the top of Earth's atmosphere, has a mass of about 1.03 kilogram and exerts a force or \"weight\" of about 10.1 newtons, resulting in a pressure of 10.1 N/cm or 101kN/m (101 kilopascals, kPa). A column of air with a cross-sectional area of 1in would have a weight of about 14.7lb, resulting in a pressure of 14.7lb/in.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9228",
"title": "Earth",
"section": "Section::::Physical characteristics.:Atmosphere.\n",
"start_paragraph_id": 49,
"start_character": 0,
"end_paragraph_id": 49,
"end_character": 469,
"text": "The atmospheric pressure at Earth's sea level averages , with a scale height of about . A dry atmosphere is composed of 78.084% nitrogen, 20.946% oxygen, 0.934% argon, and trace amounts of carbon dioxide and other gaseous molecules. Water vapor content varies between 0.01% and 4% but averages about 1%. The height of the troposphere varies with latitude, ranging between at the poles to at the equator, with some variation resulting from weather and seasonal factors.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18122217",
"title": "Vertical datum",
"section": "",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 761,
"text": "Earth is not a sphere, but an irregular shape approximating a biaxial ellipsoid. It is nearly spherical, but has an equatorial bulge making the radius at the Equator about 0.3% larger than the radius measured through the poles. The shorter axis approximately coincides with the axis of rotation. Though early navigators thought of the sea as a flat surface that could be used as a vertical datum, this is not actually the case. Earth has a series of layers of equal potential energy within its gravitational field. Height is a measurement at right angles to this surface, roughly toward Earth's center, but local variations make the equipotential layers irregular (though roughly ellipsoidal). The choice of which layer to use for defining height is arbitrary.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "178383",
"title": "Polar circle",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 588,
"text": "The latitude of the polar circles is 90 degrees minus the axial tilt of the Earth's axis of daily rotation relative to the ecliptic, the plane of the Earth's orbit. This tilt varies slightly, a phenomenon described as nutation. Therefore, the latitudes noted above are calculated by averaging values of tilt observed over many years. The axial tilt also exhibits long-term variations as described in the reference article (a difference of 1 second of arc in the tilt is equivalent to change of about 31 metres north or south in the positions of the polar circles on the Earth's surface).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9228",
"title": "Earth",
"section": "Section::::Physical characteristics.:Shape.\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 237,
"text": "In geodesy, the exact shape that Earth's oceans would adopt in the absence of land and perturbations such as tides and winds is called the geoid. More precisely, the geoid is the surface of gravitational equipotential at mean sea level.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "47484",
"title": "Atmospheric pressure",
"section": "Section::::Altitude variation.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 231,
"text": "At low altitudes above sea level, the pressure decreases by about for every 100 metres. For higher altitudes within the troposphere, the following equation (the barometric formula) relates atmospheric pressure \"p\" to altitude \"h\":\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
8248hw
|
Henry VIII jousting in The Tudors
|
[
{
"answer": "Jousting armour typically had excellent neck and forearm protection. As you note, it would be very dangerous to not have it.\n\nOne example of one of Henry's jousting armours:\n\n_URL_4_\n\nI believe this is the armour he wore for jousting at the Field of the Cloth of Gold tournament.\n\nJousting armours could be field armours (i.e., armours intended for battle) with extra pieces of armour to provide more protection. On example of such additional protection is this *grandguard*:\n\n_URL_2_\n\nwhich was made for this armour:\n\n_URL_5_\n\n(among the 89 photos of this armour on that page, you will see some with this reinforcing piece, and other additional pieces, in place). The grandguard provides additional protection for the left shoulder and neck. The armour it is made for already has excellent neck protection. With this, it has even more excellent neck protection. The real-life Henry VIII would not have put up with the armour in the TV series!\n\nJousting helmets could also be made specially for jousting. For example,\n\n_URL_1_\n\nand\n\n_URL_0_\n\n(this type is often called a \"frog-mouth helm\").\n\nHenry VIII took his combat sports very seriously, and could afford the best in available armour, and his armours show this. Two further examples of this are this armour:\n\n_URL_6_\n\nand his \"spacesuit\" armour:\n\n_URL_3_\n\nwhich is remarkable for its thorough coverage of the insides of joints (which are often unprotected, or protected by mail \"voiders\" in field armours).\n\nEven with that superb protection, jousting was still dangerous. Henry VIII suffered a serious accident in a joust in 1536 when his horse fell on him, resulting in a leg injury and possibly a brain injury. The possible connection between the accident and his later tyranny has been discussed in various documentaries and web articles, but as discussed earlier, here, by u/rbaltimore in r/AskHistorians/comments/4i4e1w/was_henry_viiis_jousting_accident_in_1536_really/ and also elsewhere (e.g., Suzannah Lipscomb, *1536: The Year that Changed Henry VIII*, Lion, 2009, who points out that Henry was on the way to tyranny already), it doesn't seem likely. An earlier accident, in 1524, where he was hit in his helmet while his visor was up, has also been blamed (see Lipscomb).",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "44154",
"title": "Catherine de' Medici",
"section": "Section::::Marriage.:Queen of France.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 1191,
"text": "King Henry took part in the jousting, sporting Diane's black-and-white colours. He defeated the dukes of Guise and Nemours, but the young Gabriel, comte de Montgomery, knocked him half out of the saddle. Henry insisted on riding against Montgomery again, and this time, Montgomery's lance shattered in the king's face. Henry reeled out of the clash, his face pouring blood, with splinters \"of a good bigness\" sticking out of his eye and head. Catherine, Diane, and Prince Francis all fainted. Henry was carried to the Château de Tournelles, where five splinters of wood were extracted from his head, one of which had pierced his eye and brain. Catherine stayed by his bedside, but Diane kept away, \"for fear\", in the words of a chronicler, \"of being expelled by the Queen\". For the next ten days, Henry's state fluctuated. At times he even felt well enough to dictate letters and listen to music. Slowly, however, he lost his sight, speech, and reason, and on 10 July 1559 he died, aged 40. From that day, Catherine took a broken lance as her emblem, inscribed with the words \"\"lacrymae hinc, hinc dolor\"\" (\"from this come my tears and my pain\"), and wore black mourning in memory of Henry.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "47075517",
"title": "Blinding (punishment)",
"section": "Section::::Examples.:In history.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 359,
"text": "In the 11th century, William the Conqueror used blinding as a punishment for rebellion to replace the death penalty in his laws for England. King William was also accused of making the killing of a hart or hind in a royal forest into a crime punishable by blinding, but the Anglo-Saxon Chronicle claims that this was made up to tarnish the King's reputation.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "221562",
"title": "Jousting",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 541,
"text": "Jousting is based on the military use of the lance by heavy cavalry. It transformed into a specialised sport during the Late Middle Ages, and remained popular with the nobility in England and Wales, Germany and other parts of Europe throughout the whole of the 16th century (while in France, it was discontinued after the death of King Henry II in an accident in 1559). In England, jousting was the highlight of the Accession Day tilts of Elizabeth I and of James VI and I, and also was part of the festivities at the marriage of Charles I.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5070109",
"title": "Spanish Armada",
"section": "Section::::History.:Background.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 595,
"text": "Henry VIII began the English Reformation as a political exercise over his desire to divorce his first wife, Catherine of Aragon. Over time, it became increasingly aligned with the Protestant reformation taking place in Europe, especially during the reign of Henry's son, Edward VI. Edward died childless and his half-sister, Mary I, ascended the throne. A devout Catholic, Mary, with her co-monarch and husband, Philip II of Spain, began to reassert Roman influence over church affairs. Her attempts led to more than 260 people being burned at the stake, earning her the nickname 'Bloody Mary'.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "53238888",
"title": "The 1511 Westminster Tournament Roll",
"section": "Section::::Contents.:Membrane 24 to 27 The Joust.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 339,
"text": "The jousting scene, with the Challengers at one end and the Answerers at the other, depicts Henry's joust, just as he shatters his lance on his opponent's helmet, in doing so scoring the highest points. The King's joust is shown as watched over by Queen Katherine along with ladies and gentlemen of the court seated in an ornate pavilion.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2385862",
"title": "Margaret Roper",
"section": "Section::::Relationship with Thomas More.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 552,
"text": "Thomas More was beheaded in 1535 for his refusal to accept the Acts of Supremacy and the Act of Succession (1534) of Henry VIII of England and swear allegiance to Henry as head of the English Church. Afterwards, More's head was displayed on a pike at London Bridge for a month. Roper bribed the man whose business it was to throw the head into the river to give it to her instead. She preserved it by pickling it in spices until her own death at the age of 39 in 1544. After her death, William Roper took charge of the head, and it is buried with him.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "45597",
"title": "Henry V of England",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 575,
"text": "Henry V (16 September 1386 – 31 August 1422), also called Henry of Monmouth, was King of England from 1413 until his early death in 1422. He was the second English monarch of the House of Lancaster. Despite his relatively short reign, Henry's outstanding military successes in the Hundred Years' War against France, most notably in his famous victory at the Battle of Agincourt in 1415, made England one of the strongest military powers in Europe. Immortalised in the plays of Shakespeare, Henry is known and celebrated as one of the great warrior kings of medieval England.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
4nfa7k
|
where do nicknames for enemy soldiers come from?
|
[
{
"answer": " > In the US the bad guys are \"Charlie\"\n\nNo they aren't. That specifically refers to Vietnamese forces in the Vietnam war. The Vietcong (VC) is \"Victor Charlie\" in radio speak. That's where the charlie comes from.\n\n > in the UK they are \"Jerry\"\n\nAgain, it refers to just Germans. Not just any aggressor. Jerry is just an abbreviation of German.",
"provenance": null
},
{
"answer": "No. Each war gives a nickname for enemy troops unique to that war. It is normally related to the short hand for the country/governments of the enemy or it is a common name used by the culture of the enemy. \n\nThe Vietcong (communist Vietnamese) were called Charlie by the US because of the military alphabet \"victor charlie\" being VC. \n\nAll the English speaking world used \"Gerry\" pronounced Jerry for short hand for German. It is derived from the \"Ger.\" abbreviated form of Germany. \n\nThe Germans called Russians \"Ivans\" in both world wars. \n\nDuring the American civil war you had \"Johnny Reb\" for the South and \"Billy Yank\" for the North. As well as just \"Yankee\" and \"Rebel\" used. ",
"provenance": null
},
{
"answer": "From WWI until now:\n**Pejoratives for enemy troops from whichever side the user is on**\n\nGermans- Fritz, Hun, Jerry, Boche (French), Kraut, Alleyman (from Allemand), Hans\n\nOttomans/Turks- Wog\n\nJapanese- Japs, Nips, Tojo\n\nBritish- Tommy, Limey, Les rosbifs\n\nFrench- Frog, Franzmann, Franzacke\n\nRussians- Ivan, Commie, Red, Bolshy, Russki\n\nSerbs- Jugos, Turks\n\nChinese- Slant-eye, Chink, Gook, Chinaman\n\nKorean- Gook, Commie\n\nVietnamese- Charlie, Gook, Slope, Zipperhead\n\nSomalis- Skinnies (unburdened_by_wit)\n\nAfghans- Raghead, Towelhead, Goatfucker, Haji, Muj motherfucker, Terry Taliban (Mr_Katanga) \n\nArabs- See Afghans plus: Ahmed, Derka, Camelfucker\n\nAmericans- Yankee, Gaijin, White Devil, Kuffar/Kaffir, Round Eye\n\n\nFeel free to add to the list if you see something or a whole people missing. Hope this doesn't get me banned.\n\n*Edit: Formatting",
"provenance": null
},
{
"answer": "I see you getting a lot of examples of \"nicknames\" for the enemy, but let me tell you \"Why\" they happen in the first place. \n\nSoldiers have likely done this since the beginning of time and the real reason is that it serves the purpose of dehumanizing your enemy. It's logically easier to \"waste a gook\" or \"smoke a hadji\" than it is to kill a man. When combatants think about the other side as being human beings with mothers and children, killing them becomes tougher to reconcile than a stereotype. It is a very human coping mechanism.",
"provenance": null
},
{
"answer": "It's usually derivations of the phonetic alphabet for the US.\n\nVeitcong = VC = Victor Charlie\n\nTarget = T = Tango\n\nJerry would be German, but more just a derivation of the initial consonant sound.",
"provenance": null
},
{
"answer": "It derives from the enemies culture. Germans were called Krauts because it's a popular food in Germany. Japanese were called Nips because Nippon is the Japanese word for Japan",
"provenance": null
},
{
"answer": "It depends on the war, who's saying it, and who they're describing. They're usually either an abbreviation, slang, and/or a pejorative. Oftentimes they describe a visible aspect of the enemy, such as ethnicity, uniforms, vehicles, insignia, etc. In certain cases, they would be considered racist today (and perhaps racist back then, but no one would have cared too much about that fact). Examples:\n\n*U.S. Civil War\n > North describing South: \"Rebels, Rebs, Dixies\"\n\n > South describing North: \"Yankees, Yanks, Feds, Federals, Blue Bellies\"\n\n*WWII\n\n > Americans describing Japanese: \"Japs, Slits, Meatball\"\n\n > British describing Germans: \"Krauts, Jerry, Hun\"",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "41916620",
"title": "33rd Regiment Alabama Infantry",
"section": "Section::::Regimental commanders, weapons and equipment.:Regimental nicknames.\n",
"start_paragraph_id": 193,
"start_character": 0,
"end_paragraph_id": 193,
"end_character": 785,
"text": "Giving nicknames to soldiers has long been a feature of military life. Private David McCook of Company B was referred to as the \"Skillet Wagon\" by men of other companies, because they were always borrowing his tin pans, buckets or cans for cooking. Private Matthews was called \"Marker\", while others sported such monikers as \"Burnt Tail Coat\", \"Fatty Bread\", \"Mumps\", \"Lousy Jim\", \"Cakes\", \"Keno\", \"Strap\" and \"Sharp.\" Matthews further reports that one evening as the regiment was being inspected by its commander, Colonel Adams called a twenty-year-old recruit to attention, referring to him \"by a name that he would not have given a married man\"; he writes that this name stuck with that man afterward, and was used by the regimental surgeon to warn malingerers away from sick call.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "459502",
"title": "Sturmtruppen",
"section": "Section::::Characters.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 217,
"text": "Most characters don't have proper names but, rather, are called by their military rank or position. Most simple soldiers are given generic \"German\" names such as Otto, Franz, Fritz, etc. Recurring characters include:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4181",
"title": "Battle",
"section": "Section::::Naming.\n",
"start_paragraph_id": 56,
"start_character": 0,
"end_paragraph_id": 56,
"end_character": 525,
"text": "Some battles are named for the convenience of military historians so that periods of combat can be neatly distinguished from one another. Following the First World War, the British Battles Nomenclature Committee was formed to decide on standard names for all battles and subsidiary actions. To the soldiers who did the fighting, the distinction was usually academic; a soldier fighting at Beaumont Hamel on November 13, 1916 was probably unaware he was taking part in what the committee would call the \"Battle of the Ancre\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1028852",
"title": "Easy Company (comics)",
"section": "Section::::Publication history.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 435,
"text": "In the graphic novel \"Between Hell and a Hard Place,\" Sgt. Rock explained that he gave nicknames to Easy Company men because during battle, they would be required to do things their civilian identities might not be able to live with; once the war was over, the nicknames could be left behind once the soldiers resumed their civilian lives. This accounts for the proliferation of unusual character names in Easy Company over the years.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "380358",
"title": "Star Wars sources and analogues",
"section": "Section::::Similarities and inspirations.:Historical.:Modern and early modern history.\n",
"start_paragraph_id": 67,
"start_character": 0,
"end_paragraph_id": 67,
"end_character": 1442,
"text": "The stormtroopers from the movies share a name with the German stormtroopers. Imperial officers' uniforms also resemble some historical German Army uniforms (see Wehrmacht) and the political and security officers of the Empire resemble the black clad SS down to the imitation silver death's head insignia on their officer's caps (although the uniforms technically had more basis with the German Uhlans within the Prussian Empire). World War II terms were used for names in \"Star Wars\"; examples include the planets Kessel (a term that refers to a group of encircled forces), Hoth (Hermann Hoth was a German general who served on the snow laden Eastern Front), and Tatooine (Tataouine - a province south of Tunis in Tunisia, roughly where Lucas filmed for the planet; Libya was a WWII arena of war). Palpatine being Chancellor before becoming Emperor mirrors Adolf Hitler's role as Chancellor before appointing himself Dictator. The Great Jedi Purge alludes to the events of The Holocaust, the Great Purge, the Cultural Revolution, and the Night of the Long Knives. In addition, Lucas himself has drawn parallels between Palpatine and his rise to power to historical dictators such as Julius Caesar, Napoleon Bonaparte, and Adolf Hitler. The final medal awarding scene in \"A New Hope\", however, references Leni Riefenstahl's \"Triumph of the Will\". The space battles in \"A New Hope\" were based on filmed World War I and World War II dogfights.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2234297",
"title": "Carignan-Salières Regiment",
"section": "Section::::Departure and settlement in Canada.\n",
"start_paragraph_id": 61,
"start_character": 0,
"end_paragraph_id": 61,
"end_character": 393,
"text": "The French had a practice of allotting \"noms de guerre\" – nicknames – to their soldiers (this is still continued, but for different reasons, in the Foreign Legion). Many of these nicknames remain today as they gradually became the official surnames of the many soldiers who elected to remain in Canada when their service expired as well as the names of cities and towns throughout New France.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2808124",
"title": "Warboys",
"section": "Section::::History.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 205,
"text": "The place-name 'Warboys' is first attested in a Saxon charter of 974, where it appears as \"Wardebusc\" and \"Weardebusc\". The name is from the Old Norse \"vardi\" and \"buski\", and means 'beacon with bushes'. \n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1ld379
|
what exactly fills up when computer memory is full?
|
[
{
"answer": "Imagine your device as a blank notebook. Now start writing all the 'data' or 'songs' down in the notebook. When it's full, so is your device. To put it as simply as possible. As far as the physical size of the device is concerned I'll go back to the notebook. As we get better at making notebooks we can make them with smaller and smaller lines allowing you to fit more information on each page. The more data per page the more info you can put in a smaller notebook. ",
"provenance": null
},
{
"answer": "All storage on electronic devices is related to physical space. All data used by electronic devices is stored as either 0's or 1's. The 0's and 1's thing confuses many people right off the bat. By 0's and 1's, we actually mean sequences of binary states. What does this mean? Basically sequences of bumps, dots, magnetic fields, voltage states, or anything that can be read as either having, or not having. Think of Morse code. A short beep is a zero, a long beep is a 1. If we come up with systems to separate huge strings of 0's and 1's into numbers and letters, we can store data, like sounds, images, and text. Hard drives (for example) store these strings of 0's and 1's with a magnetic coating on the disk, which is sectioned into pieces for each 0 and 1. When you read files, a scanner moves along the spinning disk and finds whatever file you accessed, then it reads the magnetic field on the disk to assemble a series of 0's and 1's, and that's your file. Various forms of error correction are built in, so that even if part of the magnetic section is lost (disk gets scratched, etc.), the data might still be readable. So yes, the amount of space a device can store is related to physical space, but as technology gets better, we can make the space required to hold a 0 or a 1 smaller. Flash memory, which is what usb sticks/phones/music players use, stores the 0's and 1's in a special circuit board system which retains power even when unplugged.",
"provenance": null
},
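To make the "0's and 1's" point concrete, here is a tiny, hypothetical Python sketch (the sample string and formatting choices are just for illustration): it takes a few characters, prints the bytes and bit patterns a storage device would actually record, and then rebuilds the text from those bits.

```python
# The same few characters, shown as the bytes and bits a device actually stores.
text = "Hi!"
data = text.encode("utf-8")            # characters -> bytes (numbers 0-255)

for byte in data:
    print(f"{byte:3d} -> {byte:08b}")  # each byte as its 8-bit pattern

# Re-reading the bit string, 8 bits at a time, gives the original text back.
bits = "".join(f"{byte:08b}" for byte in data)
restored = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode("utf-8")
print(bits, "->", restored)
```

Running it prints 72 -> 01001000, 105 -> 01101001, 33 -> 00100001 and then reassembles "Hi!"; a disk or flash chip records those same patterns as magnetic regions or charge levels instead of printed digits.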
{
"answer": null,
"provenance": [
{
"wikipedia_id": "78029",
"title": "Magnetic-core memory",
"section": "",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 367,
"text": "Although core memory is obsolete, computer memory is still sometimes called \"core\" even though it's made of semiconductors, particularly by people who had worked with machines having real core memory. And the files that result from saving the entire contents of memory to disk for debugging purposes when a major error occurs are still generally called \"core dumps\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "78029",
"title": "Magnetic-core memory",
"section": "Section::::Description.:Physical characteristics.\n",
"start_paragraph_id": 53,
"start_character": 0,
"end_paragraph_id": 53,
"end_character": 673,
"text": "Core memory is non-volatile storage—it can retain its contents indefinitely without power. It is also relatively unaffected by EMP and radiation. These were important advantages for some applications like first-generation industrial programmable controllers, military installations and vehicles like fighter aircraft, as well as spacecraft, and led to core being used for a number of years after availability of semiconductor MOS memory (see also MOSFET). For example, the Space Shuttle IBM AP-101B flight computers initially used core memory, which preserved the contents of memory even through the \"Challenger\"s disintegration and subsequent plunge into the sea in 1986.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5300",
"title": "Computer data storage",
"section": "Section::::Hierarchy of storage.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 435,
"text": "In contemporary usage, \"memory\" is usually semiconductor storage read-write random-access memory, typically DRAM (dynamic RAM) or other forms of fast but temporary storage. \"Storage\" consists of storage devices and their media not directly accessible by the CPU (secondary or tertiary storage), typically hard disk drives, optical disc drives, and other devices slower than RAM but non-volatile (retaining contents when powered down).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3407145",
"title": "Out of memory",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 437,
"text": "Out of memory (OOM) is an often undesired state of computer operation where no additional memory can be allocated for use by programs or the operating system. Such a system will be unable to load any additional programs, and since many programs may load additional data into memory during execution, these will cease to function correctly. This usually occurs because all available memory, including disk swap space, has been allocated.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5300",
"title": "Computer data storage",
"section": "Section::::Hierarchy of storage.:Secondary storage.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 422,
"text": "Most computer operating systems use the concept of virtual memory, allowing utilization of more primary storage capacity than is physically available in the system. As the primary memory fills up, the system moves the least-used chunks (pages) to a swap file or page file on secondary storage, retrieving them later when needed. If a lot of pages are moved to slower secondary storage, the system performance is degraded.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2691029",
"title": "DRTE Computer",
"section": "Section::::The computer.:Memory system.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 503,
"text": "The computer used core memory for all storage, lacking \"secondary\" systems such as a memory drum. Normally the memory for a machine would be built up by stacking a number of core assemblies, or \"planes\", each one holding a single bit of the machine's word. For instance, with a 40-bit word as in the DRTE, the system would use 40 planes of core. Addresses would be looked up by translating each 10-bit address into an X and Y address in the planes; for 1,024 words in the DTRE this needed 32×32 planes.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8131170",
"title": "Single-level store",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 316,
"text": "Single-level storage (SLS) or single-level memory is a computer storage term which has had two meanings. The two meanings are related in that in both, pages of memory may be in primary storage (RAM) or in secondary storage (disk); however, the current actual physical location of a page is unimportant to a process.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1nt5d0
|
why do we use radioactive metals in nuclear power plants?
|
[
{
"answer": "We don't choose uranium and plutonium because they are radioactive. We choose uranium and plutonium because they happen to split easily when they absorb a neutron.....AND produce more neutrons when they split to continue the reaction on their own.\n\nYou can split just about any element, but most atoms will require you to put a lot of energy/effort into splitting them. ",
"provenance": null
},
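A very rough sketch of that "produces more neutrons" point, purely illustrative (the k values and counts are made up, not reactor physics): each fission generation multiplies the neutron population by an effective factor k, and the chain only sustains itself when k is at least 1.

```python
def neutron_population(k, generations=10, start=1000):
    """Toy model: each generation, every neutron leads on average to k new
    fission neutrons. k < 1 dies out, k = 1 holds steady (a reactor at power),
    k > 1 grows without limit (which reactor control systems prevent)."""
    n = float(start)
    history = []
    for _ in range(generations):
        history.append(round(n))
        n *= k
    return history

for k in (0.9, 1.0, 1.05):
    print(f"k = {k}: {neutron_population(k)}")
```

Fissile isotopes like U-235 are attractive precisely because each split releases two to three new neutrons, which is what makes holding k near 1 possible at all.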
{
"answer": null,
"provenance": [
{
"wikipedia_id": "11872111",
"title": "Naturally occurring radioactive material",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 510,
"text": "Natural radioactive elements are present in very low concentrations in Earth's crust, and are brought to the surface through human activities such as oil and gas exploration or mining, and through natural processes like leakage of radon gas to the atmosphere or through dissolution in ground water. Another example of TENORM is coal ash produced from coal burning in power plants. If radioactivity is much higher than background level, handling TENORM may cause problems in many industries and transportation.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22153",
"title": "Nuclear power",
"section": "Section::::Life cycle of nuclear fuel.:Nuclear waste.:Waste relative to other types.\n",
"start_paragraph_id": 160,
"start_character": 0,
"end_paragraph_id": 160,
"end_character": 807,
"text": "In countries with nuclear power, radioactive wastes account for less than 1% of total industrial toxic wastes, much of which remains hazardous for long periods. Overall, nuclear power produces far less waste material by volume than fossil-fuel based power plants. Coal-burning plants are particularly noted for producing large amounts of toxic and mildly radioactive ash due to concentrating naturally occurring metals and mildly radioactive material in coal. A 2008 report from Oak Ridge National Laboratory concluded that coal power actually results in more radioactivity being released into the environment than nuclear power operation, and that the population effective dose equivalent, or dose to the public from radiation from coal plants is 100 times as much as from the operation of nuclear plants.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "202522",
"title": "Ionizing radiation",
"section": "Section::::Uses.\n",
"start_paragraph_id": 72,
"start_character": 0,
"end_paragraph_id": 72,
"end_character": 699,
"text": "Neutron radiation is essential to the working of nuclear reactors and nuclear weapons. The penetrating power of x-ray, gamma, beta, and positron radiation is used for medical imaging, nondestructive testing, and a variety of industrial gauges. Radioactive tracers are used in medical and industrial applications, as well as biological and radiation chemistry. Alpha radiation is used in static eliminators and smoke detectors. The sterilizing effects of ionizing radiation are useful for cleaning medical instruments, food irradiation, and the sterile insect technique. Measurements of carbon-14, can be used to date the remains of long-dead organisms (such as wood that is thousands of years old).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "37257",
"title": "Radioactive waste",
"section": "Section::::Management.:Initial treatment.:Ion exchange.\n",
"start_paragraph_id": 98,
"start_character": 0,
"end_paragraph_id": 98,
"end_character": 770,
"text": "It is common for medium active wastes in the nuclear industry to be treated with ion exchange or other means to concentrate the radioactivity into a small volume. The much less radioactive bulk (after treatment) is often then discharged. For instance, it is possible to use a ferric hydroxide floc to remove radioactive metals from aqueous mixtures. After the radioisotopes are absorbed onto the ferric hydroxide, the resulting sludge can be placed in a metal drum before being mixed with cement to form a solid waste form. In order to get better long-term performance (mechanical stability) from such forms, they may be made from a mixture of fly ash, or blast furnace slag, and Portland cement, instead of normal concrete (made with Portland cement, gravel and sand).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20825543",
"title": "High-level radioactive waste management",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 361,
"text": "High-level radioactive waste management concerns how radioactive materials created during production of nuclear power and nuclear weapons are dealt with. Radioactive waste contains a mixture of short-lived and long-lived nuclides, as well as non-radioactive nuclides. There was reported some 47,000 tonnes of high-level nuclear waste stored in the USA in 2002.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39303603",
"title": "Gyeongju nuclear waste disposal facility",
"section": "Section::::Overview on Radioactive Waste.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 403,
"text": "Low Level Radioactive Waste (LLRW) includes radio isotope waste generated in industry, hospitals, research, and the objects associated with the nuclear fuel cycle. LLRW rarely needs shielding and consists mainly of items with short lived radioactivity. Usually they are compacted and shallowly buried. Materials include paper, clothing, and other materials which may have been exposed to radioactivity.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3085316",
"title": "Commonly used gamma-emitting isotopes",
"section": "Section::::Activation products.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 288,
"text": "Some radionuclides, for example cobalt-60 and iridium-192, are made by the neutron irradiation of normal non-radioactive cobalt and iridium metal in a nuclear reactor, creating radioactive nuclides of these elements which contain extra neutrons, compared to the original stable nuclides.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
46hvu3
|
If we ever get to do brain transplants, what would happen? Would the person with the new brain have the new brain old memories, or would all memories be forgotten?
|
[
{
"answer": "**IF** it were ever possible, and that is a big if, you, and every conscious aspect of you, would be transported with your brain. Your mind is the product of the pink squish stuff between your ears. Move the squishy stuff around, and the mind follows.\n\nNow, there would be some things that would probably not be transported (at least to some degree), like certain physical skills that involved precisely timed interactions between muscles and the nervous system. But conscious memories? They're in your brain.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "20474671",
"title": "Sebastian Seung",
"section": "Section::::The Connectome Theory.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 646,
"text": "He proposes that every memory, skill, and passion is encoded somehow in the connectome. And when the brain is not wired properly it can result in mental disorders such as autism, schizophrenia, Alzheimer's, and Parkinson's. Understanding the human connectome may not only help cure such diseases with treatments but also possibly help doctors prevent them from occurring in the first place. And if we can represent the sum of all human experiences and memories in the connectome, then we can download human brains on to flash drives, save them indefinitely, and replay those memories in the future, thereby granting humans a kind of immortality.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4483284",
"title": "Body memory",
"section": "Section::::Cellular memory.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 742,
"text": "Cellular memory (CM) is a parallel hypothesis to BM positing that memories can be stored outside the brain in all cells. The idea that non-brain tissues can have memories is believed by some who have received organ transplants, though this is considered impossible. The author said the stories are intriguing though and may lead to some serious scientific investigation in the future. In his book \"TransplantNation\" Douglas Vincent suggests that atypical newfound memories, thoughts, emotions and preferences after an organ transplant are more suggestive of immunosuppressant drugs and the stress of surgery on perception than of legitimate memory transference. In other words, \"as imaginary as a bad trip on LSD or other psychotropic drug.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "37399719",
"title": "Eleanor Maguire",
"section": "Section::::Research and career.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 334,
"text": "This is also true of her other work such as that showing that patients with amnesia cannot imagine the future which several years ago was rated as one of the scientific breakthroughs of the year; and her other studies demonstrating that it is possible to decode people's memories from the pattern of fMRI activity in the hippocampus.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29826376",
"title": "Hippocampal prosthesis",
"section": "Section::::Memory Codes.:Goals for the future.\n",
"start_paragraph_id": 44,
"start_character": 0,
"end_paragraph_id": 44,
"end_character": 534,
"text": "The research teams at USC and Wake Forest are working to possibly make this system applicable to humans whose brains suffer damage from Alzheimer's, stroke, or injury, the disruption of neural networks often stops long-term memories from forming. The system designed by Berger and implemented by Deadwyler and Hampson allows the signal processing to take place that would occur naturally in undamaged neurons. Ultimately, they hope to restore the ability to create long-term memories by implanting chips such as these into the brain.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "49045552",
"title": "The Final Last of the Ultimate End",
"section": "Section::::I Meet Her.:Plot summary.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 252,
"text": "A man who had a terminal disease decided to get a whole body transplant surgery, removing his brain from his original body and transplanting it to a new brainless body, which was cloned from his cell. He gets the surgery and it seems to be successful.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20510214",
"title": "Activity-dependent plasticity",
"section": "Section::::Relationship to behavior.:Stroke rehabilitation.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 790,
"text": "On the other hand, people with such conditions have the capacity to recover some degree of their lost abilities through continued challenges and use. An example of this can be seen in Norman Doidge's \"The Brain That Changes Itself\". Bach y Rita's father suffered from a disabling stroke that left the 65-year-old man half-paralyzed and unable to speak. After one year of crawling and unusual therapy tactics including playing basic children's games and washing pots, his father's rehabilitation was nearly complete and he went back to his role as a professor at City College in New York. This remarkable recovery from a stroke proves that even someone with abnormal behavior and severe medical complications can recover nearly all of the normal functions by much practice and perseverance.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3627193",
"title": "Albert Einstein's brain",
"section": "Section::::Fate of the brain.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 619,
"text": "Whether or not Einstein's brain was preserved with his prior consent is a matter of dispute. Ronald Clark's 1979 biography of Einstein states, \"he had insisted that his brain should be used for research and that he be cremated\", but more recent research has suggested that this may not be true and that the brain was removed and preserved without the permission of either Einstein or his close relatives. Hans Albert Einstein, the physicist's elder son, endorsed the removal after the event, but insisted that his father's brain should be used only for research to be published in scientific journals of high standing.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2fkllu
|
I can't think of the name of painting, nor the artist, that I can write a perfect paper about.
|
[
{
"answer": "I think /r/tipofmytongue might be a better place for this.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "5081530",
"title": "Donald Jarvis",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 423,
"text": "\"I see the painter as an instrument, a function, a conduit of the essential unity. My work is metaphor, never simile. I make no distinction between subject and object, inner and outer, maker and viewer. I am continually surprised by what arises on the canvas or the paper. I am not a 'creator'. How can one create what is already there? I am mist, trees, rain, sun brush, canvas, weather, season, figure.\" Don Jarvis 1999.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "47523752",
"title": "Mario Naves",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 434,
"text": "Mario Naves stated that his works-on-paper are \"painting by other means.\" Artist and critic Maureen Mullarkey wrote that these \"means are simple. Paint is dripped, scraped, scumbled, sponged, patted and brushed on pieces of paper that are then torn and rearranged. His technique preserves the accidental aspect of the painting process while it subordinates all randomness to the cognitive, disciplined basis of traditional painting.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5613831",
"title": "Accessio",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 917,
"text": "If someone wrote on the papyrus (\"chartulae\") or parchment (\"membranae\") of another, the material was considered the principal, and of course the writing belonged to the owner of the paper or parchment. If a person painted a picture on someone else's wood (\"tabula\") or whatever the materials might be, the painting was considered to be the principal (\"tabula picturae cedit\"). The principle which determined the acquisition of a new property by accessio was this—the intimate and inseparable union of the accessory with the principal. Accordingly, there might be \"accessio\" by pure accident without the intervention of any rational agent. If a piece of land was torn away by a stream from someone's land and attached to the land of another, it became the property of the person to whose land it was attached after it was firmly attached to it, but not before. This should not be confused with the case of \"alluvio\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "42178737",
"title": "Roy Oxlade",
"section": "Section::::Paintings.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 702,
"text": "\"Painting to me is like a room of the imagination. It’s up to me what I do with it. I choose its size and its materials – usually canvas and oil paint. At the beginning its relationships don’t amount to much – it’s a rectangle in a jumble of art history I relate to. There would not be much fun in leaving the room empty, a passive – one colour field – a blank canvas. And entirely abstract forms place too many restrictions on dialogue. So I have put in some other stuff, some characters, some actors – tables, pots, colours, easels, lamps, scribbles, figures and faces to interact with each other. I adjust the temperature, open the windows, shut the windows, throw things out, change the lighting.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39663469",
"title": "Du \"Cubisme\"",
"section": "Section::::English edition.\n",
"start_paragraph_id": 54,
"start_character": 0,
"end_paragraph_id": 54,
"end_character": 1557,
"text": "For the authors of this book \"painting is not—or is no longer—the art of imitating an object by means of lines and colours, but the art of giving our instinct a plastic consciousness\". Many will follow them so far who will be unable or unwilling to follow them further on the road to cubism. Yet even to the unwilling their book will prove suggestive. Their theory of painting is founded upon a philosophic idealism. It is impossible to paint things \"as they are\", because it is impossible to know how and what they \"really\" are. Decoration must go by the board; decorative work is the antithesis of the picture, which \"bears its pretext, the reason for its existence, within it\". The authors are not afraid of the conclusions which they find resulting from their premisses. The ultimate aim of painting is to touch the crowd; but it is no business of the painter to explain himself to the crowd. On the contrary, it is the business of the crowd to follow the painter in his transubstantiation of the object, \"so that the most accustomed eye has some difficulty in discovering it\". Yet the authors disapprove of \"fantastic occultism\" no less than of the negative truth conveyed by the conventional symbols of the academic painters. Indeed, the object of the whole book is to condemn systems of all kinds, and to defend cubism as the liberator from systems, the means of expression of the one truth, which is the truth in the artist's mind. The short but able and suggestive essay is followed by twenty-five half-tone illustrations, from Cézanne to Picabia.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19191260",
"title": "James Coleman (American artist)",
"section": "Section::::Personal quotes.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 364,
"text": "One of the teachers said he was going to take us into his studio and show us a brush stroke he had worked his whole life to invent. I just looked at this guy and said to myself: 'This is a bunch of nonsense!' I realized he was intellectualizing something that, in my mind, was a spiritual thing. Painting isn't an intellectual study . . . it comes from the heart.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7651379",
"title": "Paul Ziff",
"section": "Section::::Philosophical and other works.:Articles.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 1141,
"text": "\"Art and the 'Object of Art'\" was originally in \"Mind\" in 1951, and takes apart the claim, made by prominent philosophers in the 1930s-1950s, that \"the painting is not the work of art.\" This paper is reprinted in other places as well, notably William Elton's famous collection \"Aesthetics and Language\", which put aestheticians on notice that the analytics had shown up to clean house. \"The Task of Defining a Work of Art\" has been anthologized at least three times. It is the most sophisticated of the \"you can’t define art\" papers in the apply-Wittgenstein/ordinary language analysis years. \"Reasons in Art Criticism\" was in the two best aesthetics anthologies of the 60s, the ones edited by Kennick and by Margolis. It was also in the Bobbs-Merrill Reprint Series in Philosophy, which was a selection of the most talked about articles in the 50s and 60s. George Dickie devoted a chapter of his book \"Evaluating Art\" to Ziff's view about reasons why a work of art is good, and, 30 years after it was first published, he said it \"remains one of the few truly stimulating pieces by present-day philosophers on the theory of art evaluation.\"\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3e6fp6
|
What reasons made Justinian’s conquest of Italy take so long?
|
[
{
"answer": "The conquest of Italy started off well in 536 with the quick taking over of Sicily and then Naples after some resistance. Once Belisarius reached Rome however things slowed down. He entered the city unopposed but there were significant challenges in holding they city. Food first of all there was the issue of feeding the poplulation and secondly the wall circuit of Rome was enormous and the city itself had no gates. There was a one year siege of the city (537-538) which was successfully repelled.\n\nBelisarius eventually conquered most of Italy by 540 including Ravenna the capital.\n\nSo the initial conquest was only 4 years long but holding the gains would prove difficult and wars would last two decades. See this wikipedia article on the Gothic wars for more in depth information:(_URL_0_)\n\nCompared to the conquest of Africa where the Vandals had only two fortified cities (Hippo and Carthage), the Goths left the walls of the Italian cities intact and thus had more strongholds to retreat to and retaliate from.\n\nThe Goths also had allies such as Franks and Persians who would distract the empire and force resources elsewhere.\n\nLastly there is the problem of Justinian not trusting his generals' loyalties and splitting up armies in between them. The generals did not always cooperate either and at one point many just holed up in their own cities with their treasure and individually knocked out. [There is a fun little reference to this in computer science known as the byzantine general problem ](_URL_1_)\n\nLuckily we have great sources primary on this era thanks to Procopius's *Wars* and *Secret History* and his being present with the armies.",
"provenance": null
},
{
"answer": "I would put most of the blame on Belisarius. Justinian's decree would have given North Italy (Po Valley?) to the Goths, which would have acted as a buffer state. The Empire would have acquired the rest of Italy at a point when the costs of operations was light. But Belisarius' disobedience of Justinian and treachery to the Goths galvanised resistance. Collins also point out that it's also possible Belisarius was going to accept the Western Emperorship but could not get support from his army, half of the officer of which did not trust him.* Procopius blames Justinian's jealousy of course, but considering a renew Persian War was looming and Belisarius was ordered to take over the Persian front and reinforce it with his Gothic captives, jealousy or not it was an incredibly sound decision.\n\nThe Goths elected Baduila (Procopius calls him Totila) a far more competent leader than the previous two Belisarius faced (the first one, Theodehad, it seems had be wholly incompetent), who went on the offensive with the empire distracted in the East. Belisarius returned to Italy later, but without adequate forces or the divergent campaigns of his first conquest due to the eastern needs, empire suffering from plague and economic down turn, and/or Justinian's jealousy and not trusting Belisarius anymore.\n\n**Early Medieval Europe* by Roger Collins.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "27619",
"title": "Sicily",
"section": "Section::::History.:Germanic and Byzantine periods (469–965).:Byzantine (535–965).\n",
"start_paragraph_id": 42,
"start_character": 0,
"end_paragraph_id": 42,
"end_character": 882,
"text": "After taking areas occupied by the Vandals in North Africa, Justinian decide to retake Italy as an ambitious attempt to recover the lost provinces in the West. The re-conquests marked an end to over 150 years of accommodationist policies with tribal invaders. His first target was Sicily (known as the Gothic War (535–554) began between the Ostrogoths and the Eastern Roman Empire, also known as the Byzantine Empire). His general Belisarius was assigned the task. Sicily was used as a base for the Byzantines to conquer the rest of Italy, with Naples, Rome, Milan. It took five years before the Ostrogoth capital Ravenna fell in 540. However, the new Ostrogoth king Totila counterattacked, moving down the Italian peninsula, plundering and conquering Sicily in 550. Totila was defeated and killed in the Battle of Taginae by Byzantine general Narses in 552 but Italy was in ruins.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "44833",
"title": "Manuel I Komnenos",
"section": "Section::::Italian campaign.:Failure of the Church union.\n",
"start_paragraph_id": 34,
"start_character": 0,
"end_paragraph_id": 34,
"end_character": 766,
"text": "The final results of the Italian campaign were limited in terms of the advantages gained by the Empire. The city of Ancona became a Byzantine base in Italy, accepting the Emperor as sovereign. The Normans of Sicily had been damaged and now came to terms with the Empire, ensuring peace for the rest of Manuel's reign. The Empire's ability to get involved in Italian affairs had been demonstrated. However, given the enormous quantities of gold which had been lavished on the project, it also demonstrated the limits of what money and diplomacy alone could achieve. The expense of Manuel's involvement in Italy must have cost the treasury a great deal (probably more than 2,160,000 \"hyperpyra\" or 30,000 pounds of gold), and yet it produced only limited solid gains.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2884656",
"title": "Ostrogothic Kingdom",
"section": "Section::::History.:Gothic War and end of the Ostrogothic Kingdom (535–554).\n",
"start_paragraph_id": 45,
"start_character": 0,
"end_paragraph_id": 45,
"end_character": 470,
"text": "The war had its roots in the ambition of Justinian to recover the provinces of the former Western Roman Empire, which had been lost to invading barbarian tribes in the previous century (the Migration Period). By the end of the conflict Italy was devastated and considerably depopulated. As a consequence, the victorious Byzantines found themselves unable to resist the invasion of the Lombards in 568, which resulted in the loss of large parts of the Italian peninsula.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "322915",
"title": "Italian Renaissance",
"section": "Section::::Origins and background.:Northern and Central Italy in the Late Middle Ages.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 1562,
"text": "In contrast, Northern and Central Italy had become far more prosperous, and it has been calculated that the region was among the richest of Europe. The Crusades had built lasting trade links to the Levant, and the Fourth Crusade had done much to destroy the Byzantine Roman Empire as a commercial rival to the Venetians and Genoese. The main trade routes from the east passed through the Byzantine Empire or the Arab lands and onward to the ports of Genoa, Pisa, and Venice. Luxury goods bought in the Levant, such as spices, dyes, and silks were imported to Italy and then resold throughout Europe. Moreover, the inland city-states profited from the rich agricultural land of the Po valley. From France, Germany, and the Low Countries, through the medium of the Champagne fairs, land and river trade routes brought goods such as wool, wheat, and precious metals into the region. The extensive trade that stretched from Egypt to the Baltic generated substantial surpluses that allowed significant investment in mining and agriculture. Thus, while northern Italy was not richer in resources than many other parts of Europe, the level of development, stimulated by trade, allowed it to prosper. In particular, Florence became one of the wealthiest of the cities of Northern Italy, mainly due to its woolen textile production, developed under the supervision of its dominant trade guild, the \"Arte della Lana\". Wool was imported from Northern Europe (and in the 16th century from Spain) and together with dyes from the east were used to make high quality textiles.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "45462",
"title": "Corfu",
"section": "Section::::History.:Roman and medieval history.\n",
"start_paragraph_id": 47,
"start_character": 0,
"end_paragraph_id": 47,
"end_character": 899,
"text": "The peace and prosperity of the Macedonian era ended with another Saracen attack in 1033, but more importantly with the emergence of a new threat: following the Norman conquest of Southern Italy, the ambitious Norman monarchs set their sights on expansion in the East. Three times on the space of a century Corfu was the first target and served as a staging area for the Norman invasions of Byzantium. The first Norman occupation from 1081 to 1084 was ended only after the Byzantine emperor Alexios I Komnenos secured the aid of the Republic of Venice, in exchange to wide-ranging commercial concessions to Venetian merchants. The admiral George of Antioch captured Corfu again in 1147, and it took a ten-month siege for Manuel I Komnenos to recover the island in 1149. In the third invasion in 1185, the island was again captured by William II of Sicily, but was soon regained by Isaac II Angelos.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "38160955",
"title": "Byzantine Anatolia",
"section": "Section::::Justinian dynasty 518–602.:Justinian I 527–565.:Foreign policy.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 414,
"text": "Justinian's foreign policy centred around trying to recapture the lost territories of the western empire after half century of Barbarian occupation, recapturing Rome in 536. However the western operations were hampered by competing wars in the east (see below) and the outbreak of plague in 541 (see below). Rome and Italy changed hands frequently after 541 but by 554 the Byzantines were firmly in control again.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "106130",
"title": "Second Crusade",
"section": "Section::::Aftermath.\n",
"start_paragraph_id": 58,
"start_character": 0,
"end_paragraph_id": 58,
"end_character": 536,
"text": "Relations between the Eastern Roman Empire and the French were badly damaged by the Crusade. Louis and other French leaders openly accused the Emperor Manuel I of colluding with Turkish attacks on them during the march across Asia Minor. The memory of the Second Crusade was to color French views of the Byzantines for the rest of the 12th and 13th centuries. Within the empire itself, the crusade was remembered as a triumph of diplomacy. In the eulogy for the Emperor Manuel by Archbishop Eustathius of Thessalonica, it was declared:\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1jebwg
|
How is it that different dogs breeds have specific personality types?
|
[
{
"answer": "Selective breeding by humans. If you want a good hunter then you breed the best hunters together, if you want a guard dog then you breed the animals that are more territorial. If you want a lap dog then you need to select the dogs that are less stressed by being around new and different humans. humans select the breeding animals for both desired form and temperament.",
"provenance": null
},
{
"answer": "Like D_I_S_D says, the presence of certain personality traits in certain breeds of dogs have be purposefully selected for by humans in the creation of the breeds. But your deeper question of what causes the different behaviors is much harder to answer. In short, we don't know a lot about the chemical and genetic causes of personality or behaviors. There is a really cool effort happening right now to understand the dog genome, as a tool for understanding complex traits like personality in humans. Check out the [Dog Genome Project](_URL_0_) for a list of publications from this work. I have heard that they are including a survey on dog behavioral traits so that they can try to find the genes or regions of the genome that affect some dog behaviors.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "17246461",
"title": "Dog type",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 278,
"text": "Dog types are broad categories of dogs based on form, function or style of work, lineage, or appearance. In contrast, modern \"dog breeds\" are particular breed standards, sharing a common set of heritable characteristics, determined by the kennel club that recognizes the breed.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "79676",
"title": "Dog breed",
"section": "Section::::Genetic evidence of breeds.:Dog types.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 277,
"text": "Dog types are broad categories of dogs based on form, function or style of work, lineage, or appearance. In contrast, modern dog breeds are particular breed standards, sharing a common set of heritable characteristics, determined by the kennel club that recognizes the breed. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "79676",
"title": "Dog breed",
"section": "Section::::Breeds.:Pure breeds.:Kennel clubs.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 700,
"text": "A dog breed is represented by a sufficient number of individuals to stably transfer its specific characteristics over generations. Dogs of same breed have similar characteristics of appearance and behavior, primarily because they come from a select set of ancestors who had the same characteristics. Dogs of a specific breed breed true, producing young that are very similar to their parents. An individual dog is identified as a member of a breed through proof of ancestry, using genetic analysis or written records of ancestry. Without such proof, identification of a specific breed is not reliable. Such records, called stud books, may be maintained by individuals, clubs, or other organizations.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1802377",
"title": "Maremma Sheepdog",
"section": "Section::::Characteristics.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 488,
"text": "Some divide the breed into various subtypes, largely based on small differences in physical attributes and with subtype names based on village and provincial names where the dogs may be found, e.g. the Maremmano, the Marsicano, the Aquilano, the Pescocostanzo, the Maiella, and the Peligno. However, biologists dispute this division, as well as over reliance on minor physical differences, as the dogs were bred over the centuries for their behavioral characteristics as flock guardians.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "79676",
"title": "Dog breed",
"section": "Section::::Genetic evidence of breeds.:Medical research.\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 234,
"text": "As dogs are a subspecies but their breeds are distinct genetic units, and because only certain breeds share the same type of cancers as humans, the differences in the genes of different breeds may be useful in human medical research.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "79676",
"title": "Dog breed",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 273,
"text": "Dog breeds are dogs that have relatively uniform physical characteristics developed by humans, with breeding animals selected for phenotypic traits such as size, coat color, structure, and behavior. The Fédération Cynologique Internationale recognizes 337 pure dog breeds.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7118482",
"title": "Dog behavior",
"section": "Section::::Social behavior.:Personalities.\n",
"start_paragraph_id": 36,
"start_character": 0,
"end_paragraph_id": 36,
"end_character": 322,
"text": "Dog Breed plays an important role in the dog’s personality dimensions, while the effects of age and sex have not been clearly determined. Dogs personality models can be used for a range of tasks, including guide and working dog selection, finding appropriate families to re-home shelter dogs, or selecting breeding stock.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
mnedq
|
How can animals sense danger and humans can't?
|
[
{
"answer": "Animals have adapted to react to a variety of sensory inputs that are associated with weather changes or imminent disasters.\n\nDogs can sense the pressure and humidity changes in the air with an impending rainstorm. You'd always hear old people say \"I can feel a storm coming, I feel it in my bones.\" They're feeling pressure changes **literally** in their bones (due to old age). \n\nAs for earthquakes, numerous studies have been done by the USGS (United Stated Geological Survey), but nothing substantial enough to make legitimate claims have been found. \n\nDogs can sense many things humans can't. They can detect certain medical issues before they happen, such as sensing hypoglycemia (diabetic condition) by feeling tremors and hyperglycemia (the opposite of the first) by smelling a ketone smell. Some dogs can sense seizures before they happen and alert their owners before they happen. Some dogs can even sense a myocardial infarction (heart attack) before they happen.",
"provenance": null
},
{
"answer": "Actually, you do sense natural danger just like your dog does. However due to the fact that you will analyze and think about the issues at hand, you may just dismiss it as being silly or paranoid. \n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "3731756",
"title": "Hazards of outdoor recreation",
"section": "Section::::Specific accidents and ailments.:Animals.\n",
"start_paragraph_id": 46,
"start_character": 0,
"end_paragraph_id": 46,
"end_character": 677,
"text": "In many areas, adventurers may encounter large predatory animals such as bears or cougars. These animals rarely seek out humans as food, but they will attack under some conditions. Some hazardous encounters occur when animals raid human property for food. Additionally, if travelers come upon an unsuspecting animal and surprise it, it may attack. Regularly making loud noise, such as by clapping or yelling, reduces the risk of surprising an animal. Some people use bear bells as noisemakers, but these are usually too quiet to be heard from far away. Any mammal infected with rabies may behave unexpectedly, even aggressively, and could infect a human with rabies by biting.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12035702",
"title": "Supersense",
"section": "Section::::Episodes.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 424,
"text": "BULLET::::- \"Sixth Sense\": Animals use senses of which humans are unaware. Sensitivity to the earth’s electromagnetic fields, or to weather pressure, can be used to aid navigation. Some animals can predict earthquakes. Predators put these senses to lethal use: a shark homes in on the body electricity of its prey, vampire bats detect the infra-red radiation of blood, and a rattlesnake sees a ‘heat picture’ of its victim.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "470843",
"title": "Fight-or-flight response",
"section": "Section::::Other animals.:Varieties of responses.\n",
"start_paragraph_id": 51,
"start_character": 0,
"end_paragraph_id": 51,
"end_character": 298,
"text": "Animals respond to threats in many complex ways. Rats, for instance, try to escape when threatened, but will fight when cornered. Some animals stand perfectly still so that predators will not see them. Many animals freeze or play dead when touched in the hope that the predator will lose interest.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "966870",
"title": "Baird's tapir",
"section": "Section::::Behavior.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 236,
"text": "Adults can be potentially dangerous to humans and should not be approached if spotted in the wild. The animal is most likely to follow or chase a human for a bit, though they have been known to charge and gore humans on rare occasions.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23416874",
"title": "Sense",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 575,
"text": "Other animals also have receptors to sense the world around them, with degrees of capability varying greatly between species. Humans have a comparatively weak sense of smell and a stronger sense of sight relative to many other mammals while some animals may lack one or more of the traditional five senses. Some animals may also intake and interpret sensory stimuli in very different ways. Some species of animals are able to sense the world in a way that humans cannot, with some species able to sense electrical and magnetic fields, and detect water pressure and currents.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17237382",
"title": "Tourism in Kenya",
"section": "Section::::Ecotourism.:Negative Environmental Impacts.\n",
"start_paragraph_id": 46,
"start_character": 0,
"end_paragraph_id": 46,
"end_character": 1804,
"text": "Interaction between humans and wild animals in their natural habitat can lead to a number of unforeseen and unconscious complications. The mere presence of humans can be sensed by most animals and, although not always visible, can change their physiology and behavior. The sound of footsteps, an approaching vehicle, or the sight of human being is such a novel stimulus to most animals in the wild that it can cause major shifts in their actions, often resulting in them disrupting their feeding or breeding rituals to either hide or flee, sometimes even abandoning their young in the process. In some cases, like with passing aircraft often carrying tourists for aerial tours in helicopters or hot air balloons, the intrusion is so alarming that it causes a mass scattering of the animals below, disturbing feeding groups, and in some cases the injury or death of an animal as it tries to flee. More subtle noises caused by humans and vehicles, those even unable to be heard by the human ear, can still cause major disruption to the delicate signals used by snakes or some nocturnal animals to find prey or navigate, leading them to become confused or lost. Another problem is caused by the sheer amount of foreign travel in and out of rural villages and reservations that otherwise are not exposed to certain bacteria which can sometimes lead to the introduction of foreign diseases into both human and animal communities. Most of the negative effects tourism has on wildlife are short term changes in their behavior, but after repeated exposure to human induced stimuli they can become desensitized and habituated with the presence of tourists and lose aspects of their natural behavior, resulting in possible long-term effects to their entire population like reduced breeding or increased mortality.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12162468",
"title": "Prey detection",
"section": "Section::::Following detection.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 503,
"text": "Animals living in groups have increased vigilance, and even solitary animals are capable of rapid escape when needed. Even if it does make a capture, its prey may attract competing predators, giving it a chance to escape in the struggle. It may also strike a non-vital organ: some species have deceptive appearances such that one part of their body resembles another, such as insects with false heads. This makes consumption (or fatal wound)s less probable, giving the prey a second chance at escaping.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1111w0
|
why are you supposed to always add acid to water and never water to acid?
|
[
{
"answer": "If water splashes up at you, it's not a big deal. If acid splashes up at you, it can be.",
"provenance": null
},
{
"answer": "Water can absorb a great deal of heat. Acid, not so much.\n\nSo if you add acid to water, there's more water than acid. As the acid mixes into the water, it produces heat. There's plenty of water to absorb the heat.\n\nBut if you add water to acid--initially, at least--there's more acid than water. It can't absorb the heat, so the relatively small amount of water that you have only just begun to add to the acid has to absorb the heat, and there can be enough heat to boil the water, which can cause the acid to be splashed on you.\n\n*Always* add acid to water.",
"provenance": null
},
{
"answer": "If you add water to Acid,acid is more than water and the temperature rises and could also splash onto your face.Or the test tube would melt due to excessive local heating.\n\n\nWhile adding acid to water,water is more than _URL_0_ the hot acid goes into the test tube and cools down in water and no splashing or excessive heating occur.",
"provenance": null
},
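To put rough numbers on the point made in the answers above, here is a minimal back-of-the-envelope sketch in Python. Every constant in it (the mixing heat per gram added, the specific heat capacities, the masses) is an assumed, order-of-magnitude value chosen purely for illustration, not measured data for any particular acid; the sketch only shows why the same amount of mixing heat warms a mostly-water mixture far less than a mostly-acid one.

```python
# Back-of-the-envelope sketch: why "add acid to water" is the safer order.
# All constants are assumed, order-of-magnitude values for illustration only.

HEAT_PER_GRAM_ADDED = 900.0  # J of mixing heat per gram of liquid added (assumed)
C_WATER = 4.18               # J/(g*K), specific heat capacity of water
C_ACID = 1.4                 # J/(g*K), assumed specific heat capacity of a concentrated acid


def bulk_temp_rise(mass_added_g, c_added, mass_bulk_g, c_bulk):
    """Temperature rise if the mixing heat spread evenly through the whole mixture."""
    heat_released = HEAT_PER_GRAM_ADDED * mass_added_g                   # J (simplified model)
    total_heat_capacity = mass_added_g * c_added + mass_bulk_g * c_bulk  # J/K
    return heat_released / total_heat_capacity


# Scenario A: 25 g of acid poured into 500 g of water (the recommended order).
print(bulk_temp_rise(25, C_ACID, 500, C_WATER))  # ~11 K rise, spread through lots of water

# Scenario B: 25 g of water poured into 500 g of acid (the dangerous order).
print(bulk_temp_rise(25, C_WATER, 500, C_ACID))  # ~28 K rise even when averaged out;
# in reality the heat is released locally where the small parcel of water first meets
# the acid, before it can mix, which can be enough to boil it and spatter acid.
```

The real chemistry is more involved (the heat released per gram depends on concentration), but the bulk heat-capacity asymmetry alone already points in the same direction as the safety rule.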
{
"answer": null,
"provenance": [
{
"wikipedia_id": "29247",
"title": "Sulfuric acid",
"section": "Section::::Chemical properties.:Reaction with water and dehydrating property.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 407,
"text": "Because the hydration reaction of sulfuric acid is highly exothermic, dilution should always be performed by adding the acid to the water rather than the water to the acid. Because the reaction is in an equilibrium that favors the rapid protonation of water, addition of acid to the water ensures that the \"acid\" is the limiting reagent. This reaction is best thought of as the formation of hydronium ions:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29247",
"title": "Sulfuric acid",
"section": "Section::::Safety.:Dilution hazards.\n",
"start_paragraph_id": 121,
"start_character": 0,
"end_paragraph_id": 121,
"end_character": 312,
"text": "Preparation of the diluted acid can be dangerous due to the heat released in the dilution process. To avoid splattering, the concentrated acid is usually added to water and not the other way around. Water has a higher heat capacity than the acid, and so a vessel of cold water will absorb heat as acid is added.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24027000",
"title": "Properties of water",
"section": "Section::::Reactions.:Acid-base reactions.\n",
"start_paragraph_id": 89,
"start_character": 0,
"end_paragraph_id": 89,
"end_character": 397,
"text": "Water is amphoteric: it has the ability to act as either an acid or a base in chemical reactions. According to the Brønsted-Lowry definition, an acid is a proton () donor and a base is a proton acceptor. When reacting with a stronger acid, water acts as a base; when reacting with a stronger base, it acts as an acid. For instance, water receives an ion from HCl when hydrochloric acid is formed:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9976506",
"title": "Acidulated water",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 471,
"text": "Acidulated water is water where some sort of acid is added—often lemon juice, lime juice, or vinegar—to prevent cut or skinned fruits or vegetables from browning so as to maintain their appearance. Some vegetables and fruits often placed in acidulated water are apples, avocados, celeriac, potatoes and pears. When the fruit or vegetable is removed from the mixture, it will usually resist browning for at least an hour or two, even though it is being exposed to oxygen.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1740813",
"title": "Acid salt",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 657,
"text": "Acid salts are a class of salts that produce an acidic solution after being dissolved in a solvent. Its formation as a substance has a greater electrical conductivity than that of the pure solvent. An acidic solution formed by acid salt is made during partial neutralization of diprotic or polyprotic acids. A \"half-neutralization\" occurs due to the remaining of replaceable hydrogen atoms from the partial dissociation of weak acids that have not been reacted with hydroxide ions (OH) to create water molecules. Acid salt is an ionic compound consisted of an anion, contributed from a weak parent acid, and a cation, contributed from a strong parent base.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "53575995",
"title": "Environmental impact of iron ore mining",
"section": "Section::::Issues.:Acid rock drainage.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 940,
"text": "Acid is created when water and oxygen interact with sulphur bearing minerals and chemicals in rocks. Sulphuric acid is the most common chemical reaction that results from mining activities as the beneficiation process requires dissolving the minerals surrounding the ore, which releases metals and chemicals previously bound up in the rock into nearby streams, freshwater bodies, and the atmosphere.., . Acid may be generated under natural conditions prior to any disturbance, but mining activities typically magnify the amount of acid produced, thereby causing an inequality in the surrounding environment. This process is referred to as Acid Mine Drainage (AMD). Acid produced from AMD causes health hazards to many fish and aquatic organisms as well as land animals who drink from contaminated water sources. Many metals become mobile as water becomes more acidic and at high concentrations these metals become toxic to most life forms \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "578099",
"title": "Hypochlorous acid",
"section": "Section::::Formation, stability and reactions.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 244,
"text": "The acid can also be prepared by dissolving dichlorine monoxide in water; under standard aqueous conditions, anhydrous hypochlorous acid is currently impossible to prepare due to the readily reversible equilibrium between it and its anhydride:\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
4kfgyt
|
I know that the Civil War armies were said to make a miserable show to Europeans, but how did the Continental Army perform by European standards?
|
[
{
"answer": "Great question!\n\nThe answer to this will depend firstly on when in the way you're asking about, and secondly on what scale you grade an army's capabilities on. \n\nAt the start of the war, the Continental Army was, by almost every standard, terrible. The army that amassed outside Boston and later fought around New York City had poor officers, little discipline, uncertain supplies, and a patchwork of enlistment contracts that made its size unpredictable at best and doomed to extinction at worst. While the army won at Bunker Hill, that was more a testament to Gage making the wrong decision (a frontal assault), then lacking the manpower to launch a flank attack. Around New York, the Continentals were repeatedly outfought bad outmaneuvered by the British and Hessian armies, even in the wooded and broken terrain that was supposed to favor American troops. In one of the war's most catastrophic defeats, poor command and control of the army led to a force of 3,000 Americans staying in Fort Washington well after the position should have been abandoned. A Hessian assault force captured this garrison, and and an even more valuable cache of supplies, in one afternoon with minimal casualties. \n\nEven in the early years of the war, the Continental Army showed signs of promise. Knox's transport of 200 cannon from Fort Ticonderoga to Boston during the winter of 75-76 was a master stroke that effectively created the army's artillery branch in one go. Patriot troops captured Montreal and nearly seized Quebec in a daring and difficult winter campaign. Individual units fought well during the New York campaign, and American troops consistently demonstrated the ability to rapidly build large fieldworks. Washington's winter campaign in 76-77 saw a hardened core of veteran survivors of the Battles of New York surprise and out-maneuver well-commanded British forces in Southern New Jersey, doing permanent harm to the British war effort by making them look unable to hold territory the conquered or to defend the loyalists who publicly opposed the rebellion. Continentals stalled and swarmed a British attack towards Albany in the Battles of Saratoga, while Americans managed to make fighting retreat d at the Brandywine and Germantown I that fall of 77. \n\nBetween 1778 and the end of the war in 1783, the Continental army got more technically proficient as it professionalized. The Continentals fought the best units of the British Army to a draw on the open field at Monmouth, made successful night attacks at Stoney Point and Yorktown's Redoubt #10, and demonstrated incredible resiliency in the \"fight, get beat, get up and fight again\" chase across the Carolinians. At Yorktown, the Americans laid an exceptionally smooth formal, conventional siege, arguably the defining trait of military craft in the 18th century. \n\nWhy, then, do I still have reservations about calling the Continental Army good? First and foremost, their performance was inconsistent. Gates bungled the Battle of Camden in catastrophic fashion. While Morgan and Greene did led Cornwallis on an epic chased across the Carolinians, they could not stop him from invading Virginia, freeing thousands of slaves, and nearly capturing Thomas Jefferson as he fled off his plantation. \n\nMost importantly, the Continental Army grew increasingly mutinous in the latter half of the war. Between 1780 and 1783, the Pennsylvania Line mutinied twice abs the New Jersey line once in situations that stretched for days and resulted in some loss of life. 
Connecticut troops rioted in their camps at Morristown and West Point. The officers of the army gave some support to a plan by one of Gates' aides to march on Congress if they weren't guaranteed pensions. \n\nIn short, if I were drafting an army for a fantasy league of 18th century empires, I would not take the Continental Army. Their greatest victories involved either surviving defeat or substantial French assistance. They frequently seethed with discontent, as mutiny never seemed far away for both Continental soldiers and their officers. \n\n**Sources**\n\nMartin and Lender, *A Respectable Army*\n\nFischer, *Washington's Crossing*\n\nNiemeyer, *America Goes to War*",
"provenance": null
},
{
"answer": "Follow up: Why was it said that the Civil War armies made a miserable show compared to European armies? I thought the Civil war is considered one of the first \"modern\" wars.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "10778278",
"title": "Valley Forge",
"section": "Section::::Organizational challenges.:Training.\n",
"start_paragraph_id": 38,
"start_character": 0,
"end_paragraph_id": 38,
"end_character": 614,
"text": "Increasing military efficiency, morale, and discipline improved the army's well-being with better supply of food and arms. The Continental Army had been hindered in battle because units administered training from a variety of field manuals, making coordinated battle movements awkward and difficult. They struggled with basic formations and lacked uniformity, thanks to multiple drilling techniques taught in various ways by different officers. The task of developing and carrying out an effective training program fell to Baron Friedrich von Steuben, a Prussian drill master who had recently arrived from Europe.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2187390",
"title": "Regular Army (United States)",
"section": "Section::::Continental Army.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 450,
"text": "For varying short periods of time during the war, many state militia units and separate volunteer state regiments (usually organized only for local service) supported the Continental Army. Although training and equipping part-time or short-term soldiers and coordinating them with professionally-trained regulars was especially difficult, this approach also enabled the Americans to prevail without having had to establish a large or permanent army.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4384382",
"title": "Reconquista (Spanish America)",
"section": "Section::::Expeditionary campaigns.:The royalist military.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 808,
"text": "Overall, Europeans formed only about a tenth of the royalist armies in Spanish America, and only about half of the expeditionary units once they were deployed in the Americas. Since each European soldier casualty was substituted by a Spanish American soldier, over time, there were more and more Spanish American soldiers in the expeditionary units. For example Pablo Morillo, commander in chief of the expeditionary force sent South America, reported that he only had 2,000 European soldiers under his command in 1820, in other words, only half of the soldiers of his expeditionary force were European. It is estimated that in the Battle of Maipú only a quarter of the royalist forces were European soldiers, in the Battle of Carabobo about a fifth, and in the Battle of Ayacucho less than 1% was European.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "58255510",
"title": "History of the Office of the Inspector General of the United States Army",
"section": "Section::::Background.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 600,
"text": "When the Continental Army was formed, it was largely modeled on the British Army. None of the European systems of inspecting worked well for the Continental Army. The British was inadequate as it relied on an experienced, highly trained and well disciplined army; the French because it interfered with the chain of command; and the Prussian as it relied on uniform units and practices. The Massachusetts Bay Colony and Colony of Virginia both had militias with muster masters and muster master-generals respectively serving as inspectors. Elements from all four systems were eventually incorporated.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6435234",
"title": "Grand Review of the Armies",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 523,
"text": "The Grand Review of the Armies was a military procession and celebration in the national capital city of Washington, D.C., on May 23–24, 1865, following the close of the American Civil War (1861–1865). Elements of the Union Army in the United States Army paraded through the streets of the capital to receive accolades from the crowds and reviewing politicians, officials, and prominent citizens, including United States President Andrew Johnson, a month after the assassination of United States President Abraham Lincoln.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "863",
"title": "American Civil War",
"section": "Section::::General features of the War.:Mobilization.\n",
"start_paragraph_id": 66,
"start_character": 0,
"end_paragraph_id": 66,
"end_character": 425,
"text": "From a tiny frontier force in 1860, the Union and Confederate armies had grown into the \"largest and most efficient armies in the world\" within a few years. European observers at the time dismissed them as amateur and unprofessional, but British historian John Keegan concluded that each outmatched the French, Prussian and Russian armies of the time, and but for the Atlantic, would have threatened any of them with defeat.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "771",
"title": "American Revolutionary War",
"section": "Section::::Analysis of combatants.:United States.\n",
"start_paragraph_id": 173,
"start_character": 0,
"end_paragraph_id": 173,
"end_character": 796,
"text": "The new Continental Army suffered significantly from a lack of an effective training regime, and largely inexperienced officers and sergeants. The inexperience of its officers was compensated for in part by a few senior officers. The Americans solved their training dilemma during their stint in Winter Quarters at Valley Forge, where they were relentlessly drilled and trained by General Friedrich Wilhelm von Steuben, a veteran of the famed Prussian General Staff. He taught the Continental Army the essentials of military discipline, drills, tactics and strategy, and wrote the Revolutionary War Drill Manual. When the Army emerged from Valley Forge, it proved its ability to equally match the British troops in battle when they fought a successful strategic action at the Battle of Monmouth.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
dvr30j
|
what is a cocaine analogue?
|
[
{
"answer": "It means it acts on the e.g. brain receptors or whatever it is you're interested in, in the same way as cocaine does without all the other effects. Usually such analogues either take cocaine as a starting point for their synthesis or have a synthetic component very similar to part of the cocaine molecule.",
"provenance": null
},
{
"answer": "An analogue is just a compound that has a similar chemical structure to the one of interest. \n\nCocaine bind to a receptor on dopamine neurons in the brain called Dopamine Transporter (DAT). Normally, DAT transports dopamine back **inside** the neuron. However, when cocaine binds to DAT, it prevents DAT from working. ie. DAT leaves dopamine neurons out in the synapse where they are still active. This excess of dopamine alters perception; therefore, cocaine is *psychoactive*. On the other hand, Ioflupane can fit into and bind DAT but it does not inhibit DAT's function very much at all. Therefore, Ioflupane is **not** psychoactive. \n\nFor an analogy, \n\nImagine you drive a Honda Civic (the honda civic is DAT, which cocaine & its analog, Ioflupane bind to). **Your** key to **your** car (cocaine) turns fits in the ignition **AND** starts the car. If you got someone else's key from a different Honda Civic (Ioflupane) it would fit in the ignition BUT it would **NOT** start.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "19681345",
"title": "Crack cocaine",
"section": "Section::::Society and culture.:Legal status.:United States.\n",
"start_paragraph_id": 60,
"start_character": 0,
"end_paragraph_id": 60,
"end_character": 258,
"text": "In the United States, cocaine is a Schedule II drug under the Controlled Substances Act, indicating that it has a high abuse potential but also carries a medicinal purpose. Under the Controlled Substances Act, crack and cocaine are considered the same drug.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14638076",
"title": "List of cocaine analogues",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 828,
"text": "This is a list of cocaine analogues. A cocaine analogue is a (usually) artificial construct of a novel chemical compound from (often the starting point of natural) cocaine's molecular structure, with the result product sufficiently similar to cocaine to display similarity in, but alteration to, its chemical function. Within the scope of analogous compounds created from the structure of cocaine, so named \"cocaine analogues\" retain 3\"β\"-benzoyloxy or similar functionality (the term specifically used usually distinguishes from phenyltropanes, but in the broad sense generally, as a category, includes them) on a tropane skeleton, as compared to other stimulants of the kind. Many of the semi-synthetic cocaine analogues \"proper\" which have been made & studied have consisted of among the nine following classes of compounds:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10534087",
"title": "Serotonin–norepinephrine–dopamine reuptake inhibitor",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 311,
"text": "Cocaine is a naturally occurring SNDRI with a fast onset and short duration (about two hours) that is widely encountered as a drug of abuse. Although their primary mechanisms of action are as NMDA receptor antagonists, ketamine and phencyclidine are also SNDRIs and are similarly encountered as drugs of abuse.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10534087",
"title": "Serotonin–norepinephrine–dopamine reuptake inhibitor",
"section": "Section::::Legality.\n",
"start_paragraph_id": 127,
"start_character": 0,
"end_paragraph_id": 127,
"end_character": 217,
"text": "Cocaine is a controlled drug (Class A in the UK; Schedule II in the USA); it has not been entirely outlawed in most countries, as despite having some \"abuse potential\" it is recognized that it does have medical uses.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "35822485",
"title": "Epigenetics of cocaine addiction",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 269,
"text": "Cocaine addiction is the compulsive use of cocaine despite adverse consequences. It arises through epigenetic modification (e.g., through HDAC, sirtuin, and G9a) and transcriptional regulation (primarily through ΔFosB's AP-1 complex) of genes in the nucleus accumbens.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17820426",
"title": "Difluoropine",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 260,
"text": "It is not explicitly illegal anywhere in the world , but might be considered to be a controlled substance analogue of cocaine on the grounds of its related chemical structure, in some jurisdictions such as the United States, Canada, Australia and New Zealand.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19681345",
"title": "Crack cocaine",
"section": "Section::::Society and culture.:Legal status.\n",
"start_paragraph_id": 54,
"start_character": 0,
"end_paragraph_id": 54,
"end_character": 312,
"text": "Cocaine is listed as a Schedule I drug in the United Nations 1961 Single Convention on Narcotic Drugs, making it illegal for non-state-sanctioned production, manufacture, export, import, distribution, trade, use and possession. In most states (except in the U.S.) crack falls under the same category as cocaine.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1smota
|
What are the actual bytes made up of in a computer?
|
[
{
"answer": "That depends entirely on the type of medium. On a mechanical harddrive, those bits are magnetic. In RAM or within the CPU, they're represented by electrical currents or charges. Data could be written to paper as visual or physical data if someone chose to. The list goes on. Ultimately, \"bits\" are just an abstraction of whatever medium that's been chosen to represent the binary data.\n\nMechanical harddrives store the data magnetically. The data is 'converted' from an electrical to magnetic charge when being written (and vice versa when being read). SSDs use completely solid-state chips to store the data. The type of chips and arrangement allow for the device to store data when powered down. These are pretty big simplifications, but that's the general idea of the differences.",
"provenance": null
},
{
"answer": "They are stored as either an electric or magnetic field depending on the type of memory. a bit (1/8th of a byte) in RAM for instance is stored as an electric field and hard drive memory (the platter type) is stored as a magnetic field. SDDs are a similar technology to RAM.",
"provenance": null
},
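A short sketch of the byte/bit abstraction discussed in the answers above and in the passages that follow (plain Python; the specific values are just examples, not taken from the text):

```python
value = 0b10110101          # one byte, written out as its 8 individual bits
print(value)                # 181 -- an unsigned byte holds 0..255 (2**8 = 256 values)

# The same 8 bits reinterpreted as a signed (two's-complement) byte: -128..127
signed = value - 256 if value >= 128 else value
print(signed)               # -75

# Individual bits can be pulled back out by shifting and masking.
bits = [(value >> i) & 1 for i in range(7, -1, -1)]
print(bits)                 # [1, 0, 1, 1, 0, 1, 0, 1]
```

Whatever physical form a bit takes (charge, magnetic domain, pit on a disc), this is the level of abstraction the rest of the machine works with.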
{
"answer": null,
"provenance": [
{
"wikipedia_id": "3365",
"title": "Byte",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 285,
"text": "The byte is a unit of digital information that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer and for this reason it is the smallest addressable unit of memory in many computer architectures.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "48662",
"title": "Computer number format",
"section": "Section::::Binary number representation.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 808,
"text": "A \"byte\" is a bit string containing the number of bits needed to represent a character. On most modern computers, this is an eight bit string. Because the definition of a byte is related to the number of bits composing a character, some older computers have used a different bit length for their byte. In many computer architectures, the byte is used to address specific areas of memory. For example, even though 64-bit processors may address memory sixty-four bits at a time, they may still split that memory into eight-bit pieces. This is called byte-addressable memory. Historically, many CPUs read data in some multiple of eight bits. Because the byte size of eight bits is so common, but the definition is not standardized, the term octet is sometimes used to explicitly describe an eight bit sequence.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "57143357",
"title": "Glossary of computer science",
"section": "Section::::B.\n",
"start_paragraph_id": 45,
"start_character": 0,
"end_paragraph_id": 45,
"end_character": 325,
"text": "BULLET::::- Byte – is a unit of digital information that most commonly consists of eight bits, representing a binary number. Historically, the byte was the number of bits used to encode a single character of text in a computer and for this reason it is the smallest addressable unit of memory in many computer architectures.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18075548",
"title": "Units of information",
"section": "Section::::Units derived from bit.:Byte.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 799,
"text": "Historically, a byte was the number of bits used to encode a character of text in the computer, which depended on computer hardware architecture; but today it almost always means eight bits – that is, an octet. A byte can represent 256 (2) distinct values, such as non-negative integers from 0 to 255, or signed integers from −128 to 127. The IEEE 1541-2002 standard specifies \"B\" (upper case) as the symbol for byte (IEC 80000-13 uses \"o\" for octet in French, but also allows \"B\" in English, which is what is actually being used). Bytes, or multiples thereof, are almost always used to specify the sizes of computer files and the capacity of storage units. Most modern computers and peripheral devices are designed to manipulate data in whole bytes or groups of bytes, rather than individual bits.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4240997",
"title": "Octet (computing)",
"section": "Section::::Definition.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 571,
"text": "The international standard IEC 60027-2, chapter 3.8.2, states that a byte is an octet of bits. However, the unit byte has historically been platform-dependent and has represented various storage sizes in the history of computing. Due to the influence of several major computer architectures and product lines, the byte became overwhelmingly associated with eight bits. This meaning of \"byte\" is codified in such standards as ISO/IEC 80000-13. While \"byte\" and \"octet\" are often used synonymously, those working with certain legacy systems are careful to avoid ambiguity.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14794",
"title": "Integer (computer science)",
"section": "Section::::Common integral data types.:Bytes and octets.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 410,
"text": "The term \"byte\" initially meant 'the smallest addressable unit of memory'. In the past, 5-, 6-, 7-, 8-, and 9-bit bytes have all been used. There have also been computers that could address individual bits ('bit-addressed machine'), or that could only address 16- or 32-bit quantities ('word-addressed machine'). The term \"byte\" was usually not used at all in connection with bit- and word-addressed machines.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3365",
"title": "Byte",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 577,
"text": "The size of the byte has historically been hardware dependent and no definitive standards existed that mandated the size – byte-sizes from 1 to 48 bits are known to have been used in the past. Early character encoding systems often used six bits, and machines using six-bit and nine-bit bytes were common into the 1960s. These machines most commonly had memory words of 12, 24, 36, 48 or 60 bits, corresponding to two, four, six, eight or 10 six-bit bytes. In this era, bytes in the instruction stream were often referred to as \"syllables\", before the term byte became common.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
9qyyre
|
Is the ethical debate over killing and farming animals only a recent thing? Or did it span back further?
|
[
{
"answer": "Related question, how and when did the practice of vegetarianism and similar diets go from being associated mostly with religious asceticism to being a non-denominational choice based on animal rights? Basically, if ancient arguments for vegetarianism were that it was unclean or sinful, how did that turn into the modern argument that animals should be respected as sentient creatures?",
"provenance": null
},
{
"answer": "Although there are some of the probably older traditions around the world (like ancient Indian jainism, which advocated a path of non-violence towards all living beings, animals and humans alike) I'll limit my answer to ancient Greek philosophy – field of my academical interest.\n\nWe'll start with Pythagoras (Samos, c. 570 – c. 495 BC), presocratic philosopher who, same as Socrates, didn't write down his thoughts, but had a faithful bunch of students and followers to whom we own his historical record. Dicaearchus (Aristotle's pupil) wrote how Pythagoras': „most universally celebrated opinions, however, were that the soul is immortal; then that it migrates into other sorts of living creature \\[...\\]“\n\nNow in this, his doctrine of the transmigration of souls (*metempsychosis*), lies bio-ethical stance on killing animals. For one more example of the metempsychosis doctrine we can observe quote from Empedocles (c. 490 – c. 430 BC): „For already have I once been a boy, and a girl, and a bush, and a fish that jumps from the sea as it swims.“ \n\nThe important part of this doctrine is that it *proclaims personal survival of bodily death.* By Xenophanes' writing considering this context, Pythagoras even recognized the dog as one of his late friends. Now, if I held this opinion, we can see how easy it would be for me to defend non killing and farming animals – how can I eat a chicken, if there's possibility that that chicken was my mother?\n\nBoth Empedocles and Pythagoras were vegeterians. By Empedocles' surviving fragments, which are concerned with „not killing living creatures“, we learn that we are enjoined to abstain from „harsh-sounding bloodshed“, to avoid sacrifice and moreover, we must not eat meat, beans or bay leaves. As explained, the sheep you slaughter and eat was once a man, and once, perhaps, your son or your father. That said, to avoid patricide and filicide you must avoid all bloodshed. \n\nPythagorean school was an interesting mix of philosophy and mysticism, with an unique set of rules, such as not taking roads which public uses (out of fear of being defiled by the inpure, as Aristotle explains), dietary restrictions, vows of silence for new initiates, etc., and I encourage you to read further on this funky crew.\n\nFor now and to conclude, let's jump few centuries forward to confront Empedocles and Pythagoras with Aristotle. Aristotle (384–322 BCE) assigned to men and animals the faculty of sentience (capacity to suffer), but which gives men a title to moral consideration. That means animals don't have rationality or moral qualities which could match ones that we find in humans. He argued how plants are created for the sake of animals, and animals for the sake of men. What he did is establish some sort of hierarchy, taxonomical categorization - *scala naturae* (or Great Chain of Being) and at the top of that chain are masters who are gifted with rationality – men.\n\nAs we can see, Aristotle's position that humans and animals create two opposite moral circles, one rational and one non-rational is directly clashed with Pythagoras' and Empedocles' stance. To further this ancient debate, Aristotle's pupils, Theophrastus (c. 371 – c. 287 BC) argued also for vegeterianism, as he tried to explain how animals can feel same as men do and thus killing them is morally wrong. 
These are the beginnings of the ethical debate over killing animals in Western philosophy, and further thinkers followed in the next centuries – Seneca, Plutarch, Plotinus and Porphyry, to name some from the Roman era.\n\n**Sources:**\n\nJonathan Barnes, *The Presocratic Philosophers*, Routledge, 1979.\n\nTerence Irwin, *The Development of Ethics: A Historical and Critical Study*, Oxford, 2007.\n\nAristotle, *Historia Animālium*.\n\nRobert Audi, *The Cambridge Dictionary of Philosophy*, Cambridge, 1999.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "174438",
"title": "Animal welfare",
"section": "Section::::Animal welfare issues.:Farm animals.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 573,
"text": "Another concern about the welfare of farm animals is the method of slaughter, especially ritual slaughter. While the killing of animals need not necessarily involve suffering, the general public considers that killing an animal reduces its welfare. This leads to further concerns about premature slaughtering such as chick culling by the laying hen industry, in which males are slaughtered immediately after hatching because they are superfluous; this policy occurs in other farm animal industries such as the production of goat and cattle milk, raising the same concerns.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2600328",
"title": "Animal rights movement",
"section": "Section::::Gender, class, and other factors.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 495,
"text": "Another factor feeding the animal rights movement was revulsion to televised slaughters. In the United States, many public protest slaughters were held in the late 1960s and early 1970s by the National Farmers Organization. Protesting low prices for meat, farmers would kill their own animals in front of media representatives. The carcasses were wasted and not eaten. However, this effort backfired because it angered television audiences to see animals being needlessly and wastefully killed.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29045",
"title": "Speciesism",
"section": "Section::::Law and policy.:Law.\n",
"start_paragraph_id": 53,
"start_character": 0,
"end_paragraph_id": 53,
"end_character": 668,
"text": "Humane Slaughter Act, which was created to alleviate some of the suffering felt by livestock during slaughter, was passed in 1958. Later the Animal Welfare Act of 1966, passed by the 89th United States Congress and signed into law by President Lyndon B. Johnson, was designed to put much stricter regulations and supervisions on the handling of animals used in laboratory experimentation and exhibition but has since been amended and expanded. These groundbreaking laws foreshadowed and influenced the shifting attitudes toward nonhuman animals in their rights to humane treatment which Richard D. Ryder and Peter Singer would later popularize in the 1970s and 1980s.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41819335",
"title": "Farm Animal Rights Movement",
"section": "Section::::Legacy.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 301,
"text": "BULLET::::- FARM has been largely responsible for turning the U.S. animal rights movement mission from vivisection to animal farming, which accounts for 98% of all animal abuse and killing. FARM’s Veal Ban Campaign and World Farm Animals Day were the first farmed animal advocacy programs in the U.S.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19374",
"title": "Model organism",
"section": "Section::::Ethics.\n",
"start_paragraph_id": 65,
"start_character": 0,
"end_paragraph_id": 65,
"end_character": 1036,
"text": "Debate about the ethical use of animals in research dates at least as far back as 1822 when the British Parliament enacted the first law for animal protection preventing cruelty to cattle. This was followed by the Cruelty to Animals Act of 1835 and 1849, which criminalized ill-treating, over-driving, and torturing animals. In 1876, under pressure from the National Anti-Vivisection Society, the Cruelty to Animals Act was amended to include regulations governing the use of animals in research. This new act stipulated that 1) experiments must be proven absolutely necessary for instruction, or to save or prolong human life; 2) animals must be properly anesthetized; and 3) animals must be killed as soon as the experiment is over. Today, these three principles are central to the laws and guidelines governing the use of animals and research. In the U.S., the Animal Welfare Act of 1970 (see also Laboratory Animal Welfare Act) set standards for animal use and care in research. This law is enforced by APHIS’s Animal Care program.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7851193",
"title": "List of government animal eradication programs",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 606,
"text": "Historically, there have been cases where the extermination of animal species has been politically endorsed because the animals have been considered harmful. In some cases the animals have been hunted because the animals present a danger to human lives, at other times they have been hunted because they are harmful to human interests such as livestock farming. More recently, eradication efforts have focused on invasive vertebrates as they are the leading cause of extinction of native species, particularly on islands. This article refers to animals in a more limited sense; it does not include humans.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "42314466",
"title": "Dog meat consumption in South Korea",
"section": "Section::::Legal status.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 377,
"text": "In June 2018, the municipal court of the city of Bucheon ruled that killing dogs for their meat was illegal. The landmark decision came after much criticism from animal advocates in the country. The court case was brought forward by animal rights group Coexistence of Animal Rights on Earth (Care) against a dog farm, which they said was killing animals without a real reason.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2ih061
|
what led to the end of people's beliefs in mythology as a religion?
|
[
{
"answer": "That never happened. The vast majority of religion today are based on a mythology. Abrahamic mythology is still very prevalent today as organized religions. The Garden of Eden isn't any different than Pandora's Box or Thor crossing the Bifröst. The religions around some mythology's simply died out as other religions converted people. ",
"provenance": null
},
{
"answer": "Mythology is just what you call a religion that doesn't have many active followers in the modern world. Christianity, Islam, Hinduism and all of the others can be called \"mythology\" exactly the same as Egyptian, Greek, Roman or Norse beliefs.",
"provenance": null
},
{
"answer": " > Specifically, Greek Mythology (e.g. Zeus, Hera...).\n\nYour uneditable title is very misleading, and although I don't have an answer for that specific case, I'm quoting this here to point people at your true question, which is about the **greek** mythos in particular. ",
"provenance": null
},
{
"answer": "They were supplanted by new beliefs, through various means. For example states giving funds, resources, and legal backing to strengthen the messages of new belief systems, like the Roman empire adopting Christianity, persecution of those who held onto old belief systems, assimilation of old belief systems into the new system, and so on.",
"provenance": null
},
{
"answer": "The end of classical Greco-Roman religion can be mainly attributed to the rise of Christianity. Once the first few centuries AD had set in, paganism/polytheism were often persecuted, and people gradually made the change. I believe the last few groups who worshipped the Greek gods converted or died out by the end of the first millennium AD.\n\nStill, I should point out that your title is poorly worded. Not only are there many ancient religions that we currently study as mythology (Greek, Egyptian, Norse, Etruscan, and Celtic, to name a few), but you're conflating the words \"mythology\" and \"religion,\" which are two different things.\n\nMythology: the stories developed by a culture to explain nature, customs, and history\n\nReligion: a collection of beliefs, cultures, and morals that relate humanity to some sort of reason or order for existence.\n\nBy these definitions, even modern religions like Christianity, Judaism, Islam, Hinduism, etc, can be said to have, at their core, a specific \"mythology,\" and that term shouldn't be taken offensively, as though it's belittling or invalidating people's beliefs. It's just the word to describe their explanations for the history and physical rules of the world.\n\nSource: Minored in (and briefly double majored in) Classical Civilizations/Mythology). Plenty of ancient religions and modern religious studies credits.\n\nEdit: Added my source.\n\n",
"provenance": null
},
{
"answer": "I think there is a common misinterpretation of the word mythology, due to it's association with \"Greek Mythology\" or \"Egyptian Mythology\". \n\nFrom wikipedia: Mythology can refer either to the collected myths of a group of people—their body of stories which they tell to explain nature, history, and customs—or to the study of such myths.\n\nWith this definition in hand, one could define the collection of Christian or Islamic beliefs as \"Christian Mythology\" or \"Islamic Mythology\". Beliefs evolve over time and are slowly swallowed up by other religions. Hinduism is a religion that did an excellent job of this, which helped it to spread. Christianity adopted many of the beliefs of Zoroastrianism and Manichaeism such as the belief in a \"end of days scenario\", otherwise known as eschatology, and the Saoshyant, or savior figure. \n\nI'm no expert, but Greek \"mythology\" was probably left behind as more and more people adopted more popular religions. The religion of the Egyptians was probably in conflict with those of their frequent conquerors: The Hyksos, the Kushites, the Assyrians, the Macedonians, the Ptolemaics, the Romans, etc. \n\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "48438298",
"title": "Barbara Mor",
"section": "Section::::Early work.:\"The Great Cosmic Mother\".\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 617,
"text": "Mor's comprehensive analysis of world mythologies traces the origin of ancient spiritual beliefs, ceremonies, and rituals as related to Women. She argues that the goddess-centered beliefs that dated from humanity's Paleolithic age were violently destroyed and replaced by war-like patriarchal cultures and religions during the rise of agriculture and the erection of the first cities in Sumeria and elsewhere. With the displacement of matriarchy by patriarchy, the foundations of \"modernity\" were laid down, helping explain the rise of war, the manifestation of the Inquisition, and even the Salem witchcraft trials.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8582",
"title": "Deism",
"section": "Section::::Features.:History of religion and the deist mission.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 471,
"text": "A clear implication of this deist creation myth was that primitive societies, or societies that existed in the distant past, should have religious beliefs less encrusted with superstitions and closer to those of natural theology. This position gradually became less plausible as thinkers such as David Hume began studying the natural history of religion and suggesting that the origins of religion lay not in reason but in the emotions, specifically fear of the unknown.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "228211",
"title": "Futurama",
"section": "Section::::Themes.\n",
"start_paragraph_id": 59,
"start_character": 0,
"end_paragraph_id": 59,
"end_character": 831,
"text": "Religion is a prominent part of society, although the dominant religions have evolved. A merging of the major religious groups of the 20th century has resulted in the First Amalgamated Church, while Voodoo is now mainstream. New religions include Oprahism, Robotology, and the banned religion of \"Star Trek\" fandom. Religious figures include Father Changstein-El-Gamal, the Robot Devil, Reverend Lionel Preacherbot, and passing references to the Space Pope, who appears to be a large crocodile-like creature. Several major holidays have robots associated with them, including the murderous Robot Santa and Kwanzaa-bot. While very few episodes focus exclusively on religion within the \"Futurama\" universe, they do cover a wide variety of subjects including predestination, prayer, the nature of salvation, and religious conversion.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21069417",
"title": "Egyptian mythology",
"section": "Section::::Content and meaning.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 1199,
"text": "One commonly suggested reason for inconsistencies in myth is that religious ideas differed over time and in different regions. The local cults of various deities developed theologies centered on their own patron gods. As the influence of different cults shifted, some mythological systems attained national dominance. In the Old Kingdom (c. 2686–2181 BC) the most important of these systems was the cults of Ra and Atum, centered at Heliopolis. They formed a mythical family, the Ennead, that was said to have created the world. It included the most important deities of the time but gave primacy to Atum and Ra. The Egyptians also overlaid old religious ideas with new ones. For instance, the god Ptah, whose cult was centered at Memphis, was also said to be the creator of the world. Ptah's creation myth incorporates older myths by saying that it is the Ennead who carry out Ptah's creative commands. Thus, the myth makes Ptah older and greater than the Ennead. Many scholars have seen this myth as a political attempt to assert the superiority of Memphis' god over those of Heliopolis. By combining concepts in this way, the Egyptians produced an immensely complicated set of deities and myths.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2117675",
"title": "The First Sex",
"section": "Section::::Synopsis.:The Patriarchal Revolution.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 1452,
"text": "In this section of the book, Gould Davis examined how mythology and society changed as a result of a suggested violent conversion from matriarchy to patriarchy. Her theory proposed that patriarchal revolution resulted from the violent invasion of nomadic tribes who were warlike and destructive, overrunning the peaceful, egalitarian matriarchies. These nomads (Semites from the Arabian Peninsula) are argued to have never achieved a civilization of their own, but only to have destroyed or taken over older ones. Gould Davis asserted that many tales in the Old Testament were actually rewritings of older stories, with goddesses changed to male actors, or a goddess raped or overthrown and her powers usurped by the new father deity. This, she suggested, was part of a concerted effort to wipe out all evidence of female authority. Because the violent invaders wished to establish the a patrilineal system of inheritance, rigorous control of women's sexuality became paramount. Thus women's right to sexual pleasure was redefined as sinful, and virginity was conceived of as a property right of a woman's father or husband. Gould Davis discussed female circumcision as a means to protect the virginity of women and assure clear lines of paternity. This practice is described in the book in graphic detail, as performed with unsterilized instruments, without anaesthesia (conditions pertaining to all surgical practices before the nineteenth century).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23340",
"title": "Paganism",
"section": "Section::::Modern paganism.\n",
"start_paragraph_id": 48,
"start_character": 0,
"end_paragraph_id": 48,
"end_character": 381,
"text": "Many of the revivals, Wicca and Neo-Druidism in particular, have their roots in 19th century Romanticism and retain noticeable elements of occultism or Theosophy that were current then, setting them apart from historical rural () folk religion. Most modern pagans, however, believe in the divine character of the natural world and paganism is often described as an Earth religion.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "25824",
"title": "Religion and mythology",
"section": "Section::::Academic views.\n",
"start_paragraph_id": 38,
"start_character": 0,
"end_paragraph_id": 38,
"end_character": 457,
"text": "In the 20th century, many scholars have resisted this trend, defending myth from modern criticism. Mircea Eliade, a professor of the history of religions, declared that myth did not hold religion back, that myth was an essential foundation of religion, and that eliminating myth would eliminate a piece of the human psyche. Eliade approached myth sympathetically at a time when religious thinkers were trying to purge religion of its mythological elements:\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
o4rli
|
Why does gravitational potential start at zero and become infinitely negative?
|
[
{
"answer": "Potential energy is relative, not absolute.\n\nOne way of saying it is that you define where your zero point is and that becomes your reference.\n\nAnother way of saying it is that the you are really looking for a potential energy **difference** (Delta-U).",
"provenance": null
},
{
"answer": "Well it doesn't necessarily start at zero. Only gradients of potentials are observable, so potentials are defined up to a constant. A point of reference if you will. However, your potential starting at zero is just a very useful choice!\n\nNow why does it become infinity negative? Well let's say we have a very small mass point particle at infinity, not moving (so its energy is zero), and some other very massive point object that the first will be attracted to. As it starts moving, its kinetic energy will increase, but its total energy has to remain zero because energy is conserved. So the gravitational has to be negative for it to balance out. As the particle gets close to the large mass, its kinetic energy growth without bound (becomes infinitely positive), so its gravitational potential energy must become infinitely negative.",
"provenance": null
},
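A minimal worked sketch of the energy-balance argument in the answer above, using the standard Newtonian form with the zero of potential chosen at infinity (the symbols G, M, m and r are the usual ones, not notation taken from the answer itself):

```latex
% Gravitational potential energy with the conventional choice U(\infty) = 0
U(r) = -\frac{G M m}{r}, \qquad \lim_{r \to \infty} U(r) = 0 .

% A small mass released from rest at infinity has total energy E = 0.
% Conservation of energy then fixes the kinetic energy at radius r:
E = K(r) + U(r) = 0
  \quad\Longrightarrow\quad
K(r) = \frac{G M m}{r} \to \infty \ \text{as } r \to 0 ,

% so U(r) = -K(r) must become arbitrarily negative as the particle
% approaches the central mass.
```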
{
"answer": null,
"provenance": [
{
"wikipedia_id": "579026",
"title": "Gravitational potential",
"section": "Section::::Mathematical form.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 262,
"text": "where \"G\" is the gravitational constant, and F is the gravitational force. The potential has units of energy per unit mass, e.g., J/kg in the MKS system. By convention, it is always negative where it is defined, and as \"x\" tends to infinity, it approaches zero.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23703",
"title": "Potential energy",
"section": "Section::::Gravitational potential energy.:Negative gravitational energy.\n",
"start_paragraph_id": 75,
"start_character": 0,
"end_paragraph_id": 75,
"end_character": 616,
"text": "As with all potential energies, only differences in gravitational potential energy matter for most physical purposes, and the choice of zero point is arbitrary. Given that there is no reasonable criterion for preferring one particular finite \"r\" over another, there seem to be only two reasonable choices for the distance at which \"U\" becomes zero: formula_34 and formula_35. The choice of formula_36 at infinity may seem peculiar, and the consequence that gravitational energy is always negative may seem counterintuitive, but this choice allows gravitational potential energy values to be finite, albeit negative.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "44017873",
"title": "Negative energy",
"section": "Section::::Gravitational energy.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 846,
"text": "The strength of the gravitational attraction between two objects represents the amount of gravitational energy in the field which attracts them towards each other. When they are infinitely far apart, the gravitational attraction and hence energy approach zero. As two such massive objects move towards each other, the motion accelerates under gravity causing an increase in the positive kinetic energy of the system. At the same time, the gravitational attraction - and hence energy - also increase in magnitude, but the law of energy conservation requires that the net energy of the system not change. This issue can only be resolved if the change in gravitational energy is negative, thus cancelling out the positive change in kinetic energy. Since the gravitational energy is getting stronger, this decrease can only mean that it is negative.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "44672530",
"title": "Quadratic integrate and fire",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 285,
"text": "where formula_2 is a real positive constant. Note that a solution to this differential equation is the tangent function, which blows up in finite time. Thus a \"spike\" is said to have occurred when the solution reaches positive infinity, and the solution is reset to negative infinity.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16704344",
"title": "Gauss's law for gravity",
"section": "Section::::Poisson's equation and gravitational potential.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 205,
"text": "Since the gravitational field has zero curl (equivalently, gravity is a conservative force) as mentioned above, it can be written as the gradient of a scalar potential, called the gravitational potential:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "579026",
"title": "Gravitational potential",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 468,
"text": "In classical mechanics, the gravitational potential at a location is equal to the work (energy transferred) per unit mass that would be needed to move the object from a fixed reference location to the location of the object. It is analogous to the electric potential with mass playing the role of charge. The reference location, where the potential is zero, is by convention infinitely far away from any mass, resulting in a negative potential at any finite distance.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10346",
"title": "Gravitational redshift",
"section": "Section::::Early historical development of the theory.\n",
"start_paragraph_id": 43,
"start_character": 0,
"end_paragraph_id": 43,
"end_character": 244,
"text": "Since the rate of clocks and the gravitational potential have the same derivative, they are the same up to a constant. The constant is chosen to make the clock rate at infinity equal to 1. Since the gravitational potential is zero at infinity:\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3vk9nm
|
if nothing can exceed the speed of light, how can we measure something as being 10,000 light years away without it taking 20,000 years to measure?
|
[
{
"answer": "If we were measuring the distance with something like a laser range finder, that would be true.\n\nObviously nobody measure the distance to such far away object be pointing a beam at them and waiting until it gets bounced back.\n\nThis method is actually used with close by objects like the moon where astronauts left some mirrors just for that.\n\nIn astronomy however many interesting object are far to far away for this to work. So other methods are used.\n\nOne for relative close by object is measuring the stellar parallax.\n\nYou know how you can hold a thumb up and close your left and right eye alternatively and have the thumb move left and right against the background.\n\nIf you stay awake in math class and learn trigonometry you can use the distance the thumb appears to move to calculate the distance of objects in the background.\n\nThe same works in astronomy but the left and right eye is our earth travelling half a year around the sum and instead of the thumb being a fixed distance we have the really far away stars as 'fixed' and calculate the distance of more nearby stars by comparison and how much they move back and fourth against the background.\n\nThat is why the unit parsec (for parallax arc second) is sometimes used instead of the similarly sized light-years for such distances.\n",
"provenance": null
},
{
"answer": "A lightyear is a set distance. If we know how many miles away something is, we can use the speed of light to figure out how many lightyears it is. \n\nBefore we knew how far the Earth was from the sun, everything in space was set at a relative scale. We said \"the earth is 1 Astronomical Unit away from the sun\" and used geometry to figure out the relative distances of the rest of the planets. We couldn't convert that to miles until we had two people in different countries observing a planet's position relative to stars. We knew how far apart the people were in miles, and with the geometry of parallax, we then learned how many miles were in an astronomical unit. \n\nAside on how parallax works: hold your arm straight out in front of you. Close one eye and use your thumb to block something. Now don't move your arm, but switch eyes. Notice how things seem to jump? That jump is related to the distance between your eyes. Put two telescopes at a known distance. Your thumb represents the star you're trying to measure. The background object represents background stars that are so far away, they don't appear to move.\n\nWhen parallax stops working, astronomers have different methods. Measuring spectral lines is one of them. Basically, we know stars have a lot of hydrogen. This means that they produce a lot of extra light at very specific wavelengths. This spectrum gets shifted around, but the relative position of the lines remains the same. If we can figure out which lines are which and how much they've been stretched or shifted, we can figure out (1) how far a star is, (2) whether there are actually two stars moving around each other, (3) if the light is from a single star or a whole distant galaxy, and much more. ",
"provenance": null
},
{
"answer": "The light scientists use to observe distant astronomical objects is light which was emitted a very long time ago; if it's 10,000 light-years away, they observe light that is 10,000 years old. The scientists did not shoot a beam of light at the object, they're observing the light which is already there.\n\nIn order to determine how far an object is, scientists use a whole series of tests, depending on the distances involved. For relatively near objects, (such at those within 10,000 light years) they use parallax, a technique you can actually try out for yourself. Hold a finger up a few centimeters in front of your nose, and then look at it with only your left eye, and then only your right eye. The fingers seems to move back and forth as you switch from eye to eye, yes? This is because your eyes are not in the same place; they need to look at different angles in order to see the finger (this is true of all objects you look at, but the effect is most noticeable with something right in front of your nose). If you were to measure those angles, and the distance between your eyes, you could construct a triangle using those measurements, and figure out the distance from you to the finger using simple geometry.\n\nScientists do the same thing to figure out the distance to stars and other objects. Of course, astronomical objects are much too far away for you to be able to do the one-eye-at-a-time trick; instead, what they do is measure the object's position in the sky at one point on Earth's orbit, and then measure it again when they're on the opposite side of the orbit. Since we know how big earth's orbit is, we now have two angles and a side, and can use geometry to work out how far away the star is.\n\nOf course, this doesn't work for objects which are very far away; past a certain point, even our most precise instruments aren't able to detect the change in position from one side of orbit to the other. Fortunately, Astronomers have other methods they can bust out when working on those sorts of distances. For more details, I suggest reading [this article](_URL_0_) which describes them in significant depth, without being TOO overwhelmingly technical.",
"provenance": null
},
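A small numerical sketch of the parallax triangle described in the answers above (plain Python; the function name and the example numbers are mine, chosen only for illustration):

```python
import math

def distance_from_parallax(baseline_m: float, shift_rad: float) -> float:
    """Distance to a star from its apparent angular shift.

    baseline_m: separation of the two observation points, e.g. the diameter
                of Earth's orbit for measurements taken six months apart.
    shift_rad:  total angular shift of the star against the far background.
    """
    # Half the baseline and half the shift form a right triangle whose
    # long leg is the distance to the star.
    return (baseline_m / 2) / math.tan(shift_rad / 2)

AU = 1.496e11                        # metres in one astronomical unit
ARCSEC = math.pi / (180 * 3600)      # one arcsecond in radians
LY = 9.461e15                        # metres in one light-year

# A star that appears to shift by 2 arcseconds over Earth's orbit:
d = distance_from_parallax(2 * AU, 2 * ARCSEC)
print(d / LY)                        # ~3.26 light-years, i.e. about one parsec
```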
{
"answer": null,
"provenance": [
{
"wikipedia_id": "24896900",
"title": "Rømer's determination of the speed of light",
"section": "Section::::Later discussion.:Did Rømer measure the speed of light?\n",
"start_paragraph_id": 58,
"start_character": 0,
"end_paragraph_id": 58,
"end_character": 479,
"text": "\"If one considers the vast size of the diameter KL, which according to me is some 24 thousand diameters of the Earth, one will acknowledge the extreme velocity of Light. For, supposing that KL is no more than 22 thousand of these diameters, it appears that being traversed in 22 minutes this makes the speed a thousand diameters in one minute, that is 16-2/3 diameters in one second or in one beat of the pulse, which makes more than 11 hundred times a hundred thousand toises;\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "740817",
"title": "Mathematical coincidence",
"section": "Section::::Some examples.:Numerical coincidences in numbers from the physical world.:Speed of light.\n",
"start_paragraph_id": 64,
"start_character": 0,
"end_paragraph_id": 64,
"end_character": 427,
"text": "The speed of light is (by definition) exactly 299,792,458 m/s, very close to 300,000,000 m/s. This is a pure coincidence, as the meter was originally defined as 1/10,000,000 of the distance between the Earth's pole and equator along the surface at sea level, and the Earth's circumference just happens to be about 2/15 of a light-second. It is also roughly equal to one foot per nanosecond (the actual number is 0.9836 ft/ns).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8065677",
"title": "Distance measures (cosmology)",
"section": "Section::::Details.:Light-travel distance.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 385,
"text": "This distance is the time (in years) that it took light to reach the observer from the object multiplied by the speed of light. For instance, the radius of the observable universe in this distance measure becomes the age of the universe multiplied by the speed of light (1 light year/year) i.e. 13.8 billion light years. Also see misconceptions about the size of the visible universe.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "878327",
"title": "Engineering notation",
"section": "Section::::Overview.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 358,
"text": "Another example: when the speed of light (exactly by the definition of the meter and second) is expressed as 3.00 × 10 m/s or 3.00 × 10 km/s then it is clear that it is between 299 500 km/s and 300 500 km/s, but when using 300 × 10 m/s, or 300 × 10 km/s, 300 000 km/s, or the unusual but short 300 Mm/s, this is not clear. A possibility is using 0.300 Gm/s.\n",
"bleu_score": null,
"meta": null
},
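A quick arithmetic check of the unit conversions quoted above, plus the foot-per-nanosecond figure mentioned a few records earlier (plain Python; the only input is the defined value of c):

```python
c = 299_792_458              # speed of light in m/s, exact by definition

print(c / 1e3)               # 299792.458  km/s  (about 3.00 x 10^5 km/s)
print(c / 1e6)               # 299.792458  Mm/s  (about 300 Mm/s)
print(c / 1e9)               # 0.299792458 Gm/s  (about 0.300 Gm/s)

# Distance covered in one nanosecond, converted to feet (1 ft = 0.3048 m):
print(c * 1e-9 / 0.3048)     # ~0.9836 ft per nanosecond
```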
{
"wikipedia_id": "28736",
"title": "Speed of light",
"section": "",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 291,
"text": "After centuries of increasingly precise measurements, in 1975 the speed of light was known to be with a measurement uncertainty of 4 parts per billion. In 1983, the metre was redefined in the International System of Units (SI) as the distance travelled by light in vacuum in 1/ of a second.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "142488",
"title": "Harmonic series (mathematics)",
"section": "Section::::Applications.\n",
"start_paragraph_id": 59,
"start_character": 0,
"end_paragraph_id": 59,
"end_character": 529,
"text": "Calculating the sum (iteratively) shows that to get to the speed of light the time required is only 97 seconds. By continuing beyond this point (exceeding the speed of light, again ignoring special relativity), the time taken to cross the pool will in fact approach zero as the number of iterations becomes very large, and although the time required to cross the pool appears to tend to zero (at an infinite number of iterations), the sum of iterations (time taken for total pool crosses) will still diverge at a very slow rate.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33710707",
"title": "Planck units",
"section": "Section::::Planck units and the invariant scaling of nature.\n",
"start_paragraph_id": 80,
"start_character": 0,
"end_paragraph_id": 80,
"end_character": 639,
"text": "If the speed of light \"c\", were somehow suddenly cut in half and changed to \"c\" (but with the axiom that \"all\" dimensionless physical quantities remain the same), then the Planck length would \"increase\" by a factor of 2 from the point of view of some unaffected observer on the outside. Measured by \"mortal\" observers in terms of Planck units, the new speed of light would remain as 1 new Planck length per 1 new Planck time – which is no different from the old measurement. But, since by axiom, the size of atoms (approximately the Bohr radius) are related to the Planck length by an unchanging dimensionless constant of proportionality:\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3mgqp5
|
what happens to food when exposed to air that justifies having a "consume within x days of opening" date?
|
[
{
"answer": "Air has bacteria in it. Smoothies are great environments for bacteria to live in. Once it's opened, the number of bacteria will rapidly become high enough to threaten your health (and the taste of the smoothie). \n\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "20305364",
"title": "Refrigerate after opening",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 568,
"text": "Once opened for consumption, the product is immediately exposed to atmospheric oxygen and floating dust particles containing bacteria and mold spores, and all protections from the preservation process are immediately lost. At room temperature, mold and bacteria growth resumes almost immediately, and warmer temperatures can lead to an explosion of growth that rapidly degrades the food product. This organism growth can result in the accumulation of poisonous bacterial substances in the food product such as botulin, that lead to food poisoning, sickness, or death.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "531611",
"title": "Foodborne illness",
"section": "Section::::Mechanism.:Incubation period.\n",
"start_paragraph_id": 129,
"start_character": 0,
"end_paragraph_id": 129,
"end_character": 489,
"text": "The delay between the consumption of contaminated food and the appearance of the first symptoms of illness is called the incubation period. This ranges from hours to days (and rarely months or even years, such as in the case of listeriosis or bovine spongiform encephalopathy), depending on the agent, and on how much was consumed. If symptoms occur within one to six hours after eating the food, it suggests that it is caused by a bacterial toxin or a chemical rather than live bacteria.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41076998",
"title": "Food labelling in Canada",
"section": "Section::::Requirements.:Date markings.\n",
"start_paragraph_id": 53,
"start_character": 0,
"end_paragraph_id": 53,
"end_character": 307,
"text": "It should be acknowledged that a durable life date is NOT an indicator of food safety. Once something is opened, depending on how it is stored, the shelf life can change. For example, an open box of crackers meant to expire in two weeks, will expire much faster should the seal be left open after each use.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "600368",
"title": "Shelf life",
"section": "Section::::Background.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 721,
"text": "\"Sell by date\" is a less ambiguous term for what is often referred to as an \"expiration date\". Most food is still edible after the expiration date. A product that has passed its shelf life might still be safe, but quality is no longer guaranteed. In most food stores, waste is minimized by using stock rotation, which involves moving products with the earliest sell by date from the warehouse to the sales area, and then to the front of the shelf, so that most shoppers will pick them up first and thus they are likely to be sold before the end of their shelf life. Some stores can be fined for selling out of date products; most if not all would have to mark such products down as wasted, resulting in a financial loss.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "47412303",
"title": "Expiration date",
"section": "Section::::United States.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 790,
"text": "\"Sell by date\" is a less ambiguous term for what is often referred to as an \"expiration date\". Most food is still edible after the expiration date. A product that has passed its shelf life might still be safe, but quality is no longer guaranteed. In most food stores, waste is minimized by using stock rotation, which involves moving products with the earliest sell by date from the warehouse to the sales area, and then to the front of the shelf, so that most shoppers will pick them up first and thus they are likely to be sold before the end of their shelf life. This is important, as consumers enjoy fresher goods, and furthermore some stores can be fined for selling out of date products; most if not all would have to mark such products down as wasted, resulting in a financial loss.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "35633282",
"title": "Ingestive behaviors",
"section": "Section::::Satiety signals.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 331,
"text": "There are two primary sources of signals that stop eating: short-term signals come from immediate effects of eating a meal, beginning before food digestion, and long-term signals, that arise in adipose tissue, control the intake of calories by monitoring the sensitivity of brain mechanisms to hunger and satiety signals received.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "58552133",
"title": "Exercise induced anaphylaxis",
"section": "Section::::Food-dependent exercise-induced anaphylaxis (FDEIAn).\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 216,
"text": "Ingestion of the trigger food most often precedes exercise by minutes or hours in cases of an attack; there are, however, reported incidents of attacks occurring when ingestion transpires shortly following activity.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3g8j76
|
kernels (computing)
|
[
{
"answer": "A kernel does all kind of commonly needed stuff for you:\n\n* interacting with hardware and provide a standardized interface (so you don't have to write separate code for - as an example - each different sound card in each program that wants to use sound)\n* make sure your programs don't mess with each other, like overwriting each others memory\n* providing mechanisms for controlled communication between programs and components (eg. your program wants to send a message to the notification area, which is another program)\n* other useful functionality (getting random numbers, implementing common communication protocols, and so on)",
"provenance": null
},
{
"answer": "A kernel standardized interacting with hardware. \n\nFrom a programmers perspective all you normally see is read, write, open, close. While in reality all these operations work on tapes, CD's, DVD, thumb drives, hard drives, and TCP/IP connections. \n\nAll of these devices are different. Have different physical jobs, and different standards. The OS manages that for you.\n\nAt the same time it manages memory, CPU clock frequency, power saving, task switching, and library loading. \n\nThese things are done to save the programmer time. So you don't have to worry about what device your talking too.",
"provenance": null
},
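To make the point above concrete (this sketch is not part of the original answer), here is a minimal Python example. The kernel exposes very different things through the same descriptor-based open/read/write/close interface, so the same helper works on a plain disk file and on an in-kernel pipe; the file name `demo.txt` and the helper name `copy_bytes` are made up for illustration.

```python
import os

# The kernel hides very different "devices" behind the same file-descriptor
# interface.  The helper below neither knows nor cares what the descriptors
# actually refer to.

def copy_bytes(src_fd, dst_fd, chunk=4096):
    """Read from one descriptor and write to another, whatever they are."""
    while True:
        data = os.read(src_fd, chunk)
        if not data:
            break
        os.write(dst_fd, data)

# Case 1: an ordinary file on disk.
fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY | os.O_TRUNC)
os.write(fd, b"hello from a plain file\n")
os.close(fd)

# Case 2: a pipe - a purely in-kernel object with no disk behind it.
read_end, write_end = os.pipe()
os.write(write_end, b"hello from a pipe\n")
os.close(write_end)          # closing the write end signals EOF to the reader
copy_bytes(read_end, 1)      # 1 = standard output, also just a descriptor
os.close(read_end)
```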
{
"answer": "The kernel is the heart of an operating system. The most basic kernel does very few things. It manages and restricts access to memory, it manages processes and threads, it manages communication between processes, and controls access to the disks.\n\n1) Memory access. In order to run more than one program at a time, you need to have space in memory set aside for each program to use. Each program needs to be restricted to using it's own space in memory, and *only* it's own space. If you don't do this, badly behaving programs will damage data from other programs. Commonly, each program won't even know the memory used for other processes exists.\n\n2) Process management. In order to run more than one program at once, you need some code to keep track of all the programs, and to schedule time for each of them to use the processor.\n\nIn most desktop and server operating systems, this is done in some semblance of fairness, allowing each program to request time on the processor, and rotating through each program in turn to give roughly equal access to each, though a process can give up some of it's time voluntarily.\n\nThere are other ways to do this, like real-time computing, which allows programs to request hard deadlines, a time that they *must* be finished processing by. This is often used in some kind of control system, where the computer must take input, and process it quickly enough to respond to whatever is happening in the real world.\n\nThere's also the fact that most modern computers have more than one processor available. This can allow programs to split themselves into multiple pieces that are more or less independent, in order to run in parallel to each other. Making sure that programs actually gain performance out of this takes some adjustments to how you schedule processes.\n\n3) Communication between processes. Often, programs will be divided into smaller units that perform some task, and then hand a result off to another part of the same program. Or you want to be able to take advantage of multiple processors available, and run some of your code on each. Or your code uses some standard library or device driver in order to perform a standard task, or to talk to some piece of hardware.\n\nTo split your code into multiple processes, you need a way to communicate. In kernel land, this mostly takes the form of semaphores, pipes, message queues and shared memory.\n\nSemaphores let two or more programs control how they execute in relation to each other. It's essentially a flag that each program can raise and lower to let the other process know they've reached some specific point in their code. They're commonly used to let parallel processes share some common resource without stomping all over the other processes using it. If the flag is up, one program is using it, and any other programs using the same resource shouldn't touch it until the flag goes down.\n\nPipes are common in the Unix based world. They take the output from one program, and feed it into another. Usually, they're very temporary, only existing long enough to transition from one program to another.\n\nMessage queues are used for longer term communication. One program can put data into a queue, and when another related program gets it's turn on the processor, it can read from the queue. Messages stack up in sequence, so the reading program reads them in the same order they were sent.\n\nShared memory is actually just a careful breaking of a concern from 1). 
Normally, each program should be kept separate in memory, so they don't stomp all over each other. But, if they want to communicate large pieces of data, the kernel can set aside another piece of memory that they both have access to. Each program can read and write to this spot in memory when it's their turn.\n\n4) Disk access. Programs need to have access to disk, and while it's less critical than memory and processor time that it's shared equally, you also don't want programs stomping all over each other's data.\n\nThe more important point is actually that disk access is slow. In computing time scales, it's glacially slow. It's so slow that it would be a really stupid idea for the processor to be idle while waiting for a response to come back when it could run through a few more turns for other programs in the time it would have just been sitting there.\n\nThe kernel controls this. When a program requests disk input or output, it will send the request off to the disk, then put the program in a waiting state. When the response comes back, it triggers an interrupt, a special circuit in the processor that stops everything and loads up the kernel to process the response. The kernel then loads whatever program was waiting for the response, and lets it finish what it was doing as if the processor *was* sitting idle instead of letting other programs cut in line.\n\n---\n\nYou'll notice that all of these functions for the kernel have something in common. They're all about letting you run more than one program at a time. Before we'd written any operating system's and kernel's, running one program at a time is just how using a computer worked. You'd manually load a program into memory, manually start it processing, and the computer would do only that one thing until it had finished and gave you some output.\n\nAlso, it should be pointed out that modern kernels do more than just these four things. They typically come with all kinds of additional tools to make writing programs easier. A modern kernel is a one-stop shop for every programming shortcut you might need.\n\nBut, if you wanted to write something that would be called a kernel, at a minimum, you'd want it to do these four things.",
"provenance": null
},
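As a rough, hedged illustration of the IPC mechanisms the answer above describes (message queues and semaphores), the following Python sketch uses the standard `multiprocessing` module, with a `Lock` standing in for a binary semaphore. The worker function and message strings are invented for the example.

```python
from multiprocessing import Process, Queue, Lock

# A queue carries ordered messages between processes; a lock (a binary
# semaphore) ensures only one process writes to the terminal at a time.

def worker(name, queue, lock):
    for i in range(3):
        queue.put(f"{name}: message {i}")   # message passing via a kernel-backed pipe
    with lock:                              # "raise the flag" while printing
        print(f"{name} finished queuing")

if __name__ == "__main__":
    q, lock = Queue(), Lock()
    procs = [Process(target=worker, args=(f"proc-{n}", q, lock)) for n in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    while not q.empty():                    # drain messages in arrival order
        print(q.get())
```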
{
"answer": null,
"provenance": [
{
"wikipedia_id": "21346982",
"title": "Kernel (operating system)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 477,
"text": "The kernel is a computer program that is the core of a computer's operating system, with complete control over everything in the system. On most systems, it is one of the first programs loaded on start-up (after the bootloader). It handles the rest of start-up as well as input/output requests from software, translating them into data-processing instructions for the central processing unit. It handles memory and peripherals like keyboards, monitors, printers, and speakers.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3740391",
"title": "Comparison of operating system kernels",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 224,
"text": "A kernel is the most fundamental component of a computer operating system. A comparison of system kernels can provide insight into the design and architectural choices made by the developers of particular operating systems.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1350138",
"title": "Micro-Controller Operating Systems",
"section": "Section::::µC/OS-II.:Kernels.\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 452,
"text": "The kernel is the name given to the program that does most of the housekeeping tasks for the operating system. The boot loader hands control over to the kernel, which initializes the various devices to a known state and makes the computer ready for general operations. The kernel is responsible for managing tasks (i.e., for managing the CPU’s time) and communicating between tasks. The fundamental service provided by the kernel is context switching.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "42100545",
"title": "Glossary of operating systems terms",
"section": "Section::::K.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 323,
"text": "BULLET::::- kernel: In computing, the kernel is a computer program that manages input/output requests from software and translates them into data processing instructions for the central processing unit and other electronic components of a computer. The kernel is a fundamental part of a modern computer's operating system.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6097297",
"title": "Linux",
"section": "Section::::Hardware support.\n",
"start_paragraph_id": 67,
"start_character": 0,
"end_paragraph_id": 67,
"end_character": 794,
"text": "The Linux kernel is a widely ported operating system kernel, available for devices ranging from mobile phones to supercomputers; it runs on a highly diverse range of computer architectures, including the hand-held ARM-based iPAQ and the IBM mainframes System z9 or System z10. Specialized distributions and kernel forks exist for less mainstream architectures; for example, the ELKS kernel fork can run on Intel 8086 or Intel 80286 16-bit microprocessors, while the µClinux kernel fork may run on systems without a memory management unit. The kernel also runs on architectures that were only ever intended to use a manufacturer-created operating system, such as Macintosh computers (with both PowerPC and Intel processors), PDAs, video game consoles, portable music players, and mobile phones.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22194",
"title": "Operating system",
"section": "Section::::Components.:Kernel.:Multitasking.\n",
"start_paragraph_id": 105,
"start_character": 0,
"end_paragraph_id": 105,
"end_character": 505,
"text": "An operating system kernel contains a scheduling program which determines how much time each process spends executing, and in which order execution control should be passed to programs. Control is passed to a process by the kernel, which allows the program access to the CPU and memory. Later, control is returned to the kernel through some mechanism, so that another program may be allowed to use the CPU. This so-called passing of control between the kernel and applications is called a context switch.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22194",
"title": "Operating system",
"section": "Section::::Components.:Kernel.\n",
"start_paragraph_id": 77,
"start_character": 0,
"end_paragraph_id": 77,
"end_character": 478,
"text": "With the aid of the firmware and device drivers, the kernel provides the most basic level of control over all of the computer's hardware devices. It manages memory access for programs in the RAM, it determines which programs get access to which hardware resources, it sets up or resets the CPU's operating states for optimal operation at all times, and it organizes the data for long-term non-volatile storage with file systems on such media as disks, tapes, flash memory, etc.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
6lcr5e
|
When did people start calling themselves "Italians" and "Germans"?
|
[
{
"answer": "[This question came up about a year ago](_URL_0_), although it's a good question that can certainly merit more discussion.\n\nFor Italy, the short answer is, \"When Napoleon crowned himself King of Italy.\" Prior to then the concept of an Italian language and culture was solely the purview of a small intellectual elite, who would have nonetheless identified more with their local identity more so than a common one spanning the whole peninsula. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "152735",
"title": "Germans",
"section": "Section::::Etymology.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 264,
"text": "The English term \"Germans\" is only attested from the mid-16th century, based on the classical Latin term \"Germani\" used by Julius Caesar and later Tacitus. It gradually replaced \"Dutch\" and \"Almains\", the latter becoming mostly obsolete by the early 18th century.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "152735",
"title": "Germans",
"section": "Section::::Etymology.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 1036,
"text": "While in most Romance languages the Germans have been named from the Alamanni (in what became Swabia) (some, like standard Italian \"tedeschi\", retain an older borrowing of the endonym, while the Romanian 'germani' stems from the historical correlation with the ancient region of Germania), the Old Norse, Finnish, and Estonian names for the Germans were taken from that of the Saxons. In Slavic languages, the Germans were given the name of \" \" (singular \"\"), originally meaning \"not us\". A variety old Slavic dialects were present and dominant in the area of modern Germany. However, under the Celto-German influence and furthered by the violent Partitions of Poland, the modern Polish language has in many of its words has adopted the letter \"G\" in place of \"H\". The original common pronunciation was Her-man which relates directly to \"Herr\" in German. \"Her\" in old Slavic meant \"Mountain\" and the Slavs (particularly the Po-Lech) used the combination of words \"Her-Man\" to describe the dwellers of the Alps as the \"Mountain People\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4280721",
"title": "Italians in Germany",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 447,
"text": "Italians in Germany consist of ethnic Italian migrants to Germany and their descendants, both those originating from Italy as well as from among the communities of Italians in Switzerland. Most Italians moved to Germany for reasons of work, others for personal relations, study, or political reasons. Today, Italians in Germany form one of the largest Italian diasporas in the world and account for one of the largest immigrant groups in Germany.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8308199",
"title": "List of ethnic slurs by ethnicity",
"section": "Section::::Broader ethnic categories.:Mediterranean/Southern European.\n",
"start_paragraph_id": 115,
"start_character": 0,
"end_paragraph_id": 115,
"end_character": 213,
"text": "BULLET::::- Dago: In the U.S., refers specifically to Italians. In UK and Commonwealth, may refer to Italians, Spaniards, Portuguese, and potentially Greek peoples, possibly derived from the Spanish name \"Diego\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6622110",
"title": "Italians in the United Kingdom",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 532,
"text": "Italians in the United Kingdom, also known as British Italians or colloquially Britalians, are citizens or residents of the United Kingdom of Italian heritage. The phrase may refer to someone born in the United Kingdom of Italian descent, someone who has emigrated from Italy to the United Kingdom or someone born elsewhere (e.g. the United States), who is of Italian descent and has migrated to the UK. More specific terms used to describe Italians in the United Kingdom include: Italian English, Italian Scots, and Italian Welsh.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "38580553",
"title": "Mozart's nationality",
"section": "Section::::\"Germany\" as cultural concept.\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 509,
"text": "However, the word \"German\" (in German: \"deutsch\") was in use well before this time, designating the people of central Europe who shared German language and culture. To give an example, when in 1801 Mozart's old colleague Emanuel Schikaneder opened the Theater an der Wien in Vienna, a Leipzig music journal praised the new theater as \"the \"most comfortable and satisfactory in the whole of Germany\". The city of Salzburg, owing to its fine ecclesiastical architecture, was sometimes called \"the German Rome\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "55527752",
"title": "German Club, Adelaide",
"section": "Section::::History.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 331,
"text": "Early organizations to which German immigrants specifically belonged include the Macclesfield United English and German Rifle Club (1851), German Rifle Club (1853), German Glee Club, and several Liedertafels, notably Adelaide and Tanunda. Several German-language newspapers appeared, notably the \"Südaustralische Zeitung\" in 1849.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2shkpc
|
In the second world war, was there ever an incident of a ship being captured by one side, then pressed into service for use against its former operator?
|
[
{
"answer": "The IJN used several ships captured from the British, Dutch and Americans as convoy escorts. These weren't captured in the traditional sense - they were never boarded. Instead, they were scuttled in port by their former owners when capture of the port seemed likely. The Japanese would later salvage them, and press them into service against their former owners. \n\nThe RN lost two ships in this way. HMS Thracian was on station at Hong Kong in December 1941. She remained at Hong Kong to provide fire support to the garrison, while the remainder of her squadron left for Singapore. She was scuttled after being heavily damaged by Japanese aircraft. She would be refloated in July 1942, and operated as patrol boat PB-101. She was recaptured in Yokosuka in 1945, and scrapped in 1946. The river gunboat HMS Moth would also be captured at Hong Kong, and operated on inland waterways in Japanese-occupied China. \n\nThe USN lost six ships in this way. USS Stewart, a Clemson class destroyer, fled the Philippines, and operated with ABDA forces in Indonesia. Heavily damaged after the Battle of the Bandung Strait, she was put into a floating drydock in Surabaya. She was not repaired before Java fell, and the drydock was scuttled with her inside. She was raised in 1943, and put back into service as PB-102. She would also be recaptured at the end of the war, and sunk as a target in 1946. The minesweeper USS Finch, fleet tug USS Genesse and Philippine customs vessel Arayat would be captured in Manila in various states of repair. They would enter Japanese service as PB-103, PB-107 and PB-105 respectively. Two gunboats, the USS Wake and USS Luzon were also captured, and put into service as the gunboats Tatara and Karatsu. Tatara fulfilled a similar role to the ex-HMS Moth, while Karatsu operated in the Philippines. PB-107 was destroyed by American carrier aircraft in Manila Bay at the start of November 1944, while PB-105 would be sunk by PT boats at the end of the month while escorting a convoy near Leyte. USS Finch would be sunk by aircraft from TF-38 while escorting a convoy off Vietnam in January 1945. Four Dutch ships were also captured, including one destroyer and three patrol boats. Another ship, the minesweeper Regulus, would be captured while under construction.\n\nWhile purpose built warships were repurposed, in some cases merchant ships could be reused. The first escort carrier was built by the RN on the hull of a merchant ship captured from the Germans. The banana carrier Hannover was captured in the Caribbean in 1939. She was operated by the Merchant Navy until January 1941, when she was selected to become an escort carrier. She was the first to enter service, being commissioned in June 1941. She was renamed HMS Audacity in service. Audacity would operate with the RN for 6 months, escorting convoys to Gibraltar. Her fighters claimed 7 German aircraft before she was sunk escorting convoy HG76 by U-751.\n\nSources:\nTabulated Records of Movement for the Japanese Navy, available at [_URL_1_](_URL_0_) - look under the sections for escorts and gunboats.\n\nThe Fleet Air Arm Handbook 1939-1945, David Wragg, 2003, Sutton Publishing.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "47536336",
"title": "October 1914",
"section": "Section::::October 17, 1914 (Saturday).\n",
"start_paragraph_id": 157,
"start_character": 0,
"end_paragraph_id": 157,
"end_character": 390,
"text": "BULLET::::- While searching for survivors during the aftermath of Battle off Texel, the was seized, even though war conventions stipulated for navies never to do so. The Royal Navy justified the seizure as coded radio messages were monitored coming from the ship, the ship's wireless was destroyed, and the crew was observed throwing documents overboard. The ship was renamed SS \"Huntley\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "35695594",
"title": "Sinking of the RMS Lusitania",
"section": "Section::::Controversies.:Cruiser rules and exclusion zones.\n",
"start_paragraph_id": 181,
"start_character": 0,
"end_paragraph_id": 181,
"end_character": 647,
"text": "The \"Prize rules\" or \"Cruiser rules\", laid down by the Hague Conventions of 1899 and 1907, governed the seizure of vessels at sea during wartime, although changes in technology such as radio and the submarine eventually made parts of them irrelevant. Merchant ships were to be warned by warships, and their passengers and crew allowed to abandon ship before they were sunk, unless the ship resisted or tried to escape, or was in a convoy protected by warships. Limited armament on a merchant ship, such as one or two guns, did not necessarily affect the ship's immunity to attack without warning, and neither did a cargo of munitions or materiel.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33584865",
"title": "List of ships captured in the 19th century",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 1386,
"text": "Throughout naval history during times of war battles, blockades, and other patrol missions would often result in the capture of enemy ships or those of a neutral country. If a ship proved to be a valuable prize efforts would sometimes be made to capture the vessel while inflicting the least amount of damage as was practically possible. Both military and merchant ships were captured, often renamed, and then used in the service of the capturing country's navy, or in many cases sold to private individuals who would break them up for salvage, or use them as merchant vessels, whaling ships, slave ships, or the like. As an incentive to search far and wide for enemy ships, the proceeds of the sale of the vessels and their cargoes were divided up as prize money among the officers and crew of capturing crew members with the distribution governed by regulations the captor vessel's government had established. Throughout the 1800s war prize laws were established to help opposing countries settle claims amicably. Private ships were also authorized by various countries at war through a Letter of marque, legally allowing a ship and commander to engage and capture vessels belonging to enemy countries. In these cases contracts between the owners of the vessels on the one hand, and the captains and the crews on the other, established the distribution of the proceeds from captures.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8078890",
"title": "Thrasher incident",
"section": "Section::::Background.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 604,
"text": "After the issuance of captured British orders, all merchant vessels were directed to paint over their names and ports of registry and to fly under the flag of a neutral nation. They were instructed not to stop when challenged by a submarine but instead to open fire at once or, if unarmed, to attempt to ram the sub. In response, German orders came from Kaiser Wilhelm who declared that as of February 18, 1915, the waters surrounding England, including the Channel, were a war zone. Any merchant ship found in that zone would be immediately destroyed without first determining if the ship were neutral.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "198201",
"title": "Liberty ship",
"section": "Section::::History and service.:Use in battle.\n",
"start_paragraph_id": 48,
"start_character": 0,
"end_paragraph_id": 48,
"end_character": 469,
"text": "On 27 September 1942 the was the first (and only) US merchant ship to sink a German surface combatant during the war. Ordered to stop, \"Stephen Hopkins\" refused to surrender, the heavily armed German commerce raider and her tender with one machine gun opened fire. Although greatly outgunned, the crew of \"Stephen Hopkins\" fought back, replacing the Armed Guard crew of the ship's lone gun with volunteers as they fell. The fight was short, and both ships were wrecks.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5219762",
"title": "Prize (law)",
"section": "Section::::End of privateering and the decline of naval prizes.\n",
"start_paragraph_id": 35,
"start_character": 0,
"end_paragraph_id": 35,
"end_character": 238,
"text": "Under contemporary international law and treaties, nations may still bring enemy vessels before their prize courts, to be condemned and sold. But no nation now offers a share to the officers or crew who risked their lives in the capture:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "28508686",
"title": "Armed trawler Ethel & Millie",
"section": "Section::::Fate.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 466,
"text": "They were not reported as prisoners of war, and none returned to Britain at the end of hostilities. The suspicion at the time, and subsequently, is that they were disposed of by the U-boat crew, for example by being left to drown while the U-boat submerged. The German government had made it clear they regarded the crews of merchant ships who fought back against U-boat attacks as \"francs-tireurs\", and thus liable to execution, as had happened in the Fryatt case.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2uunfv
|
If one were to throw a magnet at a metal object, would it accelerate before it hits the metal? If so, where does the change in kinetic energy come from?
|
[
{
"answer": " > would it accelerate before it hits the metal?\n\nYes, it would. The extra kinetic energy comes the potential energy that was stored in the system due to two magnets between positioned at a distance from each other.",
"provenance": null
},
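For readers who want the bookkeeping spelled out, here is a sketch of the energy balance in LaTeX. It assumes the attraction can be modelled as a pair potential U(r) that decreases as the separation r shrinks, and it ignores eddy-current and impact losses, so it is an idealisation rather than a full account.

```latex
% A sketch of the energy bookkeeping for the thrown magnet (assumption: the
% attraction between magnet and metal is a pair potential U(r); losses ignored).
\[
\tfrac{1}{2} m v_{\mathrm{impact}}^{2}
  \;=\;
\tfrac{1}{2} m v_{\mathrm{throw}}^{2}
  \;+\;
\bigl[\, U(r_{\mathrm{throw}}) - U(r_{\mathrm{impact}}) \,\bigr]
\]
% The kinetic-energy gain equals the drop in magnetic potential energy, so
% nothing appears from nowhere: it was stored in the field configuration
% before the throw.
```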
{
"answer": null,
"provenance": [
{
"wikipedia_id": "229103",
"title": "Spot welding",
"section": "Section::::Physics.:Fields.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 472,
"text": "During spot welding, the large electric current induces a large magnetic field, and the electric current and magnetic field interact with each other to produce a large magnetic force field too, which drives the melted metal to move very fast at a velocity up to 0.5 m/s. As such, the heat energy distribution in spot welding could be dramatically changed by the fast motion of the melted metal. The fast motion in spot welding can be observed with high speed photography.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "69656",
"title": "Reactive armour",
"section": "Section::::Electric reactive armour.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 489,
"text": "Another electromagnetic alternative to ERA uses layers of plates of electromagnetic metal with silicone spacers on alternate sides. The damage to the exterior of the armour passes electricity into the plates causing them to magnetically move together. As the process is completed at the speed of electricity the plates are moving when struck by the projectile causing the projectile energy to be deflected whilst the energy is also dissipated in parting the magnetically attracted plates.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "758604",
"title": "Electron-beam welding",
"section": "Section::::Physics of electron-beam heating.:Penetration of electron beam during welding.\n",
"start_paragraph_id": 35,
"start_character": 0,
"end_paragraph_id": 35,
"end_character": 747,
"text": "When electrons from the beam impact the surface of a solid, some of them may be reflected (as \"backscattered\" electrons), while others penetrate the surface, where they collide with the particles of the solid. In non-elastic collisions they lose their kinetic energy. It has been proved, both theoretically and experimentally, that they can \"travel\" only a very small distance below the surface before they transfer all their kinetic energy into heat. This distance is proportional to their initial energy and inversely proportional to the density of the solid. Under conditions usual in welding practice the \"travel distance\" is on the order of hundredths of a millimeter. Just this fact enables, under certain conditions, fast beam penetration.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "65907",
"title": "Elastic collision",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 440,
"text": "During the collision of small objects, kinetic energy is first converted to potential energy associated with a repulsive force between the particles (when the particles move against this force, i.e. the angle between the force and the relative velocity is obtuse), then this potential energy is converted back to kinetic energy (when the particles move with this force, i.e. the angle between the force and the relative velocity is acute).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1831487",
"title": "Eddy current brake",
"section": "Section::::Lab experiment.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 374,
"text": "In physics education a simple experiment is sometimes used to illustrate eddy currents and the principle behind magnetic braking. When a strong magnet is dropped down a vertical, non-ferrous, conducting pipe, eddy currents are induced in the pipe, and these retard the descent of the magnet, so it falls slower than it would if free-falling. As one set of authors explained\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "857973",
"title": "Contact electrification",
"section": "Section::::Electrolytic-metallic contact.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 1340,
"text": "If a piece of metal is touched against an electrolytic material, the metal will spontaneously become charged, while the electrolyte will acquire an equal and opposite charge. Upon first contact, a chemical reaction called a 'half-cell reaction' occurs on the metal surface. As metal ions are transferred to or from the electrolyte, and as the metal and electrolyte become oppositely charged, the increasing voltage at the thin insulating layer between metal and electrolyte will oppose the motion of the flowing ions, causing the chemical reaction to come to a stop. If a second piece of a different type of metal is placed in the same electrolyte bath, it will charge up and rise to a different voltage. If the first metal piece is touched against the second, the voltage on the two metal pieces will be forced closer together, and the chemical reactions will run constantly. In this way the 'contact electrification' becomes continuous. At the same time, an electric current will appear, with the path forming a closed loop which leads from one metal part to the other, through the chemical reactions on the first metal surface, through the electrolyte, then back through the chemical reactions on the second metal surface. In this way, contact electrification leads to the invention of the Galvanic cell or battery. \"See also: Dry pile\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17093263",
"title": "List of Inhumans",
"section": "Section::::Marvel Cinematic Universe.:Appearing in Agents of S.H.I.E.L.D..\n",
"start_paragraph_id": 335,
"start_character": 0,
"end_paragraph_id": 335,
"end_character": 364,
"text": "BULLET::::- Joey Gutierrez – An Inhuman with the ability to melt any metallic object in a range of three meters around him, especially by touching them. The melting effect does not only affect the item he is touching, but also other items in a close range. His power tends to cause explosions when touching vehicles, as the fuel explodes due to the decompression.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
6wkps8
|
according to data we have discovered 14% of all organisms on earth. where does this number come from, if the other 86% of haven't been discovered yet (and therefore we don't know if they exist)?
|
[
{
"answer": "Well it's a bit outside my area of expertise, but if I had to make such an estimate, I would look at the rate at which we're discovering new species, and how that rate has changed over time. That would allow me to estimate how many species we're likely to find in the future. If the number is much larger than the number of species we already know, that would get your 14%.",
"provenance": null
},
{
"answer": "It's based on the difficulty of finding new species, if everywhere we looked we found a new species, then we must not have found very many. If we've looked for decades and nobody found any new species then it's likely because there aren't any more left to be found.\n\nObviously, we find new species at a fairly consistent rate, and there is a very large number known already. I'm sure someone has used these numbers to make a guess, I have no idea how accurate they are.",
"provenance": null
},
{
"answer": "Not an exact answer but two that may help.\n\nMany of the worlds species are insects, and many of those are thought to be beetles.\n\nOne bit of information that supports this theory comes from an attempt to document the number of species within a, relatively, small forest encampment. The researchers found that with each tree that was shaken to gather the insects on them, that there were many unique beetle species. Almost a new species per tree.\n\nAditionally, and unrelated to beetles, is that fact that we havent explored our oceans all that much. And that means that we've yet to find all the species that inhabit the oceans.",
"provenance": null
},
{
"answer": "A large chunk of it would come from barely explored / completely unexplored areas, i would be curious to see the source from which you got these 14/86 numbers and its age. \n\nBut back to the point, we barely understand the ocean and have trouble even fathoming the deep ocean. \n\nbased on the phrasing of organism this would include bacteria and other single cell organisms, which is where a vast majority of that percentage would be made up, as well as Extremophile {things that live in previously thought unlivable places, super cold/hot/acidic ect.}\n}\n\ni hope that helped, i'll try to get back to any replies \n \np.s. this is not my field of expertise just a thing i find interesting and have recently been refreshed on with a book and some youtube vids.",
"provenance": null
},
{
"answer": "(Undergrad Marine Bio student here) From my understanding, organisms that we may see as a singular species because they look identical could be very genetically different, to the extent that what we thought one was one species 10/20 years ago could be 2 or 3 different examples. \nOfcourse there's going to be species we haven't discovered but I think atleast half of that number is down to misidentification or lack of technology to analyse DNA. \nThe term species has changed a lot over the century too, which doesn't help. \nHope this is alright; I can find some examples if you want. ",
"provenance": null
},
{
"answer": "Statistics like this are created based on looking at what is identified within a group.\n\nPerhaps an easier example.\n\nLet's say people are inspecting defects in a product. Someone in charge intentionally adds 10 defects. Then they watch and see what comes through the line, what is discovered by the process. If people only find 3 of the defects, then they can estimate they're catching 30% of the defects overall, letting 70% of the defects go through. On the other hand, if all 10 defects are discovered, then they know they're catching all or nearly all of the defects. The percentage of things they know about should roughly match the percentage of things they don't know about.\n\nIt applies to other statistics as well, like crime stats. They can look at crimes they know happened but weren't reported through official channels, and look at crimes they know about and were reported. Looking at the difference shows about how many crimes go unreported. It is not exact, but if people are careful about how they create the stats they can be fairly accurate. \n\nFor counting species there are several ways it can be done. One way is like above, to have one group track the number of species in an area and another group figure out how many are new. Another method is a linear regression, figuring out an approximately how many species there should be based on estimates and comparing it to how many have actually been identified.\n\nAlso, most of the species that aren't discovered are small things. We're down to small numbers of new birds and mammals, often they are sub-species that get reclassified as a new species, or they're highly specialized species living in a remote and small geographic area. \n\nIt is mostly bugs, fungi, and other small organism that are being discovered in large numbers. These are things that are hard to spot and identify, many only identified because of genetic testing on tiny or microscopic organisms.\n",
"provenance": null
},
{
"answer": "It wouldn't surprise me of 84% is bacteria, viruses, plants and minuscule animals and only 2% are \"normal\" animals.",
"provenance": null
},
{
"answer": "At least in the microbial world, scientists have catalogued genes found in a random spoon of soil and found that only a minuscule percent of genes belonged to organisms they knew of, which led them to conclude that only 2% of microbes are classified based on percent of known and unknown genes, or something to that effect. \n\nEdit: ELI5: People have looked at DNA found in dirt and figured out we only have seen very small percent of it before and most of it is unknown. ",
"provenance": null
},
{
"answer": "Assuming this includes microscopic organisms, such as bacteria, we may not even have methods available to detect some existing species. Some bacteria can't be differentiated from one another unless using genomic data or other molecular markers, and these methods weren't invented until recently. There could be millions of microorganism species left to discover.",
"provenance": null
},
{
"answer": "So it's like in Pokémon where you only know the outline of the animal but have to actually see it to get the full picture/\"discover\" it.",
"provenance": null
},
{
"answer": "Humans probably wrought their extinction. Just like half of the vertebrae we have annihilated in our timely progresión. ",
"provenance": null
},
{
"answer": "Fun fact related to this:\n\n25% of all known organisims in the animal kingdom are beetles: come from the order, coleoptera.\n\nI learned that from visiting the insectarium, in Montreal.",
"provenance": null
},
{
"answer": "You just make a grid, say 1 foot by 1 foot and you check and see how many species are known within that grid. ",
"provenance": null
},
{
"answer": "All sounds good, next question do we Exist?",
"provenance": null
},
{
"answer": "My knowledge of cryptids and inner earth beings as well as air beings suggest this number is accurate. ",
"provenance": null
},
{
"answer": "An estimated 86% of species are undiscovered, but not 86% of organisms. Many many many of this 86% will either have teeny-tiny populations, or will be so similar to already-discovered species that only an expert could tell the difference. \n\nAnd don't forget boring stuff that's easy to overlook like all the different kinds of lichen and bacteria. \n\nI don’t want to be too much of a killjoy though, there's almost certainly a bunch of mad shit in the deep oceans we haven't discovered yet. When we develop the technology to properly explore them it's going to be a whole new age of discovery for biologists. ",
"provenance": null
},
{
"answer": "Scientists look at the rate of new species discovery each year for the past however many years and fit it to what is called a Logistic Growth Curve. The simplest way to think about it is like this: every time someone encounters what they believe is a new species they do some work to verify if it's really new or not. If the rate at which this actually results in a new species discovery is 50% then we are at the 'inflection point' of the curve and we know we've identified about half of the species on the planet (with slight variance). This percentage is tracked every year for the scientific community as a whole and from millions of these data point we can predict both where we are on the curve right now and where the curve will eventually level off. We aren't anywhere near the inflection point yet so the rate keeps increasing every year. \n\nWhere we are right now tells use the percentage discovered and where it levels off is the scientific prediction for the total number of species on the planet. To the average person this might seem really uncertain but the statistical significance of a result with millions of data points would only be off by a tiny fraction of a percent 99.9...% of the time at which time scientists use the data to claim a high degree of certainty.",
"provenance": null
},
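The "fit the cumulative discovery history and read off the plateau" idea sketched above can be illustrated in a few lines of Python. The numbers below are toy values generated from a known logistic purely so the fit has something sensible to recover; they are not real discovery records, and SciPy's `curve_fit` is just one convenient way to do the fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Cumulative species described by year t; the plateau K is the estimated total."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Toy discovery history (NOT real data): generated from a known logistic with
# 10 million total species, plus a little multiplicative noise.
rng = np.random.default_rng(42)
years = np.linspace(0, 200, 21)
true_total = 1.0e7
observed = logistic(years, true_total, 0.04, 100) * (1 + rng.normal(0, 0.01, years.size))

# Fit the curve and read off the plateau and the fraction described so far.
(K, r, t0), _ = curve_fit(logistic, years, observed, p0=[5e6, 0.02, 80])
print(f"Estimated total species  : {K:,.0f}")
print(f"Fraction described so far: {observed[-1] / K:.1%}")
```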
{
"answer": "I think the number is a fraud. The number has been created by people looking for grants. The more research which can be paid for, the better, in their eyes.",
"provenance": null
},
{
"answer": "First of all - I found this story shocking and disturbing when I first heard it and I certainly don't condone these actions....but they happened and were detailed in the video I saw.\n\nI remember a documentary in the 90s where a guy would take a large tarp of plastic or cloth and stretch it out underneath a large tree in the Amazon rain forest. They would then shoot some gas up into the air that would kill like 99% of all creatures it contacted....or maybe it was just insects, I can't remember but I think it only affected insects.\n\nAnyways, for the next several hours the jungle would \"rain down\" the carcasses of dead insects onto the tarp and these scientists would collect the insects and categorize them. \n\nThey said that every time they did this they would get something like 20,000 different insects but what was surprising to them was that ~~90%~~ (correction it's 80%) of what they found were \"new to science\" every time they did this experiment.\n\nIt didn't seem to matter where they went.....they repeated this many, many times and every time they did it, ~~90%~~ (correction it's 80%) of what they catalogued was new to science.\n\nIt literally blew the top off of the previously held estimates for the number of species on earth and at the end, the scientists had to conclude that they simply had no idea where it would end nor how many species of living things were on the earth anymore.\n\n**EDIT - Found The Vid but not the clip I was referring to. It's called \"Web of Life: Exploring Biodiversity\" and was produced by PBS back in the 90s.** \n[Here's a clip...but not the one I referenced](_URL_2_)\n\n\n**EDIT 2 - [The original clip I was talking about](_URL_1_)** \nThanks to /u/QuietLuck for finding this (_URL_0_)",
"provenance": null
},
{
"answer": "Well the way you calculate it is with a bit of estimation.\n\nSay that we have a box with a hole, and the box has 1000 colored balls inside you can take out through the hole. What we don't know is which colors, or how many colors the balls are.\n\nSo we take out one ball, and it happens to be blue! So now we know there's blue balls in the box. Great!\n\nNow imagine that there's only blue balls coming out. You pull out 10 balls and they are all blue, you keep pulling out 100 balls and they are all blue. Now we've only seen 10% of all balls inside the box, but we can start guessing that the chance that we never got another color means that most, if not all, balls are blue. Maybe we got lucky and only found the 100 blue balls first, but the chance of that happening are really small, theres 5958926632240478155489389057946132722598279588777288866613428027720091866834339557556406953783393337191792337384343797137527180562707601151082428455887739138152983603695993602780124665235348032787297990137398327480690965409929969664334240631387010833309096272433060469800960000000000000000000000000 different permutations (that is groups of balls we would pull out in an order) of balls we could have had, but only 1 of them would be if all of the balls were blue.\n\nNow lets imagine what would happen if instead the second ball we pulled out was a red ball. This time we know there's more than just blue balls in the box. Now imagine that every time we pull a ball it comes out a different color, and after pulling out 100 they are all a different color each. Now there's a chance that there's 100 colors and nothing else, and we just happened to pull out one ball of each color with no repeats. The chance of this is a little bit higher than taking out all the balls of one color, but it still is very very very small. You'd have to be very lucky, it's a better guess to think there's still many colors we've yet to find. Maybe not 1000, but certainly more than 100.\n\nSo we can use the history of how many new colors we discovered as we saw each ball and create a good guess of how many colors probably exist in the box, and how many we know already.\n\nThe same thing can be done with organisms. We know more or less how many insects, fungi, plants, animals, bacteria etc. exist on earth by knowing how many resources they need, how much spaces is available, doing thermodynamic studies, etc. From there we notice that as we look at animals and see what species they are, we find new species every so much. Just like with the colors, we can guess how many species we probably haven't seen yet.",
"provenance": null
},
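A tiny simulation can make the "how often is a draw still a new colour" intuition above concrete. Everything in this sketch is invented for illustration (400 colours, 1,000 balls, a 100-ball sample); the only point is that if late draws are still frequently new, many colours probably remain unseen.

```python
import random

# Simulate the box: 1000 balls drawn from 400 possible colours, then look at
# how often a recent draw still reveals a colour we have never seen before.
random.seed(1)
true_colours = 400
box = [random.randrange(true_colours) for _ in range(1000)]

sample = box[:100]                 # we only get to inspect 100 balls
seen = set()
new_in_last_draws = 0
for i, colour in enumerate(sample):
    is_new = colour not in seen
    seen.add(colour)
    if i >= 80 and is_new:         # track novelty only among the last 20 draws
        new_in_last_draws += 1

rate_new = new_in_last_draws / 20
print(f"Distinct colours seen in 100 draws: {len(seen)}")
print(f"Share of the last 20 draws that were new colours: {rate_new:.0%}")
# A high late-stage new-colour rate suggests many colours are still hiding.
```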
{
"answer": "The vast majority of distinct living organisms on earth are single-celled bacteria and archaea. Greater than 90% of these cannot be cultured in laboratory conditions to enable their study and classification. Consequently, the majority of currently extant, distinct species on Earth have not been described.",
"provenance": null
},
{
"answer": "Progress towards any achievement such as this can usually be found through the main menu, reached by pressing the 'start' button.",
"provenance": null
},
{
"answer": "Scientists have estimated ranges for the number of organisms that have yet to be discovered. All of the methods involve extrapolation from known data. For example, some scientists have used size estimates. The bigger the creature, the more likely it is that we have found it. The opposite is true for smaller organisms. So scientists make estimates that there are x number of small creatures yet to find. Another example is to use discovery rates and types of organisms. We are discovering fewer and fewer new mammals but are still discovering new fungi or bugs. So therefore the amount of bugs yet to find is greater than mammals. Another way is to use symbiotic or close relationships. If we know that there are x types of trees in the forest and know that each type of tree is likely to be home to x number of unique bugs then we can estimate how many unknown trees may have x unknown bugs. ",
"provenance": null
},
{
"answer": "It would help if you were to point us at where someone said that we have discovered 14%.\n\nAnyway, here's an example: you want to know how many tigers there are in a forest. You can't measure that directly, but you can do the following:\n\n* Capture some tigers (say 100) and tag them.\n* Come back later and capture more tigers (100 of them). See how many that you caught this time have tags (say 10).\n\nYou can use that to calculate how many tigers there are. The best estimate is that 10% (10/100) tigers have tags on them. If you have tagged 100 tigers and that is 10% of the tigers, then there are 1000 tigers in the forest. This is all approximate, and you can do statistics to determine the probability distribution of tiger numbers.\n\nYou would also tag the second set of tigers, so a total of 190 tigers would then have tags. The third time, you would expect that of your 100 tigers you catch that about 19 (190/1000) would have tags on them. \n\nYou can do the same thing with anything that you are sampling. Find an organism, see if we have found it before, and repeat. That will tell you how many of the things that you find are new and how many we have already discovered. \n\n",
"provenance": null
},
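The tiger example above is the classic two-sample mark-recapture (Lincoln-Petersen) estimate, and it is short enough to write out. The function below is a sketch of that textbook formula using the answer's own numbers; the function name is made up.

```python
def lincoln_petersen(tagged_first, caught_second, recaptured):
    """Two-sample mark-recapture estimate of population size.

    If n1 animals were tagged, n2 were later caught, and m of those n2
    already carried tags, the population estimate is N ~= n1 * n2 / m.
    """
    if recaptured == 0:
        raise ValueError("need at least one recapture to estimate N")
    return tagged_first * caught_second / recaptured

# The worked example from the answer: 100 tagged, 100 caught again,
# 10 of them already tagged -> about 1000 tigers.
print(lincoln_petersen(100, 100, 10))   # 1000.0
```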
{
"answer": "I do this! Woohoo!\n\nEducated guesses are how science works. When enough educated guesses are unable to be disproved, there's a consensus. In this case, a bunch of people came up with different statistical methods to estimate diversity (fancy examples include Chao1, Simpson diversity index and rarefaction). A pretty good estimate can be made when enough scientists approach it enough ways.\n\nMore advanced: Take a given sample or dataset (e.g. soil sale or ocean) and perform relatively standardized genetic similarity analysis to estimate species (known and unknown categories). Then do bootstrapped subsampling of species diversity per fraction of the sample (e.g.\n10 unique species at 0.1 of the total sample, 50 unique species at 0.2... Repeat a lot). Fit a regression (usually nonlinear) and estimate the total unique species, including those you never observed, in that sample with CIs. Do this for a bunch of different types of samples, build a final model and you get a good idea of what we're missing!",
"provenance": null
},
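Since the answer above name-drops Chao1, here is a minimal Python sketch of that richness estimator. The abundance list is a made-up toy sample, and real analyses would add confidence intervals and further bias corrections that are omitted here.

```python
from collections import Counter

def chao1(abundances):
    """Chao1 lower-bound estimate of total species richness.

    abundances: per-species counts in a sample.  The estimator adds
    F1^2 / (2 * F2) unseen species, where F1 = number of singletons
    and F2 = number of doubletons.
    """
    s_obs = sum(1 for a in abundances if a > 0)
    freq = Counter(abundances)
    f1, f2 = freq.get(1, 0), freq.get(2, 0)
    if f2 == 0:                      # bias-corrected form when no doubletons
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + (f1 * f1) / (2.0 * f2)

# Toy sample (hypothetical counts): many rare species implies many unseen ones.
print(chao1([1, 1, 1, 1, 2, 2, 3, 5, 8, 13]))   # 10 observed -> 14.0 estimated
```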
{
"answer": "From a recent paper:\n\n > Global species richness, whether **estimated by taxon, habitat,\nor ecosystem**, is a key biodiversity metric. Yet, despite the\nglobal importance of biodiversity and increasing threats to\nit (e.g., [1–4]), we are no better able to estimate global species\nrichness now than we were six decades ago [5]. **Estimates of\nglobal species richness remain highly uncertain and are often\nlogically inconsistent** [5]. They are also difficult to validate\nbecause estimation of global species richness requires\nextrapolation beyond the number of species known [6–13].\nGiven that somewhere between 3% and > 96% of species on\nEarth may remain undiscovered [4], depending on the\nmethods used and the taxa considered, such extrapolations,\nespecially from small percentages of known species, are\nlikely to be highly uncertain [13, 14]. **An alternative approach\nis to estimate all species, the known and unknown, directly.\nUsing expert taxonomic knowledge of the species already\ndescribed and named, those already discovered but not yet\ndescribed and named, and those still awaiting discovery,** we\nestimate there to be 830,000 (95% credible limits: 550,000–\n1,330,000) multi-cellular species on coral reefs worldwide,\nexcluding fungi. Uncertainty surrounding this estimate and\nits components were often strongly skewed toward larger\nvalues, indicating that many more species on coral reefs is\nmore plausible than many fewer. The uncertainties revealed\nhere should guide future research toward achieving convergence\nin global species richness estimates for coral reefs\nand other ecosystems via adaptive learning protocols\nwhereby such estimates can be tested and improved, and\ntheir uncertainties reduced, as new knowledge is acquired\n\n > Current Biology\nVolume 25, Issue 4, 16 February 2015, Pages 500-505\nSpecies Richness on Coral Reefs and the Pursuit of Convergent Global Estimates\n_URL_0_",
"provenance": null
},
{
"answer": "There's a lot of information that goes into this. For example, Charles Darwin once predicted the existence of a type of moth with an unusually long proboscis. He based this prediction on the existence of a flower, varieties of which were pollinated by moths elsewhere. This particular flower held its important bits at the bottom of a long, narrow shaft. \n \nDarwin was right; the moth was discovered years later. It could have been something else, but the point here is that *something* had to be pollinating that flower, and it was nothing that was known at the time he made his observations. \n \nSpecific niches, like the above, are one factor. Mathematical studies are another; \"in this kind of environment elsewhere, with similar temperatures and other conditions, we see 'X' in terms of diversity.\" It's never guaranteed: most of these factors are educated guesses, but when estimates are published they actually lean towards the conservative end of the range -- just to be safe. \n \nThis is why popular news often presents new discoveries as \"surprising scientists\" and \"upsetting estimates\" in terms of their diversity, range, etc. Science has to fight tooth and nail for recognition and funding as it is -- so, when scientists think there might be 4-12 of ABC, they'll say \"we're expecting to find 4 ABC; even 3 would be remarkable, really.\" \n \nThey wind up finding 6-8 ABC, whereupon the public shakes its head and goes \"Silly scientists don't know what they're talking about, but let's support more research, since there's obviously a lot more ABC out there than they thought.\"",
"provenance": null
},
{
"answer": "Imagine you sample every organism in a 1m patch of grass, and find 25 species. Next, count all the species in a 5m patch, and you get 60 species, but 20 are the same as before (so 40 are new). Keep doing this over multiple habitats and habitat size, and you build a curve that describes how many new species you expect to find in a new area of size x. This is called rarefaction, and extrapolating over the area of earth gives a rough approximation of how many species we expect to find. \n\nMany ecologists have studied this statistical phenomenon. Search species - area curve, biogeography, or MacArthur to learn more. ",
"provenance": null
},
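The species-area idea in this answer is often formalised as the power law S = c * A^z. Below is a hedged Python sketch that fits that relationship to invented survey numbers (loosely echoing the 25-species and 60-species patches above) and then extrapolates; the extrapolation step is exactly where such estimates pick up most of their uncertainty.

```python
import numpy as np

# Toy species-area data in the spirit of the answer (NOT real surveys):
# patch area in square metres vs. number of species found in that patch.
areas = np.array([1.0, 5.0, 25.0, 125.0, 625.0])
species = np.array([25, 60, 140, 330, 780])

# Fit the classic power-law species-area relationship S = c * A**z by
# linear regression in log-log space.
z, log_c = np.polyfit(np.log(areas), np.log(species), 1)
c = np.exp(log_c)

# Extrapolate (very roughly) to a much larger area.
big_area = 1.0e6
print(f"S = {c:.1f} * A^{z:.2f}")
print(f"Predicted species in {big_area:.0e} m^2: {c * big_area**z:,.0f}")
```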
{
"answer": "I think it's kinda like how we haven't discovered all the different types of sandwiches yet. Every time I go to the deli there's a couple new ones on the menu.\n\nYou're welcome.",
"provenance": null
},
{
"answer": "I assume by \"organisms\" they mean \"species\".\n\nBut, what does it even mean to be a species? \n\nAn article in Science just demonstrated that all the Major Big Cats (lions, leopards, etc) have been interbreeding for millions of years.\n\nThey all have pretty much the same genes, just shuffled around.\n\nSo, every time you find a different combination or permutation, is it reasonable to call it a different \"organism\"?\n\nThe accident of infertility between certain combinations can hardly be taken anymore as the definition of a species.\n\nLife begins to look more like a multi-dimensional continuum, with some neighborhoods being more densely populated than others. \n",
"provenance": null
},
{
"answer": "85% of statistics are made up on the spot. I know because my dad was a statistics professor at UGA. ",
"provenance": null
},
{
"answer": "There are various methods of population estimation.\n\nA common method is as follows: \n* You spend a set interval catching the critters of interest. \n* You tag all of the critters you catch and release them. \n* A short while later, you do this again. \n* Some of the critters in the second round are _already tagged_.\n\nThe total catch per interval and the fraction of repeat catches can be plugged into some relatively straightforward statistical functions to estimate the total population.\n\nYou can do this for as many iterations as you like to get an arbitrarily accurate estimate. \n\nAnother method is to designate a certain amount of space and count _everything_. It might be all the fish in a cove, all the plants in a 2mx2m patch of field, all the bugs in a tree, etc. You do this a couple times, and then multiply your critters/unit number by the total number of units.\n\nThe figure you're quoting can be arrived at by a combination of the two. If you catalogued _everything_ in some space, and you'll consistently get about 14% previously-identified species and the other 86% would be new (and probably mostly beetles).",
"provenance": null
},
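The tag-and-recapture procedure in the answer above is usually formalized as the Lincoln-Petersen estimator. Here is a minimal sketch with invented numbers, assuming a closed population and equal catchability:

```python
def lincoln_petersen(marked_first, caught_second, recaptured):
    """Closed-population estimate: N is roughly M * C / R, where M animals were
    marked in the first pass, C were caught in the second pass, and R of those
    carried marks."""
    if recaptured == 0:
        raise ValueError("need at least one recapture to form an estimate")
    return marked_first * caught_second / recaptured

# Example: tag 200 fish, later catch 150, and find 30 of them already tagged.
print(f"estimated population: {lincoln_petersen(200, 150, 30):.0f}")  # ~1000
```

Repeating the catch-mark-release cycle feeds into refinements such as the Schnabel estimator, which is what the "as many iterations as you like" remark is gesturing at.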
{
"answer": "So there's hope for samquantch?",
"provenance": null
},
{
"answer": "Earth scientists and biologists: What do they know, do they know things? Let's find out!",
"provenance": null
},
{
"answer": "I have heard that most organisms are actually in the ocean, since the earth is mostly water, and we don't have the ability to explore the ocean that far beyond the surface. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "18393",
"title": "Life",
"section": "Section::::Origin.\n",
"start_paragraph_id": 58,
"start_character": 0,
"end_paragraph_id": 58,
"end_character": 771,
"text": "Although the number of Earth's catalogued species of lifeforms is between 1.2 million and 2 million, the total number of species in the planet is uncertain. Estimates range from 8 million to 100 million, with a more narrow range between 10 and 14 million, but it may be as high as 1 trillion (with only one-thousandth of one percent of the species described) according to studies realized in May 2016. The total number of related DNA base pairs on Earth is estimated at 5.0 x 10 and weighs 50 billion tonnes. In comparison, the total mass of the biosphere has been estimated to be as much as 4 TtC (trillion tons of carbon). In July 2016, scientists reported identifying a set of 355 genes from the Last Universal Common Ancestor (LUCA) of all organisms living on Earth.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "53365898",
"title": "Earliest known life forms",
"section": "Section::::Overview.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 712,
"text": "Some estimates on the number of Earth's current species of life forms range from 10 million to 14 million, of which about 1.2 million have been documented and over 86 percent have not yet been described. However, a May 2016 scientific report estimates that 1 trillion species are currently on Earth, with only one-thousandth of one percent described. The total number of DNA base pairs on Earth is estimated at 5.0 x 10 with a weight of 50 billion tonnes. In comparison, the total mass of the biosphere has been estimated to be as much as 4 trillion tons of carbon. In July 2016, scientists reported identifying a set of 355 genes from the last universal common ancestor (LUCA) of all organisms living on Earth.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "49417",
"title": "Extinction",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 462,
"text": "More than 99 percent of all species, amounting to over five billion species, that ever lived on Earth are estimated to have died out. Estimates on the number of Earth's current species range from 10 million to 14 million, of which about 1.2 million have been documented and over 86 percent have not yet been described. In 2016, scientists reported that 1 trillion species are estimated to be on Earth currently with only one-thousandth of one percent described.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19653842",
"title": "Organism",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 365,
"text": "Estimates on the number of Earth's current species range from 10 million to 14 million, of which only about 1.2 million have been documented. More than 99% of all species, amounting to over five billion species, that ever lived are estimated to be extinct. In 2016, a set of 355 genes from the last universal common ancestor (LUCA) of all organisms was identified.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30485345",
"title": "2011 in science",
"section": "Section::::Events, discoveries and inventions.:August.\n",
"start_paragraph_id": 289,
"start_character": 0,
"end_paragraph_id": 289,
"end_character": 277,
"text": "BULLET::::- The natural world contains about 8.7 million species, according to a new estimate described by scientists as the most accurate ever. However, the vast majority of these species have not been identified – cataloguing them all could take more than 1,000 years. (BBC)\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "37486008",
"title": "Lists of organisms by population",
"section": "Section::::Number of species.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 1333,
"text": "More than 99 percent of all species, amounting to over five billion species, that ever lived on Earth are estimated to be extinct. Estimates on the number of Earth's current species range from 10 million to 14 million, of which about 1.2 million have been documented and over 86 percent have not yet been described. According to another study, the number of described species has been estimated at 1,899,587. 2000–2009 saw approximately 17,000 species described per year. The total number of undescribed organisms is unknown, but marine microbial species alone could number 20,000,000. The number of quantified species will \"ipso facto\" always lag behind the number of described species, and species contained in these lists tend to be on the K side of the r/K selection continuum. More recently, in May 2016, scientists reported that 1 trillion species are estimated to be on Earth currently with only one-thousandth of one percent described. The total number of related DNA base pairs on Earth is estimated at 5.0 x 10 and weighs 50 billion tonnes. In comparison, the total mass of the biosphere has been estimated to be as much as 4 TtC (trillion [million million] tonnes of carbon). In July 2016, scientists reported identifying a set of 355 genes from the Last Universal Common Ancestor (LUCA) of all organisms living on Earth.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20377",
"title": "Microorganism",
"section": "Section::::Classification and structure.:Archaea.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 360,
"text": "The biodiversity of the prokaryotes is unknown, but may be very large. A May 2016 estimate, based on laws of scaling from known numbers of species against the size of organism, gives an estimate of perhaps 1 trillion species on the planet, of which most would be microorganisms. Currently, only one-thousandth of one percent of that total have been described.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1p73hf
|
How do I calculate laser divergence?
|
[
{
"answer": "The units of the result are [radians](_URL_0_). \n\nThis is a unit that is actually dimensionless. That is, an angle of one radian is an angle that forms a circular arc with a length equal to its radius. Meaning, radians measure the ratio between two lengths, thus it is not actually a unit at all.\n\nRadians are used in most scientific and engineering contexts in college coursework and onward because they actually simplify many of the calculations we do, compared to using degrees to describe angles.",
"provenance": null
},
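To make the arithmetic concrete, here is a small sketch that turns the two common recipes (a two-point measurement and the Gaussian diffraction limit, both discussed in the excerpts below) into numbers in radians. The function names, the example beam sizes, and the full-angle vs. half-angle choices are illustrative assumptions; check the convention your datasheet or formula actually uses.

```python
import math

def divergence_from_two_points(d1, d2, separation):
    """Full-angle divergence from beam diameters d1 and d2 measured a distance
    `separation` apart (all in the same length unit); result in radians."""
    return 2.0 * math.atan((d2 - d1) / (2.0 * separation))

def gaussian_half_angle(wavelength, waist_radius):
    """Diffraction-limited half-angle divergence of an ideal Gaussian beam."""
    return wavelength / (math.pi * waist_radius)

# Example: a beam that grows from 2 mm to 5 mm diameter over 3 m of travel.
theta = divergence_from_two_points(d1=0.002, d2=0.005, separation=3.0)
print(f"measured divergence: {theta * 1e3:.2f} mrad (full angle)")

# Example: a 532 nm beam with a 0.5 mm waist radius.
theta0 = gaussian_half_angle(532e-9, 0.5e-3)
print(f"diffraction limit:  {theta0 * 1e3:.2f} mrad (half angle)")
```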
{
"answer": null,
"provenance": [
{
"wikipedia_id": "15267398",
"title": "Laser beam profiler",
"section": "Section::::Measurements.:Beam divergence.\n",
"start_paragraph_id": 41,
"start_character": 0,
"end_paragraph_id": 41,
"end_character": 1460,
"text": "The beam divergence of a laser beam is a measure for how fast the beam expands far from the beam waist. It is usually defined as the derivative of the beam radius with respect to the axial position in the far field, i.e., in a distance from the beam waist which is much larger than the Rayleigh length. This definition yields a divergence half-angle. (Sometimes, full angles are used in the literature; these are twice as large.) For a diffraction-limited Gaussian beam, the beam divergence is λ/(πw), where λ is the wavelength (in the medium) and w the beam radius (radius with 1/e intensity) at the beam waist. A large beam divergence for a given beam radius corresponds to poor beam quality. A low beam divergence can be important for applications such as pointing or free-space optical communications. Beams with very small divergence, i.e., with approximately constant beam radius over significant propagation distances, are called collimated beams. For the measurement of beam divergence, one usually measures the beam radius at different positions, using e.g. a beam profiler. It is also possible to derive the beam divergence from the complex amplitude profile of the beam in a single plane: spatial Fourier transforms deliver the distribution of transverse spatial frequencies, which are directly related to propagation angles. See US Laser Corps application note for a tutorial on how to measure the laser beam divergence with a lens and CCD camera.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "40781",
"title": "Beam divergence",
"section": "",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 339,
"text": "where formula_7 is the laser wavelength and formula_8 is the radius of the beam at its narrowest point, which is called the \"beam waist\". This type of beam divergence is observed from optimized laser cavities. Information on the diffraction-limited divergence of a coherent beam is inherently given by the N-slit interferometric equation.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33870431",
"title": "Laser detuning",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 438,
"text": "In optical physics, laser detuning is the tuning of a laser to a frequency that is slightly off from a quantum system's resonant frequency. When used as a noun, the laser detuning is the difference between the resonance frequency of the system and the laser's optical frequency (or wavelength). Lasers tuned to a frequency below the resonant frequency are called \"red-detuned\", and lasers tuned above resonance are called \"blue-detuned\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "40781",
"title": "Beam divergence",
"section": "",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 647,
"text": "Like all electromagnetic beams, lasers are subject to divergence, which is measured in milliradians (mrad) or degrees. For many applications, a lower-divergence beam is preferable. Neglecting divergence due to poor beam quality, the divergence of a laser beam is proportional to its wavelength and inversely proportional to the diameter of the beam at its narrowest point. For example, an ultraviolet laser that emits at a wavelength of 308 nm will have a lower divergence than an infrared laser at 808 nm, if both have the same minimum beam diameter. The divergence of good-quality laser beams is modeled using the mathematics of Gaussian beams.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "40781",
"title": "Beam divergence",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 218,
"text": "The divergence of a beam can be calculated if one knows the beam diameter at two separate points far from any focus (\"D\", \"D\"), and the distance (\"l\") between these points. The beam divergence, formula_1, is given by \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31728698",
"title": "Self-mixing interferometry",
"section": "Section::::Background.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 507,
"text": "Laser interferometry techniques are widely used in a range of sensing applications, including the measurement of vibration, displacement and velocity of objects. These methods involve the mixing (or superposition) of coherent light waves. Typically, the light from a laser is split into two. Each beam follows a different path before being recombined. A detector is then used to measure the intensity of the light. The Michelson interferometer and Mach–Zehnder interferometers are examples of such systems.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17556",
"title": "Laser",
"section": "Section::::Continuous and pulsed modes of operation.\n",
"start_paragraph_id": 41,
"start_character": 0,
"end_paragraph_id": 41,
"end_character": 709,
"text": "A laser can be classified as operating in either continuous or pulsed mode, depending on whether the power output is essentially continuous over time or whether its output takes the form of pulses of light on one or another time scale. Of course even a laser whose output is normally continuous can be intentionally turned on and off at some rate in order to create pulses of light. When the modulation rate is on time scales much slower than the cavity lifetime and the time period over which energy can be stored in the lasing medium or pumping mechanism, then it is still classified as a \"modulated\" or \"pulsed\" continuous wave laser. Most laser diodes used in communication systems fall in that category.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
ar9n23
|
how do showers pump water to the highest floor?
|
[
{
"answer": "I would say that it mostly depends on your location. A lot of places such as New York City have water towers placed on top of the buildings that feed via gravity.\nA lot of small towns where you see large water towers up high above the town essentially do the same thing.\nThen there will of course be situations where gravity cannot be used and that's pressurized system will be put in place.",
"provenance": null
},
{
"answer": "Pumps mostly. The pump puts pressure on the water in the system. They also push water up into water towers so that when there is a lot of demand on the system, gravity on the water in the tower can help keep the pressure high enough.\n\nThe liquid doesn't need to be compressed to be under pressure. You can have a cinder block and put a heavy weight on top and the cinder block remains the same shape, but there's still pressure on it.\n\n > How is a force even transmitted everywhere in a liquid?\n\nWhen you put the liquid in an enclosed space like a pipe and you start applying pressure, it can't compress the water because of that electrostatic force that you mentioned. So that pressure has to go somewhere. One section of the pipe pushes on the next, which pushes on the next, and so on. It's just like a train pushing cars.\n\nIn something very tall, the building will have additional pumps inside the building to help maintain pressure up on the higher floors. They will often also have their own water towers on the roof to help keep the pressure even at all times.\n\n\n[Here's a neat video about water pumping and water towers.](_URL_0_)",
"provenance": null
},
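A back-of-the-envelope hydrostatic calculation makes the "pumps and pressure" explanation above concrete. The building height, the per-floor spacing, and the residual pressure wanted at the shower head are all made-up example values.

```python
RHO_WATER = 1000.0  # kg/m^3
G = 9.81            # m/s^2

def pump_pressure_needed(height_m, outlet_pressure_pa=300e3):
    """Pressure (Pa) a pump must supply to lift water `height_m` metres and
    still leave roughly `outlet_pressure_pa` at the top-floor fixture,
    ignoring pipe friction losses."""
    return RHO_WATER * G * height_m + outlet_pressure_pa

height = 30 * 3.0  # a 30-storey building at ~3 m per floor
pressure = pump_pressure_needed(height)
print(f"roughly {pressure / 1e5:.1f} bar needed for a {height:.0f} m rise")
# Municipal mains typically deliver only a few bar, which is why tall
# buildings add booster pumps and/or rooftop tanks along the way.
```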
{
"answer": null,
"provenance": [
{
"wikipedia_id": "30859389",
"title": "ArtScience Museum",
"section": "Section::::Architecture.:Sustainability features.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 236,
"text": "Rainwater is harvested and channelled down the centre of the building, flowing through its bowl-shaped roof into a reflecting pond at the lowest level of the building. The rainwater is then recycled for use in the building's restrooms.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6788",
"title": "Chrysler Building",
"section": "Section::::Architecture.:Basement.\n",
"start_paragraph_id": 66,
"start_character": 0,
"end_paragraph_id": 66,
"end_character": 205,
"text": "The basement also had a \"hydrozone water bottling unit\" that would filter tap water into drinkable water for the building's tenants. The drinkable water would then be bottled and shipped to higher floors.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18437342",
"title": "Transfer bench",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 445,
"text": "A transfer bench, (also known as a showering bench, shower bench, or transfer chair) is a bath safety mobility device which the user sits on to get into a bathtub. The user usually sits on the bench, which straddles the side of the tub, and gradually slides from the outside to the inside of the tub. Tub transfer benches are used by people who have trouble getting over the tub wall or into the shower, either because of illness or disability.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "15749468",
"title": "Council House 2",
"section": "Section::::Design.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 518,
"text": "Shower towers are used on the southern façade. These towers draw outside air from above street level and cool the air by evaporation to form the shower of water. The cool air is then supplied to the retail spaces and the cool water is used to pre-cool the water coming from the chilled water panels. The towers are made from tubes of lightweight fabric 1.4 meters in diameter. Testing from these towers has shown a temperature reduction of 4 to 13 degrees Celsius from the top of the tower to the bottom of the tower.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "555732",
"title": "Plumbing fixture",
"section": "Section::::Common fixtures.:Waste.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 627,
"text": "Lavatories and water closets normally connect to the water supply by means of a \"supply\", which is a tube, usually of nominal 3/8 in (U.S.) or 10 or 12 mm diameter (Europe and Middle East), which connects the water supply to the fixture, sometimes through a flexible (braided) hose. For water closets, this tube usually ends in a flat neoprene washer that tightens against the connection, while for lavatories, the supply usually ends in a conical neoprene washer. Kitchen sinks, tubs and showers usually have supply tubes built onto their valves which then are soldered or 'fast jointed' directly onto the water supply pipes.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "50296488",
"title": "Porky's (video game)",
"section": "Section::::Gameplay.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 809,
"text": "The shower room consists of a series of ladders that Pee Wee must climb to retrieve the objects while a girl showers. If Pee Wee crosses into the girl's line of sight at any time, the film character Ms. Balbricker will appear and begin chasing after him. Pee Wee must move the object from atop the shower down through a hole in the floor and climb out of the room through an opening at the top. If Ms. Balbricker \"latches onto\" him, or if Pee Wee steps into the hole in the floor, he falls back into the swamp and must try again. However, this time he is only allowed to vault onto the leftmost platform as Porky's brother, the local sheriff, is standing on the right platform; he will remain there for each subsequent time Pee Wee falls. Despite this new hazard, Pee Wee does not have to rebuild the ladder.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "477822",
"title": "Shower",
"section": "Section::::Types of shower heads.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 236,
"text": "BULLET::::- Ceiling-mounted faucets—Ceiling-mounted shower-faucets are typically rain-drop shower-heads mounted in one shower ceiling. Water-rains down, at low or medium pressure, using the gravity to shower on one from directly above.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
ue5bx
|
Do Large Lakes Serve as Natural Storm Breaks?
|
[
{
"answer": "Yes, you're correct. **Lake modified air** can have an impact on these types of storms. Mostly in a way directly [opposite to this](_URL_2_) in the winter. In the spring/summer the lake is relatively cool compared to the land and induces subsidence (sinking air) that can inhibit thunderstorms which need [heat and rising air](_URL_0_) to survive. The larger the lake (e.g. Great Lakes) the more of an impact it will have for longer into the spring/summer.\n\n\nAnother impact can be topography that sometimes surround lakes. From [this diagram](_URL_1_) you can see that the moist air is forced up over the terrain and the water precipitates out, leaving little moisture on the lee side of the mountains or hills.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "24038878",
"title": "Hydrology of Hungary",
"section": "Section::::Lakes.\n",
"start_paragraph_id": 49,
"start_character": 0,
"end_paragraph_id": 49,
"end_character": 402,
"text": "Sudden storms can whip up dangerous, steep waves on the surface of the lake. Their average height is , and their average length is . A prevailing north-easterly or south-westerly wind can push the water from the eastern basin of the lake (to the east of the Tihany Peninsula) into the western basin or on the contrary, resulting in water levels differing by from normal, which creates strong currents.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12010",
"title": "Great Lakes",
"section": "Section::::Climate.:Lake effect.\n",
"start_paragraph_id": 61,
"start_character": 0,
"end_paragraph_id": 61,
"end_character": 698,
"text": "The Great Lakes have been observed to help intensify storms, such as Hurricane Hazel in 1954, and the 2011 Goderich, Ontario tornado, which moved onshore as a tornadic waterspout. In 1996 a rare tropical or subtropical storm was observed forming in Lake Huron, dubbed the 1996 Lake Huron cyclone. Rather large severe thunderstorms covering wide areas are well known in the Great Lakes during mid-summer; these Mesoscale convective complexes or MCCs can cause damage to wide swaths of forest and shatter glass in city buildings. These storms mainly occur during the night, and the systems sometimes have small embedded tornadoes, but more often straight-line winds accompanied by intense lightning.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12010",
"title": "Great Lakes",
"section": "Section::::Geology.\n",
"start_paragraph_id": 54,
"start_character": 0,
"end_paragraph_id": 54,
"end_character": 353,
"text": "A notable modern phenomenon is the formation of ice volcanoes over the lakes during wintertime. Storm-generated waves carve the lakes' ice sheet and create conical mounds through the eruption of water and slush. The process is only well-documented in the Great Lakes, and has been credited with sparing the southern shorelines from worse rocky erosion.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "614397",
"title": "Lake breakout",
"section": "Section::::Process.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 537,
"text": "The walls of such lakes can be unstable and may be breached after fresh earthquakes or because of erosion. As water rushes outwards, the initial channel is cut wider and deeper, further increasing the flow. This may cause the lake's rim to collapse abruptly. The usual result is for huge amounts of water to be displaced, incorporating a great deal of sediment which increases it in volume by as much as two or four times, or even more. This produces violent floods and lahars with devastating effects for any settlements in their path.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18955999",
"title": "Desert",
"section": "Section::::Physical geography.:Water.\n",
"start_paragraph_id": 46,
"start_character": 0,
"end_paragraph_id": 46,
"end_character": 907,
"text": "Lakes may form in basins where there is sufficient precipitation or meltwater from glaciers above. They are usually shallow and saline, and wind blowing over their surface can cause stress, moving the water over nearby low-lying areas. When the lakes dry up, they leave a crust or hardpan behind. This area of deposited clay, silt or sand is known as a playa. The deserts of North America have more than one hundred playas, many of them relics of Lake Bonneville which covered parts of Utah, Nevada and Idaho during the last ice age when the climate was colder and wetter. These include the Great Salt Lake, Utah Lake, Sevier Lake and many dry lake beds. The smooth flat surfaces of playas have been used for attempted vehicle speed records at Black Rock Desert and Bonneville Speedway and the United States Air Force uses Rogers Dry Lake in the Mojave Desert as runways for aircraft and the space shuttle.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18842431",
"title": "Lake",
"section": "Section::::Types of lakes.:Landslide lakes.\n",
"start_paragraph_id": 40,
"start_character": 0,
"end_paragraph_id": 40,
"end_character": 344,
"text": "Landslide lakes are lakes created by the blockage of a valley by either mudflows, rockslides, or screes. Such lakes are common in mountainous regions. Although landslide lakes may be large and quite deep, they are typically short-lived. An example of a landslide lake is Quake Lake, which formed as a result of the 1959 Hebgen Lake earthquake.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "383793",
"title": "Lake Peipus",
"section": "Section::::Topography and hydrography.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 322,
"text": "The lake water is fresh, with a low transparency of about due to plankton and suspended sediments caused by the river flow. Water currents are weak ; they are induced by the wind and stop when it ceases. However, during the spring flood, there is a constant surface current from north to south \"(it does not make sense)\".\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2l1j5i
|
During the Carboniferous, O2 levels were 163% modern levels while CO2 was 800ppm. With so many plants, why were CO2 levels so high relative to modern levels?
|
[
{
"answer": "Are you sure you mean the Carboniferous here? That period actually saw a huge drop in global CO2 level, indicated by \"C\" [in this graph](_URL_0_) around 300-350 million years ago. \n\nThose pCO2 levels had been steadily falling since the Cambrian period, but likely saw an extra large drop during the drop as the climate transitioned from greenhouse to icehouse and massive glaciations occured...and as temperatures fall, ocean CO2 solubility increases.",
"provenance": null
},
{
"answer": "If you think of the Carboniferous period as the time that coal deposits formed as plant matter couldn't decay as efficiently then as today (fungi and bacteria hadn't yet evolved the ability to digest cellulose), it makes sense. There is a set amount of carbon near the surface of the earth. It can either be in the air as CO2 or locked up in plant matter or buried a bit under ground. The Carboniferous period was a transition time during which atmospheric carbon was moved underground. Today we are digging up those deposits and putting them back in the air, potentially returning closer to those at the beginning of the Carboniferous.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "23743",
"title": "Phanerozoic",
"section": "Section::::Eras of the Phanerozoic.:Paleozoic Era.:Carboniferous Period.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 1084,
"text": "The Carboniferous spans from 359 million to 299 million years ago. During this period, average global temperatures were exceedingly high: the early Carboniferous averaged at about 20 degrees Celsius (but cooled to 10 degrees during the Middle Carboniferous). Tropical swamps dominated the Earth, and the large amounts of trees created much of the carbon that became coal deposits (hence the name Carboniferous). The high oxygen levels caused by these swamps allowed massive arthropods, normally limited in size by their respiratory systems, to proliferate. Perhaps the most important evolutionary development of the time was the evolution of amniotic eggs, which allowed amphibians to move farther inland and remain the dominant vertebrates throughout the period. Also, the first reptiles and synapsids evolved in the swamps. Throughout the Carboniferous, there was a cooling pattern, which eventually led to the glaciation of Gondwana as much of it was situated around the south pole, in an event known as the Permo-Carboniferous glaciation or the Carboniferous Rainforest Collapse.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23234",
"title": "Paleozoic",
"section": "Section::::Geology.:Periods of the Paleozoic Era.:Carboniferous Period.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 1071,
"text": "The Carboniferous spanned from 359 million to 299 million years ago. During this time, average global temperatures were exceedingly high; the early Carboniferous averaged at about 20 degrees Celsius (but cooled to 10 °C during the Middle Carboniferous). Tropical swamps dominated the Earth, and the lignin stiffened trees grew to greater heights and number. As the bacteria and fungi capable of eating the lignin had not yet evolved, their remains were left buried, which created much of the carbon that became the coal deposits of today (hence the name \"Carboniferous\"). Perhaps the most important evolutionary development of the time was the evolution of amniotic eggs, which allowed amphibians to move farther inland and remain the dominant vertebrates for the duration of this period. Also, the first reptiles and synapsids evolved in the swamps. Throughout the Carboniferous, there was a cooling trend, which led to the Permo-Carboniferous glaciation or the Carboniferous Rainforest Collapse. Gondwana was glaciated as much of it was situated around the south pole.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "419483",
"title": "Pyrenoid",
"section": "Section::::Origin.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 727,
"text": "However, alternative hypotheses have been proposed. Predictions of past CO levels suggest that they may have previously dropped as precipitously low as that seen during the expansion of land plants: approximately 300 MYA, during the Proterozoic Era. This being the case, there might have been a similar evolutionary pressure that resulted in the development of the pyrenoid, though it must be noted that in this case, a pyrenoid or pyrenoid-like structure could have developed, and have been lost as CO levels then rose, only to be gained or developed again during the period of land colonisation by plants. Evidence of multiple gains and losses of pyrenoids over relatively short geological time spans was found in hornworts.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "28058203",
"title": "Joseph D'Aleo",
"section": "Section::::Views.:Climate change.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 325,
"text": "BULLET::::5. Reconstruction of paleoclimatological CO2 concentrations demonstrates that carbon dioxide concentration today is near its lowest level since the Cambrian Era some 550 million years ago, when there was almost 20 times as much CO2 in the atmosphere as there is today without causing a “runaway greenhouse effect.”\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "85746",
"title": "Stoma",
"section": "Section::::Stomata and climate change.:Future adaptations during climate change.\n",
"start_paragraph_id": 63,
"start_character": 0,
"end_paragraph_id": 63,
"end_character": 240,
"text": "It is expected for [CO] to reach 500–1000 ppm by 2100. 96% of the past 400 000 years experienced below 280 ppm CO levels. From this figure, it is highly probable that genotypes of today’s plants diverged from their pre-industrial relative.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "419483",
"title": "Pyrenoid",
"section": "Section::::Origin.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 555,
"text": "There are several hypotheses as to the origin of pyrenoids. With the rise of large terrestrial based flora following the colonisation of land by ancestors of Charophyte algae, CO levels dropped dramatically, with a concomitant increase in O atmospheric concentration. It has been suggested that this sharp fall in CO levels acted as an evolutionary driver of CCM development, and thus gave rise to pyrenoids in doing so ensuring that rate of supply of CO did not become a limiting factor for photosynthesis in the face of declining atmospheric CO levels.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2605074",
"title": "Peat swamp forest",
"section": "Section::::In Indonesia.:The problem.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 506,
"text": "A study for the European Space Agency found that up to 2.57 billion tons of carbon were released to the atmosphere in 1997 as a result of burning peat and vegetation in Indonesia. This is equivalent to 40% of the average annual global carbon emissions from fossil fuels, and contributed greatly to the largest annual increase in atmospheric CO2 concentration detected since records began in 1957. Additionally, the 2002-3 fires released between 200 million to 1 billion tons of carbon into the atmosphere.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
27jxv7
|
What are the neurological differences, if any, between reading a physical book and reading online?
|
[
{
"answer": "I'm not entirely sure about the standards of this journal, but it does have citations. I'll try to sum up the author's major points that are at least supported by studies:\n\n1) Reading online requires much more \"cognitive space\" for a number of different reasons. The use of hyperlinks embedded within texts leads to more decision-making to be made, which of course requires more use of the brain. Just like when reading a paper newspaper, when one is confronted with many different choices in terms of what to read, one needs to make more cognitive decisions in selecting the most appealing thing to read. Also, websites that require scrolling to read the full text leads to greater brain activation that websites that do not require scrolling.\n\n2) The framework of the text (paratext) has an influence on the reader's response to the text. In a normal book, we have the acknowledgements page, title page, etc. that all shape our view that \"we are reading a book\". Similarly, a study showed that subjects were more likely to perceive humor in a text when reading on a lighter, clearer device.\n\n3) Online multitasking while reading on a screen most likely leads to a reduction in comprehension of and \"deeper\" thinking about the text. However, a link between reading comprehension and reading on paper vs screen is still not definitive, because studies have found conflicting results.\n\nThis is clearly still an emerging field of study, as e-readers and e-books have only become popular in recent years. I'll be curious to see what the results are from more careful scientific studies.\n\n_URL_0_\nBarry W Cull, \"Reading Revolutions: Online digital text and implications for reading in academe\"",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "8434151",
"title": "Donald Shankweiler",
"section": "Section::::Representative publications.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 275,
"text": "BULLET::::- Shankweiler, D. P., Mencl, W. E., Braze, D., Tabor, W., Pugh, K. R., & Fulbright, R. K. (2008). Reading differences and brain: Cortical integration of speech and print in sentence processing varies with reader skill. \"Developmental Neuropsychology\", 33, 745-776.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5008639",
"title": "Teaching reading: whole language and phonics",
"section": "Section::::Whole language.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 353,
"text": "Various approaches to reading presume that students learn differently. The phonics emphasis in reading draws heavily from behaviorist learning theory that is associated with the work of the Harvard psychologist B.F. Skinner while the whole language emphasis draws from cognitivist learning theory and the work of the Russian psychologist Lev Vygotsky. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18581264",
"title": "Reading",
"section": "Section::::Reading skills.:Methods of reading.\n",
"start_paragraph_id": 43,
"start_character": 0,
"end_paragraph_id": 43,
"end_character": 772,
"text": "BULLET::::- \"Multiple intelligences\"-based methods, which draw on the reader's diverse ways of thinking and knowing to enrich appreciation of the text. Reading is fundamentally a linguistic activity: one can basically comprehend a text without resorting to other intelligences, such as the visual (e.g., mentally \"seeing\" characters or events described), auditory (e.g., reading aloud or mentally \"hearing\" sounds described), or even the logical intelligence (e.g., considering \"what if\" scenarios or predicting how the text will unfold based on context clues). However, most readers already use several kinds of intelligence while reading. Doing so in a more disciplined manner—i.e., constantly, or after every paragraph—can result in a more vivid, memorable experience.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "59812372",
"title": "J. Bruce Tomblin",
"section": "Section::::Representative Publications.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 220,
"text": "BULLET::::- Catts, H. W., Fey, M. E., Zhang, X., & Tomblin, J. B. (1999). Language basis of reading and reading disabilities: Evidence from a longitudinal investigation. \"Scientific Studies of Reading\", \"3\"(4), 331-361.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8434151",
"title": "Donald Shankweiler",
"section": "Section::::Current work.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 584,
"text": "Shankweiler's current work , done in conjunction with David Braze and other colleagues at Haskins Laboratories, identifies sources of reading-related comprehension difficulties that are most subject to individual differences, and studies their cognitive and neurobiological underpinnings. This novel project brings together the knowledge base on reading differences and advanced psycholinguistic methods for studying on-line sentence processing, including tracking eye movements during reading and tracking brain activity (using fMRI) during coordinated reading and listening tasks.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8305",
"title": "Dyslexia",
"section": "Section::::Causes.:Neuroanatomy.\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 1043,
"text": "Modern neuroimaging techniques such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) have shown a correlation between both functional and structural differences in the brains of children with reading difficulties. Some dyslexics show less electrical activation in parts of the left hemisphere of the brain involved with reading, such as the inferior frontal gyrus, inferior parietal lobule, and the middle and ventral temporal cortex. Over the past decade, brain activation studies using PET to study language have produced a breakthrough in the understanding of the neural basis of language. Neural bases for the visual lexicon and for auditory verbal short-term memory components have been proposed, with some implication that the observed neural manifestation of developmental dyslexia is task-specific (i.e. functional rather than structural). fMRIs in dyslexics have provided important data which point to the interactive role of the cerebellum and cerebral cortex as well as other brain structures.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9404689",
"title": "Brain-reading",
"section": "Section::::Applications.:Human-machine interfaces.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 475,
"text": "Brain-reading has also been proposed as a method of improving human-machine interfaces, by the use of EEG to detect relevant brain states of a human. In recent years, there has been a rapid increase in patents for technology involved in reading brainwaves, rising from fewer than 400 from 2009–2012 to 1600 in 2014. These include proposed ways to control video games via brain waves and \"neuro-marketing\" to determine someone's thoughts about a new product or advertisement.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
a8avzp
|
Does Mars have enough mass to support a habitable atmosphere?
|
[
{
"answer": "Titan, a moon of Saturn, has about 1/5 the mass of Mars but holds an atmosphere with 1.5 times Earth's atmospheric pressure. So theoretically yes.\n\nOne big difference is that Titan is much colder than Mars and nowhere near habitable, but it turns out even with Mars's higher temp, it should be able to hold onto a significant atmosphere over billions of years. So why hasn't it?\n\nHere I quote from [Principles of Planetary Climate](_URL_0_) Chapter 8\n\n > It takes rather little impactor mass in the late stage veneer to deplete the atmosphere of an Earthlike planet – only a tenth of a percent of Earth’s mass, which is not an unreasonable amount to be left over after the assembly of an Earth-sized planet. An important result is that if Mars were to start out with a 2 bar CO2 atmosphere (as suggested by some climate calculations based on evidence for warm, wet early conditions), its atmosphere would not be much more subject to erosion than Earth’s. The mass of available impactors required to erode such a Martian atmosphere would be fully 70% of the corresponding mass for Earth. The main reason the estimates are so similar is that a 2 bar atmosphere on Mars has much more mass per unit area than Earth’s atmosphere, requiring a higher critical mass of impactor as compared with Earth. A more tenuous Martian atmosphere is much more erodable than Earth’s, as illustrated by the 100 mb Mars case in the table. Similarly, if Venus had an Earthlike atmosphere, its atmosphere would be essentially as erodable as Earth’s, whereas the actual dense Venus atmosphere requires about seven times as much available impactor mass to erode. The hypothetical Super-Earth case is only a bit less subject to erosion than Earth, in this case because a 1 bar atmosphere on a large planet has less mass per unit area than Earth’s atmosphere. The importance of the atmospheric mass effect shows also in the hypothetical planetary Titan case, which, owing to its very massive atmosphere, requires nearly as much available impactor mass to erode as does the 2 bar Early Mars case. The real Titan, in contrast, is very difficult to erode, requiring an available impactor mass of nearly a tenth of Earth’s mass, owing to the competition with Saturn for impacts.\n\n > The essential puzzle posed by the results of Table 8.7 is that it looks quite plausible that Earth’s atmosphere would be subject to loss by impact erosion in the Sweep stage, and that a dense Early Mars atmosphere would not be appreciably less erodable than Earth. How, then, to account for the present tenuous Martian atmosphere, while Earth has a substantial atmosphere remaining? One potential scenario is that Earth’s atmosphere was indeed lost by impact erosion, but was regenerated by outgassing from the interior. Consistent with this picture, we note that while Mars requires nearly as much available impactor mass as Earth, this impactor mass is delivered over a much longer time, owing to the smaller cross- section of Mars. Combined with the relatively early shutdown of tectonic activity and hence outgassing on Mars (owing to its small size) it could be that the essential difference between the planets resides not so much in ability to hold an atmosphere as in ability to regenerate an atmosphere. A severe difficulty with this picture, however, is the abundance of N2 in Earth’s atmosphere. A CO2 or water vapor atmosphere could be easily regenerated, but it is not easy to hide enough N2 in the mantle to allow this component to be regenerated. 
And recall that Venus has even more N2 in its atmosphere than Earth, suggesting that even if Venus went through an early stage with far less CO2 in its atmosphere, it did not suffer total atmosphere loss by impacts during that stage. Could it be that there is an ability to sequester a bar or two of N2 in a planet’s mantle? Could it be that Earth started out with much more N2 in its atmosphere and that what we have today is the small bit left over after substantial impact erosion? Or could it be that the mass of impactors was not in fact sufficient to deplete Earth’s atmosphere and that the tenuous Martian atmosphere has some other explanation? Perhaps it never generated a dense atmosphere, because it never received enough oxygen-bearing material to turn carbon into carbonate and CO2. Perhaps Mars lost its atmosphere in a chance giant impact which got rid of Martian N2, whereas Earth’s Moon-forming impact was not big enough to get rid of all the N2. If a giant impact removed most of the primordial N2 on Mars, then perhaps the rest could have been lost by non-thermal escape and solar wind erosion. But if Mars lost its atmosphere too early then it becomes hard to account for the large, extensive water-carved channels on Mars, some of which suggest persistence of active surface hydrology up to 3.5 billion years ago, with episodic recurrence of less extensive river networks extending billions of years later. More precise dating of these hydrological features, which will come ultimately with sample return missions from Mars, will go far to help resolve these puzzles. Still, the Mystery of the Missing Martian Atmosphere is likely to remain one of the Big Questions for a long while to come.\n\n > How do giant impacts fit into the picture? Giant impacts do not come in a continuous stream, but lunar to Mars-sized bodies are common enough in the late stages of planetary formation that it is likely that one or more giant impact occurs before the planet attains its final size. The very existence of the Moon provides evidence that Earth experienced a giant impact, while the anomalous retrograde rotation of Venus has been taken as evidence that a giant impact occurred there as well. The Martian crust exhibits a striking dichotomy between rugged thick-crusted and heavily cratered southern hemisphere highlands and smoother, thinner northern hemisphere lowlands; this has sometimes been taken as having resulted from a giant impact, though one smaller in relative scale than Earth’s Moon-forming impact. A single giant impact can blow off an entire atmosphere, but this is not inevitable; depending on the energy of the impactor, there can be a substantial proportion of the original atmosphere left. The issues in reconciling the histories of Earth and Mars are essentially the same as for impact erosion at the Sweep stage: how do we account for the story of N2 on Earth (or Venus, for that matter)? And how are we to account for the hydrology of Early Mars if a giant impact blew off the primordial Martian atmosphere but the planet was unable to regenerate a new CO2 atmosphere by outgassing?",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "4923933",
"title": "Terraforming of Mars",
"section": "Section::::Advantages.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 474,
"text": "According to scientists, Mars exists on the outer edge of the habitable zone, a region of the Solar System where liquid water on the surface may be supported if concentrated greenhouse gases could increase the atmospheric pressure. The lack of both a magnetic field and geologic activity on Mars may be a result of its relatively small size, which allowed the interior to cool more quickly than Earth's, although the details of such a process are still not well understood.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21051206",
"title": "Carbonate–silicate cycle",
"section": "Section::::The cycle on other planets.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 911,
"text": "Mars is such a planet, since it is located at the edge of our solar system’s habitable zone, which means its surface is too cold for liquid water to form without a greenhouse effect. With its thin atmosphere, Mars's mean surface temperature is -55 °C. In attempting to explain Mars’ topography that resembles fluvial channels despite seemingly insufficient incoming solar radiation, some have suggested that a cycle similar to Earth's carbonate-silicate cycle could have existed – similar to a retreat from Snowball Earth periods. It has been shown using modeling studies that gaseous CO and HO acting as greenhouse gases could not have kept Mars warm during its early history when the sun was fainter because CO would condense out into clouds. Even though CO clouds do not reflect in the same way that water clouds do on Earth, which means it could not have had much of a carbonate-silicate cycle in the past.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1072857",
"title": "Biosignature",
"section": "Section::::Examples.:Antibiosignatures.:Martian atmosphere.\n",
"start_paragraph_id": 56,
"start_character": 0,
"end_paragraph_id": 56,
"end_character": 477,
"text": "The Martian atmosphere contains high abundances of photochemically produced CO and H, which are reducing molecules. Mars' atmosphere is otherwise mostly oxidizing, leading to a source of untapped energy that life could exploit if it used a metabolism compatible with one or both of these reducing molecules. Because these molecules can be observed, scientists use this as evidence for an antibiosignature. Scientists have used this concept as an argument against life on Mars.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1744360",
"title": "Colonization of Mars",
"section": "Section::::Relative similarity to Earth.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 312,
"text": "BULLET::::- Mars has a surface area that is 28.4% of Earth's, which is only slightly less than the amount of dry land on Earth (which is 29.2% of Earth's surface). Mars has half the radius of Earth and only one-tenth the mass. This means that it has a smaller volume (~15%) and lower average density than Earth.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "404891",
"title": "The Case for Mars",
"section": "Section::::Colonization.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 769,
"text": "\"The Case for Mars\" acknowledges that any Martian colony will be partially Earth-dependent for centuries. However, it suggests that Mars may be a profitable place for two reasons. First, it may contain concentrated supplies of metals of equal or greater value to silver which have not been subjected to millennia of human scavenging and may be sold on Earth for profit. Secondly, the concentration of deuterium – a possible fuel for commercial nuclear fusion – is five times greater on Mars. Humans emigrating to Mars thus have an assured industry and the planet will be a magnet for settlers as wage costs will be high. The book asserts that “the labor shortage that will prevail on Mars will drive Martian civilization toward both technological and social advances.”\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4923933",
"title": "Terraforming of Mars",
"section": "Section::::Proposed methods and strategies.\n",
"start_paragraph_id": 37,
"start_character": 0,
"end_paragraph_id": 37,
"end_character": 787,
"text": "Terraforming Mars would entail three major interlaced changes: building up the magnetosphere, building up the atmosphere, and raising the temperature. The atmosphere of Mars is relatively thin and has a very low surface pressure. Because its atmosphere consists mainly of , a known greenhouse gas, once Mars begins to heat, the may help to keep thermal energy near the surface. Moreover, as it heats, more should enter the atmosphere from the frozen reserves on the poles, enhancing the greenhouse effect. This means that the two processes of building the atmosphere and heating it would augment each other, favoring terraforming. However, it would be difficult to keep the atmosphere together because of the lack of a protective global magnetic field against erosion by the solar wind.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1744360",
"title": "Colonization of Mars",
"section": "Section::::Differences from Earth.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 428,
"text": "BULLET::::- Because Mars is about 52% farther from the Sun, the amount of solar energy entering its upper atmosphere per unit area (the solar constant) is only around 43.3% of what reaches the Earth's upper atmosphere. However, due to the much thinner atmosphere, a higher fraction of the solar energy reaches the surface. The maximum solar irradiance on Mars is about 590 W/m compared to about 1000 W/m at the Earth's surface.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
177lwi
|
the american game-show 'jeopardy'
|
[
{
"answer": "just a trivia show. There are three rounds. First and Second rounds have 7 (I think) categories with 5 questions each. Wrong answers deduct dollars.\n\nLast round is a single question. Players are given the category ahead of time and can wager whatever amount of money they have on whether they'll get the question right.\n\nMost money at the end wins.",
"provenance": null
},
{
"answer": "Not much to explain. It's a trivia show quiz show where answers must be given in the form of a question.\n\n_URL_0_",
"provenance": null
},
{
"answer": "Standard quiz/trivia show. The players' scores are in \"jeopardy\" throughout because wrong answers subtract from their total. Another gimmick is that the clues are provided in the form of a statement and the participants have to respond in the form of a question.\n\n3 rounds: Jeopardy, Double Jeopardy, and Final Jeopardy\n\nJeopardy has 6 categories of 5 questions each (ranging from $200 to $1000). \n\nExample: \nPlayer - I'll take Websites for $400, Alex. \nAlex Trebek - It's a place where user-submitted links are voted up or down by the rest of the users. \nPlayer - *rings in* What is reddit?\n\nFailing to phrase your response as a question counts as incorrect, but you have a few seconds to correct your mistake if you forgot.\n\nDouble Jeopardy is the same, but the values are doubled (ranging from $400 to $2000 per question/answer).\n\nThe first two rounds have Daily Doubles as well (one in the first round, two in the second). When selecting a clue, the player gets a special clue just for them (the other players can't ring in), and they get to wager how much they want to earn/lose on the result. So making something a \"true Daily Double\" means betting your entire score.\n\nFinal Jeopardy is like a Daily Double for all players. The category is given, players get to decide how much they want to bet on a single question, then they hear the clue and have 30 seconds to write down their response.\n\n*edit* Daily Doubles are hidden, so there's no way to know which clue is going to be one. They tend to be in the lower half (i.e. higher-scoring clues) of the board.",
"provenance": null
},
{
"answer": "Contestants are given the opportunity to answer questions with associated monetary values. A correct answer credits the contestant with the associated value; an incorrect answer debits the contestant in the same amount. So there's a penalty to just guessin'.\n\nThe questions are well known for being challenging, but the range is quite broad. Some questions have obvious answers anyone could guess; others have obvious answers but are worded in such a way as to make them tricky to guess. Still others are questions you either know the answer to or you don't. Who was Henry VIII buried next to? Either you know that or you don't. (It was Jane Seymour.)\n\nThe gimmick of the game is that the \"questions\" are phrased as if they were answers, and the contestants are required to provide the questions to which those are answers. In the above example, the \"question\" might have been \"She's the wife of Henry VIII next to whom he was buried,\" and an \"answer\" might be, \"Who was Jane Seymour?\"\n\nThis is a formality more than anything. Many consider the game to be *slightly* more challenging because the \"questions\" must first be parsed to figure out what the correct response needs to be. First you must unpack the \"question,\" then you have to come up with the correct answer to the question, then you must phrase the correct answer in a way that's acceptable to the judges. This makes the whole game a *bit* tricker and more interesting than just answering questions. How much tricker, and how much more interesting, is of course in the eye of the beholder.",
"provenance": null
},
{
"answer": "It's essentially a trivia game. \n\nYou pick a category and a dollar value and then you get a clue in the form of an \"answer,\" for which you have to provide the question. So, you might get an \"answer\" like \"This president was elected in 2008.\" The correct response would be \"Who is Barack Obama?\"\n\nThere are three rounds. In the first, the clues are valued at between $200 and $1000, and there is one \"Daily Double,\" a clue for which you can bet all, part, or none of your current winnings. In the second round the clues are worth $400 - $2000 and there are two daily doubles. The third round is \"Final Jeopardy,\" in which the players are given the name of a category. Then they must bet all, part, or none of their winnings before seeing the question. The winner's the person with the most money at the end of Final Jeopardy. ",
"provenance": null
},
{
"answer": "Answer questions, get money. ",
"provenance": null
},
{
"answer": "To add a few more details to the other excellent descriptions:\n\nThe winner from the show gets to come back the next night and play against two new contestants. This continues until the person is defeated.\n\nThe interesting thing about Jeopardy is that it's a show where you have to have pretty good breadth of knowledge to do well, and contestants appear to be pretty smart individuals. On many other game shows in the U.S., contestants seem to be more like \"regular people\" and aren't necessarily that exceptional. Yet the prizes for Jeopardy aren't that lavish--you can be a jeopardy winner and take home less than $10,000, depending on how the game goes. Taken together it means that being a contestant on Jeopardy, or even just playing along at home, has more status & intellectual cachet than competing on/watching other shows.",
"provenance": null
},
{
"answer": "Probably the most confuddling thing about it is the way questions are posed and asked. Jeopardy! was created at a time when there was a big scandal about game shows being rigged, where contestants were \"given the answers\". \n\nSo when Jeopardy was created, the joke was that the contestants *were* given the answers, openly. The task for them was to provide the questions. So that's why the questions are always nonsense like like \"what is Damascus?\"",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "39458110",
"title": "Jeopardy! (franchise)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 352,
"text": "Jeopardy! is an American media franchise that began with a television quiz show created by Merv Griffin, in which contestants are presented with clues in the form of answers, and must phrase their responses in the form of a question. Over the years, the show has expanded its brand beyond television and been licensed into products of various formats.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6015428",
"title": "Jeopardy! audition process",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 675,
"text": "\"Jeopardy!\" is an American television quiz show created by Merv Griffin, in which contestants are presented with clues in the form of answers, and must phrase their responses in the form of a question. Throughout its run, the show has regularly offered auditions for potential contestants, taking place in the Los Angeles area and occasionally in other locations throughout the United States. Unlike those of many other game shows, \"Jeopardy!\"'s audition process involves passing a difficult test of knowledge on a diversity of subjects, approximating the breadth of material encountered by contestants on the show. Since 2006, an online screener test is conducted annually.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29339253",
"title": "List of Jeopardy! tournaments and events",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 313,
"text": "\"Jeopardy!\" is an American television quiz show created by Merv Griffin, in which contestants are presented with clues in the form of answers and must phrase their responses in the form of questions. Over the years, the show has featured many tournaments and special events since Alex Trebek became host in 1984.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29276606",
"title": "List of Jeopardy! contestants",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 889,
"text": "\"Jeopardy!\" is an American television game show. Its format is a quiz competition in which contestants are presented with general knowledge clues in the form of answers, and must phrase their responses in question form. Many contestants throughout the show's history have received significant media attention because of their success on \"Jeopardy!\", particularly Brad Rutter, who has never lost to a human player and has won the most money on the show including tournaments; Ken Jennings, who has the show's longest winning streak; and James Holzhauer, who holds several of the show's highest overall daily scores. Rutter and Jennings also hold the first- and second-place records respectively for most money ever won on American game shows, whereas Holzhauer ranks fourth overall. Other contestants went on to accomplish much, such as U.S. senator and presidential candidate John McCain.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "44663958",
"title": "Jeopardy! (British game show)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 399,
"text": "Jeopardy! is a game show based on the US version of the same name. It was originally aired on Channel 4 from 12 January 1983 to 2 July 1984, hosted by Derek Hobson, then was revived by ITV from 3 September 1990 to 9 April 1993, first hosted by Chris Donat in 1990 and then hosted by Steve Jones from 1991 to 1993 and then finally on Sky One from 4 December 1995 to 7 June 1996, hosted by Paul Ross.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "27748226",
"title": "Jeopardy!",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 633,
"text": "Jeopardy! is an American television game show created by Merv Griffin. The show features a quiz competition in which contestants are presented with general knowledge clues in the form of answers, and must phrase their responses in the form of questions. The original daytime version debuted on NBC on March 30, 1964, and aired until January 3, 1975. A weekly nighttime syndicated edition aired from September 1974 to September 1975, and a revival, \"The All-New Jeopardy!\", ran on NBC from October 1978 to March 1979. The current version, a daily syndicated show produced by Sony Pictures Television, premiered on September 10, 1984.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9941118",
"title": "Jeopardy! broadcast information",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 510,
"text": "Jeopardy! is an American television quiz show created by Merv Griffin, in which contestants are presented with trivia clues in the form of answers and must phrase their responses in the form of a question. The show has experienced a long life in several incarnations over the course of nearly a half-century, spending more than 11 years as a daytime network program and having currently run in syndication for 35 seasons. It has also gained a worldwide following with a multitude of international adaptations.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
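The answers above walk through the scoring mechanics: clue values that add when the response is right and subtract when it is wrong, plus wager-based Daily Doubles and Final Jeopardy. A minimal sketch of that logic follows; the function names and the simplified flow are illustrative only, not the show's actual rules engine:

```python
# Toy model of Jeopardy!-style scoring as described in the answers above.

def apply_clue(score: int, value: int, correct: bool) -> int:
    """Regular clue: gain the face value if correct, lose it if wrong."""
    return score + value if correct else score - value

def apply_wager(score: int, wager: int, correct: bool) -> int:
    """Daily Double / Final Jeopardy: gain or lose a player-chosen wager."""
    return score + wager if correct else score - wager

score = 0
score = apply_clue(score, 400, correct=True)            # "Websites for $400" -> 400
score = apply_clue(score, 800, correct=True)            # -> 1200
score = apply_wager(score, wager=score, correct=True)   # "true Daily Double" -> 2400
score = apply_clue(score, 1000, correct=False)          # wrong answer -> 1400
score = apply_wager(score, wager=1399, correct=True)    # Final Jeopardy bet -> 2799
print(score)
```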
6zlgr0
|
what is the premise of the 'shadow' that carl jung wrote about?
|
[
{
"answer": "The idea is that we look at ourselves in a good light, and this casts a shadow that hides from us our true selves.\n\nHe believed that we had to face that shadow in our journey to self realization. Facing that shadow means recognizing that all the worst parts of humanity are in you too. If you were born in Nazi Germany to a German family, there is a good chance you would have been a Nazi. You wouldn't have had some moral epiphany and rallied against your people, you would likely have taken part in the Holocaust.\n\nFor a better look at that idea. What it takes for a normal person, you or me, to turn into that kind of a monster, read Ordinary Men. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "560394",
"title": "Shadow (psychology)",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 659,
"text": "Carl Jung stated the shadow to be the unknown dark side of the personality. According to Jung, the shadow, in being instinctive and irrational, is prone to psychological projection, in which a perceived personal inferiority is recognized as a perceived moral deficiency in someone else. Jung writes that if these projections remain hidden, \"The projection-making factor (the Shadow archetype) then has a free hand and can realize its object—if it has one—or bring about some other situation characteristic of its power.\" These projections insulate and harm individuals by acting as a constantly thickening veil of illusion between the ego and the real world.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "560394",
"title": "Shadow (psychology)",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 426,
"text": "Contrary to a Freudian definition of shadow, the Jungian shadow can include everything outside the light of consciousness and may be positive or negative. \"Everyone carries a shadow,\" Jung wrote, \"and the less it is embodied in the individual's conscious life, the blacker and denser it is.\" It may be (in part) one's link to more primitive animal instincts, which are superseded during early childhood by the conscious mind.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "448370",
"title": "Analytical psychology",
"section": "Section::::Fundamentals.:Shadow.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 965,
"text": "The shadow is an unconscious complex defined as the repressed, suppressed or disowned qualities of the conscious self. According to Jung, the human being deals with the reality of the shadow in four ways: denial, projection, integration and/or transmutation. According to analytical psychology, a person's shadow may have both constructive and destructive aspects. In its more destructive aspects, the shadow can represent those things people do not accept about themselves. For instance, the shadow of someone who identifies as being kind may be harsh or unkind. Conversely, the shadow of a person who perceives himself to be brutal may be gentle. In its more constructive aspects, a person's shadow may represent hidden positive qualities. This has been referred to as the \"gold in the shadow\". Jung emphasized the importance of being aware of shadow material and incorporating it into conscious awareness in order to avoid projecting shadow qualities on others.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "102999",
"title": "Collective unconscious",
"section": "Section::::Exploration.\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 543,
"text": "Jung considered that 'the shadow' and the anima and animus differ from the other archetypes in the fact that their content is more directly related to the individual's personal situation'. These archetypes, a special focus of Jung's work, become autonomous personalities within an individual psyche. Jung encouraged direct conscious dialogue of the patient's with these personalities within. While the shadow usually personifies the personal unconscious, the anima or the Wise Old Man can act as representatives of the collective unconscious.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20611803",
"title": "Hawksmoor (novel)",
"section": "Section::::Structure and narrative mode, style, symbolism.:Symbolism.:Shadow.\n",
"start_paragraph_id": 52,
"start_character": 0,
"end_paragraph_id": 52,
"end_character": 482,
"text": "The word \"shadow\" symbolizes not only Dyer's occult belief system but literally his dark side himself since he appears later on in the novel as a shadow killing people. Dyer admonishes his assistant Walter: \"the art of shaddowes you must know well, Walter\" because \"it is only the Darknesse that can give trew Forme to our Work\". The name Dyer gives his occultism is \"Scientia Umbrarum\" (shadowy knowledge) The murder victims all fall prey to an ominous figure called \"the shadow\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "560394",
"title": "Shadow (psychology)",
"section": "Section::::Encounter with.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 998,
"text": "The eventual encounter with the shadow plays a central part in the process of individuation. Jung considered that \"the course of individuation...exhibits a certain formal regularity. Its signposts and milestones are various archetypal symbols\" marking its stages; and of these \"the first stage leads to the experience of the SHADOW\". If \"the breakdown of the persona constitutes the typical Jungian moment both in therapy and in development\", it is this that opens the road to the shadow within, coming about when \"Beneath the surface a person is suffering from a deadly boredom that makes everything seem meaningless and empty ... as if the initial encounter with the Self casts a dark shadow ahead of time.\" Jung considered as a perennial danger in life that \"the more consciousness gains in clarity, the more monarchic becomes its content...the king constantly needs the renewal that begins with a descent into his own darkness\"—his shadow—which the \"dissolution of the persona\" sets in motion.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2596759",
"title": "The Shadow (fairy tale)",
"section": "Section::::Analysis.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 340,
"text": "\"The Shadow\" is an exemplary story in Andersen's darker fairy tales. Throughout the tale, the writer is portrayed as a moral person, concerned with the good and true in the world. But as it says, the people around him are not much interested in his feelings on the subject. Indeed, his shadow says he does not see the world as it truly is.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
40le3p
|
how can the suns rays make you feel mentally/psychologically better?
|
[
{
"answer": "Mostly your brain (pineal gland) produces two kinds of \"drugs\" (hormones): one for the day (serotonin) and one for the night (melatonin). This is part of your inner clock. Serotonin keeps you awake and melatonin makes you sleepy. Light, specially sunlight, stops the production of melatonin. So if you would stay in dark places without (sun)light over a long period of time, the levels of melatonin would be very high and you would be, more or less, feeling sleepy all the time. When the balance of serotonin and melatonin in your body is messed up, your inner clock is also messed up. This leads to e.g. sleep disorders, depressions and some other stuff that isn't very healthy either. \n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "13108668",
"title": "Dasha (astrology)",
"section": "Section::::The Mahadashas.\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 588,
"text": "If the Sun is strong and favourably placed the soul will feel strong, one may make efforts for self-realisation, live splendidly, travel far and wide, engage in strife or hostility that yields good dividend, rise in position and status, gains through trading, and benefits from father or father will benefit. If the Sun is weak and afflicted one may feel inner weakness, suffer decline in physical and mental prowess, health troubles, demotion in rank and status, displeasure of the government, trading losses and suffer at the hands of father or his father may suffer ill-health or die.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21911",
"title": "Naturism",
"section": "Section::::Philosophy.:Naturism for health.\n",
"start_paragraph_id": 92,
"start_character": 0,
"end_paragraph_id": 92,
"end_character": 345,
"text": "Sunlight has been shown to be beneficial in some skin conditions and enables the body to make vitamin D, but with the increased awareness of skin cancer, wearing of sunscreen is now part of the culture. Sun exposure prompts the body to produce nitric oxide that helps support the cardiovascular system and the feelgood brain-chemical serotonin.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "170803",
"title": "Mood (psychology)",
"section": "Section::::Types of mood.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 1111,
"text": "There have been many studies done on the effect of positive emotion on the cognitive mind and there is speculation that positive mood can affect our minds in good or bad ways. Generally, positive mood has been found to enhance creative problem solving and flexible yet careful thinking. Some studies have stated that positive moods let people think creatively, freely, and be more imaginative. Positive mood can also help individuals in situations in which heavy thinking and brainstorming is involved. In one experiment, individuals who were induced with a positive mood enhanced performance on the Remote Associates Task (RAT), a cognitive task that requires creative problem solving. Moreover, the study also suggests that being in a positive mood broadens or expands the breadth of attentional selection such that information that may be useful to the task at hand becomes more accessible for use. Consequently, greater accessibility of relevant information facilitates successful problem solving. Positive mood also facilitates resistance to temptations, especially with regards to unhealthy food choices.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1165522",
"title": "Mindfulness",
"section": "Section::::Scientific research.\n",
"start_paragraph_id": 182,
"start_character": 0,
"end_paragraph_id": 182,
"end_character": 516,
"text": "Nevertheless, MBSR can have a beneficial effect helping with the depression and psychological distress associated with chronic illness. Meditation also may allow you to modulate pain stronger. When participants in research were exposed to pain from heating, the brainsscans of the mindfulness meditation group (by use of functional magnetic resonance imaging) showed their brains notice the pain equally, however it does not get converted to a perceived pain signal. As such they experienced up to 40–50% less pain.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3502474",
"title": "Miracle of the Sun",
"section": "Section::::Criticism.\n",
"start_paragraph_id": 38,
"start_character": 0,
"end_paragraph_id": 38,
"end_character": 1385,
"text": "Others, such as professor of physics Auguste Meessen, suggest that optical effects created by the human eye can account for the reported phenomenon. Meessen presented his analysis of apparitions and \"Miracles of the Sun\" at the International Symposium \"Science, Religion and Conscience\" in 2003. While Meessen felt those who claim to have experienced miracles were \"honestly experiencing what they report\", he stated Sun miracles cannot be taken at face value and that the reported observations were optical effects caused by prolonged staring at the Sun. Meessen contends that retinal after-images produced after brief periods of Sun gazing are a likely cause of the observed dancing effects. Similarly, Meessen concluded that the color changes witnessed were most likely caused by the bleaching of photosensitive retinal cells. Meessen observes that Sun Miracles have been witnessed in many places where religiously charged pilgrims have been encouraged to stare at the Sun. He cites the apparitions at Heroldsbach, Germany (1949) as an example, where many people within a crowd of over 10,000 testified to witnessing similar observations as at Fátima. Meessen also cites a \"British Journal of Ophthalmology\" article that discusses some modern examples of Sun Miracles. Prof. Dr. Stöckl, a meteorologist from Regensburg, also proposed a similar theory and made similar observations.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4829543",
"title": "Heliophobia",
"section": "Section::::Causes.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 758,
"text": "The Pacific Health Center suggested that people have been staying away from the sunlight because of a growing fear of skin cancer or blindness. This is not technically heliophobia, but simply an unfounded and illogical solution. Obsessive compulsive disorder, if it includes an intense fear of being harmfully affected by exposure to the sun or to bright lights, can also cause heliophobia. Forms of heliophobia based on such fears can cause the sufferer to eventually develop fear of being in public or fear of people in general by association, as a crippling fear of bright light can significantly limit the places a heliophobe can comfortably visit, as well as prevent that person from going outside during the daytime, when most other people are active.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "25669714",
"title": "Health effects of sunlight exposure",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 717,
"text": "The ultraviolet radiation in sunlight has both positive and negative health effects, as it is both a principal source of vitamin D and a mutagen. A dietary supplement can supply vitamin D without this mutagenic effect. Vitamin D has been suggested as having a wide range of positive health effects, which include strengthening bones and possibly inhibiting the growth of some cancers. UV exposure also has positive effects for endorphin levels, and possibly for protection against multiple sclerosis. Visible sunlight to the eyes gives health benefits through its association with the timing of melatonin synthesis, maintenance of normal and robust circadian rhythms, and reduced risk of seasonal affective disorder.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
fq5jqz
|
how can people take old videos and upscale them to 4k?
|
[
{
"answer": "That video was recorded in 1985 on film from my understanding.\n\nFilm itself has a resolution way beyond 4K depending on the grain size. As long as the film is preserved, it can be re-scanned using a higher resolution scanner.\n\nWe will probably get an 8K cut in a few years.",
"provenance": null
},
{
"answer": "The video would have been recorded onto film given its era. Photographic film has really high resolution we just don't associate high resolution with the old analog TV era because the TVs didn't have much to work with, but movie film is somewhere in the 4k-16k range depending on the size and quality of film. If they had a good quality recording then its just a matter of scanning it in really nicely and you have a 4k music video.\n\nThis is why old movies can also be upscaled to 4k(but they often have film grain and anomalies from years in storage) but more recent movies that were shot and edited digitally cannot be. If the movie was captured on an early digital camera at 2k resolution (roughly 1920x1080 or 1080P) then you don't have the raw data to work with. You can fudge it in post processing(which your TV will do) but its not quite the same",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "47262026",
"title": "Ultra-high-definition television",
"section": "Section::::History.:2013.\n",
"start_paragraph_id": 73,
"start_character": 0,
"end_paragraph_id": 73,
"end_character": 282,
"text": "On December 25, 2013, YouTube added a \"2160p 4K\" option to its videoplayer. Previously, a visitor had to select the \"original\" setting in the video quality menu to watch a video in 4K resolution. With the new setting, YouTube users can much more easily identify and play 4K videos.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18110995",
"title": "4mations",
"section": "Section::::Categories.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 268,
"text": "Videos can be viewed by selecting the most discussed, the top rated, the most watched, featured videos (which are highlighted by the 4mations team) or the newest to be uploaded. Alternatively users can determine the category they wish to look at by channel or format.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "32362461",
"title": "Massive open online course",
"section": "Section::::Student experience and pedagogy.:Instructional design.\n",
"start_paragraph_id": 72,
"start_character": 0,
"end_paragraph_id": 72,
"end_character": 993,
"text": "Some view the videos and other material produced by the MOOC as the next form of the textbook. \"MOOC is the new textbook\", according to David Finegold of Rutgers University. A study of edX student habits found that certificate-earning students generally stop watching videos longer than 6 to 9 minutes. They viewed the first 4.4 minutes (median) of 12- to 15-minute videos. Some traditional schools blend online and offline learning, sometimes called flipped classrooms. Students watch lectures online at home and work on projects and interact with faculty while in class. Such hybrids can even improve student performance in traditional in-person classes. One fall 2012 test by San Jose State and edX found that incorporating content from an online course into a for-credit campus-based course increased pass rates to 91% from as low as 55% without the online component. \"We do not recommend selecting an online-only experience over a blended learning experience\", says Coursera's Andrew Ng.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1936304",
"title": "VideoNow",
"section": "",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 287,
"text": "However, at least one video has been posted on YouTube showing how VideoNow Color players can be easily modified to accept standard-sized CDs with a bit of cutting and gluing. Full-sized CDs can hold roughly 42 minutes of total video, and play with no difference in the modified player.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8120244",
"title": "Go!Cam",
"section": "Section::::Video recording length.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 320,
"text": "Go!Edit videos can only be taken for a length of 15 seconds and then edited; however, accessing the camera through the XMB menu means the video recording length depends on the size of your Memory Stick. Also note that video quality can be changed, so the lower the quality, the longer the recording time and vice versa.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "55851302",
"title": "Elsagate",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 845,
"text": "Most videos in this category are either live action films or crude digital animations, although a few channels have been using more elaborate techniques such as clay animation. Despite YouTube's age restriction policies, these videos are sometimes tagged in such a way to circumvent the inbuilt child safety algorithms, even making their way into YouTube Kids, and are thus difficult to moderate due to the large scale of the platform. In order to capture search results and attract attention from users, their titles and descriptions feature names of famous characters, as well as keywords like \"education,\" \"learn colors,\" \"nursery rhymes,\" etc. They also include automatically-placed ads, making them lucrative to their owners and YouTube. Despite the objectionable and often confusing nature of these videos, many attract millions of views.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "42405565",
"title": "Coub",
"section": "Section::::Function.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 256,
"text": "Using Coub's web-based editor, users can extract a snippet up to 10 seconds long from a video already hosted on YouTube or Vimeo, or one that they've uploaded, and add a full-length audio track to play along with the clip. The video can be set to reverse.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
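The second answer above hinges on the difference between rescanning a film negative (which genuinely contains 4K-plus detail) and upscaling a 2K digital master (which does not). A minimal sketch of why naive upscaling adds pixels but no information; the NumPy array here is just a stand-in for a video frame:

```python
import numpy as np

# A fake 2K (1080p) frame standing in for a digitally mastered video.
frame_2k = np.random.randint(0, 256, size=(1080, 1920), dtype=np.uint8)

# Nearest-neighbour 2x upscale: every pixel simply becomes a 2x2 block of itself.
frame_4k = np.repeat(np.repeat(frame_2k, 2, axis=0), 2, axis=1)

print(frame_2k.shape)  # (1080, 1920)
print(frame_4k.shape)  # (2160, 3840) -- 4K-sized, but no new detail was created
```

Smarter interpolation or machine-learning upscalers can only estimate the missing detail, whereas a fresh high-resolution scan of the original film actually captures it.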
byeeg1
|
why does water and air feel different at the same temp? full question below.
|
[
{
"answer": "Transmission of energy. Water holds and absorbs tremendously more energy than air, that's why it takes so much more airflow volume to create the same cooling effect as water.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "900160",
"title": "Internal wave",
"section": "Section::::Buoyancy, reduced gravity and buoyancy frequency.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 669,
"text": "If formula_7, formula_8 is positive though generally much smaller than formula_4. Because water is much more dense than air, the displacement of water by air from a surface gravity wave feels nearly the full force of gravity (formula_10). The displacement of the thermocline of a lake, which separates warmer surface from cooler deep water, feels the buoyancy force expressed through the reduced gravity. For example, the density difference between ice water and room temperature water is 0.002 the characteristic density of water. So the reduced gravity is 0.2% that of gravity. It is for this reason that internal waves move in slow-motion relative to surface waves.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "223970",
"title": "Relative humidity",
"section": "Section::::Other important facts.\n",
"start_paragraph_id": 61,
"start_character": 0,
"end_paragraph_id": 61,
"end_character": 600,
"text": "Water vapor is a lighter gas than other gaseous components of air at the same temperature, so humid air will tend to rise by natural convection. This is a mechanism behind thunderstorms and other weather phenomena. Relative humidity is often mentioned in weather forecasts and reports, as it is an indicator of the likelihood of precipitation, dew, or fog. In hot summer weather, it also increases the apparent temperature to humans (and other animals) by hindering the evaporation of perspiration from the skin as the relative humidity rises. This effect is calculated as the heat index or humidex.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41270451",
"title": "Broadband acoustic resonance dissolution spectroscopy",
"section": "Section::::Principles of the BARDS response.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 222,
"text": "Water is approximately 800 times more dense than air. However, air is approximately 15,000 times more compressible than water. The velocity of sound, \"υ\", in a homogeneous liquid or gas is given by the following equation:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33978",
"title": "Weather",
"section": "Section::::Causes.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 628,
"text": "Surface temperature differences in turn cause pressure differences. A hot surface warms the air above it causing it to expand and lower the density and the resulting surface air pressure. The resulting horizontal pressure gradient moves the air from higher to lower pressure regions, creating a wind, and the Earth's rotation then causes deflection of this air flow due to the Coriolis effect. The simple systems thus formed can then display emergent behaviour to produce more complex systems and thus other weather phenomena. Large scale examples include the Hadley cell while a smaller scale example would be coastal breezes.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41822",
"title": "Troposphere",
"section": "Section::::Pressure and temperature structure.:Temperature.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 406,
"text": "If the air contains water vapor, then cooling of the air can cause the water to condense, and the behavior is no longer that of an ideal gas. If the air is at the saturated vapor pressure, then the rate at which temperature drops with height is called the saturated adiabatic lapse rate. More generally, the actual rate at which the temperature drops with altitude is called the environmental lapse rate. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "89547",
"title": "Water vapor",
"section": "Section::::Properties.:Impact on air density.:At equal temperatures.\n",
"start_paragraph_id": 38,
"start_character": 0,
"end_paragraph_id": 38,
"end_character": 831,
"text": "At the same temperature, a column of dry air will be denser or heavier than a column of air containing any water vapor, the molar mass of diatomic nitrogen and diatomic oxygen both being greater than the molar mass of water. Thus, any volume of dry air will sink if placed in a larger volume of moist air. Also, a volume of moist air will rise or be buoyant if placed in a larger region of dry air. As the temperature rises the proportion of water vapor in the air increases, and its buoyancy will increase. The increase in buoyancy can have a significant atmospheric impact, giving rise to powerful, moisture rich, upward air currents when the air temperature and sea temperature reaches 25 °C or above. This phenomenon provides a significant driving force for cyclonic and anticyclonic weather systems (typhoons and hurricanes).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "229446",
"title": "Anomalous propagation",
"section": "Section::::Causes.:Air temperature profile.:Super refraction.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 409,
"text": "It is very common to have temperature inversions forming near the ground, for instance air cooling at night while remaining warm aloft. This happens equally aloft when a warm and dry airmass overrides a cooler one, like in the subsidence aloft cause by a high pressure intensifying. The index of refraction of air increases in both cases and the EM wave bends toward the ground instead of continuing upward. \n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
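The short answer above ("water holds and absorbs tremendously more energy than air") can be made concrete with a back-of-the-envelope comparison of volumetric heat capacity. The densities and specific heats below are rounded textbook values assumed for illustration, not figures from the excerpts; thermal conductivity, which also strongly favours water, is ignored here:

```python
# Volumetric heat capacity = density * specific heat: how much energy a cubic
# metre of the substance absorbs per degree of temperature change.

water_density = 1000.0        # kg/m^3
water_specific_heat = 4186.0  # J/(kg*K)

air_density = 1.2             # kg/m^3 at room temperature
air_specific_heat = 1005.0    # J/(kg*K)

water_volumetric = water_density * water_specific_heat  # ~4.2e6 J/(m^3*K)
air_volumetric = air_density * air_specific_heat        # ~1.2e3 J/(m^3*K)

print(f"water: {water_volumetric:.2e} J/(m^3*K)")
print(f"air:   {air_volumetric:.2e} J/(m^3*K)")
print(f"ratio: roughly {water_volumetric / air_volumetric:.0f}x")  # ~3500x
```

That factor of a few thousand, together with water's much higher thermal conductivity, is why 20 °C water pulls heat out of your skin far faster than 20 °C air.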
bndjtz
|
The Great Arab Revolt
|
[
{
"answer": "I'll try to answer your question, sorry if it doesn't satisfy you.\n\nFirst, despite its name (and what was believed to be by Arab and Turkish nationalist historiography), there was no general \"Arab Revolt\" against the Ottoman Empire. Firstly, Husayn, the leader of the revolt, was not an Arab nationalist, and he did not adopt the ideology of Arabism. He was an ambitious dynast who used his Islamic status as a *sharif* and the *amir* of Mecca in an attempt to acquire a hereditary kingdom or principality for his family. He even stated that his objective was to free the caliph from the \"atheistic\" clutches of the CUP regime rather than overthrowing him. Second, Although clandestine support for the revolt existed in some parts of Syria, Husayn’s call failed to generate any organized or widespread response in the Arabic-speaking provinces. Many Arab public figures even accused Husayn of being a traitor and condemned his actions as dividing the Ottoman-Islamic Empire at a time when unity was crucial. Rather than a popular uprising against the Ottoman Empire, the Arab Revolt was a more narrowly based enterprise relying on tribal levies from Arabia and dominated by the Hashemite family. Huge subversion among Arabs in the Ottoman army as had been expected by the British Arab Bureau never materialized, even after Sharif Hussein’s revolt in 1916. No Arab units of the Ottoman army came over to Hussein, and from British intelligence memorandum, many Arab soldiers continued to demonstrate loyalty not only to Islam but also to the Ottoman government. Except for a few thousand tribesmen, most Arabs remained loyal to the empire during the traumatic events of the times\n\nIn my knowledge, the Arab mobilization during the First World War was less researched, and the fragmentary evidence we have doesn't help. However, using a general calculation, the Ottoman armed forces would have comprised by 47% Turks and Anatolian Muslims, 37% Arabs, 8% Ottoman Greeks, 7% Armenians and 1% Jews. The motivation for the Arabs to join the Ottoman army varied.\n\nSome enthusiastically joined, motivated with nationalist fervour or *jihad* propaganda. In Damascus, the population was opposed to Great Britain, Russia and France while the Muslim population of Palestine held anti-British feelings as well. With the onset of the war, propaganda and rumours filled the town that the army intended to invade Egypt and free it from the British rule. The propaganda succeeded in winning the wholehearted support of the Arab Muslims and soldiers, a few weeks before the expedition the enthusiasm and excitement of the people reached a ‘fever pitch’ in Jaffa. Parades and celebrations of all kinds in anticipation of the triumphal March into Egypt were taking place and the enmity against the Entente states was at the centre of the propaganda. Even Arabs that made bitter remarks against Germany for not helping the Ottomans during the war against Italy soon underwent a change and they came to realize that the Ottomans had taken up arms against Russia and that Russia was considered first and foremost the arch-enemy. Reports on German victories also had a powerful effect on them. Similar propaganda was directed at the soldiers who would invade Egypt since many of the Arab soldiers were not acquainted with the disciplined character of military life. To increase their enthusiasm, Cemal Pasha, the theater commander used both jihad propaganda and the argument that the Egyptians were ready to revolt against British rule. 
He had many Arab scholars preach to the Arab soldiers before and during the first attack against Egypt. These military employees strolled through the camps and delivered vehement speeches. Their orations were so influential among the common Arab soldiers that some had fits of hysteria due to the excited preaching.\n\nOn the other hand, some were also forcefully mobilized with no other choice. These forced conscripts had almost no option but to join the army. The alternative was often death by starvation. Moreover, the conscripts, isolated in their camp life, developed a critical distance from the normative ethics of their original communities when they moved to the margins of major cities like Alexandria and Cairo. Families also mourned the loss of their sons, who were the backbone of the family. They dodged conscription with hiding in villages, prepared hiding places in the houses, fields, caves, with Bedouin families or in other out-of-the way places. When apprehended, suspected draft evaders were usually convicted by military court and often sentenced to flogging. Many also mutilated themselves to avoid draft, but since the ultimate consequence of capture was often military service, applying effective deterrent measures was nearly impossible. Many also avoided draft by moving to Mecca and Medina, as the the cities were exempted from mobilization. Evidently, the number of young pilgrims to Mecca spiked during recruitment. For the less pious, two popular options for circumventing military service remained substitution (sending a personal replacement) and payment of forty to fifty liras, which get increasingly higher as the war progressed. For the non-Muslims, changing nationalities, fleeing abroad, or paying the individual exemption fee was the common course to avoid draft\n\n**Sources:**\n\n*A History of the Modern Middle East 6th Edition* by William Cleveland and Martin Burton\n\n*The Ottoman Mobilization of Manpower in the First World War* by Mehmet Beşikçi\n\n*A Land of Aching Hearts* by Leila Tarawi Faraz",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "450921",
"title": "Arab Revolt",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 536,
"text": "The Arab Revolt (, ; ) or Great Arab Revolt (, ) was a military uprising of Arab forces against the Ottoman Empire in the Middle Eastern theatre of World War I. On the basis of the McMahon–Hussein Correspondence, an agreement between the British government and Hussein bin Ali, Sharif of Mecca, the revolt was officially initiated at Mecca on June 10, 1916. The aim of the revolt was the creation a single unified and independent Arab state stretching from Aleppo in Syria to Aden in Yemen, which the British had promised to recognize.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "266431",
"title": "McMahon–Hussein Correspondence",
"section": "Section::::Arab Revolt, June 1916 to October 1918.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 552,
"text": "The Arab Revolt began in June 1916, when an Arab army of around 70,000 men moved against Ottoman forces. They participated in the capture of Aqabah and the severing of the Hejaz railway, a vital strategic link through the Arab peninsula which ran from Damascus to Medina. Meanwhile, the Egyptian Expeditionary Force under the command of General Allenby advanced into the Ottoman territories of Palestine and Syria. The British advance culminated in the Battle of Megiddo in September 1918 and the capitulation of the Ottoman Empire on 31 October 1918.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4764461",
"title": "World War I",
"section": "Section::::Progress of the war.:Southern theatres.:Ottoman Empire.\n",
"start_paragraph_id": 91,
"start_character": 0,
"end_paragraph_id": 91,
"end_character": 357,
"text": "The Arab Revolt, instigated by the Arab bureau of the British Foreign Office, started June 1916 with the Battle of Mecca, led by Sherif Hussein of Mecca, and ended with the Ottoman surrender of Damascus. Fakhri Pasha, the Ottoman commander of Medina, resisted for more than two and half years during the Siege of Medina before surrendering in January 1919.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "35510108",
"title": "June 1916",
"section": "Section::::June 5, 1916 (Monday).\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 209,
"text": "BULLET::::- The Arab Revolt began against the Ottoman Empire when Emirs Ali of Hejaz and Faisal I of Iraq, both sons of Hussein bin Ali, Sharif of Mecca, organized an attack on the Ottoman garrison in Medina.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "266431",
"title": "McMahon–Hussein Correspondence",
"section": "Section::::Arab Revolt, June 1916 to October 1918.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 729,
"text": "The Arab revolt is seen by historians as the first organized movement of Arab nationalism. It brought together different Arab groups for the first time with the common goal to fight for independence from the Ottoman Empire. Much of the history of Arabic independence stemmed from the revolt beginning with the kingdom founded by Hussein. After the war was over, the Arab revolt had implications. Groups of people were put into classes based on if they had fought in the revolt or not and what their rank was. In Iraq, a group of Sharifian Officers from the Arab Revolt formed a political party which they were head of. Still to this day the Hashemite kingdom in Jordan is influenced by the actions of Arab leaders in the revolt.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "450921",
"title": "Arab Revolt",
"section": "Section::::Aftermath.\n",
"start_paragraph_id": 40,
"start_character": 0,
"end_paragraph_id": 40,
"end_character": 729,
"text": "The Arab revolt is seen by historians as the first organized movement of Arab nationalism. It brought together different Arab groups for the first time with the common goal to fight for independence from the Ottoman Empire. Much of the history of Arabic independence stemmed from the revolt beginning with the kingdom founded by Hussein. After the war was over, the Arab revolt had implications. Groups of people were put into classes based on if they had fought in the revolt or not and what their rank was. In Iraq, a group of Sharifian Officers from the Arab Revolt formed a political party which they were head of. Still to this day the Hashemite kingdom in Jordan is influenced by the actions of Arab leaders in the revolt.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5424261",
"title": "Royal Jordanian Army",
"section": "Section::::Origins – 1920–1947.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 204,
"text": "On 10 June 1916, Sherif Hussien Bin Ali prince of Mecca, officially declared the Great Arab Revolt against the Ottoman Empire to rid Arab nations of the Turkish rule that had lasted about four centuries.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
4tfbu3
|
A poor family living in the woods circa 1500; how did they find husbands and wives for their children?
|
[
{
"answer": "Hi, can you specify which region/culture you're asking about? That will greatly assist anyone contemplating answering here. Thanks! ",
"provenance": null
},
{
"answer": "The average woods-living family would not have been quite as isolated as the family in that movie. Remember that they were explicitly ostracized and thus unable to associate with society. But the average family would have had plenty of opportunities to meet new people. \n\nWhile you did specify time and place, the answer is going to be much the same for any traditional, rural, Christian community. For the record, my specialty is in Italian society of the same period. \n\nFirst, obviously, is church. Every Sunday, at least, they would have made the trip to the nearest one. There they would have gotten to know the other children who lived nearby and probably had their first flirtations.\n\nThen there are dozens of holidays and festivals, both Christian and secular (i.e. Harvest, May Day), that would have brought families from the countryside down to the village centers. As opposed to church, these visits would have given young people the chance to run around and socialize with their peers with minimal adult supervision. \n\nFinally, depending on your father's profession, you might accompany him to town on market days or business trips. This way you would meet his associates, friends, or business partners and possibly their children. If you don't manage to find someone on your own, there is a good chance your future spouse will be selected from this pool. \n\nAs for marriage, if you are a teenage girl around the age of the film's protagonist you generally have two options: you meet a boy and get your father's approval, or your parents set you up with someone and you approve or disapprove. Despite the common conception that the father's word was law, many parents would have been willing to consider their daughters' opinions. This is especially true in the lower classes, where the stakes of marriage were not as high. That is, while the daughter of a duke or rich merchant may have been basically sold off to forge family alliances, a farmer or fur trapper would have wanted little more than a son in law who was well-raised, polite, and had a promising job, or a daughter-in-law who was attractive, healthy, and well-mannered (i.e. obedient--hate to say it, but that's how people thought back then). Research has found that in the Early Modern Era, love marriages were much more common among the poor than the rich.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "445633",
"title": "Forests of Poland",
"section": "Section::::Inhabitation.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 660,
"text": "Families of the woodsmen produced their own food through gardening and hunting, as well as their own clothing. In some cases, their sewing of intricate laces became well known outside the forest, resulting in additional family income. Because of their isolation from society in general, woodsmen and their families developed their own style of dress, music, sewing, dialect, celebrations, and the type of dwellings. The Masovia woodsmen for example, known as Kurpie people, who lived in the forested region known in Poland as the White Wilderness (Puszcza Biała) and the Green Wilderness, still proudly proclaim and celebrate their unique culture and customs.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "464779",
"title": "Building material",
"section": "Section::::Naturally occurring substances.:Wood and timber.\n",
"start_paragraph_id": 42,
"start_character": 0,
"end_paragraph_id": 42,
"end_character": 426,
"text": "Many families or communities, in rural areas, have a personal woodlot from which the family or community will grow and harvest trees to build with or sell. These lots are tended to like a garden. This was much more prevalent in pre-industrial times, when laws existed as to the amount of wood one could cut at any one time to ensure there would be a supply of timber for the future, but is still a viable form of agriculture.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5359852",
"title": "Shita-kiri Suzume",
"section": "Section::::Plot.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 607,
"text": "Once upon a time there lived a poor old woodcutter with his wife, who earned their living by cutting wood and fishing. The old man was honest and kind but his wife was arrogant and greedy. One morning, the old man went into the mountains to cut timber and saw an injured sparrow crying out for help. Feeling sorry for the bird, the man took it back to his home and fed it some rice to try to help it recover. His wife, being very greedy and rude, was annoyed that he would waste precious food on such a small and insignificant little thing as a sparrow. The old man, however, continued caring for the bird.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5222687",
"title": "The Master Thief",
"section": "Section::::Synopsis.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 372,
"text": "A poor cottager had nothing to give his three sons, so he walked with them to a crossroad, where each son took a different road. The youngest went into a great woods, and a storm struck, so he sought shelter in a house. The old woman there warned him that it is a den of robbers, but he stayed, and when the robbers arrived, he persuaded them to take him on as a servant.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5324733",
"title": "Clan Hunter",
"section": "Section::::History.:18th and 19th centuries.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 538,
"text": "The family suffered from financial problems in the early eighteenth century. These problems were resolved by yet another Robert Hunter, a younger son of the twenty second Laird who succeeded to the estate and managed it well. He was succeeded by his daughter, Eleanora, who married her cousin, Robert Caldwell. He assumed the name Hunter and together they improved the estate and built the present Hunterston House. Their son had two daughters: Jane Hunter who married Gould Weston and Eleanor who married Robert William Cochran-Patrick.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17644589",
"title": "Bedford Purlieus National Nature Reserve",
"section": "Section::::The Wansford Estate.:Grubbing up.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 412,
"text": "The woods remained in the Russell family through a further nine generations, although by no means all of it remained as woodland. Of the described in the charter of 1639, around was woodland. Now only half of that remains. Between 1862 and 1868 the western half of the wood was grubbed up and converted to agricultural land. The \"Peterborough Advertiser\" of 7 December 1912, looking back 50 years, describes how\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13662070",
"title": "Tröbnitz",
"section": "Section::::History.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 238,
"text": "As the last member of the house of the Meusebacher died in 1753 things changed. The woodsmen decided to work as farmers. The new farmers bought huge areas of the former large scale land-holding and a long lasting economy was established.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
285h83
|
What is the largest molecule, and how is a single molecule defined (as opposed to an amount of a certain compound)?
|
[
{
"answer": "My guess (depending on your definition) would be some sort of plastic, seeing as how they basically form long chains from repeating units (*monomers* forming *polymers*). For plastics, this size can either be defined as the number of individual units (monomers) incorporated into the chain, or by the weight of the entire chain.\n\nTo answer the second part of your question, the size of a single molecule is typically defined by the weight of that single molecule. The amount of a certain compound may also be defined through weight (X grams of table salt, or sodium chloride), but the most common unit would probably be x amount of [moles](_URL_0_). \n\nFUN FACT: The largest protein (which is also a molecule, sometimes called biomolecule) is titin, which has the chemical formula: C^169723 H^270464 N^45688 O^52243 S^912",
"provenance": null
},
{
"answer": "There are many types of polymers that could easily be called \"one molecule.\" Your base question amounts to \"what the largest number?\" the answer to which is of course always at least one bigger that the number you state. \n\nWhen molecules start getting into the tens of thousands in molecular weight they aren't really referred to as molecules anymore, but there is no definitional line at which this occurs, it's just practice. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "106231",
"title": "Macromolecule",
"section": "Section::::Definition.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 357,
"text": "Usage of the term to describe large molecules varies among the disciplines. For example, while biology refers to macromolecules as the four large molecules comprising living things, in chemistry, the term may refer to aggregates of two or more molecules held together by intermolecular forces rather than covalent bonds but which do not readily dissociate.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "477588",
"title": "Small molecule",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 726,
"text": "Within the fields of molecular biology and pharmacology, a small molecule is a low molecular weight (< 900 daltons) organic compound that may regulate a biological process, with a size on the order of 1 nm. Many drugs are small molecules. Larger structures such as nucleic acids and proteins, and many polysaccharides are not small molecules, although their constituent monomers (ribo- or deoxyribonucleotides, amino acids, and monosaccharides, respectively) are often considered small molecules. Small molecules may be used as research tools to probe biological function as well as leads in the development of new therapeutic agents. Some can inhibit a specific function of a protein or disrupt protein–protein interactions.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "385334",
"title": "List of particles",
"section": "Section::::Composite particles.:Molecules.\n",
"start_paragraph_id": 53,
"start_character": 0,
"end_paragraph_id": 53,
"end_character": 422,
"text": "Molecules are the smallest particles into which a non-elemental substance can be divided while maintaining the physical properties of the substance. Each type of molecule corresponds to a specific chemical compound. Molecules are a composite of two or more atoms. See list of compounds for a list of molecules. A molecule is generally combined in a fixed proportion. It is the most basic unit of matter and is homogenous.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19555",
"title": "Molecule",
"section": "Section::::History and etymology.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 507,
"text": "The definition of the molecule has evolved as knowledge of the structure of molecules has increased. Earlier definitions were less precise, defining molecules as the smallest particles of pure chemical substances that still retain their composition and chemical properties. This definition often breaks down since many substances in ordinary experience, such as rocks, salts, and metals, are composed of large crystalline networks of chemically bonded atoms or ions, but are not made of discrete molecules.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3694845",
"title": "Fine chemical",
"section": "Section::::Products.:Big molecules.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 250,
"text": "\"Big molecules\", also called \"high molecular weight\", HMW molecules, are mostly oligomers or polymers of small molecules or chains of amino acids. Thus, within pharma sciences, peptides, proteins and oligonucleotides constitute the major categories.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5180",
"title": "Chemistry",
"section": "Section::::Modern principles.:Matter.:Molecule.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 570,
"text": "A \"molecule\" is the smallest indivisible portion of a pure chemical substance that has its unique set of chemical properties, that is, its potential to undergo a certain set of chemical reactions with other substances. However, this definition only works well for substances that are composed of molecules, which is not true of many substances (see below). Molecules are typically a set of atoms bound together by covalent bonds, such that the structure is electrically neutral and all valence electrons are paired with other electrons either in bonds or in lone pairs.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19555",
"title": "Molecule",
"section": "Section::::Molecular size.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 600,
"text": "Most molecules are far too small to be seen with the naked eye, although molecules of many polymers can reach macroscopic sizes, including biopolymers such as DNA. Molecules commonly used as building blocks for organic synthesis have a dimension of a few angstroms (Å) to several dozen Å, or around one billionth of a meter. Single molecules cannot usually be observed by light (as noted above), but small molecules and even the outlines of individual atoms may be traced in some circumstances by use of an atomic force microscope. Some of the largest molecules are macromolecules or supermolecules.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
dy1d3b
|
why does some movie theaters get to show a movie a day or two before it's actual release date?
|
[
{
"answer": "They're called advanced screenings, and they have several purposes.\n\nThey're leveraged for publicity and marketing for many shows. A private screening for film critics means getting reviews out early. Including a few VIPs and well-connected people can build hype. A few people describing how they saw the show early and they loved it can help pump the excitement before the big launch.\n\nThey serve another useful purpose. They allow theaters to test that the movie is all present, that it is the correct movie, and that all the equipment is functioning normally before the big initial showing. Occasionally there are mistakes made, such as theaters being sent mislabeled reels or reels being incompatible with the viewing equipment. An advanced showing gives an opportunity to verify those things.\n\nIn some locations an advanced screening is required by law, since \"blind sales\" are prohibited. Somebody representing the theater must view it at least once to verify that they're receiving the thing they expected.\n\nThey can serve all the purposes above; the screening provides a teaser and advertising for the community, and it is a test of the equipment on a small scale, and it meets the terms of the law. \n\nFor some shows --- especially the shows of lower quality --- sometimes the advance showing is done privately, with no critics or private audiences except for the theater owner and only to satisfy the law and ensure the equipment works.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "6512248",
"title": "List of highest-grossing openings for films",
"section": "Section::::Biggest worldwide openings on record.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 360,
"text": "This list charts films the 50 biggest worldwide openings. Since films do not open on Fridays in many markets, the 'opening' is taken to be the gross between the first day of release and the first Sunday following the movie's release. Figures prior to the year 2002 are not available. Country-by-country variations in release dates are not taken into account. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "954617",
"title": "Vue Cinemas",
"section": "Section::::Operation hours.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 446,
"text": "Every cinema is open everyday except Christmas Day. The company operates a policy of all cinemas opening 15 minutes before the first film of the day, meaning that local cinemas usually open at about 10:30 am, whereas more major cities like Birmingham, the West End, and Manchester open at 10 am. Seasonally, all cinemas are open from 8.30 am if a big blockbuster is on during the Christmas and New Year season, alongside school holiday periods. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41943717",
"title": "Mattuthavani (film)",
"section": "Section::::Release.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 229,
"text": "The delay of the film meant that planned release dates were regularly evaded, with the earliest such date being August 2007. The film stepped up promotions to have an April 2009 releasem though such plans were also unsuccessful.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18948312",
"title": "Home video",
"section": "Section::::Time gap until home video release.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 559,
"text": "A time period is usually allowed to elapse between the end of theatrical release and the home video release to encourage movie theater patronage and discourage piracy. Home video release dates used to be five or six months after the theatrical release, but now most films have been arriving on video after three or four months. Christmas and other holiday-related movies were generally not released on home video until the following year when that holiday was celebrated again, but this practice ended starting with holiday movies that were released in 2015.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30470648",
"title": "Wrath of the Titans",
"section": "Section::::Reception.:Box office.\n",
"start_paragraph_id": 40,
"start_character": 0,
"end_paragraph_id": 40,
"end_character": 899,
"text": "Dan Fellman, Warner Bros. president of domestic distribution, said the comparison between the opening of the first and second film was not fair because the original opened on Good Friday, when more teenagers were out on spring break. He lamented on the film's poor box office performance saying, \"we made a decision to open a week before the holiday this time and generate positive word-of-mouth since we had issues with the last one regarding the 3-D conversion, we're gonna get there – we're just gonna get there in a different way.\" However, despite not opening on a holiday weekend, the film had the advantage of playing a week before Easter in which the company could avail the spring break, which was staggered over the next two weeks. However, all this didn't necessarily aid the film's further box office performance. Warner Bros. said they didn't expect the sequel to reach the same level.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1276990",
"title": "Christmas by medium",
"section": "Section::::Films.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 595,
"text": "In North America, the holiday movie season often includes release of studios' most prestigious pictures, in an effort both to capture holiday crowds and to position themselves for Oscar consideration. Next to summer, this is the second-most lucrative season for the industry. In fact, a few films each year open on the actual Christmas Day holiday. Christmas movies generally open no later than Thanksgiving, as their themes are not so popular once the season is over. Likewise, the home video release of these films is typically delayed until the beginning of the next year's Christmas season.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "304692",
"title": "Battle Royale (film)",
"section": "Section::::Theatrical release.:Special edition.\n",
"start_paragraph_id": 56,
"start_character": 0,
"end_paragraph_id": 56,
"end_character": 253,
"text": "A special edition of the film was released after the original which has eight extra minutes of running time. Unusually, the extra material includes scenes newly filmed after the release of the original. Inserted scenes include (but are not limited to):\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
6q0dhp
|
how are damages caused by disasters calculated and reported? how accurate should i expect them to be?
|
[
{
"answer": "Hi there molycow,\n\nJust as a disclaimer, I'm not an expert in disaster losses calculation.\nHowever, based on my experience from the hail storms that occurred in Sydney (it's happened many times, always every 2-3 years).\n\nUsually if you have insurance for your home or car, and your property is damaged you usually call up the company and they bring in an assessor. I'm not sure if that assessor is a third party, or is part of the company.\n\nFrom there, they usually estimate the damage, usually they might have other examples to go by. Though if this disaster hasn't been seen before, I would imagine that the process of estimating the losses would have to take longer.\n\nIf that is the case, I reckon they would get you to breakdown what losses have occurred and how much it has cost you.\n\nBut then again it really depends what type of damages your talking about. It could be damages to residential, commercial or industrial property.\n\nIf you include commercial/industrial you will need to take into account not only the damage that has occurred already, but also the losses that will occur as a result of not being able to continue business. Usually if this is a long period this cost can sometimes outweigh the costs of damages that were caused \"physically\".\n\nThen again you could expand to a whole city that has been affected by a whole disaster, which is a more difficult task to estimate.\n\nI'll probably leave it there for any other more experienced people to answer your question. But hopefully that provides some clarity to what you have been asking.",
"provenance": null
},
{
"answer": "I have done damage calculations for FEMA for flooding in two scenarios:\n\nWhile a flooding event is going on, we take live data from water gauges, run them through models, estimate the size of flood waters, then calculate the number of structures (data quality varies) that intersect with the estimated flood extent, and take the estimated flood level. We run this many times as the flooding event unfolds. We figure out how many homes are impacted by 1-2, 2.01 - 5, 5.01 - 8, and 8+ feet of water. Unfortunately, I don't get to see what happens after we ship the data but I am told that the data really helps to better direct resources. Later the points (often tens of thousands) are checked for accuracy by comparing the damage estimation to aerial photography.\n\nAfter a flood event, (this is massively simplified) FEMA may supplement local communities work forces by sending building inspectors into the field. They spend about 15 minutes at each structure collecting the high water mark (the most important piece of data) and a few dozen characteristics of the building. It goes into software thay spits out a damage estimation. This goes to the local community who then uses it for permitting for reconstruction and whatever else. The structure owner has the opportunity to contest the determination (whether they think should have a higher or lower rating, it depends) with the local community. \n\nThe accuracy really depends in the data quality which is far from perfect but keeps getting better every year. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "1455589",
"title": "Emergency management",
"section": "Section::::Phases and personal activities.:Preparedness measures.\n",
"start_paragraph_id": 125,
"start_character": 0,
"end_paragraph_id": 125,
"end_character": 469,
"text": "Disasters take a variety of forms to include earthquakes, tsunamis or regular structure fires. That a disaster or emergency is not large scale in terms of population or acreage impacted or duration does not make it any less of a disaster for the people or area impacted and much can be learned about preparedness from so-called small disasters. The Red Cross states that it responds to nearly 70,000 disasters a year, the most common of which is a single family fire. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20668503",
"title": "Earthquake Engineering Research Institute",
"section": "Section::::California earthquake assessments.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 655,
"text": "In 2006 an engineering firm related to the EERI has projected over $122 billion in damages, if a repeat of the 1906 San Francisco earthquake occurs. This number includes damages to homes and structures, excluding fire damage. The EERI lobbies for government funding to prevent natural disasters. The money is best spent before loss of life and large-scale structural damage, though often it is not seen until afterward, as evidenced by Hurricane Katrina. The EERI and the USGS have identified that a potential large earthquake in Los Angeles would cause more damage than Katrina at New Orleans, with up to $250 billion in total damages and 18,000 deaths.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "69902",
"title": "Extreme weather",
"section": "Section::::Damage.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 414,
"text": "According to IPCC (2011) estimates of annual losses have ranged since 1980 from a few billion to above US$200 billion (in 2010 dollars), with the highest value for 2005 (the year of Hurricane Katrina). The global weather-related disaster losses, such as loss of human lives, cultural heritage, and ecosystem services, are difficult to value and monetize, and thus they are poorly reflected in estimates of losses.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31326350",
"title": "List of disasters by cost",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 660,
"text": "The costs of disasters vary considerably depending on a range of factors, such as the geographical location where they occur. When a large disaster occurs in a wealthy country, the financial damage may be large, but when a comparable disaster occurs in a poorer country, the actual financial damage may appear to be relatively small. This is in part due to the difficulty of measuring the financial damage in areas that lack insurance. For example, the 2004 Indian Ocean earthquake and tsunami, with a death toll of over 230,000 people, cost a 'mere' $15 billion, whereas in the Deepwater Horizon oil spill, in which 11 people died, the damages were six-fold.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23531358",
"title": "1995 Neftegorsk earthquake",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 219,
"text": "The Belgian Centre for Research on the Epidemiology of Disasters' EM-DAT database places the total damage at $64.1 million, while the United States' National Geophysical Data Center assesses the damage at $300 million.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3373620",
"title": "Atlantic hurricane",
"section": "Section::::Extremes.\n",
"start_paragraph_id": 51,
"start_character": 0,
"end_paragraph_id": 51,
"end_character": 286,
"text": "BULLET::::- The most damaging hurricane was both Hurricane Katrina and Hurricane Harvey of the 2005 and 2017 seasons, respectively, both of which caused $125 billion in damages in their respective years. However, when adjusted for inflation, Katrina is the costliest with $161 billion.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24462957",
"title": "Hazard",
"section": "Section::::Disasters.\n",
"start_paragraph_id": 43,
"start_character": 0,
"end_paragraph_id": 43,
"end_character": 526,
"text": "Disaster can be defined as a serious disruption, occurring over a relatively short time, of the functioning of a community or a society involving widespread human, material, economic, societal or environmental loss and impacts, which exceeds the ability of the affected community or society to cope using its own resources. Disaster can manifest in various forms, threatening those people or environments specifically vulnerable. Such impacts include loss of property, death, injury, trauma or post-traumatic stress disorder.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
519pa8
|
how does social science work?
|
[
{
"answer": "Social science is very much a soft science. It is a science, in that the Scientific Method is applied to try and find facts. However, there is an inherent, recognized difficulty in accurately measuring and testing in these fields.\n\nPhilosophy itself, while under the umbrella of social science, is not necessarily a science. In fact, science itself is a form of Philosophy. Science is a set of rules and ideas that are used to evaluate the world around you. How do we then evaluate philosophy? Through the use of logic. Logic is probably the one Philosophy that is agreed on. Without it, all analysis becomes impossible. \n\nThe \"proof\" of a Philosophy is that it is logically consistent and has supporting evidence for its validity. As was mentioned earlier though, there are inherent problems in accurately gathering and analyzing evidence. The human brain is INSANELY complex. We understand some of the chemical reactions, but know one really knows how the brain does what it does or thinks what it thinks. Even without that, you need to filter out cultural and societal biases and deal with the reality of humans being dishonest about their thoughts and actions.\n\nGoing back to Social Science as a whole, some of it is well researched, and some of it is pure drivel. The best thing to do is go back to the original studies and experiments done. You will be surprised to see that many of \"truths\" about humanity are supported by poorly done studies with comically small sample sizes.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "9145213",
"title": "Outline of science",
"section": "Section::::Branches of science.:Social science.\n",
"start_paragraph_id": 44,
"start_character": 0,
"end_paragraph_id": 44,
"end_character": 655,
"text": "Social science – study of the social world constructed between humans. The social sciences usually limit themselves to an anthropomorphically centric view of these interactions with minimal emphasis on the inadvertent impact of social human behavior on the external environment (physical, biological, ecological, etc.). 'Social' is the concept of exchange/influence of ideas, thoughts, and relationship interactions (resulting in harmony, peace, self enrichment, favoritism, maliciousness, justice seeking, etc.) between humans. The scientific method is utilized in many social sciences, albeit adapted to the needs of the social construct being studied.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12362106",
"title": "Knowledge mobilization",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 459,
"text": "Social science research deals with the people side of quality of life issues and nation-building that are so crucial to the future of humanity. Human, technological and cultural developments are needed for economic prosperity, environmental sustainability, social harmony and cultural vitality. Yet using research in the social sciences presents particular challenges because the issues are often complex and long-term, and deeply affected by local contexts.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "146717",
"title": "Social work",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 872,
"text": "Social work is an academic discipline and profession that concerns itself with individuals, families, groups and communities in an effort to enhance social functioning and overall well-being. Social functioning is the way in which people perform their social roles, and the structural institutions that are provided to sustain them. Social work applies social sciences, such as sociology, psychology, political science, public health, community development, law, and economics, to engage with client systems, conduct assessments, and develop interventions to solve social and personal problems; and to bring about social change. Social work practice is often divided into micro-work, which involves working directly with individuals or small groups; and macro-work, which involves working with communities, and - within social policy - fostering change on a larger scale.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26700",
"title": "Science",
"section": "Section::::Branches of science.:Social science.\n",
"start_paragraph_id": 59,
"start_character": 0,
"end_paragraph_id": 59,
"end_character": 1114,
"text": "Social science is concerned with society and the relationships among individuals within a society. It has many branches that include, but are not limited to, anthropology, archaeology, communication studies, economics, history, human geography, jurisprudence, linguistics, political science, psychology, public health, and sociology. Social scientists may adopt various philosophical theories to study individuals and society. For example, positivist social scientists use methods resembling those of the natural sciences as tools for understanding society, and so define science in its stricter modern sense. Interpretivist social scientists, by contrast, may use social critique or symbolic interpretation rather than constructing empirically falsifiable theories, and thus treat science in its broader sense. In modern academic practice, researchers are often eclectic, using multiple methodologies (for instance, by combining both quantitative and qualitative research). The term \"social research\" has also acquired a degree of autonomy as practitioners from various disciplines share in its aims and methods.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "15978543",
"title": "National Curriculum Framework (NCF 2005)",
"section": "Section::::Main Features of the NCF 2005.:Curricular area, School stages and assessment.\n",
"start_paragraph_id": 35,
"start_character": 0,
"end_paragraph_id": 35,
"end_character": 573,
"text": "Social Sciences - Social science a subject is included in schools to assist students to explore their interests and aptitudes in order to choose appropriate university courses and/or careers. To encourage them to explore higher levels of knowledge in different disciplines. To promote problem-solving abilities and creative thinking in the citizens of tomorrow, to introduce students to different ways of collecting and processing data and information in specific disciplines, and help them arrive at conclusions, and to generate new insights and knowledge in the process.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "373212",
"title": "Social research",
"section": "Section::::Methodological assumptions.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 1026,
"text": "Social research involves creating a theory, operationalization (measurement of variables) and observation (actual collection of data to test hypothesized relationship). Social theories are written in the language of variables, in other words, theories describe logical relationships between variables. Variables are logical sets of attributes, with people being the \"carriers\" of those variables (for example, gender can be a variable with two attributes: male and female). Variables are also divided into independent variables (data) that influences the dependent variables (which scientists are trying to explain). For example, in a study of how different dosages of a drug are related to the severity of symptoms of a disease, a measure of the severity of the symptoms of the disease is a dependent variable and the administration of the drug in specified doses is the independent variable. Researchers will compare the different values of the dependent variable (severity of the symptoms) and attempt to draw conclusions.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "585538",
"title": "Participation (decision making)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 497,
"text": "Participation in social science refers to different mechanisms for the public to express opinions – and ideally exert influence – regarding political, economic, management or other social decisions. Participatory decision-making can take place along any realm of human social activity, including economic (i.e. participatory economics), political (i.e. participatory democracy or parpolity), management (i.e. participatory management), cultural (i.e. polyculturalism) or familial (i.e. feminism).\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1k6mmy
|
How far has the crown gone through the royal family tree to find the closest living relative?
|
[
{
"answer": "Depends how you look at the War of the Roses. In terms of the most distantly related successor, Henry VII was the third cousin once removed of Richard III. However, this took place in the Wars of the Roses when there were often several competing claims, and Henry claimed that Richard was never the rightful king in the first place (though Henry was still only the second cousin of the man he claimed was his rightful predecessor - Henry VI).\n\nScotland had a similar case in the late 13th/early 14th centuries where the main royal line collapsed and the Bruce and Balliol families (both of whom were only distantly related to the previous king) both claimed the throne and fought each other for it.\n\nIn the times of more clearly defined rules, Anne and George I were second cousins.\n\nIf you're looking for a case of a really large \"distance\" between monarchs, then look at France rather than Britain. Britain has a male-preference primogeniture succession law, which means that while men come first, women do count in the line of succession. France historically had what's called Salic Law, meaning only male ancestry counts. This means they've often had to go a longer way to pass on the crown. The biggest example of this was Henry IV, who was a *ninth* cousin once removed from his predecessor Henry III; their closest common male-line ancestor had died over 300 years before.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "25870281",
"title": "Family tree of British monarchs",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 358,
"text": "The following is a simplified family tree of the English and British monarchs. For a more detailed chart see: English monarchs family tree (from Alfred the Great till Queen Elizabeth I); and the British monarchs' family tree for the period from Elizabeth's successor, James I, until the present day. For kings before Alfred, see House of Wessex family tree.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7107655",
"title": "Lurie",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 301,
"text": "It has one of the oldest family trees in the world, claiming to trace back at least to King David born c. 1037 BCE, as documented by Neil Rosenstein in his book \"The Lurie Legacy\". It contains many famous members such as Karl Marx, Sigmund Freud, Felix Mendelssohn, Martin Buber, Rashi, and Hezekiah.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1482150",
"title": "Treetops Hotel",
"section": "Section::::Accession of Queen Elizabeth II.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 868,
"text": "Treetops became famous around the world when Princess Elizabeth, as she then was, stayed there at the time of the death of her father, King George VI. This occurred on the night of 5–6 February 1952. She learned of the king‘s death, however, after having departed, while the couple were at Sagana Lodge. She was the first British monarch since King George I to be outside the country at the moment of succession, and also the first in modern times not to know the exact time of her accession because her father had died in his sleep at an unknown time. On the night her father died, before the event was known, Sir Horace Hearne, then Chief Justice of Kenya, had escorted the princess and her husband, Prince Philip, to a state dinner at the Treetops Hotel. After word of George VI's death reached the new Queen the following day, she returned immediately to Britain.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1179158",
"title": "Family tree of English monarchs",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 357,
"text": "This is the English monarchs' family tree for England (and Wales after 1282) from Alfred the Great to Elizabeth I of England. The House of Wessex family tree precedes this family tree and the British monarchs' family tree follows it. The Scottish monarchs' family tree covers the same period in Scotland and also precedes the British monarchs' family tree.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41934927",
"title": "Queen Elizabeth Oak",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 706,
"text": "The Queen Elizabeth Oak is a large sessile oak tree in Cowdray Park near the village of Lodsworth in the Western Weald, West Sussex, England. It lies within the South Downs National Park. It has a girth of 12.5-12.8 metres, and is about 800–1000 years old. According to this estimate it began to grow in the 11th or 12th century AD. In June 2002, the Tree Council designated the Queen Elizabeth Oak one of fifty Great British Trees in recognition of its place in the national heritage. According to the Woodland Trust, the tree is the third largest sessile oak tree to be recorded in the United Kingdom after the Pontfadog Oak in Wales and the Marton Oak in Cheshire, although this tree is now fragmented.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "839957",
"title": "Carl Johan Bernadotte",
"section": "Section::::Ancestry.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 523,
"text": "On 29 June 2011, he surpassed his elder brother, Sigvard (1907–2002), as the longest-lived of Queen Victoria's male descendants, a record he would hold until being surpassed by Prince Philip, Duke of Edinburgh on 13 December 2016. He was the last surviving great-grandchild of Queen Victoria of the United Kingdom, following the 2007 death of Princess Katherine of Greece and Denmark. He was also the last surviving child of Gustaf VI Adolf and the last surviving grandchild of both Gustaf V and Arthur, Duke of Connaught.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "370241",
"title": "Royal Oak",
"section": "Section::::Current situation.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 623,
"text": "The tree standing on the site today is not the original Royal Oak, which is recorded to have been destroyed during the seventeenth and eighteenth centuries by tourists who cut off branches and chunks as souvenirs. The present day tree is believed to be a two or three hundred year-old descendant of the original and is thus known as 'Son of Royal Oak'. In 2000, Son of Royal Oak was badly injured during a violent storm and lost many branches. In September 2010, it was found to have developed large and dangerous cracks. Since 2011 the tree has been surrounded by a outer perimeter fence to ensure the safety of visitors.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1p1rsx
|
Did the 'Cult of the Feathered Serpent' play a significant role in the end of Mayan civilization?
|
[
{
"answer": "No, not really.\n\nThe \"Feathered Serpent Cult\" is a name that archaeologists and iconographers have given to a pan-Mesoamerican explosion of imagery associated with Quetzalcoatl/Kukulkan dating to the Epiclassic period (around the time of the Classic Maya collapse). It appears to be a larger religious movement associated with sacrifice, the ballgame, and the nobility. It's most prominent in Central Mexican sites like Tula and among the cultures of the Gulf Coast such as the Totonac city of El Tajin. Some time during the Early Postclassic a group of people from the Gulf Coast (Putun and Itza peoples, specifically) migrated into the Northern Yucatan and created a kind of hybridized culture with the Maya who were living there. At this time cities like Chichen Itza begin to show an increased focus on Kukulkan and ballgames in imagery.\n\nAlthough it's tempting to see the Feathered Serpent Cult as a kind of Mesoamerican *opus dei*, that's not really accurate. I'm not even sure the word \"cult\" is a fairly accurate descriptor. Feathered Serpent Tradition might be better. Here's Susan Toby Evans (2008:386) discussing this cultural shift:\n\n > Turning to the central Yucatan Peninsula, the motivations for the intrusion of Central Mexican stylistic motifs are more difficult to recover. Large-scale migration seems unlikely. Religious proselytization, in the form of an emphasis upon Central Mexican belief systems, may have been an important factor, but seems secondary to both military conquest and securing trade routes.\n\nThis had virtually nothing to do with the collapse of Classic Maya centers, except that it happened at about the same point in time. The Maya \"collapse\" was fairly localized. The densely populated southern lowlands had a major demographic collapse, but the Northen Yucatan (where the Feathered Serpent \"Cult\" took hold) was largely not affected other than in the loss of trading partners.\n\n* Evans, Susan Toby. 2008 *Ancient Mexico and Central America: Archaeology and Culture History* 2nd edition. Thames and Hudson.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "19230475",
"title": "Quetzalcoatl",
"section": "Section::::Feathered serpent deity in Mesoamerica.:Iconographic depictions.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 575,
"text": "During the epi-classic period, a dramatic spread of feathered serpent iconography is evidenced throughout Mesoamerica, and during this period begins to figure prominently at sites such as Chichén Itzá, El Tajín, and throughout the Maya area. Colonial documentary sources from the Maya area frequently speak of the arrival of foreigners from the central Mexican plateau, often led by a man whose name translates as \"Feathered Serpent\". It has been suggested that these stories recall the spread of the feathered serpent cult in the epi-classic and early post-classic periods.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11389013",
"title": "Feathered Serpent",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 690,
"text": "The earliest representations of feathered serpents appear in the Olmec culture (c. 1400–400 BCE). The Olmec culture predates the Maya and the Aztec, and once reached from the Gulf of Mexico to Nicaragua. Most surviving representations in Olmec art, such as at La Venta and a painting in the Juxtlahuaca cave (see below), show the Feathered Serpent as a crested rattlesnake, sometimes with feathers covering the body and legs, and often in close proximity to humans. It is believed that Olmec supernatural entities such as the feathered serpent were the forerunners of many later Mesoamerican deities, although experts disagree on the feathered serpent's religious importance to the Olmec.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "337012",
"title": "Vision Serpent",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 282,
"text": "The serpent was a very important social and religious symbol, revered by the Maya. Maya mythology describes serpents as being the vehicles by which celestial bodies, such as the sun and stars, cross the heavens. The shedding of their skin made them a symbol of rebirth and renewal.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18449273",
"title": "Maya civilization",
"section": "Section::::Religion and mythology.:Deities.\n",
"start_paragraph_id": 165,
"start_character": 0,
"end_paragraph_id": 165,
"end_character": 838,
"text": "In common with other Mesoamerican cultures, the Maya worshipped feathered serpent deities. Such worship was rare during the Classic period, but by the Postclassic the feathered serpent had spread to both the Yucatán Peninsula and the Guatemalan Highlands. In Yucatán, the feathered serpent deity was Kukulkan, among the Kʼicheʼ it was Qʼuqʼumatz. Kukulkan had his origins in the Classic period War Serpent, \"Waxaklahun Ubah Kan\", and has also been identified as the Postclassic version of the Vision Serpent of Classic Maya art. Although the cult of Kukulkan had its origins in these earlier Maya traditions, the worship of Kukulkan was heavily influenced by the Quetzalcoatl cult of central Mexico. Likewise, Qʼuqʼumatz had a composite origin, combining the attributes of Mexican Quetzalcoatl with aspects of the Classic period Itzamna.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19230475",
"title": "Quetzalcoatl",
"section": "Section::::Feathered serpent deity in Mesoamerica.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 511,
"text": "A feathered serpent deity has been worshiped by many different ethnopolitical groups in Mesoamerican history. The existence of such worship can be seen through studies of the iconography of different Mesoamerican cultures, in which serpent motifs are frequent. On the basis of the different symbolic systems used in portrayals of the feathered serpent deity in different cultures and periods, scholars have interpreted the religious and symbolic meaning of the feathered serpent deity in Mesoamerican cultures.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11389013",
"title": "Feathered Serpent",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 233,
"text": "The Feathered Serpent was a prominent supernatural entity or deity, found in many Mesoamerican religions. It was called Quetzalcoatl among the Aztecs, Kukulkan among the Yucatec Maya, and Q'uq'umatz and Tohil among the K'iche' Maya.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19230475",
"title": "Quetzalcoatl",
"section": "Section::::Feathered serpent deity in Mesoamerica.:Iconographic depictions.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 552,
"text": "The archaeological record shows that after the fall of Teotihuacan that marked the beginning of the epi-classic period in Mesoamerican chronology around 600 AD, the cult of the feathered serpent spread to the new religious and political centers in central Mexico, centers such as Xochicalco, Cacaxtla and Cholula. Feathered serpent iconography is prominent at all of these sites. Cholula is known to have remained the most important center of worship to Quetzalcoatl, the Aztec/Nahua version of the feathered serpent deity, in the post-classic period.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
36tsg4
|
why, despite the various laws against it, is vigilantism wrong?
|
[
{
"answer": "Per the Constitution, accused criminals have a lot of rights. They need to be investigated by the police, tried by the DA, represented by a lawyer, found guilty by a jury, and sentenced by a judge. There's a lot of people involved in that, who should be making sure everyone else is doing their job correctly, and affording the accused their civil rights.\n\nWith vigilantism, you're removing the whole criminal justice process, and basically deciding guilt and punishment based on one person's whim.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "30586932",
"title": "Antifragility",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 641,
"text": "Antifragility is a property of systems that increase in capability to thrive as a result of stressors, shocks, volatility, noise, mistakes, faults, attacks, or failures. It is a concept developed by Professor Nassim Nicholas Taleb in his book, \"Antifragile\", and in technical papers. As Taleb explains in his book, antifragility is fundamentally different from the concepts of resiliency (i.e. the ability to recover from failure) and robustness (that is, the ability to resist failure). The concept has been applied in risk analysis, physics, molecular biology, transportation planning, engineering, Aerospace (NASA), and computer science.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "58325",
"title": "Vigilantism",
"section": "Section::::Vigilante conduct.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 337,
"text": "\"Vigilante justice\" is often rationalized by the belief that proper legal forms of criminal punishment are either non-existent, insufficient, or inefficient. Vigilantes normally see the government as ineffective in enforcing the law; such individuals often claim to justify their actions as a fulfillment of the wishes of the community.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7806603",
"title": "Poor relief",
"section": "Section::::Tudor era.:Parliament and the parish.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 790,
"text": "However, despite its introduction of such violent actions to deter vagabonding, the Act of 1572 was the first time that parliament had passed legislation which began to distinguish between different categories of vagabonds. \"Peddlers, tinkers, workmen on strike, fortune tellers, and minstrels\" were not spared these gruesome acts of deterrence. This law punished all able bodied men \"without land or master\" who would neither accept employment nor explain the source of their livelihood. In this newly established definition of what constituted a vagabond, men who had been discharged from the military, released servants, and servants whose masters had died were specifically exempted from the Act's punishments. This legislation did not establish any means to support these individuals.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "743365",
"title": "Nonresistance",
"section": "Section::::History.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 588,
"text": "The term nonresistance was later used to refer to the Established Church during the religious troubles in England following the English Civil War and Protestant Succession. In the Anabaptist churches, the term is defined in contrast with pacifism. Advocates of non-resistance view pacifism as a more liberal theology since it advocates only physical nonviolence and allows its followers to actively oppose an enemy. In the 20th century, there have been differences of opinion between and within Amish and Mennonite churches, as they disagreed on the ethics of nonresistance and pacifism.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "741881",
"title": "Hiibel v. Sixth Judicial District Court of Nevada",
"section": "Section::::Majority opinion.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 1089,
"text": "However, the Court has identified a constitutional difficulty with many modern vagrancy laws. In \"Papachristou v. Jacksonville\", , the Court held that a traditional vagrancy law was void for vagueness because its \"broad scope and imprecise terms denied proper notice to potential offenders and permitted police officers to exercise unfettered discretion in the enforcement of the law.\" In \"Brown v. Texas\", , the Court struck down Texas's stop-and-identify law as violating the Fourth Amendment because it allowed police officers to stop individuals without \"specific, objective facts establishing reasonable suspicion to believe the suspect was involved in criminal activity.\" And in \"Kolender v. Lawson\", , the Court struck down a California stop-and-identify law that required a suspect to provide \"credible and reliable identification\" upon request. The words \"credible and reliable\" were vague because they \"provided no standard for determining what a suspect must do to comply with [the law], resulting in virtually unrestrained power to arrest and charge persons with a violation.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "338825",
"title": "Toleration",
"section": "Section::::In the Enlightenment.:Locke.\n",
"start_paragraph_id": 60,
"start_character": 0,
"end_paragraph_id": 60,
"end_character": 726,
"text": "Unlike Thomas Hobbes, who saw uniformity of religion as the key to a well-functioning civil society, Locke argued that more religious groups actually prevent civil unrest. In his opinion, civil unrest results from confrontations caused by any magistrate's attempt to prevent different religions from being practiced, rather than tolerating their proliferation. However, Locke denies religious tolerance for Catholics, for political reasons, and also for atheists because \"Promises, covenants, and oaths, which are the bonds of human society, can have no hold upon an atheist\". A passage Locke later added to \"An Essay Concerning Human Understanding\" questioned whether atheism was necessarily inimical to political obedience.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "186123",
"title": "Conscience",
"section": "Section::::Conscientious acts and the law.\n",
"start_paragraph_id": 75,
"start_character": 0,
"end_paragraph_id": 75,
"end_character": 1472,
"text": "Expressed justifications for refusing to obey laws because of conscience vary. Many conscientious objectors are so for religious reasons—notably, members of the historic peace churches are pacifist by doctrine. Other objections can stem from a deep sense of responsibility toward humanity as a whole, or from the conviction that even acceptance of work under military orders acknowledges the principle of conscription that should be everywhere condemned before the world can ever become safe for real democracy. A conscientious objector, however, does not have a primary aim of changing the law. John Dewey considered that conscientious objectors were often the victims of \"moral innocency\" and inexpertness in moral training: \"the moving force of events is always too much for conscience\". The remedy was not to deplore the wickedness of those who manipulate world power, but to connect \"conscience\" with forces moving in another direction- to build institutions and social environments predicated on the rule of law, for example, \"then will conscience itself have compulsive power instead of being forever the martyred and the coerced.\" As an example, Albert Einstein who had advocated \"conscientious objection\" during the First World War and had been a longterm supporter of War Resisters' International reasoned that \"radical pacifism\" could not be justified in the face of Nazi rearmament and advocated a world federalist organization with its own professional army.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |