id | input | output | meta |
---|---|---|---|
1u2ce3
|
When will the Andromeda Galaxy be close enough to be visible to the naked eye? How big would it be in the night sky?
|
[
{
"answer": "Andromeda is visible to the naked eye now, in decent light conditions. It has an apparent magnitude of 3.4. It is more than six times the width of the moon in the sky, but the full diameter is not bright enough to be seen.",
"provenance": null
},
{
"answer": "I do not know when it will be visible to the naked eye, however NASA has a great image showing different stages of what the collision will look like as it happens (as seen from earth, of course) over the next 7 billion years (which may be a moot point as our sun has less than that much time left to it if I recall correctly), which is half of your question answered:\n\n_URL_0_\n\nAnd this youtube video has a beautiful simulation of the collision (just not from an earth view):\n\n_URL_1_",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "82780",
"title": "Crab Nebula",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 424,
"text": "At an apparent magnitude of 8.4, comparable to that of Saturn's moon Titan, it is not visible to the naked eye but can be made out using binoculars under favourable conditions. The nebula lies in the Perseus Arm of the Milky Way galaxy, at a distance of about from Earth. It has a diameter of , corresponding to an apparent diameter of some 7 arcminutes, and is expanding at a rate of about , or 0.5% of the speed of light.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1750193",
"title": "Ursa Major I Dwarf",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 225,
"text": "It estimated to be located at a distance of about 330,000 light-years (100 kpc) from the Earth. That is about twice the distance to the Large Magellanic Cloud; the largest and most luminous satellite galaxy of the Milky Way.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "74331",
"title": "Andromeda Galaxy",
"section": "Section::::Amateur observing.\n",
"start_paragraph_id": 71,
"start_character": 0,
"end_paragraph_id": 71,
"end_character": 734,
"text": "The Andromeda Galaxy is bright enough to be seen with the naked eye, even with some light pollution. Andromeda is best seen during autumn nights in the Northern Hemisphere, when from mid-latitudes the galaxy reaches zenith (its highest point at midnight) so can be seen almost all night. From the Southern Hemisphere, it is most visible at the same months, that is in spring, and away from our equator does not reach a high altitude over the northern horizon, making it difficult to observe. Binoculars can reveal some larger structures and its two brightest satellite galaxies, M32 and M110. An amateur telescope can reveal Andromeda's disk, some of its brightest globular clusters, dark dust lanes and the large star cloud NGC 206.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30659",
"title": "Triangulum",
"section": "Section::::Features.:Deep-sky objects.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 933,
"text": "The Triangulum Galaxy, also known as Messier 33, was discovered by Giovanni Battista Hodierna in the 17th century. A distant member of the Local Group, it is about 2.3 million light-years away, and at magnitude 5.8 it is bright enough to be seen by the naked eye under dark skies. Being a diffuse object, it is challenging to see under light-polluted skies, even with a small telescope or binoculars, and low power is required to view it. It is a spiral galaxy with a diameter of 46,000 light-years and is thus smaller than both the Andromeda Galaxy and the Milky Way. A distance of less than 300 kiloparsecs between it and Andromeda supports the hypothesis that it is a satellite of the larger galaxy. Within the constellation, it lies near the border of Pisces, 3.5 degrees west-northwest of Alpha Trianguli and 7 degrees southwest of Beta Andromedae. Within the galaxy, NGC 604 is an H II region where star formation takes place.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16291039",
"title": "NGC 4889",
"section": "Section::::Properties.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 485,
"text": "NGC 4889 is probably the largest and the most massive galaxy out to the radius of 100 Mpc (326 million light years) of the Milky Way. The galaxy has an effective radius which extends at 2.9 arcminutes of the sky, translating it to the diameter of 239,000 light years, about the size of the Andromeda Galaxy. In addition it has an immense diffuse light halo extending to 17.8 arcminutes, roughly half the angular diameter of the Sun, translating to 1.3 million light years in diameter.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "173484",
"title": "Magellanic Clouds",
"section": "Section::::Characteristics.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 773,
"text": "The Large Magellanic Cloud and its neighbour and relative, the Small Magellanic Cloud, are conspicuous objects in the southern hemisphere, looking like separated pieces of the Milky Way to the naked eye. Roughly 21° apart in the night sky, the true distance between them is roughly 75,000 light-years. Until the discovery of the Sagittarius Dwarf Elliptical Galaxy in 1994, they were the closest known galaxies to our own (since 2003, the Canis Major Dwarf Galaxy was discovered to be closer still, and is now considered the actual nearest neighbor). The LMC lies about 160,000 light years away, while the SMC is around 200,000. The LMC is about twice the diameter of the SMC (14,000 ly and 7,000 ly respectively). For comparison, the Milky Way is about 100,000 ly across.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29389753",
"title": "UDF 2457",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 356,
"text": "The Milky Way galaxy is about 100,000 light-years in diameter, and the Sun is about 25,000 light-years from the galactic center. The small common star UDF 2457 may be one of the farthest known stars inside the main body of the Milky Way. Globular clusters (such as Messier 54 and NGC 2419) and stellar streams are located further out in the galactic halo.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
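The angular-size claim in the first answer above ("more than six times the width of the moon") can be sanity-checked with a few lines. The figures used here are assumed round values, not data from the records: a visible-disk diameter for M31 of roughly 140,000 light-years, a distance of roughly 2.54 million light-years, and a lunar apparent diameter of about 0.52 degrees.

```python
import math

def angular_diameter_deg(true_diameter, distance):
    """Apparent angular size (degrees) of an object of a given true
    diameter seen from a given distance; any length unit works as long
    as both arguments use the same one."""
    return math.degrees(2 * math.atan(true_diameter / (2 * distance)))

# Assumed round figures (light-years), not values from the dataset:
m31_deg = angular_diameter_deg(140_000, 2_540_000)
moon_deg = 0.52  # approximate apparent diameter of the Moon

print(f"M31 apparent diameter: {m31_deg:.2f} deg "
      f"({m31_deg / moon_deg:.1f}x the Moon)")
# prints: M31 apparent diameter: 3.16 deg (6.1x the Moon)
```

With these inputs the small-angle result (~3.2 degrees) reproduces the "more than six moon-widths" figure; larger modern diameter estimates that include the faint outer disk give an even bigger ratio.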
ptrs1
|
How did Newton derive the Law of Universal Gravitation?
|
[
{
"answer": "Observations demonstrated that the planets in our solar system orbited the Sun in ellipses. Newton discovered through his investigation (IE invention) of Calculus that a body undergoing acceleration proportional to the inverse square of their distance would travel in an ellipse. He then made a mental leap and realized that observable natural laws we see on Earth also are obeyed in the heavens, and thus concluded that the process that causes an apple to fall to the ground is the same force that keeps the Moon in orbit around the Earth and the planets in orbit around the Sun. He then reasoned that every particle in the universe acts on every other particle with this force, meaning that the force between two objects must be related to each of their masses.\n\nIt was this series of mental leaps that led to his deduction of the law of gravitation. There was very little theoretical derivation involved; it was mainly an assertion which turned out to be demonstrably accurate. The gravitational constant G was not known for nearly a century afterwards when it was finally empirically measured. No known first principle derivation can produce G, so it remains an empirical observation.",
"provenance": null
},
{
"answer": "I think we'll never know his exact train of thought. Looking in the Principia isn't very helpful, since it's all written in lemma/theorem/corollary style math. I don't know if he wrote about the less rigorous version of how he got to the idea.\n\nIf you wanted to try to guess at it, without knowing the answer, you could probably come up with something that looks similar to Newton's equation. Once you decide that *there exists an attractive force between massive objects*, which I think is the big intuitive leap, you look around and say, well these two apples aren't hurtling towards each other, they hurtle towards the earth. This force should be proportional to the masses, so that bigger masses apply more force. It could be the square of the masses, but why complicate matters from the start? Next, since the earth is round, and the planets move around the sun in ellipses (known by Newton's time), it is reasonable to suppose that this force emanates radially from a mass, equally in all directions. If this force fell on the points of a sphere of radius r, it would be spread over an area of 4 pi r^2. Therefore, the same object would have a lesser effect farther away, as the lines of force it was emanating would be less dense, and this drop off would be inversely proportional to r^2.\n\nFortunately, using his new mechanics and the calculus, this simplest law that takes into account observations turns out to be the correct one, as Newton showed by reproducing all known planetary motion from one universal law. It worked so well that it took a few centuries before scientists found the holes and plugged them up with General Relativity.",
"provenance": null
},
{
"answer": "Exactly how Newton derived gravitation can be found in his Philosophiæ Naturalis Principia Mathematica. \n\nHowever, from a more approachable standpoint: you can take Kepler's Laws of Planetary Motion, which are based on observation, and derive the Newton's inverse square law. It is a fairly standard homework problem for intermediate mechanics level physics students. \n\nEssentially, you start with Kepler's 1st law - make sure to include the radial and tangential unit vectors - and take two time derivatives. You will get an acceleration that has a piece in the radial direction (r-hat) and a piece in the tangential direction (theta-hat). Newton makes the assumption that the force between two celestial bodies acts only on the line between them, i.e.: only in the r-hat direction. Therefore, the terms in the theta-hat direction must equal zero. \n\nBy setting these to zero you get an expression that expression that is equivalent to conservation of angular moment. dL/dt = 0 & L = r^2 d(theta)/dt. You can leverage this new found constant into the rest of the second derivative to arrive at the inverse square law.",
"provenance": null
},
{
"answer": "Newton actually modified Kepler's Laws of planetary motion which had previously assumed that the Sun was immovable.\n\nBecause for every action there is an equal and opposite reaction, Newton realized that in the planet-Sun system the planet does not orbit around a stationary Sun.\n\nInstead, Newton proposed that both the planet and the Sun orbited around the common center of mass for the planet-Sun system.\n\nNewton simply changed this to include centre of mass as a consideration.\n\nNewton defined the force on a planet to be the product of its mass and the acceleration. What he changed was Kepler's assumption that: \n\nA) Every planet is attracted towards the Sun.\n\nB) The force on a planet is in direct proportion to the mass of the planet and in inverse proportion to the square of the distance from the Sun.\n\nHere the Sun plays an unsymmetrical part which is unjustified. So he assumed Newton's law of universal gravitation:\n\nA) All bodies in the solar system attract one another.\n\nB) The force between two bodies is in direct proportion to the product of their masses and in inverse proportion to the square of the distance between them.\n\nThe gravitational constant that he posited was not calculated till much later.\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "2748665",
"title": "Copernican Revolution",
"section": "Section::::Reception.:Isaac Newton.\n",
"start_paragraph_id": 51,
"start_character": 0,
"end_paragraph_id": 51,
"end_character": 679,
"text": "Newton used Kepler's laws of planetary motion to derive his law of universal gravitation. Newton's law of universal gravitation was the first law he developed and proposed in his book \"Principia\". The law states that any two objects exert a gravitational force of attraction on each other. The magnitude of the force is proportional to the product of the gravitational masses of the objects, and inversely proportional to the square of the distance between them. Along with Newton's law of universal gravitation, the \"Principia\" also presents his three laws of motion. These three laws explain inertia, acceleration, action and reaction when a net force is applied to an object.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "83754",
"title": "Geocentric model",
"section": "Section::::Gravitation.\n",
"start_paragraph_id": 52,
"start_character": 0,
"end_paragraph_id": 52,
"end_character": 1062,
"text": "In 1687, Isaac Newton stated the law of universal gravitation, described earlier as a hypothesis by Robert Hooke and others. His main achievement was to mathematically derive Kepler's laws of planetary motion from the law of gravitation, thus helping to prove the latter. This introduced gravitation as the force which both kept the Earth and planets moving through the universe and also kept the atmosphere from flying away. The theory of gravity allowed scientists to rapidly construct a plausible heliocentric model for the Solar System. In his \"Principia\", Newton explained his theory of how gravity, previously thought to be a mysterious, unexplained occult force, directed the movements of celestial bodies, and kept our Solar System in working order. His descriptions of centripetal force were a breakthrough in scientific thought, using the newly developed mathematical discipline of differential calculus, finally replacing the previous schools of scientific thought, which had been dominated by Aristotle and Ptolemy. However, the process was gradual.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12191272",
"title": "Newton's theorem of revolving orbits",
"section": "Section::::Historical context.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 1594,
"text": "With the publication of his \"Principia\" roughly eighty years later (1687), Isaac Newton provided a physical theory that accounted for all three of Kepler's laws, a theory based on Newton's laws of motion and his law of universal gravitation. In particular, Newton proposed that the gravitational force between any two bodies was a central force \"F\"(\"r\") that varied as the inverse square of the distance \"r\" between them. Arguing from his laws of motion, Newton showed that the orbit of any particle acted upon by one such force is always a conic section, specifically an ellipse if it does not go to infinity. However, this conclusion holds only when two bodies are present (the two-body problem); the motion of three bodies or more acting under their mutual gravitation (the \"n\"-body problem) remained unsolved for centuries after Newton, although solutions to a few special cases were discovered. Newton proposed that the orbits of planets about the Sun are largely elliptical because the Sun's gravitation is dominant; to first approximation, the presence of the other planets can be ignored. By analogy, the elliptical orbit of the Moon about the Earth was dominated by the Earth's gravity; to first approximation, the Sun's gravity and those of other bodies of the Solar System can be neglected. However, Newton stated that the gradual apsidal precession of the planetary and lunar orbits was due to the effects of these neglected interactions; in particular, he stated that the precession of the Moon's orbit was due to the perturbing effects of gravitational interactions with the Sun.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "244611",
"title": "Newton's law of universal gravitation",
"section": "Section::::History.:Newton's work and claims.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 842,
"text": "Newton, faced in May 1686 with Hooke's claim on the inverse square law, denied that Hooke was to be credited as author of the idea. Among the reasons, Newton recalled that the idea had been discussed with Sir Christopher Wren previous to Hooke's 1679 letter. Newton also pointed out and acknowledged prior work of others, including Bullialdus, (who suggested, but without demonstration, that there was an attractive force from the Sun in the inverse square proportion to the distance), and Borelli (who suggested, also without demonstration, that there was a centrifugal tendency in counterbalance with a gravitational attraction towards the Sun so as to make the planets move in ellipses). D T Whiteside has described the contribution to Newton's thinking that came from Borelli's book, a copy of which was in Newton's library at his death.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17553",
"title": "Kepler's laws of planetary motion",
"section": "Section::::History.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 564,
"text": "Newton was credited with understanding that the second law is not special to the inverse square law of gravitation, being a consequence just of the radial nature of that law; while the other laws do depend on the inverse square form of the attraction. Carl Runge and Wilhelm Lenz much later identified a symmetry principle in the phase space of planetary motion (the orthogonal group O(4) acting) which accounts for the first and third laws in the case of Newtonian gravitation, as conservation of angular momentum does via rotational symmetry for the second law.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1864889",
"title": "Cosmology",
"section": "Section::::Disciplines.:Physical cosmology.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 548,
"text": "Isaac Newton's \"Principia Mathematica\", published in 1687, was the first description of the law of universal gravitation. It provided a physical mechanism for Kepler's laws and also allowed the anomalies in previous systems, caused by gravitational interaction between the planets, to be resolved. A fundamental difference between Newton's cosmology and those preceding it was the Copernican principle—that the bodies on earth obey the same physical laws as all the celestial bodies. This was a crucial philosophical advance in physical cosmology.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1963519",
"title": "History of general relativity",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 673,
"text": "Before the advent of general relativity, Newton's law of universal gravitation had been accepted for more than two hundred years as a valid description of the gravitational force between masses, even though Newton himself did not regard the theory as the final word on the nature of gravity. Within a century of Newton's formulation, careful astronomical observation revealed unexplainable variations between the theory and the observations. Under Newton's model, gravity was the result of an attractive force between massive objects. Although even Newton was bothered by the unknown nature of that force, the basic framework was extremely successful at describing motion.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
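The derivation sketched in the third answer above (two time derivatives of the orbit in polar coordinates, then setting the theta-hat terms to zero) can be written out compactly. This is a standard textbook sketch under the same assumptions the answer states, not text from the dataset:

```latex
% Acceleration in polar coordinates (r, \theta):
\ddot{\mathbf r}
  = \bigl(\ddot r - r\dot\theta^{2}\bigr)\,\hat{\mathbf r}
  + \bigl(r\ddot\theta + 2\dot r\dot\theta\bigr)\,\hat{\boldsymbol\theta}

% A purely radial (central) force makes the theta-component vanish:
r\ddot\theta + 2\dot r\dot\theta
  = \frac{1}{r}\,\frac{d}{dt}\bigl(r^{2}\dot\theta\bigr) = 0
\quad\Longrightarrow\quad
L \equiv r^{2}\dot\theta = \text{const.}

% Substituting Kepler's ellipse r = p/(1 + e\cos\theta) into the
% radial component (e.g.\ via the Binet substitution u = 1/r,
% where u'' + u = 1/p) gives the inverse-square dependence:
\ddot r - r\dot\theta^{2}
  = -\,\frac{L^{2}}{p}\,\frac{1}{r^{2}}
\quad\Longrightarrow\quad
F(r) \propto \frac{1}{r^{2}}
```

The middle step is exactly the conservation-of-angular-momentum expression the answer mentions (dL/dt = 0 with L = r^2 d(theta)/dt); feeding that constant back into the radial term produces the 1/r^2 force law.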
57rts5
|
can the body repair a rupture of a spinal disk?
|
[
{
"answer": "So I ruptured the disk in my neck. The doctor explained it like this:\n\nThe disk is like a krispy Kreme doughnut. When it ruptures the jam inside spills out. It could split on the inside or on the outside. On the outside is not such a problem. However on the inside it can directly apply pressure to the nerves it surrounds. After a while this jam can 'dry' out and can retract back inside. This is painful for obvious reasons.\n\nNow the non ELI5 bit. See a doctor if you can. If you loose feeling or get pins and needles in a limb see one immediately. This is, according to my doctor 'really bad'. Now I just had extreme pain, we are talking screaming and being taken away in an ambulance high on ketamine bad. The 'jam' was only 2.6mm according to the scans. It has healed but is weaker and occasionally happens again to a less serious degree each time. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "210269",
"title": "Intervertebral disc",
"section": "Section::::Clinical significance.:Herniation.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 983,
"text": "A spinal disc herniation, commonly referred to as a slipped disc, can happen when unbalanced mechanical pressures substantially deform the anulus fibrosus, allowing part of the nucleus to obtrude. These events can occur during peak physical performance, during traumas, or as a result of chronic deterioration, typically accompanied with poor posture and has been associated with a \"Propionbacterium acnes\" infection. Both the deformed anulus and the gel-like material of the nucleus pulposus can be forced laterally, or posteriorly, distorting local muscle function, and putting pressure on the nearby nerve. This can give the symptoms typical of nerve root entrapment. These symptoms can vary between parasthaesia, numbness, chronic and/or acute pain, either locally or along the dermatome served by the entrapped nerve, loss of muscle tone and decreased homeostatic performance . The disc is not physically slipped; it bulges, usually in just one direction. Risk of Cauda Equina.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6737633",
"title": "Spinal disc herniation",
"section": "Section::::Signs and symptoms.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 442,
"text": "It is possible to have a herniated disc without pain or noticeable symptoms if the extruded \"nucleus pulposus\" material doesn't press on soft tissues or nerves. A small-sample study examining the cervical spine in symptom-free volunteers found focal disc protrusions in 50% of participants, suggesting that a considerable part of the population might have focal herniated discs in their cervical region that do not cause noticeable symptoms.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33607453",
"title": "Vertebral column",
"section": "Section::::Clinical significance.:Disease.\n",
"start_paragraph_id": 50,
"start_character": 0,
"end_paragraph_id": 50,
"end_character": 243,
"text": "Spinal disc herniation, more commonly called a \"slipped disc\", is the result of a tear in the outer ring (anulus fibrosus) of the intervertebral disc, which lets some of the soft gel-like material, the nucleus pulposus, bulge out in a hernia.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "46521228",
"title": "Vertebra",
"section": "Section::::Clinical significance.\n",
"start_paragraph_id": 46,
"start_character": 0,
"end_paragraph_id": 46,
"end_character": 330,
"text": "Spinal disc herniation, more commonly called a \"slipped disc\", is the result of a tear in the outer ring (anulus fibrosus) of the intervertebral disc, which lets some of the soft gel-like material, the nucleus pulposus, bulge out in a hernia. This may be treated by a minimally-invasive endoscopic procedure called Tessys method.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6737633",
"title": "Spinal disc herniation",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 558,
"text": "Disc herniation is normally a further development of a previously existing disc protrusion, in which the outermost layers of the \"anulus fibrosus\" are still intact, but can bulge when the disc is under pressure. In contrast to a herniation, none of the central portion escapes beyond the outer layers. Most minor herniations heal within several weeks. Anti-inflammatory treatments for pain associated with disc herniation, protrusion, bulge, or disc tear are generally effective. Severe herniations may not heal of their own accord and may require surgery. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4793282",
"title": "Failed back syndrome",
"section": "Section::::Pathology.:Recurrent or persistent disc herniation.\n",
"start_paragraph_id": 33,
"start_character": 0,
"end_paragraph_id": 33,
"end_character": 1811,
"text": "Removal of a disc at one level can lead to disc herniation at a different level at a later time. Even the most complete surgical excision of the disc still leaves 30-40% of the disc, which cannot be safely removed. This retained disc can re-herniate sometime after surgery. Virtually every major structure in the abdomen and the posterior retroperitoneal space has been injured, at some point, by removing discs using posterior laminectomy/discectomy surgical procedures. The most prominent of these is a laceration of the left internal iliac vein, which lies in close proximity to the anterior portion of the disc. In some studies, recurrent pain in the same radicular pattern or a different pattern can be as high as 50% after disc surgery. Many observers have noted that the most common cause of a failed back syndrome is caused from recurrent disc herniation at the same level originally operated. A rapid removal in a second surgery can be curative. The clinical picture of a recurrent disc herniation usually involves a significant pain-free interval. However, physical findings may be lacking, and a good history is necessary. The time period for the emergence of new symptoms can be short or long. Diagnostic signs such as the straight leg raise test may be negative even if real pathology is present. The presence of a positive myelogram may represent a new disc herniation, but can also be indicative of a post operative scarring situation simply mimicking a new disc. Newer MRI imaging techniques have clarified this dilemma somewhat. Conversely, a recurrent disc can be difficult to detect in the presence of post op scarring. Myelography is inadequate to completely evaluate the patient for recurrent disc disease, and CT or MRI scanning is necessary. Measurement of tissue density can be helpful.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29856945",
"title": "Disc protrusion",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 262,
"text": "A disc protrusion is a disease condition which can occur in some vertebrates, including humans, in which the outermost layers of the anulus fibrosus of the intervertebral discs of the spine are intact, but bulge when one or more of the discs are under pressure.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
89finh
|
Has there ever been a situation where a booming city or town went "out of business"?
|
[
{
"answer": "This sort of thing is THE hallmark (not just A hallmark!) of the Intermountain West. Towns frequently burst into existence around the discovery of an ore body with precious metals. But precious metals being what they are, they are typically rare, and so while an ore body might seem promising, it was a finite resource and typically limited at that. This resulted in a rush to the new location with people eager for an opportunity to strike a claim, find a job, or to provide essential services. This often resulted is a sudden rise in population and the building of a lot of structures. When the ore was exhausted, the population would just as suddenly dwindle. Buildings were often hauled away to the next boom - or they fell victim to a harsh environment.\n\nIf the community was sufficiently long-lived and promising, it might succeed in establishing itself as the seat of county government, with the promise of a few jobs when the boom subsided. That said, seats of government can be moved - and they frequently are - so even this source of revenue and employment was vulnerable, and some communities went from shadows of their former selves to complete oblivion.\n\nThere are hundreds of instances of all of this in the vast uninhabited outback of Nevada, the seventh largest state in the nation (with roughly 87% of its land under federal management). In the nineteenth century, Hamilton was home to an important mining boom and became the county seat of White Pine County. It yielded that title to Ely, which retains it to this day. Dayton prospered as a result of it being able to offer milling to the Comstock Mining District, but as the mines failed, so did Dayton. It retained the title of Lyon County seat until its courthouse burned in 1910 (locals maintain it was arson caused by someone who wanted to move the seat of government). Dayton subsequently dwindled, although it has enjoyed a recent resurgence as a bedroom community. 
Yerington took the seat of government, and it retains that title.\n\nAustin in central Nevada became a boomtown with such promise that everyone thought it would overtake Virginia City, which was suffering a slump (Virginia City's mines prospered for an astounding two decades before it also crashed). An entrepreneur even moved Virginia City's International Hotel to Austin in the mid 1860s, and Austin became the seat of government for Lander County (building what is likely the nation's last Greek revival courthouse, erected in 1872). Its mines quickly failed. It lost a part of Lander County to neighboring Eureka County (experiencing another mining boom-bust cycle), but it retained its government for nearly another century until Battle Mountain was able to take the seat of Lander County government.\n\nOne of the most dramatic examples of this phenomenon involved Goldfield, often put forward as the site of the last gold rush in the continental US. Shortly after the turn of the century, thousands flooded into remote south-central Nevada and established the town of Goldfield, which quickly took the title of Esmeralda County government from Hawthorne (which had earlier taken it from failing Aurora, now a ghost town). At one point Goldfield had about 25,000 people and was the largest city in Nevada. Its mines failed within a decade, and eventually Hawthorne began a fight to regain its seat of government: although its mines had also failed, it was able to claim other means of support. Ultimately, the Nevada government split the enormous Esmeralda County to form two new counties, so both places could have the benefit of county government. 
Goldfield and its diminished Esmeralda County declined until that enormous piece of real estate (rivalling the size of some smaller states) had only a little more than 300 residents.\n\nThis cycle of boom and bust has resulted in what may be a record for the most times that \"largest community in the territory/state\" has been exchanged: That title has moved from Dayton (AKA Chinatown) to Mormon Station (AKA Genoa) to Carson City to Virginia City to Reno, to Goldfield, back to Reno, and finally (or so far!!!) to Las Vegas. That's eight times that the title has moved; whether that is a record could be contested here, but it is certainly remarkable.\n\nThis is a pattern throughout the mining West, caused by an industry with resources located in remote, often uninhabitable land, where cities boomed into existence because of the attraction of wealth, and then just as quickly disappeared. Two state parks are dedicated to this phenomenon: California's Bodie State Park and Nevada's Berlin State Park both commemorate the nineteenth-century quest for wealth and ingenuity that it took to scrape together an existence where nature provided little - together with the inevitable abandonment of the towns when it was no longer possible to resist the effects of gravity.",
"provenance": null
},
{
"answer": "I spent some time as a youth in a small town in Missouri named \"Excelsior Springs\" which I've always thought of as a great example of a boom town.\n\nThe town of Excelsior Springs was formed around several fresh water springs in the area that all had different alleged \"healing properties.\" Within a year of the town being officially founded by two men named Flack and Wyman in 1880, almost 200 homes had been built in the area.\n\nBy late 1881, schools, an opera house, and several hotels (including [this famous 1888 hotel I worked at briefly](_URL_0_)) were being built. No other city in Missouri had ever seen as much growth as Excelsior Springs in a single year by this point. It continued to expand, building a golf course and becoming a destination for a few US Presidents.\n\nNow before I continue, what made/makes Excelsior Springs' springs so unique is their incredibly diverse and dense combinations of different minerals found in the waters. For example, there are only 4 springs in Europe that have relevant levels of Iron and Manganese, and the only two in the United States are located in Excelsior Springs. There are over 20 different springs with different levels and types of minerals, making it one of the most dense areas of mineral-enriched spring water in the world. 
This was the primary reason for the idea of the waters' healing properties, and the explosive growth of the town.\n\nIn 1963, with help from national organizations like the Arthritis and Rheumatism Foundation, legislation was passed that prevented many of the clinics that had opened in the area from advertising the spring water as \"cures.\" In addition, the Saturday Evening Post published an article called [The Hucksters of Pain](_URL_1_) which destroyed a lot of the credibility of the springs' healing properties.\n\nA combination of modern medicine, the dying of 3 railroad nodes in the area, and former citizens leaving the city for greener pastures essentially killed off the city by the 1970's. You could say their main source of revenue... *evaporated*\n\nIt still exists today, and the Springs are still a tourist destination, but the amount of abandoned Art Deco buildings and boarding houses still seen today are both beautiful and indicative of a lost age for the small town.\n\n\n",
"provenance": null
},
{
"answer": "I'll answer this from a slightly different angle, in how these towns entertained themselves, using three ice hockey teams as examples:\n\nThe Upper Peninsula of Michigan was the home of the first openly professional ice hockey league during the early 1900s, a direct response to a mining boom, particularly copper, that occurred. The governing bodies in Canada refused to allow professionals, but the US was a little more lax on amateurism (at least in regards to hockey, at that time), so the International Hockey League was formed in 1904, based in several towns that grew due to the mining industry (and Pittsburgh, as it had an arena with artificial ice, I believe the first to do so). But with the 1907 crash in commodities, and the subsequent legalisation of professionals in Canada at the same time, the IHL folded up; however most of the top ice hockey players of the era played at least a few games in the league during this time period, giving it some added notoriety. Just a small note about the impact of boom-towns and what happens around them.\n\nThe Klondike Goldrush also saw the formation of an ice hockey team in Dawson City, Yukon, though several years after the goldrush had ended. The team, the Dawson City Nuggets, are most known for their 1905 Stanley Cup challenge, in which they travelled across Canada, travelling variously by train, dog sled, boat, and walking, until they reached Ottawa and played the strongest team in the country, the Senators (known unofficially as the Silver Seven). The Nuggets would proceed to lose the two-game series 9-2 and then 23-2, the latter being the largest margin of victory in a Stanley Cup game.\n\nA similar situation happened with the Ontario town of Kenora, known as Rat Portage until 1905. It experienced a minor boom when the Canadian Pacific Railway built a stop there in 1877, lasting until about 1908. 
They established a hockey team, the Thistles, comprised entirely of local players, and competed for the Stanley Cup four times between 1903 and 1907, winning in January 1907. With some 5000 people in the town at the time, they were, and remain, the smallest city/town to win the Cup, and after losing it in March 1907, the shortest holders of the trophy. The economic downturn and the advent of professionalism spelt the end of the team, which folded in 1908.\n\nThere are probably more examples I could add, but the above three are the most famous ones in hockey, and show both the rise and fall of boomtowns and their economies. All of them happened for simple reasons: these rapidly-growing towns/regions had an influx of people (mainly young, single men) who suddenly had a lot of disposable income, and they needed entertainment. Sports proved to be a great answer to that, especially as betting was a huge part of sports culture in the early 1900s (and far more open and prominent than today). Of course once the economic upswing ended in these places, the teams died out, just as quickly as they arrived, and they are now largely footnotes, somewhat insignificant in the overall status of hockey today (while the IHL helped push forward professionalism in Canada, it was arguably only a matter of time before that happened).\n\nSources:\nInterestingly enough, there are several academic papers on this very topic (at least for the IHL and the Thistles; nothing on the Nuggets that I know of, yet):\n\n* Daniel Mason, *The Origins and Development of the International Hockey League and Its Effects on the Sport of Professional Ice Hockey in North America* (1994). Mason wrote his MA thesis on the IHL, which has come to be regarded as the major source for the league. 
He later published a condensed form as a journal article, \"The International Hockey League and the Professionalization of Ice Hockey, 1904-1907\", in *Journal of Sport History*.\n\n* John Wong, *From Rat Portage to Kenora: The Death of a (Big-Time) Hockey Dream*, another article in *Journal of Sport History*, looks at the economic impact of the Thistles, and how it shows the rise and fall of these boomtowns.\n\n* For the overall impact of all three, I'd suggest:\n* Michael McKinley, *Hockey's Rise from Sport to Spectacle* (2000). Not an academic work, but thorough (even though he lifted a passage in the book from somewhere else; yes, plagiarised), and goes over these three examples in particular.",
"provenance": null
},
{
"answer": "(1/2)\n\nWell I'm pretty late here but I thought I could maybe add something about buccaneer or pirate \"boomtowns\" in the 17th and early 18th centuries. The economic impact of theft at sea could be very important, not just to those whose wealth was stolen and not just to the thieves themselves but also to thriving communities in port side towns who set themselves up to benefit from this plunder, in many ways far more than the buccaneers and pirates ever did. These port side towns, known today as \"pirate havens\", thrived on this sometimes huge influx of stolen plunder that the buccaneers brought back with them from their raids and usually recklessly spent ashore on alcohol and prostitution, or cheaply fenced to enterprising merchants. \n\nAlthough their glory days are long gone, the names of some of these major havens in the Caribbean like Tortuga, Port Royal and Nassau still live on and conjure up images of drunken and debauched pirates rampaging through the streets. Of course something like this has always been a stereotype of sailors in all eras but the added impact of huge amounts of stolen plunder made these piratical and violent sailors especially extravagant and important. However, as European nations increasingly grew to have more peaceful and mutually beneficial trading relations with one another in the late 17th and early 18th century, those governments that had once given tacit or explicit support to the buccaneers as lawful privateers began to turn on them and outlaw them as pirates. And with that crackdown on piracy, those port side towns that had once thrived on it also quieted down and faded away into obscurity. 
I believe the last real \"pirate haven\" came to an end in 1718 with the British capture of Nassau in the Bahamas, although by then the era when buccaneers could readily rely on real safe-havens offered by complicit local governors had already disappeared at least several decades earlier.\n\n**The heyday of Tortuga and Port Royal**\n\nOne of the first notorious pirate havens in the Caribbean was the island of Tortuga off the north coast of Hispaniola (modern Haiti). Although claimed by the Spanish, it was first taken over and settled by French adventurers in about 1625 who survived by hunting wild cattle and pigs in the wilderness and then selling the hides and smoked meat to passing ships (the apparatus on which they smoked the meat was referred to as a *boucan* and this is where the word buccaneer comes from). Despite only numbering a few dozen at first and despite being attacked and driven away several times by the Spanish, the French settlers on Tortuga and nearby Hispaniola always returned and quickly grew in number. By the early 1630s they were striking back at their Spanish attackers by undertaking acts of piracy against passing Spanish ships using small boats. By the 1640s and 1650s, the buccaneers had grown bolder and they were joined in Tortuga by hundreds of English and Dutch privateers who banded with them to attack the Spanish, either by capturing Spanish ships at sea or raiding coastal towns. \n\nBy the 1660s, buccaneers in Tortuga had reached the height of their power and frequently sacked large Spanish towns along the Caribbean before returning to Tortuga to spend their loot. The former buccaneer surgeon Alexandre Exquemelin, writing in 1678, describes a typical return voyage of buccaneers under their ruthless leader Francois l'Olonnais after sacking the Spanish towns of Maracaibo and Gibraltar in 1666:\n\n > Having divided the spoils, the buccaneers set sail for Tortuga, where they arrived with great joy a month later. 
For some the joy was short-lived -- many could not keep their money three days before it was gambled away. However, those who had lost what they had were helped by the others. A short time previously, three ships had arrived from France with cargoes of wine and brandy, so liquor was very cheap. But this did not continue for long: prices quickly went up, and soon the buccaneers were paying four pieces of eight [equivalent to about $200] for a flagon of brandy. Tortuga at that time was full of traders and dealers. The governor got the ship laden with cacao for a twentieth of what it was really worth. The tavern-keepers got part of their money and the whores took the rest, so once more the buccaneers -- including l'Olonnais their chief -- had to consider ways of obtaining more booty. (Exquemelin, 104)\n\nAlthough the governors and merchants in Tortuga benefited tremendously from all this, their ultimate control over things was often tenuous and nominal at best. The buccaneers tended to be strongly independent and violently ready to oppose government regulations. The French governor of Tortuga, Jean le Vasseur, was killed by the buccaneers in 1653 after a dispute. In 1669 when Bertrand d'Ogeron, the French appointed governor of Tortuga, attempted to impose trade restrictions and tariffs on the buccaneers living on the island, the buccaneers rose up in arms and shot at him in his boat. Then a group of them planned to attack the governor's fortress and kill him, only being dissuaded when two fully armed French warships showed up to intimidate the buccaneers into retreating to the woods. The French soldiers landed and burned down the buccaneers' houses but then negotiated and agreed to the buccaneers' demands of free trading rights.\n\nThese towns were often extremely violent places because pirates themselves were engaged in a very violent business. Exquemelin describes how buccaneers would frequently get into quarrels and kill each other or fight to the death in duels. 
The behavior of later pirates in the 1720s in a base they had established in Madagascar was described by one observer like this:\n\n > When we bartered with the Pyrates at Ranter-Bay for Provisions, they frequently shewed the Wickedness of their Dispositions, by quarrelling and fighting with each other upon the most trifling Occasions. It was their Custom never to go abroad, except armed with Pistols or a naked Sword in their Hand, to be in Readiness to defend themselves or to attack others. (Downing, 115)\n\nDuring this same time in the English town of Port Royal in Jamaica, the situation was no less violent and chaotic. Captured from the Spanish in 1655, Port Royal quickly became an alternate base for buccaneers to freely spend their loot. As in Tortuga, the settlement in Jamaica was under constant threat of Spanish invasion to recapture the island and this contributed to it becoming a refuge for buccaneers who were longstanding enemies of Spain. As in Tortuga, the English, French and Dutch buccaneers came there to recklessly spend their loot on alcohol and women before departing again to bring back more. This was vividly described by the former buccaneer quoted earlier in his 1678 book *The Buccaneers of America:*\n\n > Captain Rock sailed for Jamaica with his prize, and lorded it there with his mates until all was gone. For that is the way with these buccaneers -- whenever they have hold of something, they don't keep it for long. They are busy dicing, whoring and drinking so long as they have anything to spend. Some of them will get through a good two or three thousand pieces of eight in a day -- and next day not have a shirt to their back. I have seen a man in Jamaica give 500 pieces of eight to a whore, just to see her naked. Yes, and many other impieties.\n\n > My own master often used to buy a butt of wine and set it in the middle of the street with the barrel-head knocked in, and stand barring the way. 
Every passer-by had to drink with him, or he'd have shot them dead with a gun he kept handy. Once he bought a cask of butter and threw the stuff at everyone who came by, bedaubing their clothes or their head, wherever best he could reach.\n\n > The buccaneers are generous to their comrades: if a man has nothing, the others will come to his help. The tavern-keepers let them have a good deal of credit, but in Jamaica one ought not to trust these people, for often they will sell you for debt, a thing I have seen happen many a time. Even the man I have just been speaking about, the one who gave the whore so much money to see her naked, and at that time had a good 3,000 pieces of eight -- three months later he was sold for his debts, by a man in whose house he had spent most of his money. \n\n > ... but to return to our tale. Captain Rock soon squandered all his money, and was obliged to put to sea again with his mates.... (Exquemelin, 81-82)\n\nThis \"Captain Rock\" mentioned here was known by the nickname Rock Braziliano and was originally a Dutchman. He was known to be incredibly violent and Exquemelin goes on to describe how in the early days of Port Royal he would get drunk and prowl the streets with his henchmen, attacking random people who got in his way and hacking off their limbs or killing them with his cutlass. Apparently the governor and any law-keeping forces in Port Royal were too afraid to do anything or arrest him. In fact, they probably not only feared violent retaliation from Braziliano's own crew (who were probably capable of killing the governor just as French buccaneers had done in Tortuga) but, more importantly, feared that if they made an example of him it would scare off other buccaneers from using Port Royal as a safe-haven. 
That would not only deprive Port Royal of its main source of revenue but leave the town virtually defenseless against Spanish attacks.\n\nPort Royal grew to be referred to by contemporaries as \"the wickedest place on earth\" and the \"Sodom of the New World.\" One English clergyman wrote this description:\n\n > This town is the Sodom of the New World and since the majority of its population consists of pirates, cutthroats, whores and some of the vilest persons in the whole of the world, I felt my permanence there was of no use and I could better preach of the Word of God elsewhere among a better sort of folk. (Talty, 139-40)",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "635087",
"title": "Tenby",
"section": "Section::::History.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 390,
"text": "With limited infrastructure, resources and people, the town's economy fell into decline. Most of the merchant and business class left, resulting in the town's decay and ruin. By the end of the 18th century, John Wesley noted during his visit how: \"Two-thirds of the old town is in ruins or has entirely vanished. Pigs roam among the abandoned houses and Tenby presents a dismal spectacle.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "32071426",
"title": "Riede's City Bakery",
"section": "Section::::History.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 411,
"text": "Riede continued to operate his business until 1908. As the city's population continued to decline, many of its buildings from the boom years became vacant and neglected. They fell victim to fire or the effects of the severe winters at nearly of elevation in the mountains. The bakery building, still referred to by the name of its onetime owner, remained. It was a second-hand shop during the Great Depression.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3552691",
"title": "Petersburg, Georgia",
"section": "",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 1124,
"text": "The town had a brief life; it was not developed until after the American Revolutionary War and after 1810 its population started declining, until it was abandoned. After the last person left, the buildings deteriorated, and the area finally reverted to agricultural land. The last known sale of a numbered lot occurred in 1837 (Elliott 1988:113-116). Several reasons have been advanced for the decline. The tobacco monopoly was squeezed out by cotton, which was 'thrown upon boats all along the river without being inspected' (Sherwood 1837:215). Other reasons given were the advent of steamboats (which were not practicable above Augusta). Later, the rivers proved to be obstacles to construction of railroads through the area, considered essential for the economic life of towns after 1850. But above all, the opportunity of new land to the west available for development attracted its inhabitants to keep moving west.(Coulter 1965:167-173). The Petersburg post office was moved to nearby Lisbon, Georgia in 1844, and closed in 1855 (Kraków 1999:174-175). The town of Vienna, South Carolina also declined and disappeared.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3354676",
"title": "Ponce de Leon, Missouri",
"section": "Section::::History.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 311,
"text": "The resort and town prospered, and with a population of around 1000, it was the largest town in the county. Businesses included a sawmill, a gristmill, a bank and tomato cannery. However, the boom did not last and by 1885, damage by flash floods and other economic problems led to decay and loss of population.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "179184",
"title": "History of Florida",
"section": "Section::::Since 1900.:Boom of 1920s.\n",
"start_paragraph_id": 95,
"start_character": 0,
"end_paragraph_id": 95,
"end_character": 550,
"text": "By 1925, the market ran out of buyers to pay the high prices, and soon the boom became a bust. The 1926 Miami Hurricane, which nearly destroyed the city, further depressed the real estate market. In 1928 another hurricane struck Southern Florida. The 1928 Okeechobee hurricane made landfall near Palm Beach, severely damaging the local infrastructure. In townships near Lake Okeechobee, the storm breached a dike separating the water from land, creating a storm surge that killed over 2,000 people and destroyed the towns of Belle Glade and Pahokee.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "43975798",
"title": "History of Cary, North Carolina",
"section": "Section::::Twentieth Century.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 454,
"text": "Like many towns and cities throughout the United States, the Great Depression adversely affected the population. The Bank of Cary, which had been founded in 1921, closed down. The town went bankrupt in 1932. However, due to the efforts of Franklin Roosevelt's New Deal, several areas of Cary were conserved as green space. For example, the William B. Ulmstead State Park was created by the CCC by repurposing abandoned farmland along the Crabtree Creek.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24315352",
"title": "List of ghost towns in Kansas",
"section": "Section::::Classifications.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 280,
"text": "BULLET::::- Industry/employment – Towns that catered to a specific industry like coal mining or military housing were boom towns that quickly died when their markets collapsed. Some towns were abandoned in the 1930s during the Dust Bowl period which mainly relied on Agriculture.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
5mxjni
|
why does your body temperature increase when you're nauseous and or vomiting?
|
[
{
"answer": "Body temperature doesn't increase because you're nauseous or vomiting.\n\nIt rises so the body can kill the intruder, like bacteria.\n\nVomiting happens so the body can get rid of the bacteria. I assume the main reason for vomiting is that the body tries to get rid of the material that has the intruder in it, like spoiled food.\n\nIt also happens even if you ingest a safe substance, because the body doesn't know it's safe.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "2566171",
"title": "Exercise-induced nausea",
"section": "Section::::Cause.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 314,
"text": "Another possible cause of exercise induced nausea is overhydration. Drinking too much water before, during, or after extreme exercise (such as a marathon) can cause nausea, diarrhea, confusion, and muscle tremors. Excessive water consumption reduces or dilutes electrolyte levels in the body causing hyponatremia.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "300465",
"title": "Diuresis",
"section": "Section::::Immersion diuresis.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 962,
"text": "The \"temperature\" component is caused by water drawing heat away from the body and causing vasoconstriction of the cutaneous blood vessels within the body to conserve heat. The body detects an increase in the blood pressure and inhibits the release of vasopressin (also known as antidiuretic hormone (ADH)), causing an increase in the production of urine. The \"pressure\" component is caused by the hydrostatic pressure of the water directly increasing blood pressure. Its significance is indicated by the fact that the temperature of the water does not substantially affect the rate of diuresis. Partial immersion of only the limbs does not cause increased urination. Thus, the hand in warm water trick (immersing the hand of a sleeping person in water to make him/her urinate) has no support from the mechanism of immersion diuresis. On the other hand, sitting up to the neck in a pool for a few hours clearly increases the excretion of water, salts, and urea.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "531611",
"title": "Foodborne illness",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 495,
"text": "Symptoms often include vomiting, fever, and aches, and may include diarrhea. Bouts of vomiting can be repeated with an extended delay in between, because even if infected food was eliminated from the stomach in the first bout, microbes, like bacteria, (if applicable) can pass through the stomach into the intestine and begin to multiply. Some types of microbes stay in the intestine, some produce a toxin that is absorbed into the bloodstream, and some can directly invade deeper body tissues.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8507183",
"title": "Vomiting",
"section": "Section::::Complications.:Dehydration and electrolyte imbalance.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 698,
"text": "Prolonged and excessive vomiting depletes the body of water (dehydration), and may alter the electrolyte status. Gastric vomiting leads to the loss of acid (protons) and chloride directly. Combined with the resulting alkaline tide, this leads to hypochloremic metabolic alkalosis (low chloride levels together with high and and increased blood pH) and often hypokalemia (potassium depletion). The hypokalemia is an indirect result of the kidney compensating for the loss of acid. With the loss of intake of food the individual may eventually become cachectic. A less frequent occurrence results from a vomiting of intestinal contents, including bile acids and , which can cause metabolic acidosis.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "53951",
"title": "Diarrhea",
"section": "Section::::Definition.:Osmotic.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 1230,
"text": "Osmotic diarrhea occurs when too much water is drawn into the bowels. If a person drinks solutions with excessive sugar or excessive salt, these can draw water from the body into the bowel and cause osmotic diarrhea. Osmotic diarrhea can also be the result of maldigestion (e.g. pancreatic disease or coeliac disease), in which the nutrients are left in the lumen to pull in water. Or it can be caused by osmotic laxatives (which work to alleviate constipation by drawing water into the bowels). In healthy individuals, too much magnesium or vitamin C or undigested lactose can produce osmotic diarrhea and distention of the bowel. A person who has lactose intolerance can have difficulty absorbing lactose after an extraordinarily high intake of dairy products. In persons who have fructose malabsorption, excess fructose intake can also cause diarrhea. High-fructose foods that also have a high glucose content are more absorbable and less likely to cause diarrhea. Sugar alcohols such as sorbitol (often found in sugar-free foods) are difficult for the body to absorb and, in large amounts, may lead to osmotic diarrhea. In most of these cases, osmotic diarrhea stops when the offending agent (e.g. milk, sorbitol) is stopped.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "972656",
"title": "Hypokalemia",
"section": "Section::::Causes.:Gastrointestinal or skin loss.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 588,
"text": "A more common cause is excessive loss of potassium, often associated with heavy fluid losses that \"flush\" potassium out of the body. Typically, this is a consequence of diarrhea, excessive perspiration, or losses associated with muscle-crush injury, or surgical procedures. Vomiting can also cause hypokalemia, although not much potassium is lost from the vomitus. Rather, heavy urinary losses of K in the setting of post-emetic bicarbonaturia force urinary potassium excretion (see Alkalosis below). Other gastrointestinal causes include pancreatic fistulae and the presence of adenoma.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "53951",
"title": "Diarrhea",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 544,
"text": "Diarrhea, also spelled diarrhoea, is the condition of having at least three loose, liquid, or watery bowel movements each day. It often lasts for a few days and can result in dehydration due to fluid loss. Signs of dehydration often begin with loss of the normal stretchiness of the skin and irritable behaviour. This can progress to decreased urination, loss of skin color, a fast heart rate, and a decrease in responsiveness as it becomes more severe. Loose but non-watery stools in babies who are exclusively breastfed, however, are normal.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
698f5i
|
why do kids and some adults jump up and down when excited or happy?
|
[
{
"answer": "Have you never done this? Never gotten so excited and filled with energy you just have to move? It's an energy release. We're also social animals and this is a way to express our excitement. \n\nPlus it's good for ventilation, moves the air around.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "210454",
"title": "Williams syndrome",
"section": "Section::::Signs and symptoms.:Social and psychological.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 1009,
"text": "While these children often came off as happy due to their sociable nature, often there are internal drawbacks to the way they act. 76–86% of these children were reported as believing that they either had few friends or problems with their friends. This is possibly due to the fact that although they are very friendly to strangers and love meeting new people, they may have trouble interacting on a deeper level. 73–93% were reported as unreserved with strangers, 67% highly sensitive to rejection, 65% susceptible to teasing, and the statistic for exploitation and abuse was unavailable. This last one is a significant problem. People with Williams syndrome are frequently very trusting and want more than anything to make friends, leading them to submit to requests that under normal circumstances would be rejected. There are external problems as well. 91–96% demonstrate inattention, 75% impulsivity, 59–71% hyperactivity 46–74% tantrums, 32–60% disobedience, and 25–37% fighting and aggressive behavior.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31379772",
"title": "Ellen MacArthur Cancer Trust",
"section": "Section::::Impact.\n",
"start_paragraph_id": 34,
"start_character": 0,
"end_paragraph_id": 34,
"end_character": 361,
"text": "The young people have fun, they adventure together and achieve, overcoming their fears, changing their self-perception and feeling important, and because they socialise with others like them they feel like they belong, are more positive, don't feel judged, feel their anxiety reduce and start to think differently about themselves and what they are capable of.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "422247",
"title": "Self-awareness",
"section": "Section::::Adolescence.\n",
"start_paragraph_id": 46,
"start_character": 0,
"end_paragraph_id": 46,
"end_character": 701,
"text": "One becomes conscious of their emotions during adolescence. Most children are aware of emotions such as shame, guilt, pride and embarrassment by the age of two, but do not fully understand how those emotions affect their life. By age 13, children become more in touch with these emotions and begin to apply them to their own lives. A study entitled \"The Construction of the Self\" found that many adolescents display happiness and self-confidence around friends, but hopelessness and anger around parents due to the fear of being a disappointment. Teenagers were also shown to feel intelligent and creative around teachers, and shy, uncomfortable and nervous around people they were not familiar with.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16166681",
"title": "Thulluvadho Ilamai",
"section": "Section::::Plot.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 219,
"text": "Happy that they are save, they start laughing and mocking each other. The principal tells the parents that they are not worried about what happened and are happy. Feeling ashamed, the parents leave the children alone. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "58689076",
"title": "Social emotional development",
"section": "Section::::Early childhood (birth to 3 years old).:Emotional experiences.:Emotional expression.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 768,
"text": "Beginning at birth, newborns have the capacity to signal generalized distress in response to unpleasant stimuli and bodily states, such as pain, hunger, body temperature, and stimulation. They may smile, seemingly involuntarily, when satiated, in their sleep, or in response to pleasant touch. Infants begin using a “social smile,” or a smile in response to a positive social interaction, at approximately 2 to 3 months of age, and laughter begins at 3 to 4 months. Expressions of happiness become more intentional with age, with young children interrupting their actions to smile or express happiness to nearby adults at 8–10 months of age, and with markedly different kinds of smiles (e.g., grin, muted smile, mouth open smile) developing at 10 to 12 months of age.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31379772",
"title": "Ellen MacArthur Cancer Trust",
"section": "Section::::Impact.\n",
"start_paragraph_id": 36,
"start_character": 0,
"end_paragraph_id": 36,
"end_character": 224,
"text": "BULLET::::- Happy - young people experience a positive change in perspective on their illness and life. Fun is important, the trips are life changing and an escape from daily life. 91% of parents say their child is happier.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10203857",
"title": "Contentment",
"section": "Section::::General.:Contentment and positive psychology.:Leisure (also Leisure satisfaction).\n",
"start_paragraph_id": 33,
"start_character": 0,
"end_paragraph_id": 33,
"end_character": 210,
"text": "This happy state of life is that generally experienced by the pre-school child and is gradually lost when duties and responsibilities of school life and subsequently the adult work-life enter into the picture.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
415cpk
|
Did the U.S. Government encourage people to move to the suburbs during the Cold War in order to avert catastrophic population losses from nuclear attacks?
|
[
{
"answer": "I've never found any evidence suggesting it did.\n\nKathleen Tobin has written [an article](_URL_0_) claiming that policymakers' fears of atomic attack was a significant factor in population dispersal. I find it to be an astonishing piece of rhetorical sleight-of-hand, using a few magazine articles discussing the *concept* of dispersion to prove that it was federal policy—despite the absence of a single federal law or regulation on the subject.\n\nAs for the role of Interstate highways, those were proposed long before the Cold War, and throughout the years of congressional debate, military strategists repeatedly testified that they didn't need any particular routes or geometric specifications, always saying that highways built to promote commerce would also serve their needs. To Pres. Eisenhower, the public-works and job-creation aspects of the system were about as important as defense aspects. I am not aware of any serious civil defense or military rationale that was part of Congressional debate. The words \"and Defense\" were added to the name of the \"National System of Interstate Highways\" in conference committee, almost as an afterthought, and played no role in congressional voting. See *Congressional Record* 102, Part 8, pp. 10991-10997. The definitive source on this history is Rose, Mark H. *Interstate: Express Highway Politics, 1939-1989.*",
"provenance": null
},
{
"answer": "There were definitely a lot of government _discussion_ about the value of dispersion, from the point of view of civil defense (ameliorating damage from a nuclear attack). An article that discusses them in some detail is Peter Galison's \"War Against the Center\" (2006). In the 1950s, for example, the Bureau of Commerce directed planners in metropolitan areas to move new industry outside of city centers. Project East River, in 1952, studied the problems of dispersion specifically as civil defense issues, and the ways in which you could encourage it to happen (e.g. by making it a consideration in federal loans, insurance, and contracts). There were also some cases of specific federal agencies (like the Atomic Energy Commission) having their headquarters being located outside of assumed target areas (in this case, the AEC was moved to Germantown, MD, rather than Washington, DC). Apparently there were some tax incentives put into place for moving industry out of prime metro areas in the early-to-mid 1950s. \n\nSo there was urging, there was planning, there were many pamphlets and studies. Is this why dispersion and suburbanization actually _happened_? That, I think, is a harder argument to make. There are lots of other reasons (economic and social) that can be more directly attributed to those movements. I am not sure (and Galison's article does not indicate) whether these more heavy-handed inducements (other than the aforementioned tax benefits) actually were put into place. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "2194173",
"title": "Home front during World War II",
"section": "Section::::Axis.:Japan.\n",
"start_paragraph_id": 129,
"start_character": 0,
"end_paragraph_id": 129,
"end_character": 647,
"text": "The government began making evacuation plans in late 1943, and started removing entire schools from industrial cities to the countryside, where they were safe from bombing and had better access to food supplies. In all 1.3 million children were moved—with their teachers but not their parents. When the American bombing began in earnest in late 1944, 10 million people fled the cities to the safety of the countryside, including two-thirds of the residents of the largest cities and 87% of the children. Left behind were the munitions workers and government officials. By April 1945, 87% of the younger children had been moved to the countryside.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4831178",
"title": "United States civil defense",
"section": "Section::::History.:Cold War.:Evacuation plans.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 399,
"text": "At the dawn of the nuclear age, evacuation was opposed by the federal government. The Federal Civil Defense Administration produced a short movie called \"Our Cities Must Fight.\" It argued that in the event of a nuclear war, people need to stay in cities to help repair the infrastructure and man the recovering industries. \"Nuclear radiation,\" it advised, \"would only stay in the air a day or two.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "53683",
"title": "Nuclear fallout",
"section": "Section::::Fallout protection.\n",
"start_paragraph_id": 55,
"start_character": 0,
"end_paragraph_id": 55,
"end_character": 276,
"text": "During the Cold War, the governments of the U.S., the USSR, Great Britain, and China attempted to educate their citizens about surviving a nuclear attack by providing procedures on minimizing short-term exposure to fallout. This effort commonly became known as Civil Defense.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "38754861",
"title": "Three Sisters Bridge",
"section": "Section::::Proposing the bridge.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 696,
"text": "During World War II, the population of the District of Columbia rose by about 30 percent to 861,000 people. The terrible overcrowding and traffic jams in the city convinced many that not only was a subway system needed, but that vastly enlarged and improved highways were required as well. Post-war projections showed D.C. losing population to the suburbs, and planning for nuclear war emphasized moving many federal agencies into the suburbs as a means of reducing the government's vulnerability to attack. These and other factors meant that new superhighways would be needed to bring workers into town to work, and to move them from agency to agency throughout the day quickly and efficiently.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2822315",
"title": "In the Presence of Mine Enemies",
"section": "Section::::Setting.:World politics and geography.:The fate of the United States and Canada.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 849,
"text": "Between the 1960s and 70s, Germany and the Axis powers have defeated the United States and Canada in the Third World War with the nuclear bombs they developed first. The key American cities of Washington, D.C. and Philadelphia were destroyed by the bombs and their environments are rendered uninhabitable for years to come. Other cities such as New York City, St. Louis, and Chicago are damaged by bombing raids. The capital of the US was moved to Omaha, Nebraska, where a pro-Nazi puppet government was set up, and the Reich maintains Wehrmacht occupation forces in New York City, Chicago, St. Louis, and Omaha itself. Upon conquering the US, the Einsatzkommandos and the American white supremacists systematically kill the country's Jewish and most of the Black populations with any remaining Black people being used for slave labor by the Reich.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2891861",
"title": "Tule Lake",
"section": "Section::::History.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 784,
"text": "During World War II, the United States federal government forced the evacuation of Japanese nationals and Japanese Americans, including citizens born in the United States, to numerous camps built in the interior of California and inland states. They were forced to sell their businesses and homes, and suffered enormous economic and psychological losses by being treated as potential enemies. The Tule Lake War Relocation Center, a Japanese American internment camp, is located to the east in neighboring Modoc County. Following World War II, the federal government awarded 86 farm sites on land reclaimed by the drainage of Tule Lake to returning veterans using a Land Lottery. A lottery was used because the number of applicants was greater than the number of homesteads available.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "40945496",
"title": "Eugene Mall",
"section": "Section::::History.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 465,
"text": "The Post–World War II economic expansion created a gradual exodus from city core areas in the United States, and federally funded urban renewal projects empowered communities to demolish historic downtown areas and build new, modern structures. With dramatic increases in automobile purchases accompanied by a post-WWII decline in public transportation, many communities accepted urban renewal financing to demolish buildings and install much-needed parking areas.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
ec8io3
|
how are roads on steep cliffs built?
|
[
{
"answer": "Dynamite. I remember going on a hike somewhere in Utah and there was a trail that was originally going to be turned into a road but they just left it unfinished and made it a trail instead. You could see where they drilled the holes and were going to blow the cliffside. It takes a lot of precision to keep it somewhat level and prevent the whole side of the cliff from just crumbling. To level the road, they'll just use all the dirt and small rock debris to make a flat surface, then pour asphalt over it",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "58561918",
"title": "Cox's Road and Early Deviations - Linden, Linden Precinct",
"section": "Section::::History.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 749,
"text": "The actual building of the road involved the definition of a trafficable route which was then cleared of vegetation (trees being cut-off below ground level but rarely \"grubbed out\"), boulders and rocky outcrops. The formation of the road itself was as minimal as the terrain allowed, with low side-cuttings and embankments as necessary. In very rocky terrain cuttings were made into the mountain itself, the natural rock providing the road surface or pavement. It is possible that some of the stepped rock platforms may have initially been partly filled or levelled with earthen ramps, although Karskens (1988) suggests that Cox mostly left the road pavement in an unformed, natural state due to the haste with which the road was being constructed.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "58563941",
"title": "Cox's Road and Early Deviations - Woodford, Old Bathurst Road Precinct",
"section": "Section::::History.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 749,
"text": "The actual building of the road involved the definition of a trafficable route which was then cleared of vegetation (trees being cut-off below ground level but rarely \"grubbed out\"), boulders and rocky outcrops. The formation of the road itself was as minimal as the terrain allowed, with low side-cuttings and embankments as necessary. In very rocky terrain cuttings were made into the mountain itself, the natural rock providing the road surface or pavement. It is possible that some of the stepped rock platforms may have initially been partly filled or levelled with earthen ramps, although Karskens (1988) suggests that Cox mostly left the road pavement in an unformed, natural state due to the haste with which the road was being constructed.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3358049",
"title": "Steep Point",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 489,
"text": "Access to Steep Point is by four-wheel drive vehicles only, as tracks to the point are through sand dunes. The North West Coastal Highway is the closest sealed road and is 200 kilometres east of the point. An entry permit is required to travel to the point, which can be purchased at the rangers house in Edel Land National Park, which is about 20 kilometres east of Steep Point. Camping areas and basic facilities are also available in the park and can be purchased at the rangers house.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "32001108",
"title": "The Amazing Race: China Rush 2",
"section": "Section::::Race summary.:Leg 4 (Fujian → Guangdong).\n",
"start_paragraph_id": 87,
"start_character": 0,
"end_paragraph_id": 87,
"end_character": 211,
"text": "In this Roadblock, one team member had to climb down a rocky cliff using a rope ladder, retrieve their clue from the tops of the trees at the bottom, and then climb back up the rope ladder to complete the task.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23421229",
"title": "Mountain Parkway Byway",
"section": "Section::::Mountain Parkway Backway.:Hanging Rock.\n",
"start_paragraph_id": 33,
"start_character": 0,
"end_paragraph_id": 33,
"end_character": 305,
"text": "Just east of Poling is a large rock formation referred to locally as \"Hanging Rock.\" The cliff extends out over and along Replete Road for several hundred feet. Formed of sandstone, the rock was deposited around 313 million years ago and undercut by water erosion from the adjacent Left Fork Holly River.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2288842",
"title": "Helsby",
"section": "Section::::Landmarks.:Helsby Hill.\n",
"start_paragraph_id": 38,
"start_character": 0,
"end_paragraph_id": 38,
"end_character": 770,
"text": "The craggy face of the hill provides many routes for rock climbers at a range of grades from easy climbs suitable for beginners (some of which do not require ropes), to challenging climbs up to a grade 6c. The cliff is also split into two lateral sections. The main face is easily accessible from the ground. At the top is a large grassy area, followed by an easily accessible 10-foot (or thereabouts) cliff to the summit, which is excellent for bouldering. Despite its often slimy appearance, the cliff's sandstone composition means it dries out quickly after rain, and, after several accidents, several large metal spikes were placed at the top of the main cliff for top-rope climbing that offer extra safety for climbers worried about the sandstone's crumbly nature.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "37473024",
"title": "Carderock Recreation Area",
"section": "Section::::Rock climbing.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 804,
"text": "The rock cliffs are made of Wissahikon Mica-schist and range from , with the majority of the climbs about . They pack over 100 established climbs within approximately of the cliff. The rock has form of friction slabs, overhangs, and cracks. Most of the routes are easy and moderate top-rope routes, with a few harder climbs as well as numerous eliminate routes and boulder problems. Traditional climbing is not recommended since protection is often difficult to place and the schist has a reputation for being friable and breakable if a piece of gear is subjected to a leader fall. The area known for its esoteric bouldering, often very different in character from other bouldering areas and relying heavily on delicate footwork between quartz crystal knobs and nubbins imbedded in polished schist wall.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
16xr8c
|
Why is it so hard to distinguish between something that is cold versus something that is wet?
|
[
{
"answer": "Gonna take a shot here.\n\nAn object feels cold because we're feeling heat transfer away from our skin. This is dependent on both the temperature difference between our skin and the object, and the heat insulation/transmission properties of both. The faster heat transfers, the colder a contact feels.\n\nWater has a very high specific heat and transfers heat quickly, so a small temperature difference still transfers heat quickly as opposed to touching something like titanium with a large difference, since titanium transfers very slowly. A wet object with a ten degree difference will feel colder than titanium with a twenty degree difference.\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "16702705",
"title": "Dispersive adhesion",
"section": "Section::::Factors affecting adhesion strength.:Wetting.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 604,
"text": "Wetting is a measure of the thermodynamic compatibility of two surfaces. If the surfaces are well-matched, the surfaces will \"desire\" to interact with each other, minimizing the surface energy of both phases, and the surfaces will come into close contact. Because the intermolecular attractions strongly correlate with distance, the closer the interacting molecules are together, the stronger the attraction. Thus, two materials that wet well and have a large amount of surface area in contact will have stronger intermolecular attractions and a larger adhesive strength due to the dispersive mechanism.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1739001",
"title": "Wetting",
"section": "Section::::High-energy vs. low-energy surfaces.:Wetting of low-energy surfaces.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 240,
"text": "Differences in wettability between surfaces that are similar in structure are due to differences in the packing of the atoms. For instance, if a surface has branched chains, it will have poorer packing than a surface with straight chains. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23402053",
"title": "Wetware (brain)",
"section": "Section::::Usage.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 335,
"text": "The prefix \"wet\" is a reference to the water found in living creatures. Wetware is used to describe the elements equivalent to hardware and software found in a person, especially the central nervous system (CNS) and the human mind. The term wetware finds use both in works of fiction, in scholarly publications and in popularizations.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33946289",
"title": "Clothing insulation",
"section": "Section::::Mechanisms of insulation.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 284,
"text": "Another important factor is humidity. Water is a better conductor of heat than air, thus if clothes are damp — because of sweat, rain, or immersion — water replaces some or all of the air between the fibres of the clothing, causing thermal loss through conduction and/or evaporation.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7455643",
"title": "Thermal comfort",
"section": "Section::::Influencing factors.:Relative humidity.:Skin wettedness.\n",
"start_paragraph_id": 38,
"start_character": 0,
"end_paragraph_id": 38,
"end_character": 581,
"text": "The wetness of skin in different areas also affects perceived thermal comfort. Humidity can increase wetness on different areas of the body, leading to a perception of discomfort. This is usually localized in different parts of the body, and local thermal comfort limits for skin wettedness differ by locations of the body. The extremities are much more sensitive to thermal discomfort from wetness than the trunk of the body. Although local thermal discomfort can be caused from wetness, the thermal comfort of the whole body will not be affected by the wetness of certain parts.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7455643",
"title": "Thermal comfort",
"section": "Section::::Influencing factors.:Relative humidity.:Interplay of temperature and humidity.\n",
"start_paragraph_id": 45,
"start_character": 0,
"end_paragraph_id": 45,
"end_character": 246,
"text": "There has been controversy over why damp cold air feels colder than dry cold air. Some believe it is because when the humidity is high, our skin and clothing become moist and are better conductors of heat, so there is more cooling by conduction.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1528350",
"title": "Cold-weather warfare",
"section": "Section::::Operational factors – land.:Weather conditions.:Temperature.\n",
"start_paragraph_id": 55,
"start_character": 0,
"end_paragraph_id": 55,
"end_character": 331,
"text": "BULLET::::- \"Wet cold\" – From . Wet cold conditions occur when wet snow and rain often accompany wet cold conditions. This type of environment is \"more dangerous to troops and equipment\" than the colder, dry cold environments because the ground becomes slushy and muddy and clothing and equipment becomes perpetually wet and damp.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2u06kt
|
If I lived in the USSR during the purges, were there any choices or steps that I could take to guarantee my survival, and to what extent could this not require moral compromises like denouncing innocent neighbors?
|
[
{
"answer": "There wasn't anything that would guarantee survival, but there were things you could do to increase your chances. Even in the worst years, 1937-1939, the number of people who were \"repressed\" (contemporary term for those that were arrested and either deported, imprisoned, or executed) was between 1.5 and 2 million -- about 1% of the total population of the country. So your chances to survive we're pretty good -- unless you were one of the priesthood, or a relative of former czarist civil servants or officers, or rich, or Jewish, or a supporter of the wrong political movement du jour, or, funnily enough, being a communist party member -- in 1936-39 arrest rate among those was 50%!\n\nGenerally, not being one of the above, and not getting yourself noticed (e.g. making political statements in front of others, having a business or trading \"under the table\", being in someone's way (always a good chance of being denounced)), you would have very good chances to survive, above 99.9% I would say.\n\nSources: born there and know history; Conquest's \"The Great Terror\" (1990), Rogovin \"The Party of the Executed\" (in Russian). ",
"provenance": null
},
{
"answer": "If you were brave, in decent health, and had the resources, you could have done like the Lykov family and [fled into the Siberian wilderness](_URL_0_).\n\nOf course, this was not easy, by the Lykov account, and most of the family did not survive to old age. On the other hand, they lasted altogether for several decades, which might have been more than if they stayed.",
"provenance": null
},
{
"answer": "I hadn't intended to write so much on this so apologies if it comes across as somewhat disjointed. It started out as a reply to /u/Impstar2 but ended up rambling well beyond that.\n\n**Repression Deaths**\n\nJust a note on the figures. There's obviously been a huge amount of debate and controversy on these over the past few decades but the trend, as I can see it, has tended to favour lower numbers to those popularised by Conquest. \n\nThis debate became particularly acute when Getty *et al* published, in English, the official NKVD archival figures of 682,000 executions for 1937-38. Now, nobody accepts these numbers literally but they serve as a lower boundary to the estimates and are more useful than the previous estimates and guesswork that had informed Conquest's work.\n\nWith this in mind, today's estimates for 1937-38 (in English language literature) typically tend to range from 1-1.2m deaths. But, as I say, this is still a pretty contented area. Going beyond deaths and into the broader category of 'victims of repression' is even more difficult.\n\nThe important point is that while these numbers are high, and are higher when you add in Gulag figures, they do not represent a cull of the Soviet population in general. Soviet citizens would know about the purges but there was little to fear unless they were in one of the below victim groups.\n\n**Victims**\n\n/u/Impstar2 is right to point out the degree to which the elites suffered. The ranks of Communist functionaries was particularly gutted. There is a great story of two young graduates and junior party members (Ponomarenko and Chuyanov) being called up to party offices in Moscow only to be immediately packed off to the regions as the new heads of the Communist Party in Belarus and Stalingrad, respectively. In many ways this was a clean sweep of the pre-Purge party leadership.\n\nBut the Purges weren't an entirely elite affair, as was once thought. 
The majority of deaths were products of the 'mass operations', Order 00447 being justly infamous. In addition to those with suspect class backgrounds, at high-risk were those labelled 'socially harmful elements' (eg homeless, beggars, prostitutes, 'hooligans'), the intelligentsia and petty criminals. In addition, the 'national operations' targeted a range of suspect national minorities.\n\nAnd, of course, there was pure bad luck. Getty provides the example of Turkmenistan in 1938 where \"a fire at a factory became an occasion to meet 'quotas' for sabotage by arresting everybody who happened to be there and forcing them to name 'accomplices' (whose number soon exceeded one hundred persons)\". He also notes that \"it was always possible to round up people having the bad luck to be at the marketplace, where a beard made one suspect of the 'crime' of being a mullah.\"\n\nBut, while there is still debate on how far down the terror reached into society, for most people who never appeared on the state's radar there was, other than bad luck, little to fear. Workers in high-priority industries or with in-demand skills were relatively unscathed by the repression. (The draconian labour laws to tighten labour discipline didn't begin to appear until late 1938.) Workers were also protected to a degree by their enterprises, who were loath to lose skilled labour.\n\nEven those that were at risk could sometimes survive by moving (so called self-dekulakisation) and getting work elsewhere. (Occasionally the neighbouring village was far enough, the Soviet state apparatus being uncoordinated enough that this sometimes sufficed.) This was after all a society continually in flux.\n\n**Sources**\n\nMichael Ellman's *Soviet Repression Statistics* is a nice summary of the 1990s literature on the topic, particularly the repression deaths. 
\n\nThe key post-Soviet paper on repression is Getty *et al* *Victims of the Soviet Penal System in the Pre-War Years*, which gave rise to a range of related polemics and articles. That also serves as the reference for 'bad luck'.\n\nThe story on rapid promotion during the purges came from Shelia Fitzpatrick's *Education and Social Mobility in the USSR*.\n\nFor general discussion around the purges and the nature of its victims, see Paul Hagenloh's *Stalin's Police* and Gerland and Werth's chapter on mass violence in *Beyond Totalitarianism*.\n\nThe note on 'self-dekulakisation' came from Mark Edele's *Stalinist Society.* Also useful, in understanding the reactions of ordinary people, is Sarah Davies' *Popular Opinion in Stalin's Russia*.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "22069398",
"title": "Eastern Bloc politics",
"section": "Section::::Political repression.:Civil society groups.\n",
"start_paragraph_id": 72,
"start_character": 0,
"end_paragraph_id": 72,
"end_character": 676,
"text": "In addition, sizable resources were employed in the purge, such as in Hungary, where almost one million adults were employed to record, control, indoctrinate, spy on and sometimes kill targets of the purge. Unlike the repressions under Nazi occupation, no ongoing war existed that could bring an end to the tribulations of the Eastern Bloc, and morale severely suffered as a consequence. Because the party later had to admit the mistakes of much that occurred during the purges after Stalin's death, the purges also destroyed the moral base upon which the party operated. In doing so, the party abrogated its prior Leninist claim to moral infallibility for the working class.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22069398",
"title": "Eastern Bloc politics",
"section": "Section::::Political repression.:Civil society groups.\n",
"start_paragraph_id": 71,
"start_character": 0,
"end_paragraph_id": 71,
"end_character": 677,
"text": "The purges often coincided with the introduction of the first \"Five Year Plans\" in the non-Soviet members of the Eastern Bloc. The objectives of those plans were considered beyond political rapproche even where they were absurd, such that workers that did not fulfill targets were targeted and blamed for economic woes, while at the same time, the ultimate responsibility for the economic shortcomings would be placed on prominent victims of the political purge. In Romania, Gheorghiu-Dej admitted that 80,000 peasants had been accused of siding with the class enemy because they resisted collectivization, while purged party elite Ana Pauker was blamed for this \"distortion\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1738221",
"title": "History of communism",
"section": "Section::::Early Marxist–Leninist states (1917–1944).:Marxism–Leninism.:Great Purge.\n",
"start_paragraph_id": 49,
"start_character": 0,
"end_paragraph_id": 49,
"end_character": 574,
"text": "The Great Purge mainly operated from December 1936 to November 1938, although the features of arrest and summary trial followed by execution were well entrenched in the Soviet system since the days of Lenin as Stalin systematically destroyed the older generation of pre-1918 leaders, usually on the grounds they were enemy spies or simply because they were enemies of the people. In the Red Army, a majority of generals were executed and hundreds of thousands of other enemies of the people were sent to the gulag, where terrible conditions in Siberia led quickly to death.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2692270",
"title": "First five-year plan",
"section": "Section::::Failures of the first five-year plan.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 698,
"text": "Stalin’s vision and plan for Collectivization led to the death of millions of people due to famines and the imprisonment of others into labor camps. While some dangerous prisoners were released and forced into labor camps others were now set free in a failing economy with no work and no fair chance of survival and making ends meet. People were forced to live in communal apartments with many other families who also faced the horrors of being hungry, without work and the danger of being robbed for the possessions that they did manage to keep. With such living quarters people shared tight spaces with strangers accompanied by many other horrors such as theft, violence and stripped of privacy.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39902968",
"title": "Purges of the Communist Party of the Soviet Union",
"section": "",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 506,
"text": "Following Stalin's death in 1953 purges as systematic campaigns of expulsion from the party ended; thereafter, the center's political control was exerted instead mainly through loss of party membership and its attendant nomenklatura privileges, which effectively downgraded one's opportunities in societysee Trade unions in the Soviet Union#Role in the Soviet class system, chekism, and party rule. Recalcitrant cases could be reduced to nonpersons via involuntary commitment to a psychiatric institution.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4377635",
"title": "OZET",
"section": "Section::::Demise.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 344,
"text": "The first five year plans, intensive industrialization and militarization programs in the USSR required educated human resources and many Jews were able to find employment. On the other hand, collectivization in the USSR resulted in the failure of Soviet agriculture and many starving peasants of all ethnic backgrounds found escape in cities.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "71490",
"title": "Vyacheslav Molotov",
"section": "Section::::Biography.:Premiership (1930–1941).\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 483,
"text": "Set against this, the purges of the Red Army leadership, in which Molotov participated, weakened the Soviet Union's defence capacity and contributed to the military disasters of 1941 and 1942, which were mostly caused by unreadiness for war. The purges also led to the dismantling of privatised agriculture and its replacement by collectivised agriculture. This left a legacy of chronic agricultural inefficiencies and under-production which the Soviet regime never fully rectified.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
8skrco
|
how does 50% sodium salt exist?
|
[
{
"answer": "Well since salt is 50% Sodium and 50% Chloride technically speaking all salt is 50% sodium.",
"provenance": null
},
{
"answer": "They displace sodium chloride with potassium chloride. It doesn’t taste exactly the same, which is why light salt tastes a bit strange.\n\nSource: _URL_0_",
"provenance": null
},
{
"answer": "\"Salt\" is the name of a wide variety of compounds. Sodium Chloride is table salt, but other like potassium iodide are also salt.\n\nLow sodium salt is just a salt that uses no or less sodium. ",
"provenance": null
},
{
"answer": "We call NaCl \"salt\" like we call ethanol \"alcohol\"; there are many kinds of both salt and alcohol, but most people are only familiar with a few of them. \n\n50% sodium salt is just regular NaCl mixed with another salt, usually KCl. It's a bit ironic that people without sodium-sensitive medical conditions turn to it for health reasons because KCl can actually be harder to get rid of, especially for diabetics, and can be *more* detrimental to health than NaCl. This is a very common theme; chemistry illiteracy is so rampant that people often run from something relatively harmless to embrace something else that can be worse.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "1605200",
"title": "Salt",
"section": "Section::::Edible salt.:Sodium consumption and health.\n",
"start_paragraph_id": 35,
"start_character": 0,
"end_paragraph_id": 35,
"end_character": 835,
"text": "Table salt is made up of just under 40% sodium by weight, so a 6g serving (1teaspoon) contains about 2,300mg of sodium. Sodium serves a vital purpose in the human body: via its role as an electrolyte, it helps nerves and muscles to function correctly, and it is one factor involved in the osmotic regulation of water content in body organs (fluid balance). Most of the sodium in the Western diet comes from salt. The habitual salt intake in many Western countries is about 10 g per day, and it is higher than that in many countries in Eastern Europe and Asia. The high level of sodium in many processed foods has a major impact on the total amount consumed. In the United States, 75% of the sodium eaten comes from processed and restaurant foods, 11% from cooking and table use and the rest from what is found naturally in foodstuffs.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "666",
"title": "Alkali metal",
"section": "Section::::Biological role and precautions.:Ions.\n",
"start_paragraph_id": 168,
"start_character": 0,
"end_paragraph_id": 168,
"end_character": 753,
"text": "Sodium and potassium occur in all known biological systems, generally functioning as electrolytes inside and outside cells. Sodium is an essential nutrient that regulates blood volume, blood pressure, osmotic equilibrium and pH; the minimum physiological requirement for sodium is 500 milligrams per day. Sodium chloride (also known as common salt) is the principal source of sodium in the diet, and is used as seasoning and preservative, such as for pickling and jerky; most of it comes from processed foods. The Dietary Reference Intake for sodium is 1.5 grams per day, but most people in the United States consume more than 2.3 grams per day, the minimum amount that promotes hypertension; this in turn causes 7.6 million premature deaths worldwide.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22615598",
"title": "Sodium in biology",
"section": "Section::::Sodium distribution in species.:Humans.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 717,
"text": "The minimum physiological requirement for sodium is between 115 and 500 milligrams per day depending on sweating due to physical activity, and whether the person is adapted to the climate. Sodium chloride is the principal source of sodium in the diet, and is used as seasoning and preservative, such as for pickling and jerky; most of it comes from processed foods. The Adequate Intake for sodium is 1.2 to 1.5 grams per day, but on average people in the United States consume 3.4 grams per day, the minimum amount that promotes hypertension. (Note that salt contains about 39.3% sodium by massthe rest being chlorine and other trace chemicals; thus the UL of 2.3g sodium would be about 5.9g of saltabout 1 teaspoon)\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34900600",
"title": "Health effects of salt",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 202,
"text": "As an essential nutrient, sodium is involved in numerous cellular and organ functions. Salt intake that is too low, below 3 g per day, may also increase risk for cardiovascular disease and early death.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "666",
"title": "Alkali metal",
"section": "Section::::Occurrence.:On Earth.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 1086,
"text": "Sodium and potassium are very abundant in earth, both being among the ten most common elements in Earth's crust; sodium makes up approximately 2.6% of the Earth's crust measured by weight, making it the sixth most abundant element overall and the most abundant alkali metal. Potassium makes up approximately 1.5% of the Earth's crust and is the seventh most abundant element. Sodium is found in many different minerals, of which the most common is ordinary salt (sodium chloride), which occurs in vast quantities dissolved in seawater. Other solid deposits include halite, amphibole, cryolite, nitratine, and zeolite. Many of these solid deposits occur as a result of ancient seas evaporating, which still occurs now in places such as Utah's Great Salt Lake and the Dead Sea. Despite their near-equal abundance in Earth's crust, sodium is far more common than potassium in the ocean, both because potassium's larger size makes its salts less soluble, and because potassium is bound by silicates in soil and what potassium leaches is absorbed far more readily by plant life than sodium.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11787655",
"title": "Alkali soil",
"section": "Section::::Causes.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 986,
"text": "BULLET::::3. Many sodium salts are used in industrial and domestic applications such as Sodium carbonate, Sodium bicarbonate (baking soda), Sodium sulphate, Sodium hydroxide (caustic soda), Sodium hypochlorite (bleaching powder), etc. in huge quantities. These salts are mainly produced from Sodium chloride (common salt). All the sodium in these salts enter into the river / ground water during their production process or consumption enhancing water sodicity. The total global consumption of sodium chloride is 270 million tons in the year 2010. This is nearly equal to the salt load in the mighty Amazon River. Man made sodium salts contribution is nearly 7% of total salt load of all the rivers. Sodium salt load problem aggravates in the downstream of intensively cultivated river basins located in China, India, Egypt, Pakistan, west Asia, Australia, western US, etc. due to accumulation of salts in the remaining water after meeting various transpiration and evaporation losses.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26826",
"title": "Sodium",
"section": "Section::::Biological role.\n",
"start_paragraph_id": 49,
"start_character": 0,
"end_paragraph_id": 49,
"end_character": 1036,
"text": "In humans, sodium is an essential mineral that regulates blood volume, blood pressure, osmotic equilibrium and pH; the minimum physiological requirement for sodium is 500 milligrams per day. Sodium chloride is the principal source of sodium in the diet, and is used as seasoning and preservative in such commodities as pickled preserves and jerky; for Americans, most sodium chloride comes from processed foods. Other sources of sodium are its natural occurrence in food and such food additives as monosodium glutamate (MSG), sodium nitrite, sodium saccharin, baking soda (sodium bicarbonate), and sodium benzoate. The US Institute of Medicine set its Tolerable Upper Intake Level for sodium at 2.3 grams per day, but the average person in the United States consumes 3.4 grams per day. Studies have found that lowering sodium intake by 2 g per day tends to lower systolic blood pressure by about two to four mm Hg. It has been estimated that such a decrease in sodium intake would lead to between 9 and 17% fewer cases of hypertension.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1gnc53
|
If only one photon goes through a double-slit, is there an interference pattern on the other side?
|
[
{
"answer": "Yes. Have a look [here](_URL_0_) for more information. \n\nIt's not so much that the photon has multiple positions as that its position is not known accurately. This uncertainty in the position allows the photon's wavefunction to interfere with itself, because its probability distribution is spread over both slits.\n\nIf you observe the photon going through one slit, you lose your interference pattern because there's no longer any probability that the photon went through the other slit.",
"provenance": null
},
{
"answer": "Whether the photon has got multiple locations depends on which interpretation of quantum mechanics you follow. Which means that the question isn't answered by quantum mechanics, as \"having one position\" isn't covered by the equations (as /u/YouGotTheTouch said, it is a \"political affiliation of a banana\" kind of question if you only look at the equations). Why it changes also depends on the interpretation.\n\nIn the Copenhagen interpretation, the photon behaves like a wave and passes through both slits simultaneously, interacts with itself, and then behaves like a particle when it is observed. The reason that i starts behaving like a particles is that its wave function collapses.\n\nIn the many-world interpretation, different versions of the photon passes through different slits in different worlds. These worlds can interact, to a certain degree, forming the interference pattern. However, once the photon is detected, the world where is has been detected in one place cannot interact with the worlds where it was detected at another, so while one version of you sees one outcome and another version sees another, the two version of you cannot communicate. This decoupling of the worlds is the reason why it stop \"behaving like a wave\" (it never really did in the many world interpreation).\n\nIn the de Broglie-Bohm interpretation, the photon is always a particle, so it travels through one slit. However, it is affected by a wave function, which can travel through both slits. The wave function then interacts with itself, causing the particle to show an interference pattern. Here, it was always a particle with a definite position, and its speed is deterministically determined by the position of every particle in the universe.\n\nedit: typo",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "8667",
"title": "Double-slit experiment",
"section": "Section::::Variations of the experiment.:Other variations.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 344,
"text": "It was shown experimentally in 1972 that in a double-slit system where only one slit was open at any time, interference was nonetheless observed provided the path difference was such that the detected photon could have come from either slit. The experimental conditions were such that the photon density in the system was much less than unity.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3474700",
"title": "Delayed-choice quantum eraser",
"section": "Section::::The experiment of Kim \"et al.\" (1999).:Significance.\n",
"start_paragraph_id": 37,
"start_character": 0,
"end_paragraph_id": 37,
"end_character": 211,
"text": "This result is similar to that of the double-slit experiment, since interference is observed when it is not known from which slit the photon originates, while no interference is observed when the path is known.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3474980",
"title": "Wheeler's delayed-choice experiment",
"section": "Section::::Double-slit version.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 1692,
"text": "A second kind of experiment resembles the ordinary double-slit experiment. The schematic diagram of this experiment shows that a lens on the far side of the double slits makes the path from each slit diverge slightly from the other after they cross each other fairly near to that lens. The result is that at the two wavefunctions for each photon will be in superposition within a fairly short distance from the double slits, and if a detection screen is provided within the region wherein the wavefunctions are in superposition then interference patterns will be seen. There is no way by which any given photon could have been determined to have arrived from one or the other of the double slits. However, if the detection screen is removed the wavefunctions on each path will superimpose on regions of lower and lower amplitudes, and their combined probability values will be much less than the unreinforced probability values at the center of each path. When telescopes are aimed to intercept the center of the two paths, there will be equal probabilities of nearly 50% that a photon will show up in one of them. When a photon is detected by telescope 1, researchers may associate that photon with the wavefunction that emerged from the lower slit. When one is detected in telescope 2, researchers may associate that photon with the wavefunction that emerged from the upper slit. The explanation that supports this interpretation of experimental results is that a photon has emerged from one of the slits, and that is the end of the matter. A photon must have started at the laser, passed through one of the slits, and arrived by a single straight-line path at the corresponding telescope.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3474980",
"title": "Wheeler's delayed-choice experiment",
"section": "Section::::Experimental details.:Double-slits in lab and cosmos.\n",
"start_paragraph_id": 41,
"start_character": 0,
"end_paragraph_id": 41,
"end_character": 981,
"text": "Wheeler's version of the double-slit experiment is arranged so that the same photon that emerges from two slits can be detected in two ways. The first way lets the two paths come together, lets the two copies of the wavefunction overlap, and shows interference. The second way moves farther away from the photon source to a position where the distance between the two copies of the wavefunction is too great to show interference effects. The technical problem in the laboratory is how to insert a detector screen at a point appropriate to observe interference effects or to remove that screen to reveal the photon detectors that can be restricted to receiving photons from the narrow regions of space where the slits are found. One way to accomplish that task would be to use the recently developed electrically switchable mirrors and simply change directions of the two paths from the slits by switching a mirror on or off. As of early 2014 no such experiment has been announced.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "54717",
"title": "De Broglie–Bohm theory",
"section": "Section::::Overview.:Double-slit experiment.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 608,
"text": "The double-slit experiment is an illustration of wave-particle duality. In it, a beam of particles (such as electrons) travels through a barrier that has two slits. If one puts a detector screen on the side beyond the barrier, the pattern of detected particles shows interference fringes characteristic of waves arriving at the screen from two sources (the two slits); however, the interference pattern is made up of individual dots corresponding to particles that had arrived on the screen. The system seems to exhibit the behaviour of both waves (interference patterns) and particles (dots on the screen).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3474980",
"title": "Wheeler's delayed-choice experiment",
"section": "Section::::Introduction.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 853,
"text": "Given the interpretation of quantum physics that says a photon is either in its guise as a wave or in its guise as a particle, the question arises: When does the photon decide whether it is going to travel as a wave or as a particle? Suppose that a traditional double-slit experiment is prepared so that either of the slits can be blocked. If both slits are open and a series of photons are emitted by the laser then an interference pattern will quickly emerge on the detection screen. The interference pattern can only be explained as a consequence of wave phenomena, so experimenters can conclude that each photon \"decides\" to travel as a wave as soon as it is emitted. If only one slit is available then there will be no interference pattern, so experimenters may conclude that each photon \"decides\" to travel as a particle as soon as it is emitted.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8603",
"title": "Diffraction",
"section": "Section::::Coherence.\n",
"start_paragraph_id": 99,
"start_character": 0,
"end_paragraph_id": 99,
"end_character": 488,
"text": "If waves are emitted from an extended source, this can lead to incoherence in the transversal direction. When looking at a cross section of a beam of light, the length over which the phase is correlated is called the transverse coherence length. In the case of Young's double slit experiment, this would mean that if the transverse coherence length is smaller than the spacing between the two slits, the resulting pattern on a screen would look like two single slit diffraction patterns.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
4d1w8x
|
Why can you rename, or change the path of, an open file in OS X but not Windows?
|
[
{
"answer": "The Windows filesystem identifies files by their paths (including the file names)—if you change a file’s path, applications and the operating system will perceive it as a new file with no connection to the original.\n\nThe OS X filesystem identifies files by an independent file ID, which remains fixed if the file is moved or renamed.",
"provenance": null
},
{
"answer": "Another aspect of the problem is that Windows has something called a *share mode* on open files—basically, an application can open a file in *exclusive* mode meaning that no other program can do anything with the file. It is not possible to circumvent the share mode. This is extensively used in Windows and part of the reason why you have to reboot to apply updates.\n\nUNIX-like systems (like Linux) only have *advisory* file locking which can be ignored by other processes if they decide to. Once a lock is violated, the process is notified of that circumstance and can proceed to handle the case. A rogue process cannot lock up critical files with no way out.",
"provenance": null
},
{
"answer": "I believe the explanation for this goes back from DOS (which was partially based on Unix and CPM). Open files are stored via FCB (file control blocks) which is an older system but was changed to file handles. These handles are nothing but integers that uniquely identifies the current file along with its complete path. \n\nIf a certain program or task if holding up a file handle for write mode, it locks any changes to the file from other users/tasks. This includes its current file system path.",
"provenance": null
},
{
"answer": "In Windows, the file is like your full name, and in unix like os it's like your social security number.\n\nIf you change your name, nobody knows you're you anymore. But if you change your name, you can still be identified by your social security number.\n\nIt's like a photograph of you wearing specific clothes versus a DNA imprint.\n\n",
"provenance": null
},
{
"answer": "The question is incorrect.\n\nWhile all these other answers do point out valid differences between Windows 95 and Linux, the [thing is that you actually can.](_URL_0_) It just depends on the lock level - if the lock level is too high (likely because the application cares about the path not changing) the file can't be moved.\n\nThe reason is simple: regardless of how the file system represents a file, both Windows and Unices (Mac, Linux, BSD, etc.) represent a file as a handle once you open it. The filename is only used to create that handle - it can change afterwards, it no longer matters.\n\nAs for NTFS, the on-disk representation has similarities to Linux. The argument about inodes only applies to FAT - i.e. Windows 9X.",
"provenance": null
},
{
"answer": "I'd just like to mention that, for some reason, you can change the filename of a video while it's playing in PotPlayer, and even move it to another folder, without having to stop the video. ..no idea why it works though.",
"provenance": null
},
{
"answer": "The big question I want answered... how come on either operating systems you have: \n\n* New\n* Save\n* Save as...\n* Close\n\nbut never a \"Rename\" option. It seems like such an obvious option. I suspect 75% of the time, when I use Save As I have to go and delete the old file, because really I just wanted to rename. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "445959",
"title": "File Explorer",
"section": "Section::::History.:Windows Vista and Windows Server 2008.:Other changes.\n",
"start_paragraph_id": 54,
"start_character": 0,
"end_paragraph_id": 54,
"end_character": 570,
"text": "When moving or copying files from one folder to another, if two files have the same name, an option is now available to rename the file; in previous versions of Windows, the user was prompted to choose either a replacement or cancel moving the file. Also, when renaming a file, Explorer only highlights the filename without selecting the extension. Renaming multiple files is quicker as pressing Tab automatically renames the existing file or folder and opens the file name text field for the next file for renaming. Shift+Tab allow renaming in the same manner upwards.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "803458",
"title": "Filename mangling",
"section": "Section::::FAT Derivative Filesystem.:Legacy support under VFAT.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 825,
"text": "Normally, when using compatible Windows programs which use standard Windows methods of reading the disk, the I/O subsystem returns the long filename to the program — however, if an old DOS application or an old Windows application tries to address the file, it will use the older, 8.3-only APIs, or work at a lower level and perform its own disk access, which results in the return of an 8.3 filename. In this case, the filenames become mangled by taking the first six non-space characters in the filename and adding a tilde (~) and then a number to ensure the uniqueness of the 8.3 filename on the disk. This mangling scheme can turn (for example) codice_4 into codice_5. This technique persists today when people use DOSBox to play classic DOS games or use Windows 3.1 in conjunction to play Win16 games on 64-bit Windows.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "351542",
"title": "Filename",
"section": "Section::::Reserved characters and words.\n",
"start_paragraph_id": 71,
"start_character": 0,
"end_paragraph_id": 71,
"end_character": 555,
"text": "In Windows utilities, the space and the period are not allowed as the final character of a filename. The period is allowed as the first character, but some Windows applications, such as Windows Explorer, forbid creating or renaming such files (despite this convention being used in Unix-like systems to describe hidden files and directories). Workarounds include appending a dot when renaming the file (that is then automatically removed afterwards), using alternative file managers, or saving a file with the desired filename from within an application.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7389152",
"title": "Template (word processing)",
"section": "Section::::Specific commands and file formats.\n",
"start_paragraph_id": 36,
"start_character": 0,
"end_paragraph_id": 36,
"end_character": 247,
"text": "Template files may restrict users from saving changes with the original file name, such as with the case of Microsoft Office (.dot) filename extensions. In those cases, users are prompted to save the file with a new name as if it were a new file.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18933600",
"title": "File format",
"section": "Section::::Identifying file type.:Filename extension.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 471,
"text": "One artifact of this approach is that the system can easily be tricked into treating a file as a different format simply by renaming it—an HTML file can, for instance, be easily treated as plain text by renaming it from to . Although this strategy was useful to expert users who could easily understand and manipulate this information, it was often confusing to less technical users, who could accidentally make a file unusable (or \"lose\" it) by renaming it incorrectly.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3738839",
"title": "Features new to Windows Vista",
"section": "Section::::Shell & User interface.:Windows Explorer.:File operations.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 356,
"text": "When renaming a file, even when extensions are being displayed, Explorer highlights only the filename without selecting the extension. Renaming multiple files is quicker as pressing Tab automatically renames the existing file or folder and opens the file name text field for the next file for renaming. Shift+Tab allow renaming in the same manner upwards.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "75743",
"title": "ICL VME",
"section": "Section::::Architecture.:SCL.\n",
"start_paragraph_id": 39,
"start_character": 0,
"end_paragraph_id": 39,
"end_character": 570,
"text": "Note that \".\" is used to separate the parts of a hierarchic file name. A leading asterisk denotes a local name for a library, bound using the ASSIGN_LIBRARY command. The number in parentheses after a file name is a version number. The operating system associates a version number with every file, and requests for a file get the latest version unless specified otherwise. Creating a new file will by default create the next version and leave the previous version intact; this program however is deliberately choosing to create version 101, to identify a public release.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2ehyhn
|
Is it just complete coincidence that the outline of the moon fits nicely inside that of the sun during an eclipse? And if so, what an amazing coincidence it is...
|
[
{
"answer": "It's only a co-incidence for here and now. A billion years ago the moon was a lot closer (tides would have been a bitch). In a billion years it will be a lot farther away.",
"provenance": null
},
{
"answer": "The fact that the Moon and Sun are very similar angular sizes in our sky is indeed a complete coincidence. The Moon has retreated somewhat from the Earth over time, and earlier on it would have fully covered the Sun, including most of the corona.",
"provenance": null
},
{
"answer": "Exactly like you said it's an amazing coincidence. Moon was much closer at the beginning and it will be further away in the future. So it will be seen smaller than the sun in the sky. It's getting away about 3.78 cm per year. \n\nSo not only it's a coincidence that they are about the same size, it's a coincidence that we live in a time that they are. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "66109",
"title": "Eclipse cycle",
"section": "Section::::Periodicity.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 977,
"text": "Another thing to consider is that the motion of the Moon is not a perfect circle. Its orbit is distinctly elliptic, so the lunar distance from Earth varies throughout the lunar cycle. This varying distance changes the apparent diameter of the Moon, and therefore influences the chances, duration, and type (partial, annular, total, mixed) of an eclipse. This orbital period is called the anomalistic month, and together with the synodic month causes the so-called \"full moon cycle\" of about 14 lunations in the timings and appearances of full (and new) Moons. The Moon moves faster when it is closer to the Earth (near perigee) and slower when it is near apogee (furthest distance), thus periodically changing the timing of syzygies by up to ±14 hours (relative to their mean timing), and changing the apparent lunar angular diameter by about ±6%. An eclipse cycle must comprise close to an integer number of anomalistic months in order to perform well in predicting eclipses.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5024887",
"title": "Solar eclipse",
"section": "Section::::Predictions.:Geometry.\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 392,
"text": "The Moon's orbit around the Earth is inclined at an angle of just over 5 degrees to the plane of the Earth's orbit around the Sun (the ecliptic). Because of this, at the time of a new moon, the Moon will usually pass to the north or south of the Sun. A solar eclipse can occur only when new moon occurs close to one of the points (known as nodes) where the Moon's orbit crosses the ecliptic.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5024887",
"title": "Solar eclipse",
"section": "Section::::Types.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 259,
"text": "BULLET::::- An annular eclipse occurs when the Sun and Moon are exactly in line with the Earth, but the apparent size of the Moon is smaller than that of the Sun. Hence the Sun appears as a very bright ring, or annulus, surrounding the dark disk of the Moon.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9264",
"title": "Ecliptic",
"section": "Section::::Eclipses.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 522,
"text": "Because the orbit of the Moon is inclined only about 5.145° to the ecliptic and the Sun is always very near the ecliptic, eclipses always occur on or near it. Because of the inclination of the Moon's orbit, eclipses do not occur at every conjunction and opposition of the Sun and Moon, but only when the Moon is near an ascending or descending node at the same time it is at conjunction (new) or opposition (full). The ecliptic is so named because the ancients noted that eclipses only occur when the Moon is crossing it.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "804218",
"title": "Astronomical clock",
"section": "Section::::Generic description.:Dragon hand: eclipse prediction and lunar nodes.\n",
"start_paragraph_id": 135,
"start_character": 0,
"end_paragraph_id": 135,
"end_character": 930,
"text": "The moon's orbit is not in the same plane as the Earth's orbit around the sun, but crosses it in two places. The moon crosses the ecliptic plane twice a month, once when it goes up above the plane, and again 15 or so days later when it goes back down below the ecliptic. These two locations are the ascending and descending lunar nodes. Solar and lunar eclipses will occur only when the moon is positioned near one of these nodes, because at other times the moon is either too high or too low for an eclipse to be noticed from earth. Some astronomical clocks keep track of the position of the lunar nodes with a long pointer that crosses the dial. This so-called dragon hand makes one complete rotation around the ecliptic dial every 19 years. When the dragon hand and the new moon coincide, the moon is on the same plane as the earth and sun, and so there is every chance that an eclipse will be visible from somewhere on earth.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5024887",
"title": "Solar eclipse",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 829,
"text": "If the Moon were in a perfectly circular orbit, a little closer to the Earth, and in the same orbital plane, there would be total solar eclipses every new moon. However, since the Moon's orbit is tilted at more than 5 degrees to the Earth's orbit around the Sun, its shadow usually misses Earth. A solar eclipse can only occur when the moon is close enough to the ecliptic plane during a new moon. Special conditions must occur for the two events to coincide because the Moon's orbit crosses the ecliptic at its orbital nodes twice every draconic month (27.212220 days) while a new moon occurs one every synodic month (29.530587981 days). Solar (and lunar) eclipses therefore happen only during eclipse seasons resulting in at least two, and up to five, solar eclipses each year; no more than two of which can be total eclipses.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "25709090",
"title": "Solar eclipse of June 16, 1806",
"section": "Section::::Observations.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 276,
"text": "outer atmosphere of the Sun seen during a total eclipse; he proposes that the corona must belong to the Sun, not the Moon, because of its great size. Ferrer also states, that during the total eclipse of 1806, the irregularities of the moon's surface were plainly discernible.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
x4i66
|
If someone died and had their brain preserved, could we map their entire neural network?
|
[
{
      "answer": "Yes, but a dead brain may look very different than a live brain. \n\nCurrently, in vivo techniques (MRI, PET, CT, etc...) do not have the resolution required to do this. Serial block-face imaging, along with an automated tape-collecting ultramicrotome and heavy-duty custom neural imaging software, can reconstruct every synapse and connection, but the data file would be in the petabytes. \n\nEssentially, here is how this works: a brain is frozen in a big block of ice, and a VERY thin blade shaves off a very small section at a time, something in the micrometer range. After each piece of brain is shaved, an electron microscope takes an image. From there, computer software can reconstruct the brain in 3D and trace individual neurons and their connections. It is an exciting time for the field of connectomics. \n\nCheck out these links: \n\n[Quest for the connectome](_URL_0_)\n\n[Sebastian Seung's](_URL_1_) I am my connectome. His [faculty webpage at MIT](_URL_2_) is also a great read. \n\nEdit: Also to further answer your question, this cannot be done in a living human. In living people, the best resolution we can get is 1.5-2 mm, which contains thousands of neurons and other types of cells (such as glial cells)",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "21347303",
"title": "Amnesia",
"section": "Section::::Signs and symptoms.:Declarative information.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 676,
"text": "Some patients with anterograde amnesia can still acquire some semantic information, even though it might be more difficult and might remain rather unrelated to more general knowledge. H.M. could accurately draw a floor plan of the home in which he lived after surgery, even though he had not lived there in years. The reason patients could not form new episodic memories is likely because the CA1 region of the hippocampus was a lesion, and thus the hippocampus could not make connections to the cortex. After an ischemic episode following surgery, an MRI of patient R.B. showed his hippocampus to be intact except for a specific lesion restricted to the CA1 pyramidal cells.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29826376",
"title": "Hippocampal prosthesis",
"section": "Section::::Memory Codes.:Goals for the future.\n",
"start_paragraph_id": 44,
"start_character": 0,
"end_paragraph_id": 44,
"end_character": 534,
"text": "The research teams at USC and Wake Forest are working to possibly make this system applicable to humans whose brains suffer damage from Alzheimer's, stroke, or injury, the disruption of neural networks often stops long-term memories from forming. The system designed by Berger and implemented by Deadwyler and Hampson allows the signal processing to take place that would occur naturally in undamaged neurons. Ultimately, they hope to restore the ability to create long-term memories by implanting chips such as these into the brain.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20474671",
"title": "Sebastian Seung",
"section": "Section::::The Connectome Theory.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 646,
"text": "He proposes that every memory, skill, and passion is encoded somehow in the connectome. And when the brain is not wired properly it can result in mental disorders such as autism, schizophrenia, Alzheimer's, and Parkinson's. Understanding the human connectome may not only help cure such diseases with treatments but also possibly help doctors prevent them from occurring in the first place. And if we can represent the sum of all human experiences and memories in the connectome, then we can download human brains on to flash drives, save them indefinitely, and replay those memories in the future, thereby granting humans a kind of immortality.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "37399719",
"title": "Eleanor Maguire",
"section": "Section::::Research and career.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 334,
"text": "This is also true of her other work such as that showing that patients with amnesia cannot imagine the future which several years ago was rated as one of the scientific breakthroughs of the year; and her other studies demonstrating that it is possible to decode people's memories from the pattern of fMRI activity in the hippocampus.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10974486",
"title": "Storage (memory)",
"section": "Section::::Models.:Multi-trace distributed memory model.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 698,
"text": "While it has been claimed that human memory seems to be capable of storing a great amount of information, to the extent that some had thought an infinite amount, the presence of such ever-growing matrix within human memory sounds implausible. In addition, the model suggests that to perform the recall process, parallel-search between every single trace that resides within the ever-growing matrix is required, which also raises doubt on whether such computations can be done in a short amount of time. Such doubts, however, have been challenged by findings of Gallistel and King who present evidence on the brain’s enormous computational abilities that can be in support of such parallel support.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21312318",
"title": "Recognition memory",
"section": "Section::::Neural underpinnings.:Lesioned brains.:Medial temporal lobe.\n",
"start_paragraph_id": 66,
"start_character": 0,
"end_paragraph_id": 66,
"end_character": 1326,
"text": "The medial temporal lobes and their surrounding structures are of immense importance to memory in general. The hippocampus is of particular interest. It has been well documented that damage here can result in severe retrograde or anterograde amnesia, the patient is unable to recollect certain events from their past or create new memories respectively. However, the hippocampus does not seem to be the \"storehouse\" of memory. Rather, it may function more as a relay station. Research suggests that it is through the hippocampus that short term memory engages in the process of consolidation (the transfer to long term storage). The memories are transferred from the hippocampus to the broader lateral neocortex via the entorhinal cortex. This helps explain why many amnesics have spared cognitive abilities. They may have a normal short term memory, but are unable to consolidate that memory and it is lost rapidly. Lesions in the medial temporal lobe often leave the subject with the capacity to learn new skills, also known as procedural memory. If experiencing anterograde amnesia, the subject cannot recall any of the learning trials, yet consistently improves with each trial. This highlights the distinctiveness of recognition as a particular and separate type of memory, falling into the domain of declarative memory.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "15978180",
"title": "Connectome",
"section": "Section::::Mapping at the cellular level.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 388,
"text": "Current non-invasive imaging techniques cannot capture the brain's activity on a neuron-by-neuron level. Mapping the connectome at the cellular level in vertebrates currently requires post-mortem (after death) microscopic analysis of limited portions of brain tissue. Non-optical techniques that rely on high-throughput DNA sequencing have been proposed recently by Anthony Zador (CSHL).\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
odd4w
|
What percentage of lift is generated by the shape of an airplane wing and what percent is generated by the angle of attack?
|
[
{
"answer": "Well, depends what kind of wing it is, first of all. Secondly, speed, pressure altitude, temperature, etc. Tons of variables to consider here aside from the shape of the airfoil and the AoA.\n\n\nWould be a good question for [/r/AskEngineers](/r/AskEngineers) or [/r/aviation](/r/aviation).",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "81511",
"title": "Stall (fluid dynamics)",
"section": "Section::::Variation of lift with angle of attack.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 632,
"text": "The graph shows that the greatest amount of lift is produced as the critical angle of attack is reached (which in early-20th century aviation was called the \"burble point\"). This angle is 17.5 degrees in this case, but it varies from airfoil to airfoil. In particular, for aerodynamically thick airfoils (thickness to chord ratios of around 10%), the critical angle is higher than with a thin airfoil of the same camber. Symmetric airfoils have lower critical angles (but also work efficiently in inverted flight). The graph shows that, as the angle of attack exceeds the critical angle, the lift produced by the airfoil decreases.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18009",
"title": "Lift (force)",
"section": "Section::::Basic attributes of lift.:Angle of attack.\n",
"start_paragraph_id": 47,
"start_character": 0,
"end_paragraph_id": 47,
"end_character": 448,
"text": "The angle of attack is the angle between the chord line of an airfoil and the oncoming airflow. A symmetrical airfoil will generate zero lift at zero angle of attack. But as the angle of attack increases, the air is deflected through a larger angle and the vertical component of the airstream velocity increases, resulting in more lift. For small angles a symmetrical airfoil will generate a lift force roughly proportional to the angle of attack.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26809634",
"title": "Custer CCW-5",
"section": "Section::::Design and development.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 1023,
"text": "In most situations an aircraft's lift comes chiefly from the low pressure generated on the upper surface by the locally enhanced higher air velocity. This latter may be the result of the movement of the aircraft through the air or, when lift at low air speeds is important for short take-off performance, produced by engine power. The channel wing, the brainchild of Willard Ray Custer, is an example of the latter, where the air velocity over the upper surface velocity in a U-shaped channel formed out of the wing was increased with a pusher propeller at the trailing edge. This near semi-circular channel laterally constrained the airflow produced by the propeller, even when the aircraft was at rest, producing higher flow velocities than over a conventional pusher wing. The need for wing mounted pusher engines made a pusher twin a natural configuration, and for his third channel wing design Custer chose to modify the existing Baumann Brigadier, a 5-seat mid wing pusher twin which itself did not reach production.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "37892",
"title": "Thrust",
"section": "Section::::Concepts.:Thrust axis.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 640,
"text": "The thrust axis for an airplane is the line of action of the total thrust at any instant. It depends on the location, number, and characteristics of the jet engines or propellers. It usually differs from the drag axis. If so, the distance between the thrust axis and the drag axis will cause a moment that must be resisted by a change in the aerodynamic force on the horizontal stabiliser. Notably, the Boeing 737 MAX, with larger, lower-slung engines than previous 737 models, had a greater distance between the thrust axis and the drag axis, causing the nose to rise up in some flight regimes, necessitating a pitch-control system, MCAS.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "159472",
"title": "Flight",
"section": "Section::::Physics.:Forces.:Lift-to-drag ratio.\n",
"start_paragraph_id": 64,
"start_character": 0,
"end_paragraph_id": 64,
"end_character": 379,
"text": "Lift-to-drag ratios for practical aircraft vary from about 4:1 for vehicles and birds with relatively short wings, up to 60:1 or more for vehicles with very long wings, such as gliders. A greater angle of attack relative to the forward movement also increases the extent of deflection, and thus generates extra lift. However a greater angle of attack also generates extra drag. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5730974",
"title": "Stability derivatives",
"section": "Section::::Stability derivative contributions.\n",
"start_paragraph_id": 71,
"start_character": 0,
"end_paragraph_id": 71,
"end_character": 908,
"text": "At low angles of attack, the lift is generated primarily by the wings, fins and the nose region of the body. The total lift acts at a distance formula_35 ahead of the centre of gravity (it has a negative value in the figure), this, in missile parlance, is the centre of pressure . If the lift acts ahead of the centre of gravity, the yawing moment will be negative, and will tend to increase the angle of attack, increasing both the lift and the moment further. It follows that the centre of pressure must lie aft of the centre of gravity for static stability. formula_35 is the static margin and must be negative for longitudinal static stability. Alternatively, positive angle of attack must generate positive yawing moment on a statically stable missile, i.e. formula_37 must be positive. It is common practice to design manoeuvrable missiles with near zero static margin (i.e. neutral static stability).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11304983",
"title": "Gliding flight",
"section": "Section::::Lift to drag ratio.\n",
"start_paragraph_id": 51,
"start_character": 0,
"end_paragraph_id": 51,
"end_character": 408,
"text": "The lift-to-drag ratio, or \"L/D ratio\", is the amount of lift generated by a wing or vehicle, divided by the drag it creates by moving through the air. A higher or more favourable L/D ratio is typically one of the major goals in aircraft design; since a particular aircraft's needed lift is set by its weight, delivering that lift with lower drag leads directly to better fuel economy and climb performance.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1lnt5k
|
Is everything at least a tiny bit soluble in water?
|
[
{
      "answer": "Enough energy isn't the problem; there's more than enough in a macroscopic amount of water. The problem is the probability of having enough energy in one spot (e.g. at a single carbon atom) to cause the reaction you want (the carbon atom dissociating). That probability decreases exponentially with (energy required)/(absolute temperature), so it's never exactly zero except in the fictional situation where the substance is at absolute zero. But since it's exponential, it also quickly becomes an astronomically small number if the energy required is large. \n\nThe equation that describes this is the Boltzmann distribution.\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "13036672",
"title": "Miscibility",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 370,
"text": "By contrast, substances are said to be immiscible if there are certain proportions in which the mixture does not form a solution. For example, oil is not soluble in water, so these two solvents are immiscible, while butanone (methyl ethyl ketone) is significantly soluble in water, these two solvents are also immiscible because they are not soluble in all proportions.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3503227",
"title": "Solubility chart",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 671,
"text": "The following chart shows the solubilities of multiple independent and various compounds, in water, at a pressure of 1 atm and at room temperature (approx. 293.15 K). Any box that reads \"soluble\" results in an aqueous product in which no precipitate has formed, while \"slightly soluble\" and \"insoluble\" markings mean that there is a precipitate that will form (usually, this is a solid), however, \"slightly soluble\" compounds such as calcium sulfate may require heat to form its precipitate. Boxes marked \"other\" can mean that many different states of products can result. For more detailed information of the exact solubility of the compounds, see the solubility table.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "58727",
"title": "Alum",
"section": "Section::::Chemical properties.:Solubility.\n",
"start_paragraph_id": 34,
"start_character": 0,
"end_paragraph_id": 34,
"end_character": 225,
"text": "The solubility of the various alums in water varies greatly, sodium alum being readily soluble in water, while caesium and rubidium alums are only sparingly soluble. The various solubilities are shown in the following table.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "106231",
"title": "Macromolecule",
"section": "Section::::Properties.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 341,
"text": "Another common macromolecular property that does not characterize smaller molecules is their relative insolubility in water and similar solvents, instead forming colloids. Many require salts or particular ions to dissolve in water. Similarly, many proteins will denature if the solute concentration of their solution is too high or too low.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4480224",
"title": "Veratridine",
"section": "Section::::Chemistry.:Solubility.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 322,
"text": "Veratridine has a pKa of 9.54. It is slightly soluble in ether, soluble in ethanol and DMSO, and freely soluble in chloroform. Solubility in water is pH-dependent; the free base form is slightly soluble, but easily dissolves in 1 M HCl. Its nitrate salt is slightly soluble in water. Its sulfate salt is very hygroscopic.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "59497",
"title": "Solubility",
"section": "Section::::Qualifiers used to describe extent of solubility.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 428,
"text": "The extent of solubility ranges widely, from infinitely soluble (without limit) (fully miscible) such as ethanol in water, to poorly soluble, such as silver chloride in water. The term \"insoluble\" is often applied to poorly or very poorly soluble compounds. A number of other descriptive terms are also used to qualify the extent of solubility for a given application. For example, U.S. Pharmacopoeia gives the following terms:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10967617",
"title": "Bissulfosuccinimidyl suberate",
"section": "Section::::Characteristics.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 385,
"text": "Water-soluble: BS3 is hydrophilic due to its terminal sulfonyl substituents and as a result dissociates in water, eliminating the need to use organic solvents which interfere with protein structure and function. Because organic solvents need not be used when BS3 is used as the crosslinker, it is ideal for investigations into protein structure and function in physiologic conditions.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
4ul7wc
|
How does physics address the situation in Zeno's arrow paradox? (This is not the same as the Achilles and tortoise paradox)
|
[
{
"answer": "Velocity cannot be determined by the position of the arrow at one given instant of time. So you can't say \"at an instant in time, the arrow is still and not moving\". You have no idea whether the arrow is moving or not moving. You need at least two positions to calculate its average velocity over the time elapsed. As the time elapsed goes to zero, the average velocity goes to the exact instantaneous velocity.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "9162580",
"title": "Archer's paradox",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 262,
"text": "The archer's paradox is the phenomenon of an arrow traveling in the direction it is pointed at full draw, when it seems that the arrow would have to pass through the starting position it was in before being drawn, where it was pointed to the side of the target.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1522373",
"title": "Carroll's paradox",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 288,
"text": "In physics, Carroll's paradox arises when considering the motion of a falling rigid rod that is specially constrained. Considered one way, the angular momentum stays constant; considered in a different way, it changes. It is named after Michael M. Carroll who first published it in 1984.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34535",
"title": "Zeno's paradoxes",
"section": "Section::::Paradoxes of motion.:Arrow paradox.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 284,
"text": "In the arrow paradox, Zeno states that for motion to occur, an object must change the position which it occupies. He gives an example of an arrow in flight. He states that in any one (duration-less) instant of time, the arrow is neither moving to where it is, nor to where it is not.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34535",
"title": "Zeno's paradoxes",
"section": "",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 368,
"text": "The origins of the paradoxes are somewhat unclear. Diogenes Laërtius, a fourth source for information about Zeno and his teachings, citing Favorinus, says that Zeno's teacher Parmenides was the first to introduce the Achilles and the tortoise paradox. But in a later passage, Laërtius attributes the origin of the paradox to Zeno, explaining that Favorinus disagrees.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "154227",
"title": "What the Tortoise Said to Achilles",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 532,
"text": "\"What the Tortoise Said to Achilles\", written by Lewis Carroll in 1895 for the philosophical journal \"Mind\", is a brief allegorical dialogue on the foundations of logic. The title alludes to one of Zeno's paradoxes of motion, in which Achilles could never overtake the tortoise in a race. In Carroll's dialogue, the tortoise challenges Achilles to use the force of logic to make him accept the conclusion of a simple deductive argument. Ultimately, Achilles fails, because the clever tortoise leads him into an infinite regression.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34535",
"title": "Zeno's paradoxes",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 835,
"text": "Zeno's paradoxes are a set of philosophical problems generally thought to have been devised by Greek philosopher Zeno of Elea (c. 490–430 BC) to support Parmenides' doctrine that contrary to the evidence of one's senses, the belief in plurality and change is mistaken, and in particular that motion is nothing but an illusion. It is usually assumed, based on Plato's \"Parmenides\" (128a–d), that Zeno took on the project of creating these paradoxes because other philosophers had created paradoxes against Parmenides' view. Thus Plato has Zeno say the purpose of the paradoxes \"is to show that their hypothesis that existences are many, if properly followed up, leads to still more absurd results than the hypothesis that they are one.\" Plato has Socrates claim that Zeno and Parmenides were essentially arguing exactly the same point.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "628936",
"title": "The Tortoise and the Hare",
"section": "Section::::Applications.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 817,
"text": "In Classical times the story was annexed to a philosophical problem by Zeno of Elea in one of many demonstrations that movement is impossible to define satisfactorily. The second of Zeno's paradoxes is that of Achilles and the Tortoise, in which the hero gives the Tortoise a head start in a race. The argument attempts to show that even though Achilles runs faster than the Tortoise, he will never catch up with her because, when Achilles reaches the point at which the Tortoise started, the Tortoise has advanced some distance beyond; when Achilles arrives at the point where the Tortoise was when Achilles arrived at the point where the Tortoise started, the Tortoise has again moved forward. Hence Achilles can never catch the Tortoise, no matter how fast he runs, since the Tortoise will always be moving ahead.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3s6vio
|
how does preloading games on apps such as steam work?
|
[
{
"answer": "Two ways:\n\n1. The game files are encrypted, and the decryption key is given only when the game is released.\n\n2. Most of the game files are assets such as music, textures and models. The actual game executable is relatively small. Preloading downloads these assets but not the executable.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "898503",
"title": "Steam (software)",
"section": "Section::::Client functionality.:Developer features.\n",
"start_paragraph_id": 43,
"start_character": 0,
"end_paragraph_id": 43,
"end_character": 677,
"text": "Valve added the ability for developers to sell games under an early access model with a special section of the Steam store, starting in March 2013. This program allows for developers to release functional, but not finished, products such as beta versions to the service to allow users to buy the games and help provide testing and feedback towards the final production. Early access also helps to provide funding to the developers to help complete their games. The early access approach allowed more developers to publish games onto the Steam service without the need for Valve's direct curation of games, significantly increasing the number of available games on the service.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "898503",
"title": "Steam (software)",
"section": "Section::::Client functionality.:User interface.\n",
"start_paragraph_id": 35,
"start_character": 0,
"end_paragraph_id": 35,
"end_character": 1326,
"text": "Players can add non-Steam games to their libraries, allowing the game to be easily accessed from the Steam client and providing support where possible for Steam Overlay features. The Steam interface allows for user-defined shortcuts to be added. In this way, third-party modifications and games not purchased through the Steam Store can use Steam features. Valve sponsors and distributes some modifications free-of-charge; and modifications that use Steamworks can also use VAC, Friends, the server browser, and any Steam features supported by their parent game. For most games launched from Steam, the client provides an in-game overlay that can be accessed by a keystroke. From the overlay, the user can access his or her Steam Community lists and participate in chat, manage selected Steam settings, and access a built-in web browser without having to exit the game. Since the beginning of February 2011 as a beta version, the overlay also allows players to take screenshots of the games in process; it automatically stores these and allows the player to review, delete, or share them during or after his or her game session. As a full version on February 24, 2011, this feature was reimplemented so that users could share screenshots on websites of Facebook, Twitter, and Reddit straight from a user's screenshot manager.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "40629237",
"title": "SteamOS",
"section": "Section::::Features.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 682,
"text": "Since SteamOS is solely for playing games without use of mouse or keyboard, it does not have many built-in functions beyond web browsing and playing games; for example, there is no file manager or image viewer installed by default. Users can, however, access the GNOME desktop environment and perform tasks like installing other software. Though the OS does not, in its current form, support streaming services, Valve is in talks with streaming companies such as Spotify and Netflix to bring their features to SteamOS. However Steam does have full-length films from indie movie makers available from their store. The OS natively supports Nvidia, Intel, and AMD graphics processors.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "898503",
"title": "Steam (software)",
"section": "Section::::Client functionality.:Software delivery and maintenance.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 992,
"text": "In September 2008, Valve added support for Steam Cloud, a service that can automatically store saved game and related custom files on Valve's servers; users can access this data from any machine running the Steam client. Games must use the appropriate features of Steamworks for Steam Cloud to work. Users can disable this feature on a per-game and per-account basis. In May 2012, the service added the ability for users to manage their game libraries from remote clients, including computers and mobile devices; users can instruct Steam to download and install games they own through this service if their Steam client is currently active and running. Product keys sold through third-party retailers can also be redeemed on Steam. For games that incorporate Steamworks, users can buy redemption codes from other vendors and redeem these in the Steam client to add the title to their libraries. Steam also offers a framework for selling and distributing downloadable content (DLC) for games.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18781202",
"title": "Shattered Horizon",
"section": "Section::::Development and release.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 257,
"text": "In August 2014, the game was removed from the Steam store, due to a lack of time from Futuremark, two years after the company was bought by Rovio Entertainment, however, you can still use keys bought from resellers to download and play the game from Steam.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "40867610",
"title": "Playism",
"section": "Section::::Structure.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 337,
"text": "Rather than employing a separate client like Steam, users download purchased games directly from the Playism website. They have also started to distribute on a variety of platforms including Steam, GOG, Gamefly, PlayStation Store, Google Play and iOS. They have also announced that they are planning to bring some titles to PlayStation.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10204351",
"title": "Kongregate",
"section": "Section::::History.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 447,
"text": "Kongregate announced plans in October 2016 to help developers bring their games to the Steam distribution platform with an updated software development kit to make it easy to port games between their web, mobile, and the Steam platforms (Windows, macOS, and Linux), and to support data sharing between these for players. This enabled games to take advantage of microtransactions through the Steam store for titles otherwise normally free-to-play.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
6i9e1k
|
Are there any nuclear fusion processes that don't give off excess energy?
|
[
{
"answer": "Yes, many fusion reactions don't release energy, but *take* energy instead. Generally, fusion of two heavy nuclei will be endothermic.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "20766780",
"title": "Nuclear fusion–fission hybrid",
"section": "Section::::Rationale.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 945,
"text": "The fusion process alone currently does not achieve sufficient gain (power output over power input) to be viable as a power source. By using the excess neutrons from the fusion reaction to in turn cause a high-yield fission reaction (close to 100%) in the surrounding subcritical fissionable blanket, the net yield from the hybrid fusion–fission process can provide a targeted gain of 100 to 300 times the input energy (an increase by a factor of three or four over fusion alone). Even allowing for high inefficiencies on the input side (i.e. low laser efficiency in ICF and Bremsstrahlung losses in Tokamak designs), this can still yield sufficient heat output for economical electric power generation. This can be seen as a shortcut to viable fusion power until more efficient pure fusion technologies can be developed, or as an end in itself to generate power, and also consume existing stockpiles of nuclear fissionables and waste products.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20766780",
"title": "Nuclear fusion–fission hybrid",
"section": "Section::::Hybrid concepts.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 662,
"text": "This is a key concept in the hybrid concept, known as \"fission multiplication\". For every fusion event, several fission events may occur, each of which gives off much more energy than the original fusion, about 11 times. This greatly increases the total power output of the reactor. This has been suggested as a way to produce practical fusion reactors in spite of the fact that no fusion reactor has yet reached break-even, by multiplying the power output using cheap fuel or waste. However, a number of studies have repeatedly demonstrated that this only becomes practical when the overall reactor is very large, 2 to 3 GWt, which makes it expensive to build.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "55017",
"title": "Fusion power",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 549,
"text": "As a source of power, nuclear fusion is expected to have several theoretical advantages over fission. These include reduced radioactivity in operation and little high-level nuclear waste, ample fuel supplies, and increased safety. However, achieving the necessary temperature/pressure/duration combination has proven to be difficult to produce in a practical and economical manner. Research into fusion reactors began in the 1940s, but to date, no design has produced more fusion power output than the electrical power input, defeating the purpose.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "55017",
"title": "Fusion power",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 581,
"text": "Fusion reactors generally use hydrogen isotopes such as deuterium and tritium, which react more easily than hydrogen. The designs aim to heat their fuel to tens of millions of degrees using a wide variety of methods. The major challenge in realising fusion power is to engineer a system that can confine the plasma long enough at high enough temperature and density for many reactions to occur. A second issue that affects common reactions, is managing neutrons that are released during the reaction, which over time degrade many common materials used within the reaction chamber.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9273237",
"title": "World energy resources",
"section": "Section::::Nuclear fuel.:Nuclear fusion.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 719,
"text": "Fusion power is the process driving the sun and other stars. It generates large quantities of heat by fusing the nuclei of hydrogen or helium isotopes, which may be derived from seawater. The heat can theoretically be harnessed to generate electricity. The temperatures and pressures needed to sustain fusion make it a very difficult process to control. Fusion is theoretically able to supply vast quantities of energy, with relatively little pollution. Although both the United States and the European Union, along with other countries, are supporting fusion research (such as investing in the ITER facility), according to one report, inadequate research has stalled progress in fusion research for the past 20 years.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "15111",
"title": "Interplanetary spaceflight",
"section": "Section::::Improved technologies and methodologies.:Improved rocket concepts.:Fusion rockets.\n",
"start_paragraph_id": 54,
"start_character": 0,
"end_paragraph_id": 54,
"end_character": 512,
"text": "Fusion rockets, powered by nuclear fusion reactions, would \"burn\" such light element fuels as deuterium, tritium, or He. Because fusion yields about 1% of the mass of the nuclear fuel as released energy, it is energetically more favorable than fission, which releases only about 0.1% of the fuel's mass-energy. However, either fission or fusion technologies can in principle achieve velocities far higher than needed for Solar System exploration, and fusion energy still awaits practical demonstration on Earth.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22151",
"title": "Nuclear reactor",
"section": "Section::::Reactor types.:Classifications.:By type of nuclear reaction.\n",
"start_paragraph_id": 46,
"start_character": 0,
"end_paragraph_id": 46,
"end_character": 257,
"text": "In principle, fusion power could be produced by nuclear fusion of elements such as the deuterium isotope of hydrogen. While an ongoing rich research topic since at least the 1940s, no self-sustaining fusion reactor for power generation has ever been built.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
bxgaim
|
when you google a certain store or restaurant and it gives you a bar chart of peak times, where does the data come from?
|
[
{
"answer": "When you have Google Maps, you can turn on location tracking to help Google learn certain tasks. For example, it will learn where your home and work are, and what route you usually take to get there, so then it will send you a message when it's time for you to leave for work based on current traffic. \n\nWhen you have location tracking enabled, Google can use GPS to determine that you're probably at a particular store or restaurant if you linger in that location for a while. So if Google notices that around 6pm, not many phones are announcing their location at a particular restaurant, but at 7pm, a lot are, then at 8pm, not many are pinging again, they can surmise that 6pm and 8pm aren't very busy, but 7pm is. \n\nRepeat that over weeks and weeks and they can build a pretty good idea of how busy a restaurant or store will be at any given time.\n\nIf you use google Maps, you can even look at where Google thinks/knows you have been. Go to Menu > Your Timeline and it will show a history if you have location tracking enabled. For instance,[ here's some of my tracking from yesterday.](_URL_0_) I didn't have to do anything or even confirm I was at those places. Google Maps just knew from my GPS.",
"provenance": null
},
{
"answer": "If you have an android device or Google maps, they track your location 24/7. \n\nIf you're curious, Google lets you sign in and see the data they collected on their \"Google Maps Timeline\": [_URL_0_](_URL_0_) \n\nIt's incredible. Time-stamped, accurate to a few feet, going back YEARS. \n\nSo, when you see those peak usage times, that's essentially a count of android phones inside the store at certain times.\n\nThey do something similar for the Google Maps traffic. Track the speed of phones, and you know the speed of traffic, live.",
"provenance": null
},
{
"answer": "Simply put. Google knows where you are because you use Google apps and opted in for location tracking. So based on that data, they can predict the busy times by how many people are at that location at a given time.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "8640",
"title": "Database normalization",
"section": "Section::::Example of a step by step normalization.:Satisfying 4NF.\n",
"start_paragraph_id": 62,
"start_character": 0,
"end_paragraph_id": 62,
"end_character": 251,
"text": "Assume the database is owned by a book retailer franchise that has several franchisees that own shops in different locations. And therefore the retailer decided to add a table that contains data about availability of the books at different locations:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6161533",
"title": "V. C. Morris Gift Shop",
"section": "Section::::Present day.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 253,
"text": "At the same time, though, the same page indicates store's period of operation is with dates (May 5 through May 22 (no year indicated)) and times. This operation appeared to have a very restricted term of occupancy, often referred to as a \"pop up\" shop.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5376868",
"title": "Google Personalized Search",
"section": "Section::::Data collection.:Types of data collected.:Location data.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 374,
"text": "Location data allows Google to provide information based upon current location and places that the user has visited in the past, based upon GPS location from an Android smartphone or the user's IP address. Google uses this location data to provide local listings grouped with search results using the Google Local platform featuring detailed reviews and ratings from Zagat.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5101133",
"title": "Google Trends",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 228,
"text": "Google Trends is a website by Google that analyzes the popularity of top search queries in Google Search across various regions and languages. The website uses graphs to compare the search volume of different queries over time.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3161187",
"title": "Google Analytics",
"section": "Section::::Features.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 733,
"text": "On September 29, 2011, Google Analytics launched Real Time analytics, enabling a user to have insight about visitors currently on the site. A user can have 100 site profiles. Each profile generally corresponds to one website. It is limited to sites which have traffic of fewer than 5 million pageviews per month (roughly 2 pageviews per second) unless the site is linked to a Google Ads campaign. Google Analytics includes Google Website Optimizer, rebranded as \"Google Analytics Content Experiments\". Google Analytics' Cohort analysis helps in understanding the behaviour of component groups of users apart from your user population. It is beneficial to marketers and analysts for successful implementation of a marketing strategy.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3349146",
"title": "UK Singles Chart",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 388,
"text": "The OCC website contains the Top 100 chart. Some media outlets only list the Top 40 (such as the BBC) or the Top 75 (such as \"Music Week\" magazine) of this list. The chart week runs from 00:01 Friday to midnight Thursday, with most UK physical and digital singles being released on Fridays. From 3 August 1969 until 5 July 2015, the chart week ran from 00:01 Sunday to midnight Saturday.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "15806478",
"title": "Carrying cost",
"section": "Section::::Ways to reduce carrying cost.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 588,
"text": "The database should include things like retailer, date, quantity, quality, degree of advertising and the time taken until sold out. This will make sure that the future employees can learn from the past experience while making decisions. For example, if the manager want to hold a big discount event to clear the products that have been left in stock for a long time. Then he can go through the past data to find out if there is any event like this before and how was the result. The manager would be able to forecast the budget and make some improvements base on the past events’ record.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
jvijf
|
Why do we get frustrated?
|
[
{
"answer": "Just because we do it, doesn't mean there is an evolutionary advantage or that it is even related to evolution.\n\n\n\nWe get frustrated because we are inherently selfish beings who want to succeed. Failure to succeed or proceed at a pace we like causes frustration.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "26266653",
"title": "Frustration",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 1316,
"text": "In psychology, frustration is a common emotional response to opposition, related to anger, annoyance and disappointment, frustration arises from the perceived resistance to the fulfillment of an individual's will or goal and is likely to increase when a will or goal is denied or blocked. There are two types of frustration; internal and external. Internal frustration may arise from challenges in fulfilling personal goals, desires, instinctual drives and needs, or dealing with perceived deficiencies, such as a lack of confidence or fear of social situations. Conflict, such as when one has competing goals that interfere with one another, can also be an internal source of frustration and can create cognitive dissonance. External causes of frustration involve conditions outside an individual's control, such as a physical roadblock, a difficult task, or the perception of wasting time. There are multiple ways individuals cope with frustration such as passive–aggressive behavior, anger, or violence, although frustration may also propel positive processes via enhanced effort and strive. This broad range of potential outcomes makes it difficult to identify the original cause(s) of frustration, as the responses may be indirect. However, a more direct and common response is a propensity towards aggression.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26266653",
"title": "Frustration",
"section": "Section::::Symptoms.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 676,
"text": "Frustration can be considered a problem–response behavior and can have a number of effects, depending on the mental health of the individual. In positive cases, this frustration will build until a level that is too great for the individual to contain or allow to continue, and thus produce action directed at solving the inherent problem in a disposition that does not cause social or physical harm. In negative cases, however, the individual may perceive the source of frustration to be outside their control, and thus the frustration will continue to build, leading eventually to further problematic behavior (e.g. violent reaction against perceived oppressors or enemies).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26266653",
"title": "Frustration",
"section": "Section::::Causes.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 738,
"text": "Frustration originates from feelings of uncertainty and insecurity which stems from a sense of inability to fulfill needs. If the needs of an individual are blocked, uneasiness and frustration are more likely to occur. When these needs are constantly ignored or unsatisfied, anger, depression, loss of self-confidence, annoyance, aggression, and sometimes violence are likely to follow. Needs can be blocked two different ways; internally and externally. Internal blocking happens within an individual's mind, either through lack of ability, confidence, conflicting goals and desires, and/or fears. External blocking happens to an individual outside their control such as physical roadblocks, difficult tasks, or perceived waste of time.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19998838",
"title": "Fidgeting",
"section": "Section::::Causes and effects.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 1082,
"text": "Fidgeting may be a result of nervousness, frustration, agitation, boredom, ADHD, excitement or a combination of these. When interested in a task, a seated person will suppress their fidgeting, a process described as Non-Instrumental Movement Inhibition. Some education researchers consider fidgeting along with noise-making as clear signs of inattention or low lecture quality, although educators point out that active engagement can take place without constantly directing attention to the instructor (i.e. engagement and attention are related but not equivalent ). Fidgeting is often a subconscious act and is increased during spontaneous mind-wandering. Some researchers have proposed that fidgeting is not only an indicator of diminishing attention, but is also a subconscious attempt to increase arousal in order to improve attention. While inattention is strongly associated with poor learning and poor information recall, research by Dr. Karen Pine and colleagues found that children that are allowed to fidget with their hands performed better in memory and learning tests.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23585",
"title": "Psychoanalysis",
"section": "Section::::Psychopathology (mental disturbances).:Adult patients.\n",
"start_paragraph_id": 78,
"start_character": 0,
"end_paragraph_id": 78,
"end_character": 1032,
"text": "Panic, phobias, conversions, obsessions, compulsions and depressions (analysts call these \"neurotic symptoms\") are not usually caused by deficits in functions. Instead, they are caused by intrapsychic conflicts. The conflicts are generally among sexual and hostile-aggressive wishes, guilt and shame, and reality factors. The conflicts may be conscious or unconscious, but create anxiety, depressive affect, and anger. Finally, the various elements are managed by defensive operations – essentially shut-off brain mechanisms that make people unaware of that element of conflict. \"Repression\" is the term given to the mechanism that shuts thoughts out of consciousness. \"Isolation of affect\" is the term used for the mechanism that shuts sensations out of consciousness. Neurotic symptoms may occur with or without deficits in ego functions, object relations, and ego strengths. Therefore, it is not uncommon to encounter obsessive-compulsive schizophrenics, panic patients who also suffer with borderline personality disorder, etc.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11838661",
"title": "Exaggeration",
"section": "Section::::Everyday and psycho-pathological contexts.:Cognitive distortions.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 252,
"text": "In depression, exaggerated all-or-nothing thinking can form a self-reinforcing cycle: these thoughts might be called \"emotional amplifiers\" because, as they go around and around, they become more intense. Here are some typical all-or-nothing thoughts:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10177813",
"title": "Splitting (psychology)",
"section": "Section::::Depression.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 238,
"text": "In depression, exaggerated all-or-nothing thinking can form a self-reinforcing cycle: these thoughts might be called \"emotional amplifiers\" because, as they go around and around, they become more intense. Typical all-or-nothing thoughts:\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
7qlr9j
|
why can we listen to music at a loud volume, but once it cuts to a commercial or someone talking, it sounds a lot louder?
|
[
{
"answer": "Unfortunately certain media volumes are not regulated - television shows themselves might be at a uniform volume, but there's nothing forcing advertisers to make their commercials quieter or the same volume as the show / radio / video / etc.\n\nIt could be used as a tool to hype up someone's memory (evidence suggests that being startled helps you remember something better), or because they want to advertise to the largest audience possible and that might include people who are hard of hearing :\\",
"provenance": null
},
{
"answer": "You know how at the loud volume of your music the music itself has variations in loudness?\n\nSome sounds are louder some are softer.\n\nSo, without changing the volume of your speakers you can encode and change the volume of the music.\n\nAdvertisers know this and just make their sound super loud.\n\nIts illegal in most media in the US. There is a max encoded volume for television commercials, at any rate.",
"provenance": null
},
{
"answer": "What we are experiencing when we percieve the commercials as louder is called compression. \n\n\nWhen watching a movie, the audio is quite dynamic - some sounds are quiet, some are loud, and some are very loud. \n\n\nPeople making commercials don’t need dynamics, they want to be heard. So they apply compression, which means that the difference between the quiet and the loud sounds are diminished, often by a LOT. This gives a fat, dense sound which sounds louder than it actually is, since the audio is less dynamic, but also because the compression also applies to the frequencies of the sound, boosting the low and the high ones generally. \n\nBut, yes - commercials are also louder, since they use compression to get as even a signal as possible, and then turn it up close to max.\n\n\nIf applied poorly, compression can result in what’s known as ”ducking” - for example a commercial with a speaker voice that’s tweaked and compressed on its own, but then you apply too much compression on the main track, so that when the speaker pauses, the background noice or music rushes up to meet the required intensity. This often happens on radio, since they apply their own dynamic range compression, to compensate for differences in volume between albums and songs. Distortion and lack of dynamics are other wanted or unwanted side effects. \n\nGenerally, music is getting more and more compressed, more loud, and less dynamic. \n\nNine Inch Nails were known for producing really loud and compressed albums, but nowadays they don’t really stand out. Arcade Fire and Godspeed you black emperor are other bands that have been quite sucessful in using lots of compression, audio quality wise. ",
"provenance": null
},
{
"answer": "Because it is perceptibly louder, so you pay attention to the commercials.\nThis doesn't mean the volume has gone up, it's that it's been produced in such a way that you take notice when the music stops, for example, vocals in a song must coexist with the music, if your volume is 50 db, then the guitar, keyboards, bass, drums and vocals all sum up 50 db, each one contributes a little bit to the overall volume, but when the announcer comes up and you only hear that voice, then that voice can take up the whole sound spectrum up to 50 db. It can be louder because there is no music to compete with or it is really low.\n\nThe amount of volume hasn't changed but the voice is now louder than the vocals in music which means we now perceive it to be louder.\n\nJust like there are optical illusions, this is an example of an audio illusion. There's a whole field of study devoted to it called psycho acoustics.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "1560437",
"title": "Portable media player",
"section": "Section::::Digital signal processing.:Sound around mode.\n",
"start_paragraph_id": 90,
"start_character": 0,
"end_paragraph_id": 90,
"end_character": 432,
"text": "Sound around mode allows for real time overlapping of music and the sounds surrounding the listener in her environment, which are captured by a microphone and mixed into the audio signal. As a result, the user may hear playing music and external sounds of the environment at the same time. This can increase user safety (especially in big cities and busy streets), as a user can hear a mugger following her or hear an oncoming car.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "87710",
"title": "Dolby noise-reduction system",
"section": "Section::::Usage.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 708,
"text": "The signal to noise ratio is simply how large the music signal is compared to the low level of the \"noise\" with no signal. When the music is loud, the low hiss is not noticeable, but when the music is soft or in silence, most of what can be heard is the noise. If the recording level is adjusted so that the music is always loud, then it could in theory be turned down later, and the noise volume would also be turned down. The idea is for electronics to automatically increase the recording volume when it is soft, but reduce the volume on playback. Some schemes like Dolby B concentrate only on the high frequencies so that the \"hiss\" sound of noise will be masked when volume is turned down for playback.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18080825",
"title": "Loud music",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 287,
"text": "Loud music is music that is played at a high volume, often to the point where it disturbs others and/or causes hearing damage. It may include music that is sung live with one or more voices, played with instruments, or broadcast with electronic media, such as radio, CD, or MP3 players.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "44086751",
"title": "Orban (audio processing)",
"section": "Section::::Present day.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 306,
"text": "To the listener the overall sound appears louder, which is useful to commercial broadcasters as it draws attention to them when the casual listener is tuning across the band. It does introduce artifacts to the sound, which is an irritation to those with a good musical ear, especially for classical music.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "633263",
"title": "Radiotelephony procedure",
"section": "Section::::Microphone technique.\n",
"start_paragraph_id": 61,
"start_character": 0,
"end_paragraph_id": 61,
"end_character": 254,
"text": "BULLET::::- Speak in a normal, clear, calm voice. Talking loudly or shouting does not increase the volume of your voice at the receiving radios, but will distort the audio, because loud sounds result in over-modulation, which directly causes distortion.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "520289",
"title": "Hearing aid",
"section": "Section::::Evolution of hearing aid applications.\n",
"start_paragraph_id": 182,
"start_character": 0,
"end_paragraph_id": 182,
"end_character": 365,
"text": "There are also applications that do not only adapt the sound of music to the user's hearing but also include some hearing aid functions. Such types of applications include sound amplification mode in accordance with the user's hearing characteristics as well as noise suppression mode and the mode allowing to hear the surrounding sounds without pausing the music.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1045729",
"title": "Watazumi Doso",
"section": "Section::::Quotations.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 364,
"text": "BULLET::::- \"When you hear some music or hear some sound, if for some reason you like it very well; the reason is that sound is in balance or in harmony with your pulse. And so making a sound, you try to make various different sounds that imitate various different sounds of the universe, but what you are finally making is your own sound, the sound of yourself.\"\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
4xtleh
|
Can ohms law be applied to any circuit?
|
[
{
"answer": "No, Ohm's law is a special case. In general, the relationship between the voltage you apply across a given circuit element to the current that flows through it (or current density to electric field) is not linear.\n\nFor example, see diodes, transistors, operational amplifiers, etc.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "49090",
"title": "Ohm's law",
"section": "Section::::Circuit analysis.:Linear approximations.\n",
"start_paragraph_id": 54,
"start_character": 0,
"end_paragraph_id": 54,
"end_character": 562,
"text": "Ohm's law is one of the basic equations used in the analysis of electrical circuits. It applies to both metal conductors and circuit components (resistors) specifically made for this behaviour. Both are ubiquitous in electrical engineering. Materials and components that obey Ohm's law are described as \"ohmic\" which means they produce the same value for resistance (R = V/I) regardless of the value of V or I which is applied and whether the applied voltage or current is DC (direct current) of either positive or negative polarity or AC (alternating current).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "49090",
"title": "Ohm's law",
"section": "Section::::Other versions.\n",
"start_paragraph_id": 65,
"start_character": 0,
"end_paragraph_id": 65,
"end_character": 666,
"text": "Ohm's law, in the form above, is an extremely useful equation in the field of electrical/electronic engineering because it describes how voltage, current and resistance are interrelated on a \"macroscopic\" level, that is, commonly, as circuit elements in an electrical circuit. Physicists who study the electrical properties of matter at the microscopic level use a closely related and more general vector equation, sometimes also referred to as Ohm's law, having variables that are closely related to the V, I, and R scalar variables of Ohm's law, but which are each functions of position within the conductor. Physicists often use this continuum form of Ohm's Law:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "49090",
"title": "Ohm's law",
"section": "Section::::Circuit analysis.:Resistive circuits.\n",
"start_paragraph_id": 40,
"start_character": 0,
"end_paragraph_id": 40,
"end_character": 301,
"text": "Ohm's law holds for circuits containing only resistive elements (no capacitances or inductances) for all forms of driving voltage or current, regardless of whether the driving voltage or current is constant (DC) or time-varying such as AC. At any instant of time Ohm's law is valid for such circuits.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4462484",
"title": "Ohm",
"section": "Section::::Definition.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 268,
"text": "The ohm is defined as an electrical resistance between two points of a conductor when a constant potential difference of one volt, applied to these points, produces in the conductor a current of one ampere, the conductor not being the seat of any electromotive force.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4462484",
"title": "Ohm",
"section": "Section::::History.\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 1283,
"text": "A \"legal\" ohm, a reproducible standard, was defined by the international conference of electricians at Paris in 1884 as the resistance of a mercury column of specified weight and 106 cm long; this was a compromise value between the B. A. unit (equivalent to 104.7 cm), the Siemens unit (100 cm by definition), and the CGS unit. Although called \"legal\", this standard was not adopted by any national legislation. The \"international\" ohm was recommended by unanimous resolution at the International Electrical Congress 1893 in Chicago. The unit was based upon the ohm equal to 10 units of resistance of the C.G.S. system of electromagnetic units. The international ohm is represented by the resistance offered to an unvarying electric current in a mercury column of constant cross-sectional area 106.3 cm long of mass 14.4521 grams and 0 °C. This definition became the basis for the legal definition of the ohm in several countries. In 1908, this definition was adopted by scientific representatives from several countries at the International Conference on Electric Units and Standards in London. The mercury column standard was maintained until the 1948 General Conference on Weights and Measures, at which the ohm was redefined in absolute terms instead of as an artifact standard.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "49090",
"title": "Ohm's law",
"section": "Section::::Scope.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 406,
"text": "Ohm's law is an empirical law, a generalization from many experiments that have shown that current is approximately proportional to electric field for most materials. It is less fundamental than Maxwell's equations and is not always obeyed. Any given material will break down under a strong-enough electric field, and some materials of interest in electrical engineering are \"non-ohmic\" under weak fields.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31743909",
"title": "History of the metric system",
"section": "Section::::Development of non-coherent metric systems.:Electrical units.\n",
"start_paragraph_id": 74,
"start_character": 0,
"end_paragraph_id": 74,
"end_character": 365,
"text": "In the 1820s Georg Ohm formulated Ohms Law which can be extended to relate power to current, electric potential (voltage) and resistance. During the following decades the realisation of a coherent system of units that incorporated the measurement of electromagnetic phenomena and Ohm's law was beset with problems – several different systems of units were devised.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1s8gmg
|
How long did it take for Poland's post-WWI borders to be established?
|
[
{
"answer": "The Versailles Treaty in 1919 assigned to Poland the territories of Poznan (Great Poland) and Gdansk Pomerania (is Germany and the West also known as Western Prussia or Danzig Corridor) and decided that Upper Silesia and southern part of the Eastern Prussia will be plebiscite territories.\n\nGreater Poland was already controlled by local Polish authorities since the uprsing in December 1918; Pomerania was taken over in July 1919.\n\nThe Prussian plebiscite took part in July 1920 and was quite a disaster for Poland who only won a few villages.\n\nUpper Silesia was finally was divided after 3 Polish uprising and a plebiscite in October 1921.\n\nPolish southern borders were created after heavy disputes (sometimes open conflict) with Czechoslovakia over the former Duchy of Teschen/Cieszyn and Spis(z) and Orava regions. The conflict was somehow resolved in 1920 with the arbitration of the Western Powers, on terms that were rather unfavorable to Poland. Some very small border changes were later made in 1924.\n\nPolish eastern border largely defined by the March 1921 Riga Peace Treaty with the Soviet republics of Russia, Ukraine and Belarus (USSR not yet existing)\n\nThis didn't include the important areas of Vilnius and Eastern Galicia through. Vilnius was taken over by a \"rebellious\" (actually acting on the orders of Polish supreme leader Józef Piłsudski) Polish general in 1920; for some times it functioned as an \"independent\" state of Middle Lithuania before it was officially annexed in 1922.\n\nEastern Galicia was captured by Poles in 1919 but its legal status was complicated: Western allies only consented to Polish administration for the period of 25 years, not incorporation. Polish ownership was offically recognized in 1923.\n\n\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "13772",
"title": "History of Poland",
"section": "Section::::Second Polish Republic (1918–1939).:Securing national borders, war with Soviet Russia.\n",
"start_paragraph_id": 110,
"start_character": 0,
"end_paragraph_id": 110,
"end_character": 847,
"text": "After more than a century of foreign rule, Poland regained its independence at the end of World War I as one of the outcomes of the negotiations that took place at the Paris Peace Conference of 1919. The Treaty of Versailles that emerged from the conference set up an independent Polish nation with an outlet to the sea, but left some of its boundaries to be decided by plebiscites. The largely German-inhabited Free City of Danzig was granted a separate status that guaranteed its use as a port by Poland. In the end, the settlement of the German-Polish border turned out to be a prolonged and convoluted process. The dispute helped engender the Greater Poland Uprising of 1918–1919, the three Silesian uprisings of 1919–1921, the East Prussian plebiscite of 1920, the Upper Silesia plebiscite of 1921 and the 1922 Silesian Convention in Geneva.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "546042",
"title": "Territorial changes of Poland immediately after World War II",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 488,
"text": "The territorial changes of Poland immediately after World War II were very extensive, the Oder–Neisse line became Poland's western border and the Curzon Line its eastern border. In 1945, after the defeat of Nazi Germany, Poland's borders were redrawn in accordance with the decisions made first by the Allies at the Tehran Conference of 1943 where the Soviet Union demanded the recognition of the military outcome of the top secret Nazi–Soviet Pact of 1939 of which the West was unaware.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "499479",
"title": "Former eastern territories of Germany",
"section": "Section::::History.:Treaty of Versailles, 1919.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 527,
"text": "The Treaty of Versailles of 1919 that ended World War I restored the independence of Poland, known as the Second Polish Republic, and Germany was compelled to cede territories to it, most of which were taken by Prussia in the three Partitions of Poland, and had been part of the Kingdom of Prussia and later the German Empire for over 100 years. The territories ceded to Poland in 1919 were those with an apparent Polish majority, such as the Province of Posen, the east-southern part of Upper Silesia and the Polish Corridor.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13450072",
"title": "Wołyń Voivodeship (1921–1939)",
"section": "Section::::History.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 739,
"text": "After a century of foreign rule, the Second Polish Republic was reborn in the aftermath of World War I. The borders of the republic were ratified by the Treaty of Versailles signed on 28 June 1919. They were a result of several cross-national conflicts including Polish–Ukrainian War (November 1918 – July 1919), the Greater Poland Uprising (December 1918 – February 1919), as well as Polish–Soviet War (May – October 1920), resulting from Semyon Budyonny's August 1920 military foray into former Russian Poland as far as Warsaw. The Soviets withdrew in panic during the 1920 major Polish counter-offensive. The newly re-established sovereign Poland created Wołyń Voivodeship as one of the 16 main administrative divisions of the country.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "36937502",
"title": "Germany–Poland border",
"section": "Section::::History.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 449,
"text": "After Poland regained independence following World War I and the 123 years of partitions, a long German-Polish border was settled on, long (including a border with East Prussia). The border was partially shaped by the Treaty of Versailles and partially by plebiscites (East Prussian plebiscite and the Silesian plebiscite, the former also affected by the Silesian Uprisings). The shape of that border roughly resembled that of pre-partition Poland.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24319289",
"title": "Oder–Neisse line",
"section": "Section::::Considerations during the war.:Background.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 1519,
"text": "Initially the Polish government in exile envisioned territorial changes after the war which would incorporate East Prussia, Danzig (Gdańsk) and the Oppeln (Opole) Silesian region into post-war Poland, along with a straightening of the Pomeranian border and minor acquisition in the Lauenburg (Lębork) area. The border changes were to provide Poland with a safe border and to prevent the Germans from using Eastern Pomerania and East Prussia as strategic assets against Poland. Only with the changing situation during the war were these territorial proposals modified. In October 1941 the exile newspaper \"Dziennik Polski\" postulated a postwar Polish western border that would include East Prussia, Silesia up to the Lausitzer Neisse and at least both banks of the Oder's mouth. While these territorial claims were regarded as \"megalomaniac\" by the Soviet ambassador in London, in October 1941 Stalin announced the \"return of East Prussia to Slavdom\" after the war. On 16 December 1941 Stalin remarked in a meeting with the British Foreign Minister Anthony Eden, though inconsistent in detail, that Poland should receive all German territory up to the river Oder. In May 1942 General Władysław Sikorski, Prime Minister of the Polish government in exile, sent two memoranda to the US government, sketching a postwar Polish western border along the Oder and Neisse (inconsistent about the Eastern Glatzer Neisse and the Western Lausitzer Neisse). However, the proposal was dropped by the government-in-exile in late 1942.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "380909",
"title": "Peace of Riga",
"section": "Section::::Background.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 297,
"text": "World War I removed former imperial borders across Europe. In 1918, after the Russian Revolution had renounced Tsarist claims to Poland in the Treaty of Brest-Litovsk and the war had ended with Germany's surrender, Poland was able to re-establish its independence after a century of foreign rule.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
e9ec7i
|
Has there been a change in how academic history is written over the years?
|
[
{
"answer": "There have been overall shifts, but there is also a \"horizontal\" diversity in the approach taken by historians, based on different theoretical models. Today, almost any good academic history will contain a section discussing its methodology. Academic history essentially consists of writing _narratives_ which are then critiqued, rejected, amended and so forth in the broader historiographical discussion.\n\nOne broad shift that I always think of is from the Rankean paradigm of figuring out history \"as it really was\" based on identifying the most reliable sources or harmonizing diverging narratives. This kind of presupposes that a history should always represent what a historian think is, overall, the most plausible account of everything. The problem is that this generally leads to a prohibitively broad scope (or more likely, vastly suboptimal criteria for choosing how to read sources), and therefore, historians will often deliberately write histories from a particular perspective. For example, Richard Payne's \"A State of Mixture: Christians, Zoroastrians and Iranian Political Culture in Late Antiquity\" re-examines the notion of the zealous, theocratic Sasanians oppressing their Christian and Jewish minorities by reading the traditional Armenian and Aramaic martyrdoms as situated in an Iranian political context, and discussing how the Sasanian Empire could be understood as a pluralistic society under a supreme monarch, submission to whom was paramount. 
For instance, this perspective suggests that Khusrau II's pilfering of the True Cross from Jerusalem was not merely intended to humiliate his Christian adversaries, but also to yield a glorious trophy for his Christian subjects and a banner for his legitimacy among the Christians of the Eastern Mediterranean he intended to subjugate.\n\nThis reading has various advantages, such as highlighting potential tensions between a more pluralistic monarch and religiously conservative high nobility and clergy, and yielding a potential explanation of the dissolution of noble support for Khusrau at the zenith of his empire's power. But it isn't necessarily the most plausible or palatable narrative in all regards - it has a tendency to gloss over real religious violence and persecutions as a necessity to uphold the barriers Payne takes as essential to this \"state of mixture\". However, Payne's monograph is far more useful in this form as a point of reference, than would be the likely outcome of an attempt to consider _every possible angle and implication_ of _every single source_ to highlight _every single possible implication of interest_.\n\nSo yes, the era a history is written in should absolutely be taken into account, but writing will differ not just based on time but also on the individual historian and their preferences, whether they have a degree in history or something like philology, and so forth. Ultimately, you should look at the historian's argument for reading a source in a particular way, as well as why what that reading yields might be interestikng. In most cases, you are unlikely to find a single monograph that is a truly satisfactory account of an era; studying the actual historiography and differences between accounts is necessary to get a strong understanding.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "2238559",
"title": "Mathematical Tripos",
"section": "Section::::Origin.:Early history.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 703,
"text": "The early history is of the gradual replacement during the middle of the eighteenth century of a traditional method of oral examination by written papers, with a simultaneous switch in emphasis from Latin disputation to mathematical questions. That is, all degree candidates were expected to show at least competence in mathematics. A long process of development of coaching—tuition usually outside the official University and college courses—went hand-in-hand with a gradual increase in the difficulty of the most testing questions asked. The standard examination pattern of \"bookwork\" (mostly memorised theorems) plus \"rider\" (problems to solve, testing comprehension of the bookwork) was introduced.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "28255507",
"title": "John Gutch",
"section": "Section::::Scholarship.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 491,
"text": "Gutch's main act of scholarship was his edition of Anthony Wood's History of Oxford University, which had an involved publication history. By around 1668 Wood had finished a large manuscript, written in English, of the university's history. It was divided into two parts: the first dealt with the general history of the University up to 1648, and the second with the Schools, Lectureships, the Colleges and Halls, Libraries, and the chief Magistrates (\"Fasti\") - Chancellors, Provosts etc. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "314951",
"title": "Academic tenure in North America",
"section": "Section::::History in the United States.:From 1940 to 1972.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 1165,
"text": "The most significant adoption of academic tenure occurred after 1945, when the influx of returning GIs returning to school led to quickly expanding universities with severe professorial faculty shortages. These shortages dogged the Academy for ten years, and that is when the majority of universities started offering formal tenure as a side benefit. The rate of tenure (percent of tenured university faculty) increased to 52 percent. In fact, the demand for professors was so high in the 1950s that the American Council of Learned Societies held a conference in Cuba noting the too-few doctoral candidates to fill positions in English departments. During the McCarthy era, loyalty oaths were required of many state employees, and neither formal academic tenure nor the Constitutional principles of freedom of speech and association were protection from dismissal. Some professors were dismissed for their political affiliations. During the 1960s, many professors supported the anti-war movement against the Vietnam War, and more than 20 state legislatures passed resolutions calling for specific professorial dismissals and a change to the academic tenure system.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "526936",
"title": "American Historical Association",
"section": "Section::::History.:Establishing a national history curriculum.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 344,
"text": "As the interests of historians in colleges and universities gained prominence in the association, other areas and activities tended to fall by the wayside. The Manuscripts and Public Archives Commissions were abandoned in the 1930s, while projects related to original research and the publication of scholarship gained ever-greater prominence.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "324570",
"title": "Academic publishing",
"section": "Section::::Publishing by discipline.:Humanities.\n",
"start_paragraph_id": 51,
"start_character": 0,
"end_paragraph_id": 51,
"end_character": 1105,
"text": "The following describes the situation in the United States. In many fields, such as literature and history, several published articles are typically required for a first tenure-track job, and a published or forthcoming \"book\" is now often required before tenure. Some critics complain that this \"de facto\" system has emerged without thought to its consequences; they claim that the predictable result is the publication of much shoddy work, as well as unreasonable demands on the already limited research time of young scholars. To make matters worse, the circulation of many humanities journals in the 1990s declined to almost untenable levels, as many libraries cancelled subscriptions, leaving fewer and fewer peer-reviewed outlets for publication; and many humanities professors' first books sell only a few hundred copies, which often does not pay for the cost of their printing. Some scholars have called for a publication subvention of a few thousand dollars to be associated with each graduate student fellowship or new tenure-track hire, in order to alleviate the financial pressure on journals.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34899581",
"title": "Academic journal publishing reform",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 521,
"text": "Academic journal publishing reform is the advocacy for changes in the way academic journals are created and distributed in the age of the Internet and the advent of electronic publishing. Since the rise of the Internet, people have organized campaigns to change the relationships among and between academic authors, their traditional distributors and their readership. Most of the discussion has centered on taking advantage of benefits offered by the Internet's capacity for widespread distribution of reading material.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1214033",
"title": "Academic history",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 753,
"text": "What gives this concept of \"academic history\" its own historicity, or \"cubbyhole in time\", challenged by progress, is that an academic history was intended to be \"definitive\" even though its subject matter, unlike the marine biology mentioned above, was not \"objective\". When the volume on the Regency was published, for example, some may have thought that such would be the complete history of that era, and no one would need to do as much work in that field, because the best people with the best resources would already have written it down. Subsequent changes in scholarly perspective can alter that perception; for example the work of Lewis Namier on mid-18th century British politics caused one of the \"Oxford History\" volumes to appear outdated.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
couvu4
|
What happened to Supersymmetry?
|
[
{
"answer": "A little background info first: in particle physics you begin with the so-called \"Standard Model\". It's kind of like the periodic table of the elements, cataloguing and grouping the known fundamental particles. It's not just a table though, it's also a mathematical model that predicts how all particles behave. It predicts what you'll see when you smash, say, protons together (as they do at the LHC). Then you go smash the particles and see what actually happens. The prediction is called the background, and anything different is called an excess. Excesses are an indicator the SM might be overlooking something, a suppersymetric (SUSY) particle for instance. \n\nThe most recent LHC upgrade promised to show all sorts of new excesses but there really wasn't anything. Each time we upgrade to higher energy and don't see any evidence of SUSY, the SUSY models have to be changed and eventually you start running into problems like, say, violation of conservation of energy. The models are getting flimsier. It's becoming harder for SUSY to hold it's promises of unification and a potential explanation of dark energy. There are many different models theorists have made that use SUSY. The most prevalent is the minimally supersymmetric model or MSSM, and it's taken quite a beating from the lack of excess.\n\nThe hot thing in High Energy Physics (HEP) now is neutrinos which are known to exist but not much else is known about them. A great deal of effort is being expended just to determine their mass. All that's known right now is that it's between zero and the mass of an electron.",
"provenance": null
},
{
"answer": "To add to the excellent answer of u/cynfwar:\n\nWhile SUSY is running out of steam for phenomenological purposes, its still of interest for mathematical physicists. SUSY is in some sense a very strong symmetry: it helped in providing exact solutions for many observables in an idealised version of the Standard Model called 'N=4 Super-Yang-Mills theory' (SYM). Having those exact solutions is very valuable because it helps in checking the validity of the approximative tools used in the Standard Model and also to develop new tools which might just radically change the way we do quantum field theory. \n\nIn case you want to read up on this more, I'll list just a few (decreasingly specific and increasingly esoteric) new tools that have been developed with a lot of influence from the SUSY community: spinor helicity variables, the CHY formalism, the amplituhedron and the AdS/CFT correspondence. While the former two of those are actively being used to study the properties of real life quantum field theories as QCD and electroweak theory, the latter are still in development and it is unclear if they will ever be of any use to interpret collider measurements and the like. \n\nStill, those techniques are very interesting on their own: they shed some light on the connections between different quantum field theories, the fascinating duality between gauge theories and gravity and the underlying geometrical concepts of physics. All this progress wouldn't have been possible if not for SUSY.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "606970",
"title": "Minimal Supersymmetric Standard Model",
"section": "Section::::Background.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 843,
"text": "The only unambiguous way to claim discovery of supersymmetry is to produce superparticles in the laboratory. Because superparticles are expected to be 100 to 1000 times heavier than the proton, it requires a huge amount of energy to make these particles that can only be achieved at particle accelerators. The Tevatron was actively looking for evidence of the production of supersymmetric particles before it was shut down on 30 September 2011. Most physicists believe that supersymmetry must be discovered at the LHC if it is responsible for stabilizing the weak scale. There are five classes of particle that superpartners of the Standard Model fall into: squarks, gluinos, charginos, neutralinos, and sleptons. These superparticles have their interactions and subsequent decays described by the MSSM and each has characteristic signatures.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "701141",
"title": "Supersymmetry breaking",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 391,
"text": "In particle physics, supersymmetry breaking is the process to obtain a seemingly non-supersymmetric physics from a supersymmetric theory which is a necessary step to reconcile supersymmetry with actual experiments. It is an example of spontaneous symmetry breaking. In supergravity, this results in a slightly modified counterpart of the Higgs mechanism where the gravitinos become massive.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8736971",
"title": "Supercontinuum",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 850,
"text": "In optics, a supercontinuum is formed when a collection of nonlinear processes act together upon a pump beam in order to cause severe spectral broadening of the original pump beam, for example using a microstructured optical fiber. The result is a smooth spectral continuum (see figure 1 for a typical example). There is no consensus on how much broadening constitutes a supercontinuum; however researchers have published work claiming as little as 60 nm of broadening as a supercontinuum. There is also no agreement on the spectral flatness required to define the bandwidth of the source, with authors using anything from 5 dB to 40 dB or more. In addition the term supercontinuum itself did not gain widespread acceptance until this century, with many authors using alternative phrases to describe their continua during the 1970s, 1980s and 1990s.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1939972",
"title": "Superdiamagnetism",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 312,
"text": "Superdiamagnetism is a feature of superconductivity. It was identified in 1933, by Walther Meissner and Robert Ochsenfeld, but it is considered distinct from the Meissner effect which occurs when the superconductivity first forms, and involves the exclusion of magnetic fields that already penetrate the object.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18320",
"title": "Lens (optics)",
"section": "Section::::Other types.\n",
"start_paragraph_id": 77,
"start_character": 0,
"end_paragraph_id": 77,
"end_character": 361,
"text": "Superlenses are made from negative index metamaterials and claim to produce images at spatial resolutions exceeding the diffraction limit. The first superlenses were made in 2004 using such a metamaterial for microwaves. Improved versions have been made by other researchers. the superlens has not yet been demonstrated at visible or near-infrared wavelengths.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "224636",
"title": "Supersymmetry",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 1418,
"text": "The main reasons for supersymmetry being supported by physicists is that the current theories are known to be incomplete and their limitations are well established, and supersymmetry would be an attractive solution to some of the major concerns. Direct confirmation would entail production of superpartners in collider experiments, such as the Large Hadron Collider (LHC). The first runs of the LHC found no previously-unknown particles other than the Higgs boson which was already suspected to exist as part of the Standard Model, and therefore no evidence for supersymmetry. Indirect methods include the search for a permanent electric dipole moment (EDM) in the known Standard Model particles, which can arise when the Standard Model particle interacts with the supersymmetric particles. The current best constraint on the electron electric dipole moment put it to be smaller than 10 e·cm, equivalent to a sensitivity to new physics at the TeV scale and matching that of the current best particle colliders. A permanent EDM in any fundamental particle points towards time-reversal violating physics, and therefore also CP-symmetry violation via the CPT theorem. Such EDM experiments are also much more scalable than conventional particle accelerators and offer a practical alternative to detecting physics beyond the standard model as accelerator experiments become increasingly costly and complicated to maintain.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16864252",
"title": "Superinsulator",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 327,
"text": "A superinsulator is a material that at low temperatures under certain conditions has an infinite resistance and no current will pass through it. The superinsulating state has many parallels to the superconducting state, and can be destroyed (in a sudden phase transition) by increased temperature, magnetic fields and voltage.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
11fc51
|
After the Big Bang, why does matter only exist in pockets of galaxies? Why is it not more 'homogeneous'?
|
[
{
"answer": "Gravity. If there's any initial fluctuation at all, everything condenses down to points and filaments under gravity.\n\nHere's a cool video of a calculation exploring how this happens. Dark matter's role in large-scale structure formation in the universe is a big topic in computational astrophysics right now.\n_URL_2_\n\nMore here:\n_URL_0_\n_URL_1_",
"provenance": null
},
{
"answer": "This is actually a phenomenally cool question. giant_snark does a great job explaining what happens once gravity takes over, but what triggers that?\n\nSmall quantum fluctuations that are completely random existed when our universe was only a fraction of a second old. Then, suddenly, the universe expanded by 50 orders of magnitude (that number is impossible to wrap your head around). This cooled the universe tremendously, and \"locked in\" the quantum fluctuations in temperature generated by quantum movements of the quarks. The temperature differences resulted in density differences, and thus gravity could start taking over.\n\nYou can actually still see this today in the [CMB](_URL_0_) Those temperature differences were determined 13.7 billion years ago.",
"provenance": null
},
{
"answer": "The Universe is uniform on the largest scales - that is, if you average over a few hundreds of millions of light years. On smaller scales, things like galaxies formed from tiny overdensities that grew and then collapsed under their own gravity. As the Universe expands, even a region just slightly denser than its surroundings will expand at a slower rate than average, and eventually turn around and begin to contract, until the pressure exerted by gas balances it out and stars and galaxies start to form.\n\nThe cool thing, though, is where those overdensities come from. The best answer we have is in something called *cosmic inflation*, our idea of the very earliest moments of cosmic history. The idea is that for the briefest of moments, not even a trillionth of a trillionth of a second, just after the Big Bang, the Universe expanded at an accelerating pace (much like it's doing today, in fact, though the acceleration was far greater then).\n\nWe didn't come up with inflation for any reasons to do with structure formation, though. In fact, we came up with it for essentially the opposite reason - we wanted to understand why the Universe was so *smooth* and spatially flat, and why we don't see exotic particles like magnetic monopoles (particles with just a magnetic north or south pole, but not both). Inflation has a tendency to smooth over any inhomogeneities and curvature and leave everything dark and featureless, with particles like monopoles being diluted so heavily that most observable patches of universe don't even contain a single one.\n\nSo it was pretty amazing when it was realized that inflation also explains, quite naturally, where the matter we *do* see comes from, and why it makes the inhomogeneous structures we see. The matter comes from the energy left over from inflation; as long as the energy driving inflation interacts with matter, which seems quite reasonable, then when inflation ends that energy will decay into matter and radiation.\n\nBut most exciting, inflation ties the small-scale physics of quantum mechanics with the large-scale structure of the Universe. Quantum physics tells us that on the smallest scales, there's uncertainty in position and momentum, and so the density of matter on small scales fluctuates. Normally, these fluctuations average out with each other and die off in a fraction of a second. During inflation, though, the Universe is expanding so quickly that before those fluctuations could die off, they were expanding to be larger than the observable universe, so they couldn't communicate with other fluctuations! Thus they were left \"frozen in,\" quantum weirdness made bigger than the observable universe, leaving the Universe more dense in some regions and less dense in others. When inflation ended, these fluctuations all started to come back into contact with one another, eventually leaving the Universe with a patch of over- and underdense regions. It's the overdense ones that collapsed under their own gravity to form the galaxies and galaxy clusters we see today.",
"provenance": null
},
{
"answer": "Visualize it like a few drops of oil floating on water: even though the oil could spread out thinly and cover the entire surface area of the container of water, the oil will coalesce into tight droplets due to intermolecular forces. Gravity is the binding force on a larger scale.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "11522684",
"title": "Inhomogeneous cosmology",
"section": "Section::::History.:Inhomogeneous universe.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 970,
"text": "While the universe began with homogeneously distributed matter, enormous structures have since coalesced over billions of years: hundreds of billions of stars inside of galaxies, clusters of galaxies, superclusters, and vast filaments of matter. These denser regions and the voids between them must, under general relativity, have some effect, as matter dictates how space-time curves. So the extra mass of galaxies and galaxy clusters (and dark matter, should particles of it ever be directly detected) must cause nearby space-time to curve more positively, and voids should have the opposite effect, causing space-time around them to take on negative curvatures. The question is whether these effects, called backreactions, are negligible or together comprise enough to change the universe's geometry. Most scientists have assumed that they are negligible, but this has partly been because there has been no way to average space-time geometry in Einstein's equations.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1916944",
"title": "STAR detector",
"section": "Section::::The physics of STAR.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 567,
"text": "In the immediate aftermath of the Big Bang, the expanding matter was so hot and dense that protons and neutrons could not exist. Instead, the early universe comprised a plasma of quarks and gluons, which in today's cool universe are confined and exist only within composite particles (bound states) – the hadrons, such as protons and neutrons. Collisions of heavy nuclei at sufficiently high energies allow physicists to study whether quarks and gluons become deconfined at high densities, and if so, what the properties of this matter (i.e. quark–gluon plasma) are.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "189424",
"title": "Hot dark matter",
"section": "Section::::Role in Galaxy Formation.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 1299,
"text": "In terms of its application, the distribution of hot dark matter could also help explain how clusters and superclusters of galaxies formed after the Big Bang. Theorists claim that there exist two classes of dark matter: 1) those that \"congregate around individual members of a cluster of visible galaxies\" and 2) those that encompass \"the clusters as a whole.\" Because cold dark matter possesses a lower velocity, it could be the source of \"smaller, galaxy-sized lumps,\" as shown in the image. Hot dark matter, then, should correspond to the formation of larger mass aggregates that surround whole galaxy clusters. However, data from the cosmic microwave background radiation, as measured by the COBE satellite, is highly uniform, and such high-velocity hot dark matter particles cannot form clumps as small as galaxies beginning from such a smooth initial state, highlighting a discrepancy in what dark matter theory and the actual data are saying. Theoretically, in order to explain relatively small-scale structures in the observable Universe, it is necessary to invoke cold dark matter or WDM. In other words, Hot dark matter being the sole substance in explaining cosmic galaxy formation is no longer viable, placing hot dark matter under the larger umbrella of mixed dark matter (MDM) theory.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11971",
"title": "Galaxy formation and evolution",
"section": "Section::::Formation of disk galaxies.:Bottom-up theories.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 677,
"text": "More recent theories include the clustering of dark matter halos in the bottom-up process. Instead of large gas clouds collapsing to form a galaxy in which the gas breaks up into smaller clouds, it is proposed that matter started out in these “smaller” clumps (mass on the order of globular clusters), and then many of these clumps merged to form galaxies, which then were drawn by gravitation to form galaxy clusters. This still results in disk-like distributions of baryonic matter with dark matter forming the halo for all the same reasons as in the top-down theory. Models using this sort of process predict more small galaxies than large ones, which matches observations.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "43348949",
"title": "Galaxy group",
"section": "Section::::Types.:Compact Groups.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 348,
"text": "Compact groups of galaxies readily show the effect of dark matter, as the visible mass is greatly less than that needed to gravitationally hold the galaxies together in a bound group. Compact galaxy groups are also not dynamically stable over Hubble time, thus showing that galaxies evolve by merger, over the timescale of the age of the universe.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "50650",
"title": "Astronomy",
"section": "Section::::Specific subfields.:Galactic astronomy.\n",
"start_paragraph_id": 95,
"start_character": 0,
"end_paragraph_id": 95,
"end_character": 262,
"text": "Kinematic studies of matter in the Milky Way and other galaxies have demonstrated that there is more mass than can be accounted for by visible matter. A dark matter halo appears to dominate the mass, although the nature of this dark matter remains undetermined.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "192904",
"title": "Ultimate fate of the universe",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 506,
"text": "Observations made by Edwin Hubble during the 1920s–1950s found that galaxies appeared to be moving away from each other, leading to the currently accepted Big Bang theory. This suggests that the universe began – very small and very dense – about 13.8 billion years ago, and it has expanded and (on average) become less dense ever since. Confirmation of the Big Bang mostly depends on knowing the rate of expansion, average density of matter, and the physical properties of the mass–energy in the universe.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1o2m99
|
Why did France (West Francia) end up more unified than the rest of the Holy Roman Empire (East Francia)?
|
[
{
"answer": "During what era? Because France wasn't particularly unified until Richelieu.",
"provenance": null
},
{
"answer": "By West Francia and East Francia I am assuming you mean the kingdoms controlled by Charlemagne's descendants? After Charlemagne's death his son, Louis the Pious (crowned co-emperor with Charlemagne), took over and kept the kingdom intact for the most part. After Louis' death his three sons split the Empire. It was the custom of Frankish sons to split their father's possessions. Since Louis was proclaimed the legitimate heir to Charlemagne's throne, he was the only one to receive his father's blessing and the kingdom (he did rule in a kind of co-emperor deal, but he was the head honcho if you will). Since Louis had four boys, three of whom were \"legit\", they split their grandfather's empire. Lothair was named Emperor, but after a series of rebellions and negotiations each took an equal piece based on economic means. They were not split by geography but by what the land, cities, and imports/exports were worth. Charles the Bald received the western portion of the Empire, Louis the German received the eastern part, and Lothair I (the eldest) received the lands between the two, stretching down to Rome. He was also named Emperor and King of the Franks. The brothers often quarreled, trying to dispossess one another and seize each other's lands. Louis almost obtained the western portion but was stretched too thin along his eastern border and in Western Francia. \n\nThe reason the German part of the Frankish empire dissolved after Louis the German's death was his sons, nephews, and the rival powers on his borders like the Slavs and Magyars. So it was political and cultural differences that kept the German portion from remaining together; remember, their sons get their possessions. As for the Western Frankish Empire, his sons had a buffer zone from those rivals and also had the support of strong rulers. \n\nsources: [Struggle for Empire: Kingship and Conflict under Louis the German](_URL_1_)\n\n[Early Carolingian Empire: Prelude to Empire](_URL_0_)\n\n\nAlso, the label France is just for geography reasons. But for the medieval period it should be labeled the Frankish Kingdoms or Gaul, depending on when and where you're referring to.",
"provenance": null
},
{
"answer": "West Francia was not more unified than East Francia. East Francia did deal with the Magyar threat, but the Vikings devastated West Francia and helped to destabilize it. Basically the West Frankish Kings could not stop the vikings from pillaging at will, so they paid them off with ridiculous sums of silver, which bankrupted the Crown, which they then tried to pass off to the rest of nobility to collect. This goes on for 150 years or so of summer raids and destabilizes the government almost from the start, because nobility and ordinary citizens had no faith in the central government to protect them. Things were not much more centralized in East Francia, but it was the first major government in mainland Europe to really come together into a unified authority after Charlemagne, under Otto I in the early 10th century. As someone else suggested, West Francia did not really unite until after the 100 Years War.\n\nSource: Birth of the West, by Paul Collins",
"provenance": null
},
{
"answer": "Semantics aside, the HRE was less unified because the power of local landlords (particularly castellans) was disproportionate. A rich family would control and tax their own lands, and if they had a castle the emperor couldn't really control what they did. Geography played a big part in this, but I think the emperor being unable to control local powers is really the big reason.",
"provenance": null
},
{
"answer": "Paradoxically, it is because initially, power was much less centralised than in the East (with the Ottonians). The process of unification in France went from a totally shattered area in terms of power distribution, slowly amalgamating itself, to one dominant player who took it all.\nIn Germany, it went from a powerful central power that kept being checked by growing regional ones. The main thing is that the German side loaded itself with the title of Emperor, and thus kept having imperial dreams for the longest time. For the Emperor, East Francia (Germany) was not a goal, just a step, a power BASE.\n\nThe French kings never really had such ambition: their one and only goal was to unify under their direct rule (the domain) their whole kingdom. For the French kings, France was the goal to achieve. They went at it for generations and generations, through marriage, wars, and acquisitions. It was a more modest goal, and they got lucky to have one family (the Capetians) rule for an exceptionally long time.\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "43008790",
"title": "County of Flanders",
"section": "Section::::History.:Carolingians.\n",
"start_paragraph_id": 45,
"start_character": 0,
"end_paragraph_id": 45,
"end_character": 379,
"text": "After the Middle-Frankish kings died out, the rulers of the West and East-Frankish Kingdoms divided the Middle-Frankish kingdom amongst themselves in the treaty of Meerssen in 870. Now Western Europe had been divided into two sides: the solid West Francia (the later France) and the loose confederation of principalities of East Francia, that would become the Holy Roman Empire.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13289",
"title": "History of the Netherlands",
"section": "Section::::Early Middle Ages (411–1000).:Frankish dominance and incorporation into the Holy Roman Empire.\n",
"start_paragraph_id": 100,
"start_character": 0,
"end_paragraph_id": 100,
"end_character": 495,
"text": "The Carolingian empire would eventually include France, Germany, northern Italy and much of Western Europe. In 843, the Frankish empire was divided into three parts, giving rise to West Francia in the west, East Francia in the east, and Middle Francia in the centre. Most of what is today the Netherlands became part of Middle Francia; Flanders became part of West Francia. This division was an important factor in the historical distinction between Flanders and the other Dutch-speaking areas.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2324951",
"title": "Historic roads and trails",
"section": "Section::::Europe.:Frankish Empire.\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 360,
"text": "Francia or the Frankish Empire was the largest post-Roman Barbarian kingdom in Western Europe. It was ruled by the Franks during Late Antiquity and the Early Middle Ages. It is the predecessor of the modern states of France and Germany. After the Treaty of Verdun in 843, West Francia became the predecessor of France, and East Francia became that of Germany.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "42370",
"title": "History of Belgium",
"section": "Section::::Before independence.:Early Middle Ages.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 555,
"text": "The Frankish lands were divided and reunified several times under the Merovingian and Carolingian dynasties, but eventually were firmly divided into France and the Holy Roman Empire. The parts of the County of Flanders stretching out west of the river Scheldt (Schelde in Dutch, Escaut in French) became part of France during the Middle Ages, but the remainders of the County of Flanders and the Low Countries were part of the Holy Roman Empire, specifically they were in the stem duchy of Lower Lotharingia, which had a period as an independent kingdom.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "303481",
"title": "Francia",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 514,
"text": "Francia, also called the Kingdom of the Franks (), or Frankish Empire, was the largest post-Roman barbarian kingdom in Western Europe. It was ruled by the Franks during Late Antiquity and the Early Middle Ages. It is the predecessor of the modern states of France and Germany. After the Treaty of Verdun in 843, West Francia became the predecessor of France, and East Francia became that of Germany. Francia was among the last surviving Germanic kingdoms from the Migration Period era before its partition in 843.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21148",
"title": "Netherlands",
"section": "Section::::History.:Early Middle Ages (411–1000).\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 1566,
"text": "The Frankish Carolingian empire modeled itself after the Roman Empire and controlled much of Western Europe. However, as of 843, it was divided into three parts—East, Middle, and West Francia. Most of present-day Netherlands became part of Middle Francia, which was a weak kingdom and subject of numerous partitions and annexation attempts by its stronger neighbours. It comprised territories from Frisia in the north to the Kingdom of Italy in the south. Around 850, Lothair I of Middle Francia acknowledged the Viking Rorik of Dorestad as ruler of most of Frisia. When the kingdom of Middle Francia was partitioned in 855, the lands north of the Alps passed to Lothair II and consecutively were named Lotharingia. After he died in 869, Lotharingia was partitioned, into Upper and Lower Lotharingia, the latter part comprising the Low Countries that technically became part of East Francia in 870, although it was effectively under the control of Vikings, who raided the largely defenceless Frisian and Frankish towns lying on the Frisian coast and along the rivers. Around 879, another Viking raided the Frisian lands, Godfrid, Duke of Frisia. The Viking raids made the sway of French and German lords in the area weak. Resistance to the Vikings, if any, came from local nobles, who gained in stature as a result, and that laid the basis for the disintegration of Lower Lotharingia into semi-independent states. One of these local nobles was Gerolf of Holland, who assumed lordship in Frisia after he helped to assassinate Godfrid, and Viking rule came to an end.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "46889478",
"title": "Name of the Franks",
"section": "Section::::Francia (France).\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 328,
"text": "Under the reign of the Franks' Kings Clovis I, Charles Martel, Pepin the Short, and Charlemagne, the country was known as Kingdom of Franks or Francia. At the Treaty of Verdun in 843, the Frankish Empire was divided in three parts : West Francia (\"Francia Occidentalis\"), Middle Francia and East Francia (\"Francia Orientalis\").\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
5sv01u
|
How are the wave functions of antiparticles related to those of their "normal" counterparts?
|
[
{
"answer": "When you're talking about particles and antiparticles, you're generally outside the regime of standard one-body, nonrelativistic quantum mechanics. So the notion of a wavefunction is sort of abandoned.\n\nInstead you replace them with field operators. There is a mathematical operator denoted by \"C\" which is the \"charge conjugation operator\". It turns a particle into its antiparticle.\n\n > How does one mathematically describe particle/antiparticle annihilation?\n\nYou'd use quantum field theory. For example to calculate the probability of electron/positron annihilation into two photons using perturbation theory, you'd sum over a bunch of Feynman diagrams. The lowest-order contributions look like [this](_URL_0_).",
"provenance": null
},
{
"answer": "To describe particles properly you have to give up on the idea of wavefunctions and use fields instead. A field is an object with a value at every point in spacetime, and quantum fields are described by Quantum Field Theory.\n\nBasically you have a field for every particle-antiparticle pair. So you'd have the photon field (a.k.a. the EM field), or the electron-positron field. Particles like photons which are their own antiparticle are described by real fields, meaning the value of the field is a real number. Particles with distinct antiparticles like electrons and positrons are described by a complex field, so the field value is a complex number.\n\nHowever, it is *not* the case that the standard field corresponds to the particle and the conjugate field corresponds to the antiparticle. To see where particles and antiparticles come from you have to see how a field is actually constructed, which is a bit in-depth. Essentially, we use things called annihilation and creation operators (*A* and *A*\\*), which destroy or create a particle with some fixed given momentum. Importantly, they say *absolutely nothing* about the position of the particle (you can view this as a manifestation of the Heisenberg principle). So to get a value for a field, we basically do a Fourier transform and take a sum over all possible momenta:\n\n > ϕ = ∫d*p* 1/sqrt(2*E*) [*A*(*p*)e^(i*p**x*) + *A*\\*(*p*)e^(-i*p**x*)]\n\nThat is the definition of a (scalar) real quantum field. However, when we do this for a complex quantum field, we find that there are two separate sets of annihilation and creation operators: *B*, *B*\\* and *C*, *C*\\*. The field is then:\n\n > ψ = ∫d*p* 1/sqrt(2*E*) [*B*(*p*)e^(i*p**x*) + *C*\\*(*p*)e^(-i*p**x*)]\n\n > ψ\\* = ∫d*p* 1/sqrt(2*E*) [*B*\\*(*p*)e^(i*p**x*) + *C*(*p*)e^(-i*p**x*)]\n\nEach pair of operators corresponds to a unique particle. These are interpreted as the particle and the antiparticle. Why you end up with two sets of operators is a bit in-depth, but it's essentially an emergent property. If you know any solid state physics, it is similar to how phonons arise from considering the sum of the oscillations of each individual point in a lattice.",
"provenance": null
},
{
"answer": "While for a full treatment of particles and antiparticles, you need to use quantum field theory and move beyond wavefunctions as /u/RobusEtCeleritas says, within the context of the Dirac equation (which, in fact, led to the prediction of antimatter in the first place), we can formulate wavefunctions for matter and antimatter.\n\nThe Dirac equation can be used to describe electrons and positrons both relativistically and quantum mechanically. In the standard formulation of the Dirac equation, the wavefunction has four components, two corresponding to the electron and two corresponding to the positron (the antimatter partner of the electron).\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "12437648",
"title": "Antisymmetrizer",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 506,
"text": "In quantum mechanics, an antisymmetrizer formula_1 (also known as antisymmetrizing operator) is a linear operator that makes a wave function of \"N\" identical fermions antisymmetric under the exchange of the coordinates of any pair of fermions. After application of formula_1 the wave function satisfies the Pauli principle. Since formula_1 is a projection operator, application of the antisymmetrizer to a wave function that is already totally antisymmetric has no effect, acting as the identity operator.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11149",
"title": "Fresnel equations",
"section": "Section::::Theory.:Power ratios (reflectivity and transmissivity).\n",
"start_paragraph_id": 117,
"start_character": 0,
"end_paragraph_id": 117,
"end_character": 972,
"text": "The \"Poynting vector\" for a wave is a vector whose component in any direction is the \"irradiance\" (power per unit area) of that wave on a surface perpendicular to that direction. For a plane sinusoidal wave the Poynting vector is where and are due \"only\" to the wave in question, and the asterisk denotes complex conjugation. Inside a lossless dielectric (the usual case), and are in phase, and at right angles to each other and to the wave vector ; so, for s polarization, using the and components of and respectively (or for p polarization, using the and components of and ), the irradiance in the direction of is given simply by which is in a medium of intrinsic impedance . To compute the irradiance in the direction normal to the interface, as we shall require in the definition of the power transmission coefficient, we could use only the component (rather than the full component) of or or, equivalently, simply multiply by the proper geometric factor, obtaining .\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "38471907",
"title": "Phasor approach to fluorescence lifetime and spectral imaging",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 379,
"text": "Phasor approach refers to a method which is used for vectorial representation of sinusoidal waves like alternative currents and voltages or electromagnetic waves. The amplitude and the phase of the waveform is transformed into a vector where the phase is translated to the angle between the phasor vector and X axis and the amplitude is translated to vector length or magnitude.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "50903",
"title": "Wavelet",
"section": "Section::::Wavelet transforms.\n",
"start_paragraph_id": 81,
"start_character": 0,
"end_paragraph_id": 81,
"end_character": 759,
"text": "A wavelet is a mathematical function used to divide a given function or continuous-time signal into different scale components. Usually one can assign a frequency range to each scale component. Each scale component can then be studied with a resolution that matches its scale. A wavelet transform is the representation of a function by wavelets. The wavelets are scaled and translated copies (known as \"daughter wavelets\") of a finite-length or fast-decaying oscillating waveform (known as the \"mother wavelet\"). Wavelet transforms have advantages over traditional Fourier transforms for representing functions that have discontinuities and sharp peaks, and for accurately deconstructing and reconstructing finite, non-periodic and/or non-stationary signals.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19763060",
"title": "Slater–Condon rules",
"section": "Section::::Mathematical background.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 209,
"text": "In terms of an antisymmetrization operator (formula_1) acting upon a product of \"N\" orthonormal spin-orbitals (with r and \"σ\" denoting spatial and spin variables), a determinantal wavefunction is \"denoted\" as\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "50903",
"title": "Wavelet",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 542,
"text": "A wavelet is a wave-like oscillation with an amplitude that begins at zero, increases, and then decreases back to zero. It can typically be visualized as a \"brief oscillation\" like one recorded by a seismograph or heart monitor. Generally, wavelets are intentionally crafted to have specific properties that make them useful for signal processing. Using a \"reverse, shift, multiply and integrate\" technique called convolution, wavelets can be combined with known portions of a damaged signal to extract information from the unknown portions.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20177574",
"title": "Antiresonance",
"section": "Section::::Applications.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 717,
"text": "This result makes antiresonances useful in characterizing complex coupled systems which cannot be easily separated into their constituent components. The resonance frequencies of the system depend on the properties of all components and their couplings, and are independent of which is driven. The antiresonances, on the other hand, are dependent upon the component being driven, therefore providing information about how it affects the total system. By driving each component in turn, information about all of the individual subsystems can be obtained, despite the couplings between them. This technique has applications in mechanical engineering, structural analysis, and the design of integrated quantum circuits.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2j7lhy
|
why do we grow hair in specific places?
|
[
{
"answer": "Hair in armpits and groin is a dry lubricant, hair on your head is sun protection, men's facial hair is a sexual marker. ",
"provenance": null
},
{
"answer": "I actually had this question in my head, so I'll add on about something else which has been bothering me:\n\nWhy does the hair in our head have such a specific \"shape\"? It stops around your forehead but extends down your sideburns, goes around your ears and forms sharp edges around the back of your head. And it's the same for most people. How is this regulated?",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "26060462",
"title": "Human hair growth",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 374,
"text": "The growth of human hair occurs everywhere on the body except for the soles of the feet, the lips, palms of the hands, some external genital areas, the navel, scar tissue, and, apart from eyelashes, the eyelids. Hair is a stratified squamous keratinized epithelium made of multi-layered flat cells whose rope-like filaments provide structure and strength to the hair shaft.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8416397",
"title": "Long hair",
"section": "Section::::Biological significance.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 598,
"text": "Scientists also view the ability to grow very long hair as a result of sexual selection, since long and healthy hair is a sign of fertility and youth. An evolutionary biology explanation for this attraction is that hair length and quality can act as a cue to youth and health, signifying a woman's reproductive potential. As hair grows slowly, long hair may reveal 2–3 years of a person's health status, nutrition, age and reproductive fitness. Malnutrition, and deficiencies in minerals and vitamins due to starvation, cause loss of hair or changes in hair color (e.g. dark hair turning reddish).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "36881878",
"title": "Ear hair",
"section": "Section::::Structure.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 292,
"text": "Hair is a protein filament that grows from follicles in the dermis, or skin. With the exception of areas of glabrous skin, the human body is covered in follicles which produce thick terminal and fine vellus hair. It is an important biomaterial primarily composed of protein, notably keratin.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2494084",
"title": "Hair care",
"section": "Section::::Treatment of damage.:Hair care and nutrition.\n",
"start_paragraph_id": 108,
"start_character": 0,
"end_paragraph_id": 108,
"end_character": 601,
"text": "Genetics and health are factors in healthy hair. Proper nutrition is important for hair health. The living part of hair is under the scalp skin where the hair root is housed in the hair follicle. The entire follicle and root are fed by a supply of arteries, and blood carries nutrients to the follicle/root. Any time an individual has any kind of health concern from stress, trauma, medications of various sorts, chronic medical conditions or medical conditions that come and then wane, heavy metals in waters and food, smoking etc. these and more can affect the hair, its growth, and its appearance.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23891514",
"title": "Good Hair",
"section": "Section::::See also.:Rock on \"The Oprah Winfrey Show\".\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 215,
"text": "Ayana Byrd, an editor for \"Glamour\" magazine, said, \"The point is not to say hair is good or bad, it's to say that once we work through the history behind our hair, we can get to a place where it can just be hair.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "561253",
"title": "Mende people",
"section": "Section::::Female culture.:Hair.\n",
"start_paragraph_id": 59,
"start_character": 0,
"end_paragraph_id": 59,
"end_character": 377,
"text": "A woman's hair is a sign of femininity. Both thickness and length are elements that are admired by the Mende. Thickness means the woman has more individual strands of hair and the length is proof of strength. It takes time, care and patience to grow a beautiful, full head of hair. Ideas about hair root women to nature, the way hair grows is compared to the way forests grow.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "464073",
"title": "Hair follicle",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 743,
"text": "The function of hair in humans has long been a subject of interest and continues to be an important topic in society, developmental biology and medicine. Of all mammals, humans have the longest growth phase of scalp hair compared to hair growth on other parts of the body. For centuries, humans have ascribed esthetics to scalp hair styling and dressing and it is often used to communicate social or cultural norms in societies. In addition to its role in defining human appearance, scalp hair also provides protection from UV sun rays and is an insulator against extremes of hot and cold temperatures. Differences in the shape of the scalp hair follicle determine the observed ethnic differences in scalp hair appearance, length and texture.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
ah5zu9
|
How did soldiers know the names of enemy weapons and equipment?
|
[
{
"answer": "u/kieslowskifan has an answer to this! \n\n_URL_0_ ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "7946",
"title": "Dog tag",
"section": "Section::::History.:American Civil War.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 507,
"text": "Manufacturers of identification badges recognized a market and began advertising in periodicals. Their pins were usually shaped to suggest a branch of service, and engraved with the soldier's name and unit. Machine-stamped tags were also made of brass or lead with a hole and usually had (on one side) an eagle or shield, and such phrases as \"War for the Union\" or \"Liberty, Union, and Equality\". The other side had the soldier's name and unit, and sometimes a list of battles in which he had participated.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11267248",
"title": "Name tag",
"section": "Section::::Military.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 768,
"text": "Military personnel commonly wear name badges on their uniforms, though usually displaying only the family (last) name. The use of name tags probably originated with laundry tags, used to identify clothing belonging to particular individuals. During World War II the United States military began making use of external name tags, in particular on flight clothing and combat uniforms worn by marines and paratroopers. The use of cloth name tapes became common by the Korean War and its use spread to other armies. The Canadian Army began using cloth name tapes on the combat uniform introduced in the 1960s. During this period, the use of name tags extended from combat and work clothing only, to the dress uniform (and tags made of engraved plastic rather than cloth).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13313754",
"title": "Signaculum",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 301,
"text": "Similar items for identifying civilian goods and equipment have been found as well. Signacula of this variety were not discs that were carried on one's person as with the Roman army equivalent, but are more like modern-day product labels, giving information on the item's manufacturer and affiliates.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "28059150",
"title": "Formation patch",
"section": "Section::::History.:World War II.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 876,
"text": "By the time of the Second World War, the various armies did not feel a perceived need to identify individual battalions on battledress uniforms. The German Army had a system of coloured bayonet knots that identified the wearer's company, number shoulder strap buttons that identified the wearer's company/battalion, and shoulder straps that identified the wearer's regiment, but had no distinguishing divisional insignia other than the cuff titles of the 'elite' formations. The British Army prohibited all identifying marks on its Battle Dress uniforms in 1939 save for drab regimental slip-on titles, but in 1941 introduced formation patches to identify the wearer's division. They were initially referred to by the British as \"Divisional Signs\", but this was soon changed to \"Formation Badges\". By the end of the war, Corps, Armies, and Army Groups had their own insignia.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "977374",
"title": "Military uniform",
"section": "Section::::History.:Late Roman and Byzantine.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 462,
"text": "The regular thematic (provincial) and Tagmata (central) troops of the Byzantine Empire (East Roman) are the first known soldiers to have had what would now be considered regimental or unit identification. During the 10th century, each of the cavalry \"banda\" making up these forces is recorded as having plumes and other distinctions in a distinctive colour. Officers wore a waist sash or \"pekotarion\", which may have been of different colours according to rank.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24373500",
"title": "Tabula ansata",
"section": "Section::::Overview.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 296,
"text": "\"Tabulae ansatae\" identifying soldiers' units have been found on the \"tegimenta\" (leather covers) of shields, for example in Vindonissa (Windisch, Switzerland). Sculptural evidence, for example on the metopes from the Tropaeum Traiani (Adamclisi, Romania), shows that they were also used for the\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "878854",
"title": "Marksmanship badges (United States)",
"section": "Section::::Marksmanship competition badges.:Distinguished marksmanship programs.:Excellence-in-competition badges.:Changes in EIC badge design.\n",
"start_paragraph_id": 120,
"start_character": 0,
"end_paragraph_id": 120,
"end_character": 2536,
"text": "From 1903 to 1958, the U.S. Army EIC badges were known as Team Marksmanship Badges. Prior to that, the Army awarded a variety of large unique that went by a variety of names from 1880 to 1903. were awarded in gold, silver, and bronze consisting of oval pendants with enameled targets in the center that were superimposed over crossed rifles with bayonets, crossed carbines with slings, a heavy machine gun, or placed between two revolvers. Above the enameled target was the letters \"U.S.\"; but for a short time, the word \"INFANTRY\" or \"CAVALRY\" (unit dependent) appeared above the target while the letters U.S. were embossed beneath the target. The pendant hung from two different brooch designs. From 1903 to 1906 the brooch had rounded arrowhead ends (sean today in the U.S. Marine Corps's EIC badges) bearing the name \"ARMY,\" \"DEPARTMENT,\" or \"DIVISION\" reflecting the level of competition for which the badge was earned. In 1906 the brooch was redesigned with swallow-tail ends bearing the name of the Army corps marksmanship team flanked by the words \"ARMY,\" on the left, and \"TEAM,\" on the right. In 1923, the Army updated the Team Marksmanship Badges with a new three piece design which was awarded in three grades; gold, silver, and bronze for pistol, rifle, and automatic rifle. There were four components to this new badge; the brooch, clasp, Team Disk, and pendant. A plain brooch with a circular center device was used to identify an Army corps or department level award. A wreath laden brooch was used to identify a national or Army level award. A gold, silver, or bronze (score dependent) replica of either crossed Flintlock Pistols, Muskets, or M1918 Browning Automatic Rifles (BARs) hung from the brooch which supported the badge's bronze pendant. The pendant had a bow with two crossed arrows at its center surrounded by a ring of 13 stars which was encircled by an oak wreath. 
For national and Army level awards, an enameled ring, known as the Team Disk, was placed behind the pendant's ring of 13 stars and was colored to match the branch of service color of the awarded team. Today's Army EIC badges, which began in 1958, are almost identical to the Team Marksmanship Badges with the following exceptions: only one version of the brooch exists and bears the name \"U.S. ARMY;\" the crossed BARs, Team Disks, and gold version of the crossed weapons have been deleted. Also, the entire EIC badge is now cast in either bronze or silver, versus having just the crossed weapons being cast in the metal earned by the shooter.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
dtclmy
|
when you suck on an m & m, why does it feel smooth, then rough, then smooth again?
|
[
{
"answer": "The smooth part is likely the candy glaze, the rough would be the actual shell of the m & m and then the chocolate.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "42526",
"title": "Etching",
"section": "Section::::Faults.\n",
"start_paragraph_id": 71,
"start_character": 0,
"end_paragraph_id": 71,
"end_character": 407,
"text": "\"Foul-bite\" or \"over-biting\" is common in etching, and is the effect of minuscule amounts of acid leaking through the ground to create minor pitting and burning on the surface. This incidental roughening may be removed by smoothing and polishing the surface, but artists often leave faux-bite, or deliberately court it by handling the plate roughly, because it is viewed as a desirable mark of the process.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31300435",
"title": "Sliding criterion (geotechnical engineering)",
"section": "Section::::Sliding-angle.:Roughness small scale (\"Rs\").\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 316,
"text": "The first term \"rough\", \"smooth\", or \"polished\" is established by feeling the surface of the discontinuity; \"rough\" hurts when fingers are moved over the surface with some (little) force, \"smooth\" feels that there is resistance to the fingers, while \"polished\" gives a feeling about similar to the surface of glass.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "28967920",
"title": "Rough with the Smooth",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 207,
"text": "\"Rough with the Smooth\" is a song by the British singer Shara Nelson. It was the first single released from her second solo album \"Friendly Fire\" in 1995. The single peaked at no.30 on the UK Singles Chart.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "867815",
"title": "Petrissage",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 343,
"text": "Petrissage (from French \"pétrir\", \"to knead\") are massage movements with applied pressure which are deep and compress the underlying muscles. Kneading, wringing, skin rolling and pick-up-and-squeeze are the petrissage movements. They are all performed with the padded palmar surface of the hand, the surface of the finger and also the thumbs.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6501227",
"title": "Bit mouthpiece",
"section": "Section::::Bits without joints.:Straight-bar and Mullen mouth.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 624,
"text": "Action: The mullen mouth and straight bar are fairly similar in action, placing pressure on the tongue, lips, and bars. The mullen provides extra space for the tongue, instead of constantly pushing into it, resulting in more tongue relief, and making it more comfortable, but the mullen does not have as high of a port as a curb, thus does not offer full tongue relief. This bit is generally considered a very mild mouthpiece, although this varies according to the type of bit leverage (snaffle, pelham or curb), and improper use may make it harsh, since the majority of the bit pressure is applied on the sensitive tongue.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6325194",
"title": "Distortion (music)",
"section": "Section::::Theory and circuits.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 412,
"text": "\"Soft clipping\" gradually flattens the peaks of a signal which creates a number of higher harmonics which share a harmonic relationship with the original tone. \"Hard clipping\" flattens peaks abruptly, resulting in higher power in higher harmonics. As clipping increases a tone input progressively begins to resemble a square wave, which has odd number harmonics. This is generally described as sounding \"harsh\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2293147",
"title": "Bottle scraper",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 524,
"text": "The scraper is made of a long shaft, frequently around in length. On one side is a small flexible rubber spatula head roughly across set perpendicular to the shaft. The head is flexible and usually has a rounded half-circle shape one side useful for scraping round bottles and jars and a flat side with two right angles useful for scraping out cartons. The head is flexible so that it can be pushed into and pulled out of bottles whose mouth is smaller than the fully expanded head of the scraper but larger than the shaft.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2jsdk8
|
During the Spanish Reconquista, did much of the Muslim population convert to Catholicism?
|
[
{
"answer": "Follow up question, during Muslim Iberia what per centage of the population stayed christian?",
"provenance": null
},
{
"answer": "Different states treated the Muslims differently.\n\nCastile and it's possessions were known to be extremely aggressive towards Muslims and Jews. However, for most of the period of the Reconquista, this was in the form of a heavy tax burden placed on non-Christians, similar to the Jizya (the Muslim tax on non-Muslims) in place earlier, although probably much heavier. However, prior to the completion of the campaigns, this monetary contribution was probably more beneficial, as it encouraged conversions in the same way the Jizya did; that is, without having to resort to costly and dangerous population purges. The key thing is that any attempt to avoid the heavy (often crushing) burdens placed on them was harshly punished. This was an easy way to target individuals. In addition, huge swaths of land were appropriated by incoming Castilian nobility, which was often still worked by the formerly Muslim peasantry. This gave landowners another important leverage over those that worked their land.\n\nIn Aragon, things worked a little differently. Initially attempts were made to incorporate much of the existing leadership structure of the conquered areas of Catalonia and Valencia, but a revolt in the 13th century by Muslim leadership put the kibosh to that. However, within the coastal cities relatively lax treatment was available to Muslims and Jews, as these populations and their trading relationships were seen as very valuable.\n\nIn both Castile and Aragon, things changed drastically in the very end of the 15th and early 16th centuries. Muslims and Jews were given the option of conversion of expulsion, and this is when the Spanish Inquisition earned it's reputation. Paranoia about falsely converted Muslims and Jews was what gained them that. \n\nI am not as familiar with Portugal however. 
It does seem that in the early 16th century they too took a harsher tone against Muslims by expelling all Moors, but I am not as certain as to the levels of persecution prior to that. It is important to note that Portugal completed their own Reconquista far before Castile.\n\nAs for the ultimate fate of the Muslim population of Spain, most of them would be descendents of the original inhabitants there before the invasion by the Caliphate who had converted over the centuries, and probably not all that ethnically different from their northern neighbors. The Muslims in leadership positions were the ones who would be the ones most likely to be expelled on sight or killed. Southern Spain has a population with higher 'Moorish' blood origins, likely thanks to greater intermingling of the population with North Africa's due to proximity as well as time spent under their rule (leading to the distinct 'Andalusian' ethnic/cultural group.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "348444",
"title": "Persecution of Muslims",
"section": "Section::::Medieval.:Iberian Peninsula.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 1607,
"text": "During the expansion south of the northern Christian kingdoms, depending on the local capitulations, local Muslims were allowed to remain (Mudéjars) with extreme restrictions, while some were forcefully converted into the Christian faith. After the conquest of Granada, all the Spanish Muslims were under Christian rule. The new acquired population spoke Arabic or Mozarabic, and the campaigns to convert them were unsuccessful. Legislation was gradually introduced to remove Islam, culminating with the Muslims being forced to convert to Catholicism by the Spanish Inquisition. They were known as Moriscos and considered New Christians. Further laws were introduced, as on 25 May 1566, stipulating that they 'had to abandon the use of Arabic, change their costumes, that their doors must remain open every Friday, and other feast days, and that their baths, public and private, to be torn down.' The reason doors were to be left open so as to determine whether they secretly observed any Islamic festivals. King Philip II of Spain ordered the destruction of all public baths on the grounds of them being relics of infidelity, notorious for their use by Muslims performing their purification rites. The possession of books or papers in Arabic was near concrete proof of disobedience with severe repercussions. On 1 January 1568, Christian priests were ordered to take all Morisco children between the ages of three and fifteen, and place them in schools, where they were forced to learn Castillian and Christian doctrine. All these laws and measures required force to be implemented, and from much earlier.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "46006",
"title": "Freedom of religion",
"section": "Section::::History.:Europe.:Religious intolerance.\n",
"start_paragraph_id": 34,
"start_character": 0,
"end_paragraph_id": 34,
"end_character": 538,
"text": "After the fall of the city of Granada, Spain, in 1492, the Muslim population was promised religious freedom by the Treaty of Granada, but that promise was short-lived. In 1501, Granada's Muslims were given an ultimatum to either convert to Christianity or to emigrate. The majority converted, but only superficially, continuing to dress and speak as they had before and to secretly practice Islam. The Moriscos (converts to Christianity) were ultimately expelled from Spain between 1609 (Castile) and 1614 (rest of Spain), by Philip III.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34163980",
"title": "Spanish–Moro conflict",
"section": "Section::::Wars during the 1600s.:Background.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 871,
"text": "Following the reconquista, a period during which Spanish and Christian culture were restored to those areas of Spain invaded by the Umayyad Caliphate, the Inquisition required Jews and Muslims to convert to Roman Catholicism, or face exile or the death penalty. Thus, the Spaniards tried to suppress Islam in areas they conquered. To this end, they attacked the Moro Muslim sultanates in the south at Mindanao. The Moro Datus and sultans raided and pillaged Spanish towns in the northern Philippine islands in retaliation for Spanish attacks, and terrorized the Spanish invaders with constant piracy. The Spanish were prepared to conquer Mindanao and the Moluccas after establishing forts in 1635, but the Chinese threatened the Spanish with invasion, forcing them to pull back to defend Manila. Several thousand Chinese who were evicted by the Spanish joined the Moros.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1188632",
"title": "Alexander the Great in the Quran",
"section": "Section::::Islamic depictions of Alexander the Great.:Andalusian traditions.\n",
"start_paragraph_id": 106,
"start_character": 0,
"end_paragraph_id": 106,
"end_character": 1127,
"text": "By 1236 AD, the Reconquista was essentially completed and Europeans had retaken the Iberian peninsula from the Muslims, but the Emirate of Granada, a small Muslim vassal of the Christian Kingdom of Castile, remained in Spain until 1492 AD. During the Reconquista, Muslims were forced to either convert to Catholicism or leave the peninsula. The descendants of Muslims who converted to Christianity were called the Moriscos (meaning \"Moor-like\") and were suspecting of secretly practicing Islam. The Moriscos used a language called Aljamiado, which was a dialect of the Spanish language (Mozarabic) but was written using the Arabic alphabet. Aljamiado played a very important role in preserving Islam and the Arabic language in the life of the Moriscos; prayers and the sayings of Muhammad were translated into Aljamiado transcriptions of the Spanish language, while keeping all Quranic verses in the original Arabic. During this period, a version of the Alexander legend was written in the Aljamaido language, building on the Arabic \"Qisas Dhul-Qarnayn\" legends as well as Romance language versions of the \"Alexander romance\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "48870647",
"title": "Rebellion of the Alpujarras (1499–1501)",
"section": "Section::::Aftermath.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 857,
"text": "A proclamation in 1502 extended these forced conversions to the rest of the lands of Castile, even though those outside Granada had nothing to do with the rebellion. The newly converted Muslims were known as \"nuevos cristianos\" (\"new Christians\") or \"moriscos\" (lit. \"Moorish\"). Although they converted to Christianity, they maintained their existing customs, including their language, distinct names, food, dress and even some ceremonies. Many secretly practiced Islam, even as they publicly professed and practiced Christianity. In return, the Catholic rulers adopted increasingly intolerant and harsh policies in order to eradicate these characteristics. This culminated in Philip II's \"Pragmatica\" of 1 January 1567 which ordered the Moriscos to abandon their customs, clothing and language. The \"pragmatica\" triggered the Morisco revolts in 1568–1571.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1863799",
"title": "La Convivencia",
"section": "Section::::End of the Convivencia.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 710,
"text": "Similarly the Muslims of Iberia were forced to convert or face either death or expulsion. This happened even though the Granadan Muslims had been assured of religious freedom at the time of their surrender. Between 1500 and 1502 all remaining Muslims of Granada and Castile were converted. In 1525, Muslims in Aragon were similarly forced to convert. The Muslim communities who converted became known as Moriscos. Still they were suspected by the \"old Christians\" of being crypto-Muslims and so between 1609 and 1614 their entire population of 300,000 was forcibly expelled. All these expulsions and conversions resulted in Catholic Christianity becoming the sole sanctioned religion in the Iberian Peninsula.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1006659",
"title": "Ronda",
"section": "Section::::History.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 688,
"text": "The Spanish Inquisition affected the Muslims living in Spain greatly. Shortly after 1492, when the last outpost of Muslim presence in the Iberian Peninsula, Granada, was conquered, the Spanish decreed that all Muslims must either vacate the peninsula without their belongings or convert. Many people overtly converted to keep their possessions while secretly practicing their religion. Muslims who converted were called Moriscos. They were required to wear upon their caps and turbans a blue crescent. Traveling without a permit meant a death sentence. This systematic suppression forced the Muslims to seek refuge in mountainous regions of southern Andalusia; Ronda was one such refuge.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
80x47m
|
how and why did apa become the standard for referencing sources?
|
[
{
"answer": "In research papers for publication, it's usually *not* standard (at least in most of the journals I am familiar with), partly due to the simple reason that in printed material *words cost money.* It's more common in review articles, perhaps because in primary research articles the other sources are just used for background while in reviews *most* of the content comes from other places so it makes more sense to show more of the citations. \n\nAlso, APA format has been around longer than computers, and citation-managing software specifically. Nowadays, you can automate citations in a word processor and auto-adjust the numbering pretty easily. But back in the day of typewriters and early word processors, if you wanted to number things you had to *manually* number things. So if you decide to add in another reference later, you would have to manually re-number *everything.* And if you're working with 30, 40, 50+ references... you get the idea. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "360397",
"title": "APA style",
"section": "Section::::Characteristics of APA style citation.:In-text citations.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 404,
"text": "APA style uses an author-date reference citation system in the text with an accompanying reference list. That means that to cite any reference in a paper, the writer should cite the author and year of the work, either by putting both in parentheses separated by a comma (parenthetical citation) or by putting the author in the narrative of the sentence and the year in parentheses (narrative citation). \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "360397",
"title": "APA style",
"section": "Section::::Characteristics of APA style citation.:Reference list.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 447,
"text": "In the APA reference list, the writer should provide the author, year, title, and source of the cited work in an alphabetical list of references. If a reference is not cited in the text, it should not be included in the reference list. The reference format varies slightly depending on the document type (e.g., journal article, edited book chapter, blog post), but broadly speaking always follows the same pattern of author, date, title, source. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23952830",
"title": "A Manual for Writers of Research Papers, Theses, and Dissertations",
"section": "Section::::Structure and content of the manual.:Part 2: Source Citation.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 416,
"text": "The more-concise author-date style (sometimes referred to as the \"reference list style\") is more common in the physical, natural, and social sciences. This style involves sources being \"briefly cited in the text, usually in parentheses, by author’s last name and year of publication\" with the parenthetical citations corresponding to \"an entry in a reference list, where full bibliographic information is provided.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3604693",
"title": "H-index",
"section": "Section::::Alternatives and modifications.\n",
"start_paragraph_id": 51,
"start_character": 0,
"end_paragraph_id": 51,
"end_character": 224,
"text": "BULLET::::- The \"i\"10-index indicates the number of academic publications an author has written that have been cited by at least ten sources. It was introduced in July 2011 by Google as part of their work on Google Scholar.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "423362",
"title": "Citation index",
"section": "Section::::History.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 658,
"text": "The earliest known citation index is an index of biblical citations in rabbinic literature, the \"Mafteah ha-Derashot\", attributed to Maimonides and probably dating to the 12th century. It is organized alphabetically by biblical phrase. Later biblical citation indexes are in the order of the canonical text. These citation indices were used both for general and for legal study. The Talmudic citation index \"En Mishpat\" (1714) even included a symbol to indicate whether a Talmudic decision had been overridden, just as in the 19th-century \"Shepard's Citations\". Unlike modern scholarly citation indexes, only references to one work, the Bible, were indexed.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1128564",
"title": "ISO 690",
"section": "Section::::Characteristics.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 290,
"text": "ISO 690 prescribes a referencing scheme with a fixed order of bibliographic elements in which the publication date appears after the \"production information\" of \"place\" and \"publisher\", but it allows an exception for the Harvard system, in which the date appears after the creator name(s).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12055142",
"title": "Academic authorship",
"section": "Section::::Definition.:Authorship in the social sciences.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 772,
"text": "The American Psychological Association (APA) has similar guidelines as medicine for authorship. The APA acknowledge that authorship is not limited to the writing of manuscripts, but must include those who have made substantial contributions to a study such as \"formulating the problem or hypothesis, structuring the experimental design, organizing and conducting the statistical analysis, interpreting the results, or writing a major portion of the paper\". While the APA guidelines list many other forms of contributions to a study that do not constitute authorship, it does state that combinations of these and other tasks may justify authorship. Like medicine, the APA considers institutional position, such as Department Chair, insufficient for attributing authorship.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
23khvf
|
is a 200hp car two times faster than a 100hp car?
|
[
{
"answer": "No, it just has twice as much health",
"provenance": null
},
{
"answer": "Most likely not. The horsepower is a rating of the power output of the engine, but the power consumption of a car doesn't usually scale linearly with speed. For example, aerodynamic drag is approximately proportional to the ~~fourth~~ third power of speed, and rolling resistance is quadratic.",
"provenance": null
},
{
"answer": "No.\nThe faster you go, the more wind and rolling resistance you generate. This number goes up far faster than linearly. \n\nSimplifying it a lot, it looks like the horsepower needed to overcome just air resistance increases with the cube of speed (a power of 3), plus a bunch of funny constants we're going to ignore for this example. Ignoring all the constants to simplify to the meat of the equation, you'll get something like Horsepower Needed = Speed ^ 3. The numbers this gives are way off from realistic, though, so we'll just adjust by a factor of 10000 to get normal looking numbers, giving us \n\n Horsepower Needed = (Speed ^ 3) / 10,000\n\n Again, this isn't the real formula, just an order-of-magnitude approximation. \n\n\nPunching in a few numbers with this *extremely* simplified version of things, you'll get: \n\n 100hp = max 100mph\n\n 340hp = max 150mph \n\n 800hp = max 200mph\n\n 1500hp = max 250mph \n\n\nIf you're a bit older than five, go ahead and read here: [linky](_URL_0_)",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "1043152",
"title": "General Motors Ultralite",
"section": "",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 204,
"text": "Its three-cylinder 1.5 L two-stroke engine could produce 111 hp (83 kW), which made a speed of 135 mph (217 km/h) possible. The car could accelerate from 0 to 60 mph (97 km/h) in less than eight seconds.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "58764220",
"title": "Vauxhall Big Six",
"section": "Section::::Model BX/BY and BXL.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 285,
"text": "A 27hp BY was tested by The Motor magazine in 1934 and achieved a top speed of 72mph and accelerated from 0-60mph in 28 seconds. Autocar magazine tested a 20hp BY in 1936 and recorded 0-60 in 36.5 seconds. They did not record an actual top speed but stated that it would exceed 70mph.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29790599",
"title": "Prime Ministerial Car",
"section": "Section::::General specifications.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 232,
"text": "The car has a 5.0 litre supercharged petrol engine producing 503 bhp, with a top speed of , and is capable of reaching from stationary in 9.4 seconds, slower than the original due to the substantially greater weight of the vehicle.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4450574",
"title": "Škoda 1000 MB",
"section": "Section::::A new era for Škoda.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 386,
"text": "The 1000 MB’s overall performance was acceptable, especially when you remember how small an engine it had for a car of its size (as mentioned, a 1-litre engine in a car 13 feet 8 inches long by 5 feet 3 inches wide). The top speed was 120 km/h (75mph), reaching 100 km/h (62mph) from standstill in 27 seconds. Overall fuel economy was around 36 miles per gallon (6.5 litres per 100km).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3428890",
"title": "Fuel saving device",
"section": "Section::::Thermodynamic efficiency.\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 466,
"text": "For example, if an automobile typically gets 20 miles (32.19 km) per gallon with a 20% efficient engine that has a 10:1 compression ratio, a carburetor claiming 100MPG would have to increase the efficiency by a factor of 5, to 100%. This is clearly beyond what is theoretically or practically possible. A similar claim of 300MPG for any vehicle would require an engine (in this particular case) that is 300% efficient, which violates the First Law of Thermodynamics.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3571283",
"title": "Jensen C-V8",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 365,
"text": "The car was one of the fastest production four-seaters of its era. The Mk II, capable of , ran a quarter mile (~400 m) in 14.6 seconds, and accelerated from 0– in 6.7 seconds. It was also one of the quickest cars to 60 mph in the world being significantly faster than such performance cars of the period as the Lamborghini Miura, Aston Martin DB5 and Jaguar XK-E. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3830734",
"title": "Rolls-Royce Phantom III",
"section": "Section::::Engineering.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 283,
"text": "The sheer bulk of the car is reflected in its performance figures. An example tested in 1938 by The English Autocar magazine returned a top speed of 140 km/h (87½ mph) and a 0 - 60 mph (0 – 96 km/h) time of 16.8 seconds. The overall fuel consumption quoted from that road test was .\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
4xkant
|
why do actors tend to put themselves as "executive producers" & "producers" after being in a television show for a while?
|
[
{
"answer": "This indicates they are not only acting in the show, but taking a stronger creative or production role *behind the scenes*. They are working on MAKING the show in addition to appearing on screen, but not every actor makes this change.",
"provenance": null
},
{
"answer": "It may be just a prestige title, but it may also reflect that when they renegotiated their contract they got some % share of the shows profits.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "19508643",
"title": "Television show",
"section": "Section::::Production.:Development.:Other nations.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 609,
"text": "The production company is often separate from the broadcaster. The executive producer, often the show's creator, is in charge of running the show. They pick the crew and help cast the actors, approve and sometimes write series plots—some even write or direct major episodes—while various other producers help to ensure that the show runs smoothly. Very occasionally, the executive producer will cast themselves in the show. As with filmmaking or other electronic media production, producing of an individual episode can be divided into three parts: pre-production, principal photography, and post-production.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "46413",
"title": "Executive producer",
"section": "Section::::Television.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 401,
"text": "In television, an executive producer usually supervises the creative content and the financial aspects of a production. Some writers, like Stephen J. Cannell, Tina Fey, and Ryan Murphy, have worked as both the creator and the producer of the same TV show. In case of multiple executive producers on a TV show, the one outranking the others is called the showrunner, or the leading executive producer.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "845728",
"title": "Television producer",
"section": "Section::::Types of television producers.:Post-production producer or post-production coordinator.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 209,
"text": "In live television or \"as-live\", an executive producer seldom has any operational control of the show. His/her job is to stand back from the operational aspects and judge the show as an ordinary viewer might.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "426479",
"title": "Showrunner",
"section": "Section::::History.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 771,
"text": "Traditionally, the executive producer of a television program was the \"chief executive\", responsible for the show's creative direction and production. Over time, the title of executive producer was applied to a wider range of roles—from someone who arranges financing to an \"angel\" who holds the title as an honorific with no management duties in return for providing backing capital. The term \"showrunner\" was created to identify the producer who holds ultimate management and creative authority for the program. The blog and book \"Crafty Screenwriting\" defines a showrunner as \"the person responsible for all creative aspects of the show and responsible only to the network (and production company, if it's not [their] production company). The boss. Usually a writer.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "845728",
"title": "Television producer",
"section": "Section::::Writer as \"producer\".\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 543,
"text": "Because of the restrictions the Writers Guild of America screenwriting credit system places on writing credits, many script writers in television are credited as \"producers\" instead, even though they may not engage in the responsibilities generally associated with that title. On-screen, a \"producer\" credit for a TV series will generally be given to each member of the writing staff who made a demonstrable contribution to the final script. The actual producer of the show (in the traditional sense) is listed under the credit \"produced by\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10276322",
"title": "Painkiller Jane (TV series)",
"section": "Section::::Production.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 205,
"text": "The series credits include several people in the role of producer. Most are credited only for a few episodes. This includes Loken (the star), who is credited as co-executive producer for several episodes.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "46413",
"title": "Executive producer",
"section": "Section::::Motion pictures.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 230,
"text": "Executive producers vary in involvement, responsibility and power. Some executive producers have hands-on control over every aspect of production, some supervise the producers of a project, while others are involved in name only.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
a1dbzg
|
Is there a historical consensus about the girls who started the accusations of the Salem Witch Trials? Were they put up to it by others? Were they psychopaths? Did they actually believe they were being afflicted by witchcraft?
|
[
{
"answer": "To add to what the others have said, those accusers who had suffered from the conflict with neighbouring Indians had had their lives upended. Those who were orphaned were now living with relatives or friends of their late parents, and those whose families had been displaced lost whatever livelihoods they had previously worked. Their prospects, both for marriage and the rest of their life, were greatly diminished. They could provide little in the way of a dowry, a vital part of arranging a beneficial marriage, and their newfound benefactors (if they had them) had other priorities than their new wards. It is perfectly understandable to look at their situation from their perspective, and feel resentment and anger towards the events that had led to their diminished position; this was the problem. They had been taught that feeling this way made them perfect recruits for the Devil’s cause. He could use their resentment to slip through their defences of faith and use them to further his diabolic aims. \n\nRichard Godbeer suggests that while these first accusers may have genuinely believed that they were the victims of the Devil, their ‘possession’ gave them a legitimate method of expressing their grievances in a society that disapproved of such self-pity. It’s hard for a modern perspective to understand how firmly held beliefs in magic and the Devil were. In other trials, some individuals willingly handed themselves over for the crimes they believed they had committed. Alexander Sussums of Long Melford, Essex had volunteered to be searched by a witch finder during the East-Anglian panic, out of a genuine belief that he had been a witch for over a decade and a half. Through his guilt and negativity, combined with a genuine belief in the power of the Devil and his mother’s reputation for witchcraft, Sussums convinced himself that he too was a witch.\n\nSuch was his conviction that he actively sought out the man who could, and did, order his arrest and trial for capital crimes (although he was eventually pardoned). Of course, with a society as deeply strained as 17th century New England, there were definitely accusations driven by more mundane motivations, but it is very likely that at least some of the young women who declared their possession did so out of a genuine belief that they were the victims of the Devil.\n\n* Anderson, Virginia Dejohn, 'New England in the Seventeenth Century', in Canny, Nicholas (ed.) *The Oxford History of the British Empire: Volume I: The Origins of Empire*\n* Godbeer, Richard, ‘Witchcraft in British America’, in Levack, Brian (ed.) *The Oxford Handbook of Witchcraft in Early Modern Europe and Colonial America*\n* Hansen, Chadwick, ‘Andover Witches and the Causes of the Salem Witchcraft Trials’, in Levack, Brian (ed.) *The Oxford Handbook of Witchcraft in Early Modern Europe and Colonial America*\n* Le Beau, Bryan F., *The Story of the Salem Witch Trials*\n* Levack, Brian, ‘State-Building and Witch-Hunting’, in Oldridge, Darren (ed.), *The Witchcraft Reader*\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "279637",
"title": "Abigail Williams",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 221,
"text": "Abigail Williams (July 12, 1680 – c. October 1697) was one of the initial accusers in the Salem witch trials. The trials led to the arrest and imprisonment of more than 150 innocent people suspected of witchcraft. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20538512",
"title": "Elizabeth Hubbard (Salem witch trials)",
"section": "Section::::Involvement in Salem Witch Trials.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 292,
"text": "A group of girls ranging in age from twelve to twenty were the main accusers in the Salem witch trials. This group, of which Elizabeth Hubbard was a part, also included Ann Putnam, Jr., Mary Walcott, Elizabeth “Betty” Parris, Abigail Williams, Elizabeth Booth, Mercy Lewis, and Mary Warren. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31805518",
"title": "Elizabeth Booth",
"section": "Section::::Booth's role in the Salem Witch Trials.:Accusations against others.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 549,
"text": "Booth, at age eighteen, was one of the six accusers in the 1692 Salem Witch Trials in Salem, Massachusetts, claiming that she was afflicted by witchcraft. Throughout the trials, there are records indicating that she accused ten people of witchcraft. Five of those accused are known to be executed directly due to her testimonies. Those she accused include: John and Elizabeth Proctor, their fifteen-year-old daughter Sarah, William and Benjamin Proctor (two of their sons), Woody Proctor, Giles Corey and Martha Corey, Job Tookey, and Wilmont Redd.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17098750",
"title": "Witchcraft accusations against children",
"section": "Section::::Historical.:Witch finders and accusers.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 379,
"text": "The most renowned trials caused by child accusations occurred in Salem, Massachusetts in 1692. Children were viewed as having an important role in convicting witches, due to their being able to identify people impulsively. Children who made such false allegations often directed them at adults with whom they had strained relationships such as teachers or puritanical neighbors.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "279639",
"title": "Betty Parris",
"section": "Section::::Overview of the Salem Witch Trials.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 491,
"text": "In 1692, the Salem Witch Trials broke out after several girls claimed to be targeted by a 'devilish hand'. After several months, over 150 men, women, and children were charged with witchcraft and sorcery. The Trials were diminishing around September 1692 when the public began to resist the idea of witchcraft. Eventually, the Massachusetts General Court granted freedom to all those accused of sorcery and apologized to their families for the hardships created from the Salem Witch Trials.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "279639",
"title": "Betty Parris",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 330,
"text": "Elizabeth Parris (November 28, 1682 – March 21, 1760) was one of the young women who accused other people of being witches during the Salem witch trials. The accusations made by Betty (Elizabeth) and her cousin Abigail caused the direct death of 20 Salem residents: 19 were hanged (mostly women) and one man was pressed to death.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21898297",
"title": "History of Christianity in the United States",
"section": "Section::::Early Colonial era.:British colonies.:New England.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 691,
"text": "The Salem witch trials were a series of hearings before local magistrates followed by county court trials to prosecute people accused of witchcraft in Essex, Suffolk and Middlesex counties of colonial Massachusetts, between February 1692 and May 1693. Over 150 people were arrested and imprisoned, with even more accused but not formally pursued by the authorities. The two courts convicted twenty-nine people of the capital felony of witchcraft. Nineteen of the accused, fourteen women and five men, were hanged. One man (Giles Corey) who refused to enter a plea was crushed to death under heavy stones in an attempt to force him to do so. At least five more of the accused died in prison.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2qsztg
|
No Irish Need Apply - how badly were Irish discriminated against in 1840's - 1930's America?
|
[
{
"answer": "I am not a historian, but the Library of Congress has a good overview with source documents [here](_URL_0_).",
"provenance": null
},
{
"answer": "As a follow up question, how about in the same period in Great Britain?",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "2162835",
"title": "Racism in the United States",
"section": "Section::::European Americans.\n",
"start_paragraph_id": 89,
"start_character": 0,
"end_paragraph_id": 89,
"end_character": 714,
"text": "In the 19th century, this was particularly true because of anti-Irish prejudice, which was based on anti-Catholic sentiment, and prejudice against the Irish as an ethnicity. This was especially true for Irish Catholics who immigrated to the U.S. in the mid-19th century; the large number of Irish (both Catholic and Protestant) who settled in America in the 18th century had largely (but not entirely) escaped such discrimination and eventually blended into the white American population. During the 1830s in the U.S., riots over control of job sites broke out in rural areas among rival labor teams from different parts of Ireland, and between Irish and local American work teams competing for construction jobs.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "46284800",
"title": "Irish Americans",
"section": "Section::::Discrimination.:Stereotypes.\n",
"start_paragraph_id": 106,
"start_character": 0,
"end_paragraph_id": 106,
"end_character": 642,
"text": "There were also Darwinian-inspired excuses for the discrimination of the Irish in America. Many Americans believed that since the Irish were Celts and not Anglo-Saxons, they were racially inferior and deserved second-hand citizenship. The Irish being of inferior intelligence was a belief held by many Americans. This notion was held due to the fact that the Irish topped the charts demographically in terms of arrests and imprisonment. They also had more people confined to insane asylums and poorhouses than any other group. The racial supremacy belief that many Americans had at the time contributed significantly to Irish discrimination.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14561",
"title": "Irish diaspora",
"section": "Section::::Causes.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 702,
"text": "Irish people at home were facing discrimination from Great Britain based on the former's religion. Evictions only increased after the repeal of the British Corn Laws in 1846 and the new Encumbered Estates Act being passed in 1849 as well as the removal of existing civil rights. There had been agrarian terrorism against landlords which these new laws were implemented to stop the retribution. Any hope for change was squashed with the death of Daniel O'Connell in 1847, the political leader championing for Ireland, and the failed rising of the Young Irelanders in 1848. More was to be gained by immigrating to America from Ireland and the 1848 discovery of gold in the Sierra Nevada lured away more.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6445961",
"title": "Anti-Irish sentiment",
"section": "Section::::History.:19th century.:\"No Irish need apply\".\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 1166,
"text": "Historians have debated the issue of anti-Irish job discrimination in the United States. Some insist that the \"No Irish need apply\" signs were common, but others, such as Richard J. Jensen, argue that anti-Irish job discrimination was not a significant factor in the United States, and these signs and print advertisements were posted by the limited number of early 19th-century English immigrants to the United States who shared the prejudices of their homeland. In July 2015 the same journal that published Jensen's 2002 paper published a rebuttal by Rebecca A. Fried, an 8th-grade student at Sidwell Friends School. She listed multiple instances of the restriction used in advertisements for many different types of positions, including \"clerks at stores and hotels, bartenders, farm workers, house painters, hog butchers, coachmen, bookkeepers, blackers, workers at lumber yards, upholsterers, bakers, gilders, tailors, and papier mache workers, among others.\" While the greatest number of NINA instances occurred in the 1840s, Fried found instances for its continued use throughout the subsequent century, with the most recent dating to 1909 in Butte, Montana.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "46284800",
"title": "Irish Americans",
"section": "Section::::Irish immigration to the United States.:Mid-19th century and later.:Civil War through the early 20th century.:Women.\n",
"start_paragraph_id": 41,
"start_character": 0,
"end_paragraph_id": 41,
"end_character": 277,
"text": "Prejudices ran deep in the north and could be seen in newspaper cartoons depicting Irish men as hot-headed, violent drunkards. The initial backlash the Irish received in America led to their self-imposed seclusion, making assimilation into society a long and painful process.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "697558",
"title": "Irish Canadians",
"section": "Section::::Irish in Ontario.:Confederation.\n",
"start_paragraph_id": 47,
"start_character": 0,
"end_paragraph_id": 47,
"end_character": 521,
"text": "Some writers have assumed that the Irish in 19th-century North America were impoverished. DiMatteo (1992), using evidence from probate records in 1892, shows this is untrue. Irish-born and Canadian-born Irish accumulated wealth in a similar way, and that being Irish was not an economic disadvantage by the 1890s. Immigrants from earlier decades may well have experienced greater economic difficulties, but in general the Irish in Ontario in the 1890s enjoyed levels of wealth commensurate with the rest of the populace.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33451678",
"title": "Great Famine's effect on the American economy",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 1042,
"text": "Additionally, this rise in population also helped decide the outcome of the Civil War. The Irish emigrants who found their way to the South during the Great Famine saw the situation between the North and the South not unlike their previous situation between Ireland and Britain where they had felt exploited because while there was free trade between Ireland and Britain, Ireland provided potatoes and beef to England while receiving manufactured goods in return. However, while manufacturing jobs pay well the jobs of farm laborers do not; thus, the new Irish in the Southern United States felt the North exploited their new home in the South the same way. Because the Irish emigrants in the South could understand the plight of their new home many willingly took up arms against the North during the Civil War. To counter this, the Union Army employed 144,000 Irish-born troops during the War most of whom were drafted to serve. While a draftee could elude duty by paying $300 most Irish emigrants were too poor to do so, and had to fight.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
8mgbye
|
How Are The Chauvet Cave Drawings Still There?
|
[
{
"answer": "Yes, they are. It's amazing they are still preserved - there are likely countless of other drawings from the period that weren't.\n\nAccess to the drawings is very limited. Here's an account from 2015 by a reporter who was granted rare access: _URL_0_",
"provenance": null
},
{
"answer": "I don't know much about this cave in particular, but I will try to give a bit of an answer. Preservation is very hard and there are few environments that can preserve perishables (like textiles, baskets, or in your case drawings). Such environments can be waterlogged such as bog bodies, frozen like Otzi, or incredibly dry like sand in a desert or certain caves. Caves can be dry enough that micro-organisms do not flourish and preserve perishables for thousands of years. \n\nAs far as earthquakes, some areas just aren't as prone to earthquake activity. There are probably cave drawings that were in caves that suffered earthquake damage, but they simply didn't get preserved. As for anything else knocking the pigment off the walls, animals don't have much incentive to go back that far into a cave. They can get sufficient shelter much closer to the mouth of the cave.\n\nAlso, just a nitpick is that the pictures seem to be made with ocher which is a better pigment than charcoal.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "44784107",
"title": "Lelepa Island",
"section": "Section::::Fele's Cave.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 713,
"text": "Except for the darkened rear wall, all the walls of the cave are covered to head height with cave paintings, some of which overlap each other. The oldest ones are red spots and handprints, probably dating from the first millennium BCE, assigned to the Lapita culture. However, most of the pictures are black line drawings. Among them are some that are 1500 years old, but most were made much later; the most recently dated ones are from the eighteenth century. The illustrations show birds, fish, and anthropomorphic figures. Abstract designs include simple and complex geometric figures such as angles, triangles and diamond shapes. The largest human representation, according to tradition, is that of Roi Mata.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "52434864",
"title": "Akbaur cave",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 227,
"text": "There are paintings on the walls of the cave drawn in brown ocher, dating to approximately 3000 BC. The content of the paintings is complex, and includes shapes like triangles and rectangles, lines, dots, and images of people.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "49130668",
"title": "Amatérská Cave",
"section": "Section::::History of discovery and of explorations.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 374,
"text": "The cave contains a Neolithic picture, currently the oldest cave painting known in the Czech Republic. It depicts a geometrical shape resembling a grill with a size of 30x40 cm, painted in charcoal on the cave wall. The carbon was dated with the C14 radio-carbon method to be 5,200 years old. The pattern resembles the decorations on some ceramic vessels from that period. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7308491",
"title": "Býčí skála Cave",
"section": "Section::::History.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 374,
"text": "The cave contains a Neolithic picture, currently the oldest cave painting known in the Czech Republic. It depicts a geometrical shape resembling a grill with a size of 30x40 cm, painted in charcoal on the cave wall. The carbon was dated with the C14 radio-carbon method to be 5,200 years old. The pattern resembles the decorations on some ceramic vessels from that period. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6177364",
"title": "Pech Merle",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 554,
"text": "Pech Merle is a cave which opens onto a hillside at Cabrerets in the Lot département of the Occitania region in France, about 35 minutes by road east of Cahors. It is one of the few prehistoric cave painting sites in France that remain open to the general public. Extending for over a kilometre and a half from the entrance are caverns, the walls of which are painted with dramatic murals dating from the Gravettian culture (some 25,000 years BC). Some of the paintings and engravings, however, may date from the later Magdalenian era (16,000 years BC).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "59003124",
"title": "Lubang Jeriji Saléh",
"section": "Section::::Cave paintings.:Investigation.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 562,
"text": "The cave paintings were first spotted in 1994 by the French explorer . In 2018, a team of scientists investigating the cave, led by Maxime Aubert from Griffith University and Pindi Setiawan from the Bandung Institute of Technology, published a report in the journal \"Nature\" identifying the paintings as the world's oldest known figurative art. The team had previously investigated cave paintings in the neighbouring island of Sulawesi. In order to date the paintings, the team used dating techniques on the calcium carbonate (limestone) deposits close to them.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20367783",
"title": "La Marche (cave)",
"section": "Section::::History.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 533,
"text": "The La Marche cave paintings were discovered in the caves in the Lussac-les-Châteaux area of France by Léon Péricard in November 1937. Péricard, and his partner Stephane Lwoff, studied these caves for five years and found etchings on more than 1,500 slabs. In 1938, they presented their discovery to the French Prehistoric Society, and published them in the Society's \"Bulletin\". Many people questioned the validity of these findings, however, stating that they made that judgment because the paintings closely resembled modern art.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
4it3h5
|
the tingling sound in total silence
|
[
{
"answer": "It's your ears trying to make up for the lack of sound. The same thing goes for the tingling/numb feeling you get when your appendages fall asleep.",
"provenance": null
},
{
"answer": "One form of what you're talking about is tinnitus, ringing in the ears which can be a result of age-related hearing loss, over-exposure to loud sounds (such as at frequent loud concerts or a construction job), or even sometimes brain injuries/tumors. Killing off the hair cells in your ear through loud noises leads to spontaneous activity of the neurons they're connected to, which is interpreted as sound. However, you might be referring more generally to the idea of hearing slight ringing or \"tingly\" noises in total silence even in people without tinnitus, something which often happens particularly after exposure to loud noises for a length of time.\n\nNeurally, this tingly sound doesn't have a definitive explanation, but one perspective is as follows. Essentially, neurons in the ear, like most neurons (i.e., the auditory nerve) are firing regardless of whether you are hearing sounds; they just fire more in response to loud noises. The brain knows this, and so it tends to interpret overall firing rates (and some more complex patterns, etc. of firing) relative to a baseline, no-noise firing rate as 'sound.' However, this isn't a perfect process, so even in the absence of any sounds, there is some activity that might be interpreted as ringing by the brain.\n\nThis explanation isn't complete, and I'm not an expert on auditory neuroscience. Maybe someone who is can add to it!",
"provenance": null
},
{
"answer": "The rushing, almost roaring sound you hear is your blood flow, which is why when you \"flex\" your ears it gets louder.",
"provenance": null
},
{
"answer": "They have drugs for tinnitus now, if you are willing to have ED, hypertension, and bloody stool. My answer was no, but to each his own.",
"provenance": null
},
{
"answer": "Can everybody just chill out with the tinnitus diagnosis? Tinnitus is comparatively rare, and what OP is talking about sounds like what basically everybody who's ever been in a silent space has noticed.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "30797130",
"title": "The Secret Series (Enid Blyton)",
"section": "Section::::The main series.:\"The Secret of Moon Castle\".\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 231,
"text": "On occasion, a strange tingling effect is felt by them, and strange coloured lights seem to rise from the ground, and it seems to be due to a strange metal (possibly radioactive) that mysterious men are mining for unknown reasons.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "50192248",
"title": "When the Tingle Becomes a Chill (song)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 210,
"text": "\"When the Tingle Becomes a Chill\" is a song written by Lola Jean Dillon that was originally performed by American country music artist Loretta Lynn. It was released as a single in October 1975 via MCA Records.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2293612",
"title": "The Tingler",
"section": "Section::::Plot.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 645,
"text": "After they contain the tingler and return to Higgins' house, it is revealed that Higgins is the murderer; he frightened his wife to death knowing that she could not scream because she was mute. The centipede-like creature eventually breaks free from the container that held it and is released into Higgins' theater. The tingler latches onto a woman's leg, and she screams until it releases its grip. Chapin controls the situation by shutting off the lights and telling everyone in the theater to scream. When the tingler has left the showing room, they resume the movie and go to the projection room, where they find the tingler and capture it.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14595218",
"title": "Frisson",
"section": "Section::::Neural substrates.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 652,
"text": "Neuroimaging studies have found that the intensity of tingling is positively correlated with the magnitude of brain activity in specific regions of the reward system, including the nucleus accumbens, orbitofrontal cortex, and insular cortex. All three of these brain structures are known to contain a hedonic hotspot, a region of the brain that is responsible for producing pleasure cognition. Since music-induced euphoria can occur without the sensation of tingling or piloerection, the authors of one review hypothesized that the emotional response to music during a frisson evokes a sympathetic response that is experienced as a tingling sensation.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "889551",
"title": "Joy buzzer",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 352,
"text": "A joy buzzer (also called a hand buzzer) is a practical joke device that consists of a coiled spring inside a disc worn in the palm of the hand. When the wearer shakes hands with another person, a button on the disc releases the spring, which rapidly unwinds creating a vibration that feels somewhat like an electric shock to someone not expecting it.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "49321151",
"title": "Eucalyptus brevistylis",
"section": "Section::::Distribution and habitat.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 367,
"text": "Rate's tingle grows in wet forests near Walpole. It was previously confused with two other \"tingle\" species, the red tingle, \"E. jacksonii\" and the yellow tingle \"E. guilfoylei\". The name \"tingle\" or \"tingle tingle\" is thought to be of Aboriginal origin. This tingle was not previously recognised as a separate species, despite the efforts of the forester Jack Rate.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2231147",
"title": "Tingle (character)",
"section": "Section::::Characteristics.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 959,
"text": "Tingle is a short, paunchy 35-year-old man who is obsessed with \"forest fairies\" and dresses up in a green costume, slightly resembling that of Link. He also wears tight red shorts and a necklace with a clock that is permanently stuck at four o'clock. Tingle is normally seen floating around on his red balloon drawing and selling maps for his father, who runs the Southern Swamp pictograph contest and sees Tingle as \"a fool\". He is also known for his catchphrase: \"Tingle, Tingle! Kooloo-Limpah!\" . Tingle appears to have a fixation for Rupees and other similar collectibles, such as Force Gems in \"\" and Kinstones in \"\". In \"Majora's Mask\", Tingle can be found selling maps, and in \"The Wind Waker\", he translates Triforce Maps for a high price, among other things. Tingle's fixation for Rupees is explained in the Nintendo DS game \"Freshly-Picked Tingle's Rosy Rupeeland\", where it is stated that he needs Rupees to live. He is known to dress as a fairy.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3plvye
|
why is the metric system so perfect in regards to water mass versus weight?
|
[
{
"answer": "It's originally designed this way. Units in the SI system are all defined by certain basic quantities, like a liter of water or one Kelvin, or one meter and so on.\n\nex: a kilocalorie (a calorie to all dieters) is the amount of energy required to raise the temperature of one liter of water by one kelvin.\n\nSo it's right there in the definition of the unit.",
"provenance": null
},
{
"answer": "Because it's the way the creators designed it. If you were halfway around the world and needed something to reference weight to, you could just take a known volume of water.",
"provenance": null
},
{
"answer": "I am not completely sure, and the history is very complicated, but I believe that the metre, as a totally arbitrary measure, came first. Then the gram was based on the weight of freezing-point water that would fill a cube one hundredth of a metre on a side (a cubic centimetre).",
"provenance": null
},
{
"answer": "The meter was originally intended to be one ten millionth of the distance from the equator to the north pole along the earth's surface. (In practice it was the distance between 2 scratches on a certain iron bar kept at a lab near Paris, as you can't just measure the distance from the equator to the north pole whenever you want. Eventually they decided that the scratches on the bar were the definition of the meter, not just a local secondary estimate of a meter. And swapped the iron bar for one of a platinum-iridium alloy that was less subject to corrosion and thermal expansion. But it took a long time to decide on these things.)\n\nOnce they had the meter, more or less, they then picked the unit of weight so that it had the relationship you mentioned, that one gram is the weight of one millionth of a cubic meter of water. So that is not a coincidence at all.\n\nBut, it is not very practical to use two scratches on a 3 foot long iron bar to construct a very precise thimble, and then get ultra pure water to exactly fill that thimble at a certain temperature. So while the gram was inspired by the weight of a cc of water, that was not the actual definition of the gram, or at least not for more than a few years; they quickly switched to using another hunk of metal instead, and decided a gram was one thousandth of that.\n\nSo no, not a coincidence.\n\nBut there is one natural coincidence in here. A competing definition (or inspiration) for the length of a meter was the length of a pendulum whose swing was exactly one second. The pendulum-based definition and the earth-distance-based definition happen to be very close to each other.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "6840750",
"title": "Grave (unit)",
"section": "Section::::Kilogramme des Archives.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 434,
"text": "Since trade and commerce typically involve items significantly more massive than one gram, and since a mass standard made of water would be inconvenient and unstable, the regulation of commerce necessitated the manufacture of a \"practical realisation\" of the water-based definition of mass. Accordingly, a provisional mass standard was made as a single-piece, metallic artefact one thousand times as massive as the gram—the kilogram.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11530620",
"title": "Water weights",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 294,
"text": "Water weights are a popular alternative to solid weights as they are safer to use and can offer cost savings in transportation, storage and labour. When performing load tests using water weights, gradual application of the load allows problems to be identified prior to attaining maximum load.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "36617820",
"title": "Potato paradox",
"section": "Section::::Simple explanations.:Method 1.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 244,
"text": "One explanation begins by saying that initially the non-water weight is 1 pound, which is 1% of 100 pounds. Then one asks: 1 pound is 2% of how many pounds? In order for that percentage to be twice as big, the total weight must be half as big.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26764",
"title": "International System of Units",
"section": "Section::::History.:Historical definitions.\n",
"start_paragraph_id": 111,
"start_character": 0,
"end_paragraph_id": 111,
"end_character": 1137,
"text": "The early metric systems defined a unit of weight as a base unit, while the SI defines an analogous unit of mass. In everyday use, these are mostly interchangeable, but in scientific contexts the difference matters. Mass, strictly the inertial mass, represents a quantity of matter. It relates the acceleration of a body to the applied force via Newton's law, : force equals mass times acceleration. A force of 1 N (newton) applied to a mass of 1 kg will accelerate it at 1 m/s. This is true whether the object is floating in space or in a gravity field e.g. at the Earth's surface. Weight is the force exerted on a body by a gravitational field, and hence its weight depends on the strength of the gravitational field. Weight of a 1 kg mass at the Earth's surface is ; mass times the acceleration due to gravity, which is 9.81 newtons at the Earth's surface and is about 3.5 newtons at the surface of Mars. Since the acceleration due to gravity is local and varies by location and altitude on the Earth, weight is unsuitable for precision measurements of a property of a body, and this makes a unit of weight unsuitable as a base unit.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4236766",
"title": "Foot–pound–second system",
"section": "Section::::Conversions.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 580,
"text": "Together with the fact that the term \"weight\" is used for the gravitational force in some technical contexts (physics, engineering) and for mass in others (commerce, law), and that the distinction often does not matter in practice, the coexistence of variants of the FPS system causes confusion over the nature of the unit \"pound\". Its relation to international metric units is expressed in kilograms, not newtons, though, and in earlier times it was defined by means of a mass prototype to be compared with a two-pan balance which is agnostic of local gravitational differences.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "178702",
"title": "Pound (force)",
"section": "Section::::Foot–pound–second (FPS) systems of units.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 475,
"text": "In the \"engineering\" systems (middle column), the weight of the mass unit (pound-mass) on Earth's surface is approximately equal to the force unit (pound-force). This is convenient because one pound mass exerts one pound force due to gravity. Note, however, unlike the other systems the force unit is not equal to the mass unit multiplied by the acceleration unit—the use of Newton's Second Law, , requires another factor, \"g\", usually taken to be 32.174049 (lb⋅ft)/(lbf⋅s).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "36617820",
"title": "Potato paradox",
"section": "Section::::Simple explanations.:Method 2.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 209,
"text": "If the water decreases to 98%, then the solids account for 2% of the weight. The 2:98 ratio reduces to 1:49. Since the solids still weigh 1 lb, the water must weigh 49 lb for a total of 50 lbs for the answer.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
g63l6
|
Are 98% of the atoms in the human body replaced every year?
|
[
{
"answer": "I looked around for an answer to this, but I couldn't find the original article cited, which was in Annual Report for Smithsonian Institution in 1953.\n\nThis article, I think, sums up the discussion of it though: \n\n_URL_0_",
"provenance": null
},
{
"answer": "It seems to have some good basis in experiments conducted by some dude called Paul Aebersold back in the fifties:\n\n_URL_0_ (Note the date on the article: 1954!)\n\nI'm not surprised that it's a high percentage (the watery bits of your body should be cycled through quite rapidly), though I am surprised that it's quite as high as 98%.",
"provenance": null
},
{
"answer": "I'm not sure, but I think he probably means 98% of cells rather than individual atoms. I'm not sure about the percentage, but generally, through mitosis, we replace most of the cells our body uses. Now this doesn't really have to do with any type of \"soul\" or anything like that, because we don't replace the cells all at once. It's a process that's happening constantly from the time you're born to your death.",
"provenance": null
},
{
"answer": "You may find [this interview from NPR](_URL_0_) relevant:\n\n > KESTENBAUM: McCarthy did some research and he found this article from a Smithsonian Institution publication from 1953. So this is the beginning of the Atomic Age. And the article described these experiments where researchers fed to people radioactive atoms. Or they injected them with radioactive atoms. And then using radiation detectors, they could watch the atoms as they moved around. So they'd watch them go up one arm, into the heart and down the other arm.\n\n > Mr. McCARTHY: You can follow it through their body. Does it get excreted through urine, or is it excreted through their sweat or through feces or, you know, what happens to it? Does it end up in their fingers or in their eyeballs, or you know? So you can follow where these atoms go.\n\n > [...]\n\n > KESTENBAUM: A lot of the atoms get incorporated into our bodies. The article says the atom turnover is quite rapid and quite complete. In a year, 98 percent of the atoms in us now will be replaced by other atoms that we take in, in our air, food and drink. So that means 98 percent of me is new - every year.\n\n > [...]\n\n > KESTENBAUM: Still, this means that in a very real sense, we are not the people that we were a year ago. We're this collection of atoms that hang out together for a while and then they go on to do other things - sort of a momentary cloud of organization.\n\n > So what is me? Am I still me if my parts have been replaced?\n\n > Professor DANIEL DENNETT (Director, Center for Cognitive Studies, Tufts University): Well, of course, the question goes way back to ancient philosophy.\n\n > KESTENBAUM: This is Daniel Dennett. He's a philosopher at Tufts University. Remember, he says, the old joke about Abe Lincoln's axe?\n\n > Prof. DENNETT: There it is in the glass case and it says, this is Abe Lincoln's axe. So I say, that's really his axe? And he says, oh, yes, but, of course, the head has been replaced twice and the handle three times.\n\n > KESTENBAUM: There's also a modern atomic version of this puzzle that really gets to the heart of things.\n\n > Prof. DENNETT: We imagine that your rocket ship has landed on Mars and you have to get back from Mars to Earth by teleporter.\n\n > KESTENBAUM: Here's how the teleporter works. It dismantles you, atom by atom, (unintelligible), you know, records the precise location of every carbon, every hydrogen, every phosphorus, and it sends that information to Earth, (unintelligible), where a receiver transporter reconstructs you, (unintelligible) out of new atoms.\n\n > Prof. DENNETT: And you step out of the teleporter receiver on Earth, is that really you? I say, of course, it's you.\n\n > KESTENBAUM: Okay, that's clear enough.\n\n > But now imagine, he says, instead, the teleporter on Mars doesn't take you apart - it doesn't disassemble you - it just scans your atoms, do-dot-dot-do-do(ph), leaving you intact.\n\n > Prof. DENNETT: So now you're - there's a you that's stranded on Mars and there's a you that's back on Earth. Which is the real you?\n\n > KESTENBAUM: Well, it's pretty clear to me. That there's - that I have David 1 and David 2.\n\n > Prof. DENNETT: Yeah. And does one of them have some sort of special priority? Is one of them sort of realer, more you than the other?\n\n > KESTENBAUM: Yeah. What does my wife do?\n\n > Prof. DENNETT: Exactly. Yes.",
"provenance": null
},
{
"answer": "Heavy metals, such as Lead and Mercury, accumulate in animals bodies, particularly our bones. We get these heavy metals from our diet and our environment. Being that they are there for the rest of the animal's life, I would say that the above statement is false. I would also look at the source for the article. Is it from a scientific journal or is it from a newspaper? \n",
"provenance": null
},
{
"answer": "Three things:\n\n1. When you get down to particles the size of atoms or smaller, they're [completely indistinguishable](_URL_0_). This means that two identical atoms could switch places and there would be no way to tell. Everything would look exactly the same as if they hadn't swapped. Atoms don't carry a label that lets you tell this one from that one. Thus it doesn't really make sense to talk about atoms being replaced. (This is actually the origin of some of the quantum weirdness.)\n\n2. The phrase \"studies at Oak Ridge\" is actually really vague. I poked around a little bit and couldn't find that research (maybe someone else can track it down). Unfortunately, it's not uncommon to see bogus research attributed to a legitimate institute like this. Also, the conclusion drawn, that 98% of the atoms in a human body are replaced every year, doesn't sound like the result of a physics experiment. At best, it's probably an *inference* based (probably very loosely) on another result.\n\n3. I also poked around a little to try to find the origins of that \"cells replaced every 7 years\" conjecture. Again, all I could find were quotes citing other quotes without actually citing any original research. Both of these assertions are starting to look to me like the [10% myth](_URL_3_). Such claims are really no more than old rumors often based on a [gross misreading](_URL_2_) of legitimate research.\n\nEdit: looks like [drhu](_URL_1_) tracked down an article that talks about the original research. It seems they tracked different isotopes of atoms, which *is* a way of distinguishing (at least somewhat) between particles.",
"provenance": null
},
{
"answer": "It might be true, but using this as evidence for a soul is cheap. I much prefer the proverb: a man can't cross a river twice, as it won't be the same river, nor the same man. \n\nSee, the same effect can be used to prove the spiritual stuff you can't prove, or it can be a great tool to make us think about the ever-changing world.",
"provenance": null
},
{
"answer": "\"Soul\" and \"Spirit\" are curse words for me on /r/askscience",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "8355163",
"title": "Radioactivity in the life sciences",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 785,
"text": "All atoms exist as stable or unstable isotopes and the latter decay at a given half-life ranging from attoseconds to billions of years; radioisotopes useful to biological and experimental systems have half-lives ranging from minutes to months. In the case of the hydrogen isotope tritium (half-life = 12.3 years) and carbon-14 (half-life = 5,730 years), these isotopes derive their importance from all organic life containing hydrogen and carbon and therefore can be used to study countless living processes, reactions, and phenomena. Most short lived isotopes are produced in cyclotrons, linear particle accelerators, or nuclear reactors and their relatively short half-lives give them high maximum theoretical specific activities which is useful for detection in biological systems.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4882",
"title": "Background radiation",
"section": "Section::::Natural background radiation.:Food and water.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 436,
"text": "C is present in the human body at a level of about 3700 Bq (0.1 μCi) with a biological half-life of 40 days. This means there are about 3700 beta particles per second produced by the decay of C. However, a C atom is in the genetic information of about half the cells, while potassium is not a component of DNA. The decay of a C atom inside DNA in one person happens about 50 times per second, changing a carbon atom to one of nitrogen.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8463",
"title": "Dubnium",
"section": "Section::::Isotopes.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 903,
"text": "Only a few atoms of Db can be produced in each experiment, and thus the measured lifetimes vary significantly during the process. During three experiments, 23 atoms were created in total, with a resulting half-life of . The second most stable isotope, Db, has been produced in even smaller quantities: three atoms in total, with lifetimes of 33.4 h, 1.3 h, and 1.6 h. These two are the heaviest isotopes of dubnium to date, and both were produced as a result of decay of the heavier nuclei Mc and Ts rather than directly, because the experiments that yielded them were originally designed in Dubna for Ca beams. For its mass, Ca has by far the greatest neutron excess of all practically stable nuclei, both quantitative and relative, which correspondingly helps synthesize superheavy nuclei with more neutrons, but this gain is compensated by the decreased likelihood of fusion for high atomic numbers.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4222539",
"title": "Nuclear safety and security",
"section": "Section::::Hazards of nuclear material.\n",
"start_paragraph_id": 92,
"start_character": 0,
"end_paragraph_id": 92,
"end_character": 924,
"text": "Since the fraction of a radioisotope's atoms decaying per unit of time is inversely proportional to its half-life, the relative radioactivity of a quantity of buried human radioactive waste would diminish over time compared to natural radioisotopes (such as the decay chain of 120 trillion tons of thorium and 40 trillion tons of uranium which are at relatively trace concentrations of parts per million each over the crust's 3 * 10 ton mass). For instance, over a timeframe of thousands of years, after the most active short half-life radioisotopes decayed, burying U.S. nuclear waste would increase the radioactivity in the top 2000 feet of rock and soil in the United States (10 million km) by ≈ 1 part in 10 million over the cumulative amount of natural radioisotopes in such a volume, although the vicinity of the site would have a far higher concentration of artificial radioisotopes underground than such an average.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20825543",
"title": "High-level radioactive waste management",
"section": "Section::::National management plans.:North America.:United States.\n",
"start_paragraph_id": 75,
"start_character": 0,
"end_paragraph_id": 75,
"end_character": 925,
"text": "Since the fraction of a radioisotope's atoms decaying per unit of time is inversely proportional to its half-life, the relative radioactivity of a quantity of buried human radioactive waste would diminish over time compared to natural radioisotopes (such as the decay chains of 120 trillion tons of thorium and 40 trillion tons of uranium which are at relatively trace concentrations of parts per million each over the crust's 3 * 10 ton mass). For instance, over a timeframe of thousands of years, after the most active short half-life radioisotopes decayed, burying U.S. nuclear waste would increase the radioactivity in the top 2000 feet of rock and soil in the United States (10 million km) by ≈ 1 part in 10 million over the cumulative amount of natural radioisotopes in such a volume, although the vicinity of the site would have a far higher concentration of artificial radioisotopes underground than such an average.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19216184",
"title": "Nuclear power debate",
"section": "Section::::Environmental effects.:High-level radioactive waste.\n",
"start_paragraph_id": 60,
"start_character": 0,
"end_paragraph_id": 60,
"end_character": 439,
"text": "Since the fraction of a radioisotope's atoms decaying per unit of time is inversely proportional to its half-life, the relative radioactivity of a quantity of buried human radioactive waste would diminish over time compared to natural radioisotopes (such as the decay chain of 120 trillion tons of thorium and 40 trillion tons of uranium which are at relatively trace concentrations of parts per million each over the crust's 3 ton mass).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "646618",
"title": "Atomoxetine",
"section": "Section::::Pharmacology.:Pharmacokinetics.\n",
"start_paragraph_id": 119,
"start_character": 0,
"end_paragraph_id": 119,
"end_character": 210,
"text": "The half-life of atomoxetine varies widely between individuals, with an average range of 4.5 to 19 hours. As atomoxetine is metabolized by CYP2D6, exposure may be increased 10-fold in CYP2D6 poor metabolizers.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
34qxqd
|
Why are White House meeting minutes recorded and kept?
|
[
{
"answer": "[Here is a paragraph from the University of Oregon Holden Leadership Center on record keeping during meetings:](_URL_0_)\n\n > As you can see, the role of a secretary is more than \"just taking minutes\". The secretary is in effect, the historian. What he/she records will be referred to by current members as a reminder of finished and unfinished business, what needs follow-up and what actions were taken. It will also be kept for future members to read to gain an understanding of where the organization has been and why. Many organizations make it the secretary's responsibility to notify the membership about upcoming meetings-time, date, location-as well as any important items to be discussed.\n\nMany business meetings, particularly important ones, will keep minutes with a stenographer, and government business meetings are no exception. Minutes of a meeting are useful for the principals to later refer to as an aide-memoire, to reference in the event of a later dispute of what was said by whom; for sharing with non-attending staff, possibly even for eventual publication for some types of organizations, there are a wide variety of reasons why minute-keeping is a common practice. Minutes are kept for many meetings, whether the governing board of the local floral society, all the way to the highest government councils, committees, and cabinets.\n\nWhat is more interesting, is that in the modern era, there exists a delicate balance between keeping minutes or particularly audio and audiovisual recording of important meetings and deliberate obfuscation of such records. The infamous White House audio taping system was used to fix Nixon's responsibility, \"what did you know and when did you know it\" during the Watergate crisis and was instrumental in forcing him from office.\n\nWatergate and the role of the taping system in bringing down a Presidency was definitely noted by later politicians of all parties. 
Part of the art of modern government is to keep the President and other important and public figures from being pinned down to a position, or to having learned a particular bit of knowledge. This permits \"plausible deniability\" in the event of a crisis. The leader can be vague or deny knowing about the decisions that led to the crisis, and on occasion, a convenient subordinate can be thrown to the wolves of the media or the legal system.\n\nLook at the controversy over the use of personal, non-governmental email by some well-known public officials. Government records are almost always preserved and archived, and they are subject to subpoenas during judicial or congressional investigations. In the event of a scandal or crisis gone bad, without good records, a leader can testify with a high degree of vagueness or ignorance: \"Of course if I had warning that this terrible event might happen, I would have taken action to prevent it.\"\n\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "54346851",
"title": "White House visitor logs",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 242,
"text": "White House visitor logs, also known as the White House Worker and Visitor Entry System (WAVE), are the guestbook records of individuals visiting the White House to meet with the President of the United States or other White House officials.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "52382",
"title": "Watergate scandal",
"section": "Section::::Cover-up and its unraveling.:Senate Watergate hearings and revelation of the Watergate tapes.\n",
"start_paragraph_id": 51,
"start_character": 0,
"end_paragraph_id": 51,
"end_character": 477,
"text": "On Friday, July 13, 1973, during a preliminary interview, deputy minority counsel Donald Sanders asked White House assistant Alexander Butterfield if there was any type of recording system in the White House. Butterfield said he was reluctant to answer, but finally admitted there was a new system in the White House that automatically recorded everything in the Oval Office, the Cabinet Room and others, as well as Nixon's private office in the Old Executive Office Building.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "901046",
"title": "Minutes",
"section": "Section::::Creation.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 451,
"text": "Minutes may be created during the meeting by a typist or court reporter, who may use shorthand notation and then prepare the minutes and issue them to the participants afterwards. Alternatively, the meeting can be audio recorded, video recorded, or a group's appointed or informally assigned secretary may take notes, with minutes prepared later. Many government agencies use minutes recording software to record and prepare all minutes in real-time.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14751127",
"title": "White House Chief Calligrapher",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 222,
"text": "The White House chief calligrapher is responsible for the design and execution of all social and official documents at the White House, the official residence and principal workplace of the president of the United States.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3188742",
"title": "Nixon White House tapes",
"section": "Section::::Revelation of the taping system.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 418,
"text": "On July 16, 1973, Butterfield told the committee in a televised hearing that Nixon had ordered a taping system installed in the White House to automatically record all conversations. Special Counsel Archibald Cox, a former United States Solicitor General under President John F. Kennedy, asked District Court Judge John Sirica to subpoena nine relevant tapes to confirm the testimony of White House Counsel John Dean.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2691420",
"title": "Clerk of the United States House of Representatives",
"section": "Section::::Offices and services.:Office of Legislative Operations.:Journal clerks.\n",
"start_paragraph_id": 63,
"start_character": 0,
"end_paragraph_id": 63,
"end_character": 274,
"text": "A journal clerk compiles the daily minutes of House proceedings and publishes these in the \"House Journal\" at the end of each session. The \"House Journal\" is the official record of the proceedings maintained in accordance with Article I, Section 5 of the U.S. Constitution.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "54532686",
"title": "Journals of legislative bodies",
"section": "Section::::Countries.:United States.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 412,
"text": "The \"Congressional Record\" is the official record of the proceedings and debates of the United States Congress. It is published by the United States Government Publishing Office, and is issued when the United States Congress is in session. Indexes are issued approximately every two weeks. At the end of a session of Congress, the daily editions are compiled in bound volumes constituting the permanent edition.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2shcvj
|
Do LED lightbulbs work in extreme cold temperatures?
|
[
{
"answer": "The operating temperature of the LED will depend on the design of the device, typically -40F is at or below the lower limit of silicon devices, (this will generally be shown on the device datasheet).\nLithonia Lighting OFLR 6 MO 's datasheet quotes its minimum ambient temp as -40C.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "163395",
"title": "Traffic light",
"section": "Section::::Technology.:Optics and lighting.\n",
"start_paragraph_id": 150,
"start_character": 0,
"end_paragraph_id": 150,
"end_character": 396,
"text": "The low energy consumption of LED lights can pose a driving risk in some areas during winter. Unlike incandescent and halogen bulbs, which generally get hot enough to melt away any snow that may settle on individual lights, LED displays – using only a fraction of the energy – remain too cool for this to happen. As a response to the safety concerns, a heating element on the lens was developed.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "59636913",
"title": "Light-emitting diode physics",
"section": "Section::::Lifetime and failure.\n",
"start_paragraph_id": 38,
"start_character": 0,
"end_paragraph_id": 38,
"end_character": 371,
"text": "Unlike combustion or incandescent lamps, LEDs only operate if they are kept cool enough. The manufacturer commonly specifies a maximum junction temperature of 125 or 150 °C, and lower temperatures are advisable in the interests of long life. At these temperatures, relatively little heat is lost by radiation, which means that the light beam generated by an LED is cool.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9910525",
"title": "LED lamp",
"section": "Section::::Technology overview.:Thermal management.\n",
"start_paragraph_id": 41,
"start_character": 0,
"end_paragraph_id": 41,
"end_character": 425,
"text": "Compared to other lighting systems LEDs must be kept cool as high temperatures can cause premature failure and reduced light output. Thermal management of high-power LEDs is required to keep the junction temperature close to ambient temperature. LED lamps typically include heat dissipation elements such as heat sinks and cooling fins and very high power lamps for industrial uses are frequently equipped with cooling fans.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9910525",
"title": "LED lamp",
"section": "Section::::Limitations.\n",
"start_paragraph_id": 104,
"start_character": 0,
"end_paragraph_id": 104,
"end_character": 532,
"text": "LED life span drops at higher temperatures, which limits the power that can be used in lamps that physically replace existing filament and compact fluorescent types. Thermal management of high-power LEDs is a significant factor in design of solid state lighting equipment. LED lamps are sensitive to excessive heat, like most solid state electronic components. LED lamps should be checked for compatibility for use in totally or partially enclosed fixtures before installation as heat build-up could cause lamp failure and/or fire.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18290",
"title": "Light-emitting diode",
"section": "Section::::Applications.:Lighting.\n",
"start_paragraph_id": 149,
"start_character": 0,
"end_paragraph_id": 149,
"end_character": 360,
"text": "The lower heat radiation compared with incandescent lamps makes LEDs ideal for stage lights , where banks of RGB LEDs can easily change color and decrease heating from traditional stage lighting. In medical lighting, infrared heat radiation can be harmful. In energy conservation, the lower heat output of LEDs also reduces demand on air conditioning systems.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "243718",
"title": "Flashlight",
"section": "Section::::LED.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 459,
"text": "LED flashlights may consume 1 watt or much more from the battery, producing heat as well as light. In contrast to tungsten filaments, which must be hot to produce light, both the light output and the life of an LED decrease with temperature. Heat dissipation for the LED often dictates that small high-power LED flashlights have aluminium or other high heat conductivity bodies, reflectors and other parts, to dissipate heat; they can become warm during use.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2357908",
"title": "Automotive lighting",
"section": "Section::::Light sources.:Light-emitting diodes (LED).\n",
"start_paragraph_id": 150,
"start_character": 0,
"end_paragraph_id": 150,
"end_character": 818,
"text": "LED lighting systems are sensitive to heat. Due to the negative influences of heat on the stability of photometric performance and the light transmitting components, the importance of thermal design, stability tests, usage of low-UV-type LED modules and UV-resistance tests of internal materials has increased dramatically. For this reason, LED signal lamps must remain compliant with the intensity requirements for the functions they produce after one minute and after thirty minutes of continuous operation. In addition, UN Regulation 112 contains a set of tests for LED modules, including colour rendering, UV radiation, and temperature stability tests. According to UN Regulations 112 and 123, mechanical, electromechanical or other devices for headlamps must withstand endurance tests and function failure tests.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2gzm63
|
why didn’t the Ancient Egyptians conquer the rest of North Africa to the west of them?
|
[
{
"answer": "The Western/Libyan Desert is 1000 km north to south and 1000 km east to west and largely blocks access out of the Nile and Nile Delta.\n\nAs we saw in the WW 2 desert campaign, the Western Desert makes even mechanized and motorized maneuvering problematic.",
"provenance": null
},
{
"answer": "From a geographic stand point most of the land immediately west of the modern day country of Egypt is mostly desert. The Ancient Egyptian empire was almost primarily based around the Egyptian river and would have been fertile and habitable land. Anything outside of that would be illogical and extremely difficult to live in.",
"provenance": null
},
{
"answer": "Before modern nation-states, the control of such large barren areas was very rare and often only nominal. If you go due west from the heart of ancient Egypt, you run into the \"Great Sand Sea', which is exactly what it sounds like. Places like this have no pre-existing infrastructure to conquer, and there is no way to build a meaningful infrastructure either; there is simply nothing to support a sedentary population. \n\nThe only parts of North Africa that could have conceivably been conquered would have been along the Mediterranean coast. But why did they not conquer these areas then? Well, the Egyptians were not a particularly naval people, and he only large seagoing ships that the Egyptians used were for commerce. Also, inhabitable land along the north-African coast is somewhat segregated, thus discouraging any king of land invasion. The closest arable land west of Egypt would have been on the Marj Plain, near modern-day Benghazi. Between there and Alexandria, there's a whole lot of desert to dissuade invaders. \n\nAlso for what it's worth, the Libyan Berbers who inhabited this area weren't wilting daisies. They were so aggressive in fact, that they managed to install a dynasty in northern Egypt, the Bubasites, for 200 years. So perhaps it was not just geography that suppressed the ability to conquer North Africa.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "166141",
"title": "Music of Africa",
"section": "Section::::Music by regions.:North Africa and the Horn of Africa.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 409,
"text": "North Africa is the seat of ancient Egypt and Carthage, civilizations with strong ties to the ancient Near East and which influenced the ancient Greek and Roman cultures. Eventually, Egypt fell under Persian rule followed by Greek and Roman rule, while Carthage was later ruled by Romans and Vandals. North Africa was later conquered by the Arabs, who established the region as the Maghreb of the Arab world.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7888526",
"title": "Military history of Africa",
"section": "Section::::Military history of Africa by regions.:Military history of Northern Africa.\n",
"start_paragraph_id": 35,
"start_character": 0,
"end_paragraph_id": 35,
"end_character": 417,
"text": "Ancient Greece and the armies of Alexander the Great (336 BC–323 BC) invaded and conquered some parts of North Africa and his generals set up the Ptolemaic dynasty in Egypt. The armies of the Roman Republic (509 BC–31 BC) and the Roman Empire (31 BC–AD 476) subsequently conquered the entire coastal areas of North Africa. The people of Carthage fought the bloody and lengthy Punic Wars (264 BC–146 BC) against Rome.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14099",
"title": "History of Africa",
"section": "Section::::Antiquity.\n",
"start_paragraph_id": 34,
"start_character": 0,
"end_paragraph_id": 34,
"end_character": 673,
"text": "The ancient history of North Africa is inextricably linked to that of the Ancient Near East. This is particularly true of Ancient Egypt and Nubia. In the Horn of Africa the Kingdom of Aksum ruled modern-day Eritrea, northern Ethiopia and the coastal area of the western part of the Arabian Peninsula. The Ancient Egyptians established ties with the Land of Punt in 2,350 BC. Punt was a trade partner of Ancient Egypt and it is believed that it was located in modern-day Somalia, Djibouti or Eritrea. Phoenician cities such as Carthage were part of the Mediterranean Iron Age and classical antiquity. Sub-Saharan Africa developed more or less independently in those times. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "802377",
"title": "Arabization",
"section": "Section::::History of Arabization.:Arabization during the early Caliphate.:North Africa and Iberia.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 1061,
"text": "Neither North Africa nor the Iberian Peninsula were strangers to Semitic culture: the Phoenicians and later the Carthaginians dominated parts of the North African and Iberian shores for more than eight centuries until they were suppressed by the Romans and by the following Vandal and Visigothic invasions, and the Berber incursions. After the Arab invasion of North Africa, The Berber tribes allied themselves with the Umayyad Arab Muslim armies in invading Spain. Later, in 743 AD, the Berbers defeated the Arab Umayyad armies and expelled them for most of West North Africa (al-Maghreb al-Aqsa) during the Berber Revolt, but not the territory of Ifriqiya which stayed Arab (East Algeria, Tunisia, and West-Libya). Centuries later some migrating Arab tribes settled in some plains while the Berbers remained the dominant group mainly in desert areas including mountains. The Inland North Africa remained exclusively Berber until the 11th century; the Iberian Peninsula, on the other hand, remained Arabized, particularly in the south, until the 16th century.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5297977",
"title": "Lion's Blood",
"section": "Section::::Background.:History.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 595,
"text": "In 200 BC, the combined forces of Egypt, Carthage and Abyssinia destroyed the Roman Republic, removing the last European power and paving the way for African dominance. For a thousand years the descendants of Alexander ruled much of the known world with Egypt ruling an empire stretching from Eastern Europe to India. Egypt and Abyssinia also created a major trade route along the Nile and immense networks of canals. By 420, steamboats had been invented and were used to trade with other kingdoms in Africa. Eventually, most of sub-Saharan Africa was under joint Egyptian and Abyssianian rule.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1637852",
"title": "History of North Africa",
"section": "Section::::Classical period.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 398,
"text": "When the Roman Empire began to collapse, North Africa was spared much of the disruption until the Vandal invasion of 429 AD. The Vandals ruled in North Africa until the territories were regained by Justinian of the Eastern Empire in the 6th century. Egypt was never invaded by the Vandals because there was a thousand-mile buffer of desert and because the Eastern Roman Empire was better defended.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6555",
"title": "Carthage",
"section": "Section::::Ancient history.:Islamic period.\n",
"start_paragraph_id": 60,
"start_character": 0,
"end_paragraph_id": 60,
"end_character": 942,
"text": "The Roman Exarchate of Africa was not able to withstand the seventh-century Muslim conquest of the Maghreb. The Umayyad Caliphate under Abd al-Malik ibn Marwan in 686 sent a force led by Zuhayr ibn Qays, who won a battle over the Romans and Berbers led by King Kusaila of the Kingdom of Altava on the plain of Kairouan, but he could not follow that up. In 695, Hassan ibn al-Nu'man captured Carthage and advanced into the Atlas Mountains. An imperial fleet arrived and retook Carthage, but in 698, Hasan ibn al-Nu'man returned and defeated Emperor Tiberios III at the 698 Battle of Carthage. Roman imperial forces withdrew from all of Africa except Ceuta. Fearing that the Byzantine Empire might reconquer it, they decided to destroy Roman Carthage in a scorched earth policy and establish their headquarters somewhere else. Its walls were torn down, its water supply cut off, the agricultural land was ravaged and its harbors made unusable.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2y5uju
|
how does keeping an avocado seed with the raw avocado 'meat' keep it from browning?
|
[
{
"answer": "[Here's a pretty good write-up of why Avocados turn brown](_URL_0_)\n\nIn short, it's the exposure to oxygen that makes the flesh turn brown. Keeping the stone (seed) with the meat does not prevent this in any way.\n\nFun fact about avocados: the trees release enzymes preventing ripening of the fruit, once they're picked they lose the enzymes. This allows the fruits to stay \"fresh\" on the tree far longer. \n\n",
"provenance": null
},
{
"answer": "It doesn't. Exposing the meat to air will oxidize it and turn it brown no matter what. However, oil can help slow browning and acids will denature the enzymes responsible for oxidation.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "12190407",
"title": "Podocarpus henkelii",
"section": "Section::::Name and Cultivation.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 234,
"text": "It can be propagated from seed, which should be planted promptly in a moist, semi-shade position. The fleshy fruit that surrounds the seed must be removed as this inhibits germination. The seed is also vulnerable to fungal infection.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "166017",
"title": "Avocado",
"section": "Section::::As a houseplant.\n",
"start_paragraph_id": 100,
"start_character": 0,
"end_paragraph_id": 100,
"end_character": 559,
"text": "The avocado tree can be grown domestically and used as a (decorative) houseplant. The pit germinates in normal soil conditions or partially submerged in a small glass (or container) of water. In the latter method, the pit sprouts in four to six weeks, at which time it is planted in standard houseplant potting soil. The plant normally grows large enough to be prunable; it does not bear fruit unless it has ample sunlight. Home gardeners can graft a branch from a fruit-bearing plant to speed maturity, which typically takes four to six years to bear fruit.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4973984",
"title": "Solanum muricatum",
"section": "Section::::Cultivation.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 1893,
"text": "They are propagated by cuttings since they are established easily without rooting hormones. It is grown in a manner similar to its relatives such as the tomato, though it grows naturally upright by habit and can thus be cultivated as a free-standing bush, though it is sometimes pruned on . Additionally, supports are sometimes used to keep the weight of the fruit from pulling the plant down. It has a fast growth rate and bears fruit within 4 to 6 months after planting. It is a perennial, but is usually cultivated as an annual. Seedlings are intolerant of weeds, but it can later easily compete with low growing weeds. Like their relatives tomatoes, eggplants, tomatillos and tamarillos, pepinos are extremely attractive to beetles, aphids, white flies and spider mites. Pepinos are tolerant of most soil types, but require constant moisture for good fruit production. Established bushes show some tolerance to drought stress, but this typically affects yield. The plants are parthenocarpic, meaning it needs no pollination to set fruit, though pollination will encourage fruiting. The plant is grown primarily in Chile, New Zealand and Western Australia. In Chile, more than 400 hectares are planted in the Longotoma Valley with an increasing proportion of the harvest being exported. Colombia, Peru, and Ecuador also grow the plant, but on a more local scale. Outside of the Andean region, it has been grown in various countries of Central America, Morocco, Spain, Israel, and the highlands of Kenya. In the United States several hundred hectares of the fruit are grown on a small scale in Hawaii and California. More commercially viable cultivars have been introduced from New Zealand and elsewhere in more recent times. As a result, the fruit has been introduced into up-scale markets in Japan, Europe and North America and it is slowly becoming less obscure outside of South America.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22674238",
"title": "Cryptocarya glaucescens",
"section": "Section::::Description.:Flowers, fruit and germination.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 211,
"text": "Unlike most Australian \"Cryptocarya\" fruit, removal of the fleshy aril is not particularly advised to assist seed germination, as the aril is so thin. Roots and shoots usually appear within three to six months.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12920034",
"title": "Swietenia humilis",
"section": "Section::::Biologically active compounds.:Seed oil.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 476,
"text": "Although the seed is poisonous, the tree shows promise as a source of seed oil with characteristics resembling those of avocado and sunflower oils. The seed germ yields about 45% of edible oil by mass. Of this yield, the fatty acid proportions are about 18% saturated (mainly palmitic and stearic), 30% monounsaturated (mainly oleic), and 48% polyunsaturated (mainly linoleic and linolenic). It also might be of commercial interest as a component of cosmetics and pesticides.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23558923",
"title": "Hicksbeachia pinnatifolia",
"section": "Section::::Cultivation and uses.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 470,
"text": "The seed is edible, though not as valued as that of its relative the macadamia. It is not commercially cultivated but is sometimes grown as an ornamental tree. It can be difficult to establish in the garden. Germination from fresh seed is reliable with a high percentage of success. However, many juveniles soon die of fungal disease. Alexander Floyd recommends adding original leaf litter from beneath the parent tree to promote beneficial anti-fungal micro-organisms.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11467545",
"title": "Penicillium italicum",
"section": "Section::::Management.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 292,
"text": "Inoculation of healthy fruit can be diminished and controlled by careful picking, handling, and packaging of the citrus so that the rinds are not damaged. Without injury inflicted on the fruit, the conidia are unable to gain access, and thus unable to germinate into infectious pathogens.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
dfmq7e
|
why are tiff files so large?
|
[
{
"answer": "1. TIFF (also known as TIF), file types ending in .tif\nTIFF stands for Tagged Image File Format. TIFF images create very large file sizes. TIFF images are uncompressed and thus contain a lot of detailed image data (which is why the files are so big). TIFFs are also extremely flexible in terms of color (they can be grayscale, or CMYK for print, or RGB for web) and content (layers, image tags).\nTIFF is the most common file type used in photo software (such as Photoshop), as well as page layout software (such as Quark and InDesign), again because a TIFF contains a lot of image data.\n\nSource: _URL_0_",
"provenance": null
},
{
"answer": " > 18MP \n\n > 150MP\n\nI'm guessing that the first is actually meant to read \"MP\", while the second should probably read \"MB\"? \"MP\" means \"megapixel\" (million pixels), while \"MB\" means \"megabyte\" (million, or 2^(20), bytes).\n\nSo let's take a look at how much that actually is: \n150/18 = 8.333…\n\nSo for every pixel, there are eight and a bit bytes used. Let's just call it an even eight and attribute the rest to metadata (when the photo was taken, the ISO, shutter, etc. settings, maybe GPS coordinates, and so on).\n\nDepending on your colour scheme (RGB/CMYK/RGBA/…) and bit depth (8 or 16 bits per channel), eight bytes per pixel is entirely plausible. \"True-colour\" RGB at 8 bits per channel only needs 3 bytes per pixel, so 8 bytes points to something like 16 bits per channel (6 bytes for RGB) plus an alpha channel or per-pixel overhead.\n\nThe reason other formats like JPEG or PNG will generally produce far smaller files is that they use compression - lossy in JPEG's case, lossless in PNG's - which simply means that they don't store a colour value for every single pixel but instead save space by doing things like recording \"the next five pixels all have this colour: \\[…\\]\" (very simplified).",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "145478",
"title": "TIFF",
"section": "Section::::Features and options.:BigTIFF.\n",
"start_paragraph_id": 63,
"start_character": 0,
"end_paragraph_id": 63,
"end_character": 500,
"text": "The TIFF file formats use 32-bit offsets, which limits file size to around 4 GiB. Some implementations even use a signed 32-bit offset, running into issues around 2 GiB already. BigTIFF is a TIFF variant file format which uses 64-bit offsets and supports much larger files. The BigTIFF file format specification was implemented in 2007 in development releases of LibTIFF version 4.0, which was finally released as stable in December 2011. Support for BigTIFF file formats by applications is limited.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "145478",
"title": "TIFF",
"section": "Section::::Features and options.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 787,
"text": "TIFF is a flexible, adaptable file format for handling images and data within a single file, by including the header tags (size, definition, image-data arrangement, applied image compression) defining the image's geometry. A TIFF file, for example, can be a container holding JPEG (lossy) and PackBits (lossless) compressed images. A TIFF file also can include a vector-based clipping path (outlines, croppings, image frames). The ability to store image data in a lossless format makes a TIFF file a useful image archive, because, unlike standard JPEG files, a TIFF file using lossless compression (or none) may be edited and re-saved without losing image quality. This is not the case when using the TIFF as a container holding compressed JPEG. Other TIFF options are layers and pages.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24306",
"title": "Portable Network Graphics",
"section": "Section::::File size and optimization software.:Compared to GIF.\n",
"start_paragraph_id": 137,
"start_character": 0,
"end_paragraph_id": 137,
"end_character": 939,
"text": "Compared to GIF files, a PNG file with the same information (256 colors, no ancillary chunks/metadata), compressed by an effective compressor is normally smaller than a GIF image. Depending on the file and the compressor, PNG may range from somewhat smaller (10%) to significantly smaller (50%) to somewhat larger (5%), but is rarely significantly larger for large images. This is attributed to the performance of PNG's DEFLATE compared to GIF's LZW, and because the added precompression layer of PNG's predictive filters take account of the 2-dimensional image structure to further compress files; as filtered data encodes differences between pixels, they will tend to cluster closer to 0, rather than being spread across all possible values, and thus be more easily compressed by DEFLATE. However, some versions of Adobe Photoshop, CorelDRAW and MS Paint provide poor PNG compression, creating the impression that GIF is more efficient.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "42815625",
"title": "Design of the FAT file system",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 568,
"text": "The FAT file system is a legacy file system which is simple and robust. It offers good performance even in very light-weight implementations, but cannot deliver the same performance, reliability and scalability as some modern file systems. It is, however, supported for compatibility reasons by nearly all currently developed operating systems for personal computers and many home computers, mobile devices and embedded systems, and thus is a well suited format for data exchange between computers and devices of almost any type and age from 1981 through the present.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5118708",
"title": "JPEG XR",
"section": "Section::::Description.:Container format.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 203,
"text": "Being TIFF-based, this format inherits all of the limitations of the TIFF format including the 4 GB file-size limit, which according to the HD Photo specification \"will be addressed in a future update\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24642251",
"title": "Mod deflate",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 253,
"text": "The mod_deflate module does not have a lower bound for file size, so it attempts to compress files that are too small to benefit from compression. This results in files smaller than approximately 120 bytes becoming larger when processed by mod_deflate.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16928980",
"title": "File spanning",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 459,
"text": "This is useful when saving large files onto smaller volumes or breaking large files up into smaller files for network messages of limited size (email, newsgroups). It also allows the creation of parity files such as parity archive (PAR) to verify and restore missing or corrupted package files. Another advantage with this is coping with file size limits on some file systems of removable media, or coping with volume size limits of things like floppy disks.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
ebnkco
|
how do we develop crushes on people?
|
[
{
"answer": "you see someone you find attractive and then you get a bone bone and decide that you want to make a hooman with them",
"provenance": null
},
{
"answer": "Lots of different reasons. Most influential factors include:\n- Proximity: You’re more likely to have a crush on someone who you have multiple classes with each day than you are to have a crush on someone who lives across the country.\n\n- Pheromones: Chemical signals, so to speak, that indicate a good genetic match or a person who is ovulating, to name a couple examples (there’s been a study where people use unscented soaps and deodorants and wear the same white t-shirt to bed every night for a week and then different people come to the lab and sniff the shirts to decide which person they find most attractive based on pheromones more or less. Heterosexual men prefer the shirts of women who are ovulating, and also like the smell of shirts worn by homosexual men the least)\n\n- Similar Levels of Attractiveness: This applies a bit more to the kind of person you actually end up in a relationship in as opposed to a crush. But a person tends to pursue people who are about the same level of attractiveness as they themselves are. This way, you protect your ego because you perceive the crush to be less likely to reject you. There are obviously exceptions to this (20 year old women dating wealthy 70 year old men, as an extreme example)\n\n- Admirable Qualities: That person has some sort of qualities that you would like to adopt in yourself or associate with your internal image of your ideal self. A person who is socially awkward and anxious and wishes they weren’t, for example, might have a secret crush on the outgoing, friendly person who strikes up conversations with the people who look like they could use a friend. This has a limitation: our egos come first - we don’t want people who we perceive as being so much better than ourselves that we feel inferior.\n\n- Time: The more time you spend with a person (similar to proximity), the more you start to really pay attention to a person. 
Think of the experiment where complete strangers stare into each other’s eyes for minutes at a time, and by the end of it, they feel a bit more comfortable with them even if they never exchange words. \n\n\nThere are a looooot more but these are the most commonly observed in lab settings\n\n\nSource: Psychology of Relationships and Intimacy class in college; also have a degree in psychology.\n\nEdit: Pressed enter between each bullet for better readability",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "19377820",
"title": "Stampede",
"section": "Section::::Human stampedes and crushes.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 293,
"text": "Crushes often occur during religious pilgrimages and large entertainment events, as they tend to involve dense crowds, with people closely surrounded on all sides. Human stampedes and crushes also occur in episodes of panic (e.g. in response to a fire or explosion) as people try to get away.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3619875",
"title": "Crush (American game show)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 202,
"text": "Crush is a game show which aired on USA Network from March to August 2000. It was hosted by Andrew Krasny and was known as \"The show that begs for an answer to the question, \"Should friends try love?\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6134005",
"title": "Crush (Dave Matthews Band song)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 384,
"text": "\"Crush\" is a song by the Dave Matthews Band, released as the third single from their album \"Before These Crowded Streets\". As a single, it reached #11 on the Modern Rock Tracks chart, #75 on the Billboard Hot 100, #38 on the Top 40 Mainstream, and #20 on the Adult Top 40. As the album version is over eight minutes in length, the song time was cut almost in half for radio airplay. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30271284",
"title": "Crush injury",
"section": "Section::::Pathophysiology.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 481,
"text": "Crush injury is damage to structures as a result of crushing. Crush syndrome is a systemic result of rhabdomyolysis and the subsequent release of cell contents. The severity of crush syndrome is dependent on the duration and magnitude of the crush injury as well as the bulk of muscle affected. It can result from either short-duration, high-magnitude injuries (such as being crushed by a building) or from low-magnitude, long-duration injuries such as coma- or drug-induced immobility.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11353966",
"title": "Crush (video game)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 523,
"text": "Crush is a platformer-puzzle video game developed by Kuju Entertainment's Zoë Mode studio and published by Sega in 2007 for the PlayStation Portable. Its protagonist is Danny, a young man suffering from insomnia, who uses an experimental device to explore his mind and discover the cause of his sleeplessness. Each level of the game, representing events from Danny's life and inspired by artists such as Tim Burton and M.C. Escher, requires the player to control Danny as he collects his \"lost marbles\" and other thoughts.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19377820",
"title": "Stampede",
"section": "Section::::Human stampedes and crushes.:Crushes.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 315,
"text": "Among causes of fatal crushes, sometimes described as \"crazes\", is when a large crowd is trying to get \"toward\" something; typically occurring when members at the back of a large crowd continue pushing forward not knowing that those at the front are being crushed, or because of something that forces them to move.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2217772",
"title": "Crush syndrome",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 554,
"text": "Crush syndrome (also traumatic rhabdomyolysis or Bywaters' syndrome) is a medical condition characterized by major shock and renal failure after a crushing injury to skeletal muscle. Crush \"injury\" is compression of extremities or other parts of the body that causes muscle swelling and/or neurological disturbances in the affected areas of the body, while crush \"syndrome\" is localized crush injury with systemic manifestations. Cases occur commonly in catastrophes such as earthquakes, to victims that have been trapped under fallen or moving masonry.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
29qo5i
|
why charities with similar goals don't merge to become more effective?
|
[
{
"answer": "That would require a lot of profit sharing for the lawyers and executives. Charities more proficient at throwing celebrity endorsed, wine and cheese events aren't going to share their spoils with the smaller outfits holding three legged races at a park.\n\nAbove all it's about paying the bills and lining their pockets. Whatever is left over goes to build wells, cancer research, etc.",
"provenance": null
},
{
"answer": "Merging companies (and yes, non-profits are companies) requires a lot of work. First, just because two companies do the same thing does not mean that they could merge easily. What if one is a Catholic charity, and the other is non-religious? Is the new charity religious or not? Which of the two Presidents is going to be in charge? Are you going to fire a bunch of staff? If not, how is it more efficient to have 1 big company with twice as many workers?\n\nCompanies merge when it makes some financial sense. Since non-profits are not in it to maximize value, they don't have to worry about being the biggest and best.",
"provenance": null
},
{
"answer": "Corruption is a big one, lots of charities run at the minimum spending to be called a charity and you don't know which ones are/to what degree profiteering unless they show you their books.",
"provenance": null
},
{
"answer": "Because most these non profit charities actually rake in a lot of money that goes to CEOs and higher ups. They aren't about to lose money like that.",
"provenance": null
},
{
"answer": "I was talking to a well to do person who wanted to found a charity that did what the Red Cross did. I asked her why she didn't make a donation, volunteer or apply for a job. She didn't want to \"work for another organization\". She wanted to be the CEO.\n\nIn this case, it was her personal ego and desire to have her name be on the marquee.",
"provenance": null
},
{
"answer": "The simple answer is that they don't become more effective by merging. Mergers of all kinds tend to cause bloat within companies. Non-profits rely on being nimble, quick to act, and flexible to changing needs and conditions. As they become too large, they are less able to do all of those things.\n\nAnd then they have larger infrastructure to maintain, which takes money away from where it is needed.",
"provenance": null
},
{
"answer": "[Sometimes they do.](_URL_0_)\n\nProper answer: Some charities that raise money for the \"same\" thing might be focusing on different aspects of it. If the issue is big enough multiple charities might be able to do this very well without getting in each other's way.",
"provenance": null
},
{
"answer": "Because everyone would lose their jobs.",
"provenance": null
},
{
"answer": "I work full time for a not for profit organization so I can tell you my point of view. I'll try to explain why your question, to me, sounds a lot like 'why don't all grocery stores and restaurants of the world merge together to be more efficient and make more money'...\n\nThe biggest reason is that we often we have different 'specialities'. For example one might be very adept at mobilizing students as volunteers; or be very specialized to deliver programs to a specific audience (specific age group, type of community, etc). Everything in the organization (staff, resources, mentality, values) etc could be geared towards that. Trying to merge might end being like trying to merge a restaurant and grocery store.... it might not really be more efficient. \n\nAlso I'll add that most NFPs I have worked with are really driven by the passion of the people working there, and that has to be considered a resource, too. Where I work, and pretty much any other NFP that I have had the inside scoop on, we make a lot less money than if we worked in equivalent jobs in the for profit sector. So we are motivated by different things than money. Often it's the belief in our cause or a love of the methods through which we work towards our cause. I'm motivated by our program and all its side benefits (the personal growth our volunteers go through) as much as I like the outcome of our work. If my organization merged with another one that used a totally different approach and I lost that aspect of things, I might not stick around for my job, which I think would be a loss for 'the cause'. I guess I'm just trying to give another example of another benefit of having different means to the same end... motivating people / harnessing their passion is important for NFPs because salary is not going to cut it (even more so if your are working with volunteers)! \n\nThrough my work, I often meet people who work/run other not for profit organizations with similar goals. 
Usually we explain our organizations to each other and then we try to see if we can partner. For example I might be able to provide a pool of trained/keen/screened volunteers to deliver a resource that they have created but don't have the manpower to deliver themselves. Whenever we create a new program, we look carefully at what already exists so that we don't duplicate anything. I think everyone finds their niche and develops their expertise accordingly. Sometimes I do encounter smaller groups that have just started a new program that is really similar to what we do, and I feel they are reinventing the wheel (often students... who had a good idea and didn't take the time to research if other similar programs existed). I usually offer to bring them under our umbrella, sometimes it works, but sometimes running their own thing (as volunteers) is their motivation so it's important to them to continue on their own. When you are working with volunteers, the time they donate is a scarce resource (like money) and if they'll donate more by having ownership of their own initiative, then it might be more efficient to let them lead their own thing.\n\nAnyways, I don't think it's a bad thing overall that we don't all merge together. A little healthy 'competition' is a good thing like it would be for any companies. We are all competing for funding from government, corporations, and individuals, and it makes us try harder!\n\nPS. It really frustrates me when people say that NFPs are ways of lining their pockets. I know there have been some bad apples but there are bad apples in everything. Where I work and any other organization that I have first-hand knowledge of, our salaries are very low compared to what we would make in industry, and the CEO's salary is extremely reasonable. We are audited every year and there are a lot of regulations in place by the government (in my country anyways). 
I'm really sad that some bad apples are giving a bad reputation to the sector and potentially hurting us all.",
"provenance": null
},
{
"answer": "higher-ups won't get most of the money that's used for \"administration\"",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "15947756",
"title": "Ken Stern",
"section": "Section::::Career.:Writing.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 905,
"text": "In 2013 Ken Stern's book \"With Charity for All: Why Charities are Failing and a Better Way to Give\" was published by Knopf Doubleday Publishing Group. His book discusses the problems in the not for profit charity sector, and appeals to donors for more evaluation and consideration in their decision making, in order to provide support for upcoming best of class charities, so that these organizations may survive and flourish in a sector controlled by large, traditional charities with less than optimal performance. He points out that although this sector accounts for a fast-growing ten percent of U.S. economic activity with over one trillion dollars in yearly donations, it has very little transparency, accountability, or oversight. He was interviewed with a focus on his book by Ken Berger, CEO of Charity Navigator. The interview was televised on CSPAN's BookTV series \"Afterwords\" in March, 2013.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "38183408",
"title": "Charitable for-profit entity",
"section": "Section::::Similarities between charitable for-profit entity and not-for-profit charities.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 653,
"text": "Although there are many differences between charitable for-profit entities and traditional charities, they do hold some similarities that can be said to be quite major. Both will have a strong vision in what they want from the business overall. Both will therefore have similar strategic plans in order to get the best out of the business regardless of their aims and objectives being different to an extent. For-profit entities and non-profit charities will both strive to meet their objectives that are laid out on their mission statements. They are both given limited funds, so will therefore have to aim to meet their goals with the funds provided.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4345474",
"title": "American Red Cross",
"section": "Section::::History and organization.:Ranking.\n",
"start_paragraph_id": 36,
"start_character": 0,
"end_paragraph_id": 36,
"end_character": 406,
"text": "In 1996, the \"Chronicle of Philanthropy\", an industry magazine, released the results of the largest study of charitable and non-profit organization popularity and credibility. The study showed that ARC was ranked as the third \"most popular charity/non-profit in America\" of over 100 charities researched with 48% of Americans over the age of 12 choosing \"Love\", and \"Like A lot\" to describe the Red Cross.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "638448",
"title": "Mothers Against Drunk Driving",
"section": "Section::::History.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 360,
"text": "In 1994, \"The Chronicle of Philanthropy\" released the results of the largest study of charitable and non-profit organization popularity and credibility. The study showed that MADD was ranked as the \"most popular charity/non-profit in America of over 100 charities researched with 51% of Americans over the age of 12 choosing 'Love' and 'Like A Lot' for MADD\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1569398",
"title": "Street fundraising",
"section": "Section::::Face-to-face fundraising.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 510,
"text": "Face-to-face fundraising, which includes street and door-to-door fundraising, has in recent years become a major source of income for many charities around the world. The reason the technique is so popular is that charities usually get a very profitable return on their investment (often around 3:1) because the person is asked to donate on a regular basis. By securing long term donations, charities are able to plan future campaigns in the knowledge that they have a guaranteed amount of money to work with.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "581673",
"title": "American Heart Association",
"section": "Section::::History.:1990s–2000s: Awareness campaigns.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 428,
"text": "In 1994, the \"Chronicle of Philanthropy\", an industry publication, released the results of the largest study of charitable and non-profit organization popularity and credibility. The study showed that the American Heart Association was ranked as the 5th \"most popular charity/non-profit in America\" of over 100 charities researched with 95% of Americans over the age of 12 choosing \"Love\" and \"Like A lot\" description category.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "917917",
"title": "Fundraising",
"section": "Section::::Organizations.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 307,
"text": "Fundraising is a significant way that non-profit organizations may obtain the money for their operations. These operations can involve a very broad array of concerns such as religious or philanthropic groups such as research organizations, public broadcasters, political campaigns and environmental issues.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
22s0k5
|
why are endorphins not used as the ultimate drug?
|
[
{
"answer": "They are. Heroin and opioid/opiate narcotics are basically just synthetic endorphins. ",
"provenance": null
},
{
"answer": "Endorphins can't cross the blood-brain barrier. Injecting or injesting them won't actually do anything, because they won't get to the receptors on which they exert their \"feel-good\" effect.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "60825",
"title": "Endorphins",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 955,
"text": "Endorphins (contracted from \"endogenous morphine\") are endogenous opioid neuropeptides and peptide hormones in humans and other animals. They are produced by the central nervous system and the pituitary gland. The term \"endorphins\" implies a pharmacological activity (analogous to the activity of the corticosteroid category of biochemicals) as opposed to a specific chemical formulation. It consists of two parts: \"endo-\" and \"-orphin\"; these are short forms of the words \"endogenous\" and \"morphine\", intended to mean \"a morphine-like substance originating from within the body\". The class of endorphins includes three compounds—α-endorphin (alpha endorphins), β-endorphin (beta endorphins), and γ-endorphin (gamma endorphins)—which preferentially bind to μ-opioid receptors. The principal function of endorphins is to inhibit the communication of pain signals; they may also produce a feeling of euphoria very similar to that produced by other opioids.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "157313",
"title": "Judith Reisman",
"section": "Section::::Erototoxins.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 409,
"text": "Endorphins are substances produced by the brain as a result of various things including sexual arousal, physical exercise, strong pain, laughter, etc. They cause pleasurable sensations and are somewhat addictive; drugs like morphine attach to the same receptors as endorphins. However, endorphins do not fit Reisman's definition of erototoxins, as many things cause them to be released, not only pornography.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1776932",
"title": "Endorphin (software)",
"section": "",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 253,
"text": "It has been used in movies and video games such as Troy, Poseidon and Tekken 5. As of 2014, Endorphin is no longer supported by NaturalMotion. The software is unavailable for purchase, and the user community has been removed from the company's website.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39316715",
"title": "Endorphins (song)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 278,
"text": "\"Endorphins\" is the fourth single by British DJ and record producer Sub Focus, released from his second studio album \"Torus\". The song features vocals from British singer Alex Clare. The song has reached number 10 on the UK Singles Chart and number seven on the UK Dance Chart.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1422685",
"title": "Beta-Endorphin",
"section": "Section::::Function and effects.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 960,
"text": "β-Endorphin is an agonist of the opioid receptors; it preferentially binds to the μ-opioid receptor. Evidence suggests that it serves as a primary endogenous ligand for the μ-opioid receptor, the same receptor from which the chemicals extracted from opium, such as morphine, derive their analgesic properties. β-Endorphin has the highest binding affinity of any endogenous opioid for the μ-opioid receptor. Opioid receptors are a class of G-protein coupled receptors, such that when β-endorphin or another opioid binds, a signaling cascade is induced in the cell. Acetylation of the N-terminus of β-endorphin, however, inactivates the neuropeptide, preventing it from binding to its receptor. The opioid receptors are distributed throughout the central nervous system and within the peripheral tissue of neural and non-neural origin. They are also located in high concentrations in the Periaqueductal gray, Locus coeruleus, and the Rostral ventromedial medulla.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10977211",
"title": "Diprenorphine",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 869,
"text": "Diprenorphine is the strongest opioid antagonist that is commercially available (some 100 times more potent as an antagonist than nalorphine), and is used for reversing the effects of very strong opioids for which the binding affinity is so high that naloxone does not effectively or reliably reverse the narcotic effects. These super-potent opioids, with the single exception of buprenorphine (which has an improved safety-profile due to its partial agonism character), are not used in humans because the dose for a human is so small that it would be difficult to measure properly, so there is an excessive risk of overdose leading to fatal respiratory depression. However, conventional opioid derivatives are not strong enough to rapidly tranquilize large animals, like elephants and rhinos, so drugs such as etorphine and carfentanil are available for this purpose.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "60825",
"title": "Endorphins",
"section": "Section::::Properties.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 429,
"text": "Endorphins play a major role in the body's response to inhibiting pain but endorphins have also been looked at for their role in pleasure. There has been a lot of research in the euphoric state that is produced after the release of endorphins in cases such as runner's high, orgasms, and eating appetizing food. Endorphins have also been looked into as a way to aid in the treatment of anxiety and depression through exercising.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
5birw9
|
if pre-election polling is mostly done by phone interviews via landline, and the number of landlines is declining among most demographic groups, why are they still fairly accurate?
|
[
{
"answer": "It is not accurate that \"... poling is mostly done by phone interviews via landline\". Actual polling companies call cellphones and have non-phone ways of reaching people. \"Polls\" limited to landlines are badly disguised political activism.",
"provenance": null
},
{
"answer": "I forget which election, but I believe it was Wilson. The polls were heavily in his opponent's favor; however, Wilson won by a decent margin. This was due to the fact that only rich people had phones, and therefore only rich people were included in the polls. ",
"provenance": null
},
{
"answer": "[NPR Politics Podcast - Polls](_URL_0_) \n\nCheck out this podcast. They talk to a Pollster from Pew Research Center about this very subject. ",
"provenance": null
},
{
"answer": "If you know the demographics of the population you're polling, and you know the demographics of the actual people you get ahold of on the phone, you can then weight the results to adjust for the polling discrepancy.\n\nExample: Let's say your polling population is 50/50 men/women. You then call a bunch of people to actually poll them, and realize the people who pick up the phone are 25/75 men/women. Since you know the population should be 50/50, you count each male response twice to account for the polling discrepancy.\n\nIn practice it's more complicated, but the principle is the same, and the adjustment methodologies are documented for any legitimate poll. (So they can be peer reviewed)",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "277315",
"title": "Opinion poll",
"section": "Section::::Potential for inaccuracy.:Coverage bias.\n",
"start_paragraph_id": 53,
"start_character": 0,
"end_paragraph_id": 53,
"end_character": 718,
"text": "This issue was first identified in 2004, but came to prominence only during the 2008 US presidential election. In previous elections, the proportion of the general population using cell phones was small, but as this proportion has increased, there is concern that polling only landlines is no longer representative of the general population. In 2003, only 2.9% of households were wireless (cellphones only), compared to 12.8% in 2006. This results in \"coverage error\". Many polling organisations select their sample by dialling random telephone numbers; however, in 2008, there was a clear tendency for polls which included mobile phones in their samples to show a much larger lead for Obama, than polls that did not.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41609161",
"title": "Voter suppression in the United States",
"section": "Section::::Methods.:Inequality in Election Day resources.\n",
"start_paragraph_id": 34,
"start_character": 0,
"end_paragraph_id": 34,
"end_character": 300,
"text": "Delays at polling places are widely regarded as being a greater problem in urban areas. In 2012, polling places in minority neighborhoods in Maryland, South Carolina, and Florida were systematically deprived of the resources they needed to operate effectively, leading to long lines on election day.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "277315",
"title": "Opinion poll",
"section": "Section::::Potential for inaccuracy.:Coverage bias.\n",
"start_paragraph_id": 52,
"start_character": 0,
"end_paragraph_id": 52,
"end_character": 603,
"text": "Polling organizations have developed many weighting techniques to help overcome these deficiencies, with varying degrees of success. Studies of mobile phone users by the Pew Research Center in the US, in 2007, concluded that \"cell-only respondents are different from landline respondents in important ways, (but) they were neither numerous enough nor different enough on the questions we examined to produce a significant change in overall general population survey estimates when included with the landline samples and weighted according to US Census parameters on basic demographic characteristics.\" \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9590163",
"title": "2009 Indian general election",
"section": "Section::::Electoral issues.:Polling stations.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 266,
"text": "There were 828,804 Polling Stations around the country – a 20% increase over the number from the 2004 election. This was done mainly to avoid vulnerability to threat and intimidation, to overcome geographical barriers and to reduce the distance travelled by voters.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "50466375",
"title": "2019 Australian federal election",
"section": "Section::::Opinion polls.:Assessment of polling accuracy.\n",
"start_paragraph_id": 71,
"start_character": 0,
"end_paragraph_id": 71,
"end_character": 394,
"text": "The former director of Newspoll, Martin O'Shannessy, cited changes in demographics and telephone habits which have changed the nature of polling from calling random samples of landlines to calling random mobile numbers and automated \"robocalls\"—with the ensuing drop in response rates resulting in lower quality data due to smaller samples and bias in the sample due to who chooses to respond.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "38274557",
"title": "List of polling organizations",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 318,
"text": "This is a list of notable polling organizations by country. All the major television networks, alone or in conjunction with the largest newspapers or magazines, in virtually every country with elections, operate their own versions of polling operations, in collaboration or independently through various applications.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6583943",
"title": "Rasmussen Reports",
"section": "Section::::Polling topics.:Elections.:Presidential.:2012.\n",
"start_paragraph_id": 39,
"start_character": 0,
"end_paragraph_id": 39,
"end_character": 457,
"text": "On November 8, the Rasmussen Reports daily presidential tracking poll analysis said \"The 2012 election was very likely the last presidential election of the telephone polling era. While the industry did an excellent job of projecting the results, entirely new techniques will need to be developed before 2016. The central issue is that phone polling worked for decades because that was how people communicated. In the 21st century, that is no longer true.\"\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
6rgsbx
|
What makes meth labs so dangerous?
|
[
{
"answer": "because they're attempting to replicate processes that are normally carried out in a _URL_2_, but without the design of engineers, construction with appropriate materials or operation by trained experts. Instead they're copying instructions from the internet or passed down orally, using whatever is cheapest and available and running the processes without necessarily understanding the details, especially the energies, of the reactions.\n\nI'm not interested in looking up the specific synthetic steps involved, but I expect at least a few of them are exothermic, meaning that when that reaction happens, one of the products is heat. If that reaction happens faster than expected, then you get lots of heat, all at once, and if that happens in a liquid solution like water, the liquid turns to gas, and expands violently, as in _URL_1_\n\nThere's nothing intrinsically dangerous about making meth, or any other pharmaceutical, but some of the reactions can be quite dangerous, even putting aside the potential for violently energetic reactions. One scary example: _URL_0_",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "610329",
"title": "Clandestine chemistry",
"section": "Section::::Enforcement of controls on precursor chemicals.:Amphetamines.\n",
"start_paragraph_id": 35,
"start_character": 0,
"end_paragraph_id": 35,
"end_character": 734,
"text": "Although the prevalence of domestic meth labs continues to be high in western states, they have spread throughout the United States. It has been suggested that \"do-it-yourself\" meth production in rural areas is reflective of a broader DIY approach that includes activities such as hunting, fishing, and fixing one’s cars, trucks, equipment, and house. Toxic chemicals resulting from methamphetamine production may be hoarded or clandestinely dumped, damaging land, water, plant life and wild life, and posing a risk to humans. Waste from methamphetamine labs is frequently dumped on federal, public, and tribal lands. The chemicals involved can explode and clandestine chemistry has been implicated in both house and wild land fires.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21543425",
"title": "Rolling meth lab",
"section": "Section::::Transportation hazard.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 566,
"text": "The process of synthesizing methamphetamine (also known as \"cooking\") can be dangerous as it involves poisonous, flammable, and explosive chemicals. When the lab is mobile, it presents a risk to wherever it happens to be, as demonstrated in November 2001, a rolling meth lab that was carrying anhydrous ammonia exploded on Interstate 24 in southwest Kentucky, prompting law enforcement to shut down the highway. Such incidents have not only injured the meth producers, but have injured passing motorists and police officers, who are also exposed to dangerous fumes.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "47523622",
"title": "Inland Empire",
"section": "Section::::Demographics.:Crime.\n",
"start_paragraph_id": 75,
"start_character": 0,
"end_paragraph_id": 75,
"end_character": 517,
"text": "The region has also been noted as a center of methamphetamine drug production. The Riverside and San Bernardino county sheriffs' departments busted 635 meth labs in 2000; law enforcement has driven most of the meth production industry to Mexico since 2007, but many of the homes discovered to have been used as meth labs before 2006 have since been sold on the market before California law required rigorous decontamination, leading to a legacy of health hazards for unsuspecting renters and home-buyers in the area.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "40542151",
"title": "History and culture of substituted amphetamines",
"section": "Section::::Illicit drug culture.:Illegal synthesis.:Illegal laboratories.\n",
"start_paragraph_id": 70,
"start_character": 0,
"end_paragraph_id": 70,
"end_character": 564,
"text": "Short-term exposure to high concentrations of chemical vapors that exist in black-market methamphetamine laboratories can cause severe health problems and death. Exposure to these substances can occur from volatile air emissions, spills, fires, and explosions. Such methamphetamine labs are sometimes discovered when emergency personnel respond to fires due to improper handling of volatile or flammable materials. Single-pot \"shake and bake\" syntheses are particularly prone to explode and ignite, and, when abandoned, still pose a severe hazard to firefighters.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "40542151",
"title": "History and culture of substituted amphetamines",
"section": "Section::::Illicit drug culture.:Illegal synthesis.:Illegal laboratories.\n",
"start_paragraph_id": 71,
"start_character": 0,
"end_paragraph_id": 71,
"end_character": 484,
"text": "Methamphetamine cooks, their families, and first responders are at high risk of experiencing acute health effects from chemical exposure, including lung damage and chemical burns to the body. After the seizure of a methamphetamine lab, a low exposure risk to chemical residues often exists, but this contamination can be sanitized. Chemical residues and lab wastes that are left behind at a former methamphetamine lab can cause severe health problems for people who use the property.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "36844837",
"title": "Methamphetamine in the United States",
"section": "Section::::Hazardous waste.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 267,
"text": "Meth lab waste is toxic and extremely hazardous, making cleanup a major problem for authorities and property owners. Common wastes include brake cleaner, ammonia, soda bottles, cat litter, lithium batteries, engine starter, matches and pseudoephedrine blister packs.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "36844837",
"title": "Methamphetamine in the United States",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 729,
"text": "Much of the methamphetamine consumed in the US is manufactured domestically by amateur chemists in meth labs from common household drugs and chemicals such as lye, lithium, and ammonia. Since the passage of the Combat Methamphetamine Epidemic Act of 2005, the Drug Enforcement Administration has reported a sharp decline in domestic meth lab seizures, but drug cartels continue to meet demand by manufacturing meth in Mexico and smuggling it across the border. In 2012, the DEA seized a total of 3,898 kg of methamphetamine and 11,210 meth labs. The Sinaloa Cartel is the most active drug cartel involved in smuggling illicit drugs like methamphetamine into the United States and trafficking them throughout the United States.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
19knsp
|
What is the effect of aging on sex cells in humans, and how is DNA preserved to pass on to offspring?
|
[
{
"answer": "The [germ cell line](_URL_2_) is separated from the rest of the developing organism early in development, and is kept in a state of minimal cell division and protection from metabolic damage. There is still degradation of the genetic material in [males](_URL_1_) and [females](_URL_0_), but it doesn't become an issue until past the age of 40.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "38057969",
"title": "Progeroid syndromes",
"section": "Section::::Defects in DNA repair.:RecQ-associated PS.:Werner syndrome.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 245,
"text": "Cells of affected individuals have reduced lifespan in culture, more chromosome breaks and translocations and extensive deletions. These DNA damages, chromosome aberrations and mutations may in turn cause more RecQ-independent aging phenotypes.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29586267",
"title": "Origin and function of meiosis",
"section": "Section::::Function.:Genetic diversity.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 445,
"text": "However, in the presence of a fairly stable environment, individuals surviving to reproductive age have genomes that function well in their current environment. They raise the question of why such individuals should risk shuffling their genes with those of another individual, as occurs during meiotic recombination? Considerations such as this have led many investigators to question whether genetic diversity is the adaptive advantage of sex.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3815307",
"title": "James F. Crow",
"section": "Section::::Biography.:Paternal age effect on DNA.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 983,
"text": "Crow also did research and writing in how DNA in sperm degrades as men age, through repeated copying, and can then be passed along to children in permanently degraded form, which they likely then pass on as well. As a result, he said in 1997 that the \"greatest mutational health hazard to the human genome is fertile older males\". He described mutations that have a direct visible effect on the child's health and also mutations that can be latent or have minor visible effects on the child's health; many such mutations allow the child to reproduce, but cause more serious problems for grandchildren, great-grandchildren and later generations. However, evidence to support Crow's \"greatest mutational health hazard\" claim appears to be weak; a 2009 review concludes that the absolute risk from paternal age for genetic anomalies in offspring is low, and states that \"there is no clear association between adverse health outcome and paternal age but longitudinal studies are needed.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3042204",
"title": "Male infertility",
"section": "Section::::Causes.:Pre-testicular causes.:DNA damage.\n",
"start_paragraph_id": 41,
"start_character": 0,
"end_paragraph_id": 41,
"end_character": 540,
"text": "Common inherited variants in genes that encode enzymes employed in DNA mismatch repair are associated with increased risk of sperm DNA damage and male infertility. As men age there is a consistent decline in semen quality, and this decline appears to be due to DNA damage. The damage manifests by DNA fragmentation and by the increased susceptibility to denaturation upon exposure to heat or acid, the features characteristic of apoptosis of somatic cells. These findings suggest that DNA damage is an important factor in male infertility.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5762008",
"title": "Microbial genetics",
"section": "Section::::Microorganisms whose study is encompassed by microbial genetics.:Protozoa.\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 606,
"text": "In the asexual fission phase of growth, during which cell divisions occur by mitosis rather than meiosis, clonal aging occurs leading to a gradual loss of vitality. In some species, such as the well studied \"Paramecium tetraurelia\", the asexual line of clonally aging paramecia loses vitality and expires after about 200 fissions if the cells fail to undergo meiosis followed by either autogamy (self-fertilizaion) or conjugation (outcrossing) (see aging in \"Paramecium\"). DNA damage increases dramatically during successive clonal cell divisions and is a likely cause of clonal aging in \"P. tetraurelia\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "51778204",
"title": "Spermatogonial stem cell",
"section": "Section::::Differentiation.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 552,
"text": "Male reproductive function declines with increasing age as indicated by decreased sperm quality and fertility. As rats age, undifferentiated spermatogonial cells undergo numerous changes in gene expression. These changes include upregulation of several genes involved in the DNA damage response. This finding suggests that during aging there is an increase in DNA damage leading to an upregulation of DNA damage response proteins to help repair these damages. Thus it appears that reproductive aging originates in undifferentiated spermatogenic cells.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8610048",
"title": "Paternal age effect",
"section": "Section::::Mechanisms.:Epigenetic changes.\n",
"start_paragraph_id": 50,
"start_character": 0,
"end_paragraph_id": 50,
"end_character": 449,
"text": "The production of sperm cells involves DNA methylation, an epigenetic process that regulates the expression of genes. Improper genomic imprinting and other errors sometimes occur during this process, which can affect the expression of genes related to certain disorders, increasing the offspring's susceptibility. The frequency of these errors appears to increase with age. This could explain the association between paternal age and schizophrenia.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
26hc0e
|
Can anyone identify the markings on this rock in my front yard? (Buffalo, NY)
|
[
{
"answer": "You'll probably get more help over at /r/whatisthisthing, the appropriate subreddit for these kinds of questions.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "10833767",
"title": "Hove Park",
"section": "Section::::Overview.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 340,
"text": "In the southwest corner lies a rock called \"The Goldstone\". Legend has it that the devil threw the approximately 20 ton rock there while excavating Devil's Dyke. Towards the north is a sculpture by the environmental artist Chris Drury; \"Fingermaze\" is a labyrinth-like design based on a fingerprint, consisting of stones set into the turf.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7388081",
"title": "Writing Rock State Historical Site",
"section": "Section::::The carvings.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 750,
"text": "The design on the rocks are clearly American Indian, despite unfounded speculation attributing the origins of the “mysterious carvings” to Vikings, Chinese, or others. Similar rock art sites are found in Roche Percee and Kamsack, Saskatchewan; Longview and Writing-on-Stone Provincial Park, Alberta; Pictograph Cave near Billings, Montana; Dinwoody, Wyoming; Ludlow Cave, South Dakota; and at numerous archeological sites in the upper midwestern United States. Thunderbirds, mythological creatures responsible for lightning and thunder, are central to stories told by Algonquian and Siouan-speaking tribes. Many Plains Indians such as Plains Cree, Plains Ojibwa, Gros Ventre, Crow, Dakota (Sioux) Mandan, and Hidatsa used thunderbirds in their art. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39701850",
"title": "Red Rock (Wyoming)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 419,
"text": "Red Rock is a rock formation in southwestern Wyoming that was used by travelers on the Overland Trail to record signature inscriptions from passersby. The wind-smoothed red rock stands about high and has a circumference of about . The sandstone formation records signatures dating to at least the 1850s. The signatures on the upwind side of the rock have weathered to faint traces. The rock is on privately owned land.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17233719",
"title": "Indian Head Rock",
"section": "Section::::Etymology.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 463,
"text": "The name \"Indian Head Rock\" comes from a carving on the bottom of the boulder with the features of a human face. It has been theorized that the face was carved by a Native American artist as a petroglyph, a boatman as a river gauge, or was carved by John Book from Portsmouth, Ohio who later fought in the Battle of Shiloh. Other theories include that a band of robbers used it to mark their nearby stash and that a quarryman carved the face with a metal device.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17233719",
"title": "Indian Head Rock",
"section": "Section::::History.:19th century.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 722,
"text": "In 1894, a rough illustration of the Indian Head Rock appeared in a Portsmouth newspaper. In the sketch, initials, last names with first initials, and a crude house figure are seen. All of these features are extant on the boulder, but many are now obscured by additional engravings which may not have been present at the time. The rock is depicted several feet from the Kentucky shoreline with the bottom submerged. The \"Indian Head\" face is depicted just below the waterline, but it is clear the illustrator had not seen the actual face, as it is drawn as a profile. Some of the names shown on the rock are identified with historically prominent Portsmouth residents or families, e.g., F. Kinney, C. Molster and D. Ford.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17233719",
"title": "Indian Head Rock",
"section": "Section::::History.:19th century.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 629,
"text": "The first known reference to the Indian Head Rock was to its use as a gauge of the Ohio River. A log kept by a local resident recorded the river stage with reference to various points on the rock: the mouth or the eyes of the carving or the top of the rock. (E.g., \"1849--Sept. 23, top of rock 2 1/2 inches under water\", \"1851--Sept 27, eyes to be seen--the lowest measure on record from 1839 to this date.\", and \"1854--Sept. 5, mouth just on the water-line--therefore lower than since 1839.\") The first record in the log was dated November 10, 1839, when the mouth of the figure was said to be \"10 1/4 inches out of the water.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "55183449",
"title": "Inscription Rock (Kelleys Island, Ohio)",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 941,
"text": "Inscription Rock was discovered partially buried in the sand of the lake shore in 1833 and by 1915, it was appearing on postcards for tourists in the area and is still a well-visited site to this day. In 1851 Col. Eastman of the United States Army was commissioned to analyze and create detailed drawings of the rock and petroglyphs. He then submitted copies to Shingvauk, a Native American with a knowledge of pictography, for further interpretation. There are over 100 images on the rock and the carvings were noted to be similar to ones used by the Iroquois in Canada. Due to the soft nature of the limestone rock in the area, the carvings are generally believed to be less than 1,000 years old but the Inscription Rock remains one of the most significant and accessible examples of native petroglyphs in the area. Due to its proximity to the Lake Erie shoreline, it is under constant threat of further erosion by wind and wave activity.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
19x933
|
how did the golden eye disk hold a full game and an emulator with 10 games only on 12mb?
|
[
{
"answer": "It is a game contained in a ROM, and the emulator is what allows you to access and play the ROM. There is not as much data as you think.",
"provenance": null
},
{
"answer": "I'm pretty sure the music is synthesized. The music actually stored on the cartridge is basically just the musical notes it should play, instead of the actual sound data. The console then has a built-in sequencer that reads the notes and plays them. The music files thus aren't very large at all.\n\nThe textures are very low quality, and probably take up most of the 12 megabytes. \n\nI'm not sure what you mean by emulator, but that is just additional code that probably doesn't take up a lot of space.",
"provenance": null
},
{
"answer": "because older games like that kept lists of instructions on how to play the music and how to draw the video, instead of storing the already processed and ready video and sound. The instruction lists take a lot less space than the methods we use to store video these days. Modern MP3 files and videos are completed music and video that have already been processed, and take up a lot more space.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "891454",
"title": "Power Player Super Joy III",
"section": "Section::::Hardware.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 414,
"text": "The consoles have 76 built-in games, although marketing frequently claims to have more than 1,000 ways of playing them. Hence, the game count of 76,000 is listed as a gold sticker on the box. Most of the included games had been originally released for the NES or Famicom, but some have been created by the manufacturer. Most of the games have had their title screen graphics removed to save space on the ROM chip.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31852479",
"title": "Street Fighter II: Champion Edition",
"section": "Section::::Ports.:X68000.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 801,
"text": "On November 26, 1993, Capcom released an X68000 port of \"Champion Edition\" exclusively in Japan, which consisted of four floppy disks. The port is almost identical to the arcade version, with the same exact graphics and almost identical soundtrack. However, the X68000 version forces player to switch floppy disks when loading different stages and characters (it is possible to avoid this by installing the game to the system's hard drive if the computer has more than 6 Megabytes). The game also included a joystick adapter that allowed players to use the Super Famicom and Mega Drive versions of Capcom's CPS Fighter joystick controller. On an X68030 with multiple PCM (pulse-code modulation) drivers installed, the music and voice quality can match that of the arcade version's ADPCM sound system.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3442219",
"title": "Second generation of video game consoles",
"section": "Section::::Home systems.:Atari 2600 & 5200.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 797,
"text": "Early Atari 2600 cartridges contained 2 kilobytes of read-only storage. This limit grew steadily from 1978 to 1983: up to 16 kilobytes for Atari 5200 cartridges. \"Bank switching\", a technique that allowed two different parts of the program to use the same memory addresses, was required for the larger cartridges to work. The Atari 2600 cartridges got as large as 32 kilobytes through this technique. The Atari 2600 had only 128 bytes of RAM available in the console. A few late game cartridges contained a combined RAM/ROM chip, thus adding another 256 bytes of RAM inside the cartridge itself. The Atari standard joystick was a digital controller with a single fire button released in 1977. The port of the Atari joystick was the de facto standard digital joystick specification for many years.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "496990",
"title": "Didaktik",
"section": "Section::::Didaktik M.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 344,
"text": "5.25-inch floppy disk drive called D40 was introduced in 1992 and featured a \"Snapshot\" (see also Hibernation (computing)) button that allowed to store current content of the memory (memory image) on diskette. It was also possible later to load the memory image and continue playing the game (or whatever was stored) from the respective state.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18944028",
"title": "Nintendo Entertainment System",
"section": "Section::::Hardware.:Technical specifications.\n",
"start_paragraph_id": 50,
"start_character": 0,
"end_paragraph_id": 50,
"end_character": 231,
"text": "The NES contains 2 kB of onboard work RAM. A game cartridge may contain expanded RAM to increase this amount. The sizes of NES games vary from 8 kB (\"Galaxian\") to 1 MB (\"Metal Slader Glory\"), but 128 to 384 kB is the most common.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1010890",
"title": ".kkrieger",
"section": "Section::::Procedural content.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 275,
"text": "The entire game uses only 97,280 bytes of disk space. In contrast, most contemporaneous first-person shooters filled one or more CDs or DVDs. According to the developers, \".kkrieger\" itself would take up around 200–300 MB of space if it had been stored the conventional way.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "412595",
"title": "Super Game Boy",
"section": "Section::::Predecessors and successors.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 439,
"text": "The Wide-Boy64 was released for the N64 in two major versions, the CGB version which allowed Game Boy and Game Boy Color titles to be played on a television, and the AGB version which also allowed Game Boy Advance games to be played. It cost $1400, and like the original Wide Boy, it was only available to developers and the gaming press. These devices were used to take screenshots of Nintendo handheld video games to be in retail media.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2msxdo
|
how do the big torrent uploaders like yify, eztv, etc, not get caught?
|
[
{
"answer": "There can be many reasons:\n\n- hiding behind a VPN, Tor, or another proxy, or all of those; remember that there still are countries where piracy is not regulated by law\n- initial seeding from a remote server\n- actually living in a country with no laws against piracy\n- all of the above\n\nI also doubt that they are *that* heavily hunted. The authorities have much bigger Internet problems, like hacking, fraud, and the drug trade.",
"provenance": null
},
{
"answer": "Well, actually, those are not the actual source of the pirated content. The content usually comes from scene groups like KILLERS and DIMENSION for TV shows, or RELOADED and SKIDROW for games. I'm not really sure if torrents are the first platform these original files appear on. The ones OP mentions are just known uploaders on public trackers like The Pirate Bay. You won't find those on private trackers.\n\nOP, you'll have a better chance of getting good answers to your question if you ask it in /r/Trackers",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "52122944",
"title": "Torrents-Time",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 794,
"text": "Torrents-Time is a browser plugin that allows websites to have the same functionality as the popular Popcorn Time program, without requiring the client to download an application. Released 2 February 2016, sites such as The Pirate Bay and the now defunct KickassTorrents others supported the plugin within days, allowing for in-browser streaming of popular videos. Only two weeks into its history it was attacked by anti-piracy groups on a number of grounds. The security of the plugin has been questioned, especially its reliance on cross-origin resource sharing and parts of its javascript implementation which could end up compromising a target computer and stealing information about the source. However, the Torrents-Time team claims these fears are exaggerations and based \"half-truths\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "239098",
"title": "BitTorrent",
"section": "Section::::Operation.:Anonymity.\n",
"start_paragraph_id": 38,
"start_character": 0,
"end_paragraph_id": 38,
"end_character": 258,
"text": "Private torrent trackers are usually invitation only, and require members to participate in uploading, but have the downside of a single centralized point of failure. Oink's Pink Palace and What.cd are examples of private trackers which have been shut down.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2349829",
"title": "Leecher (computing)",
"section": "Section::::P2P networks.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 952,
"text": "However, on most BitTorrent tracker sites, the term \"leecher\" is used for all users who are not seeders (which means they do not have the complete file yet). As BitTorrent clients usually begin to upload files almost as soon as they have started to download them, such users are usually not freeloaders (people who don't upload data at all to the swarm). Therefore, this kind of leeching is considered to be a legitimate practice. Reaching an upload/download ratio of 1:1 (meaning that the user has uploaded as much as they downloaded) in a BitTorrent client is considered a minimum in the etiquette of that network. In the terminology of these BitTorrent sites, a leech becomes a seeder (a provider of the file) when they finished downloading and continues to run the client. They will remain a seeder until the file is removed or destroyed (settings enable the torrent to stop seeding at a certain share ratio, or after X hours have passed seeding).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9488407",
"title": "Home server",
"section": "Section::::Services provided by home servers.:BitTorrent.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 443,
"text": "Home servers are ideal for utilizing the BitTorrent protocol for downloading and seeding files as some torrents can take days, or even weeks to complete and perform better on an uninterrupted connection. There are many text based clients such as rTorrent and web-based ones such as TorrentFlux and Tonido available for this purpose. BitTorrent also makes it easier for those with limited bandwidth to distribute large files over the Internet.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31201241",
"title": "Torrent poisoning",
"section": "Section::::Barriers to torrent poisoning.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 378,
"text": "There are several reasons why content providers and copyright holders may not choose torrent poisoning as a method for guarding their content. First, before injecting decoys, content providers have to normally monitor the BitTorrent network for signs that their content is being illegally shared (this includes watching for variations of files and files in compressed formats).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "32881",
"title": "Warez",
"section": "Section::::Warez distribution.:Rise of software infringement.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 641,
"text": "Today most warez files are distributed to the public via bittorrent and One-click hosting sites. Some of the most popular software companies that are being targeted are Adobe, Microsoft, Nero, Apple, DreamWorks, and Autodesk, to name a few. To reduce the spread of illegal copying, some companies have hired people to release \"fake\" torrents (known as Torrent poisoning), which look real and are meant to be downloaded, but while downloading the individual does not realize that the company that owns the software has received his/her IP address. They will then contact his/her ISP, and further legal action may be taken by the company/ISP.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1845497",
"title": "BitTorrent (software)",
"section": "Section::::Features.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 263,
"text": "The BitTorrent client enables a user to search for and download torrent files using a built-in search box (\"Search for torrents\") in the main window, which opens the BitTorrent torrent search engine page with the search results in the user's default web browser.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
5kuu9h
|
what is the difference between "_url_1_" and "_url_0_"? i know that they are one and the same, but in general i want to understand how the domain name works.
|
[
{
"answer": "_URL_0_ is what is called a subdomain. _URL_1_ is called a subdirectory. Pretty much the same on the server side, except that a subdirectory is within a domain's directory while a subdomain is outside a domain's directory, yet has its address respond to the initial domain. The reason that those particular two respond to the same place is that both addresses go to the same server location.\n\nSource: I work in websites and hosting.",
"provenance": null
},
{
"answer": "\nThere are two protocols here:\n\n- The Domain Name System (DNS for short) resolves a text domain name, like \"_URL_3_\", to a numeric IP address, like \"10.234.56.7\".\n\n- The Hyper-Text Transfer Protocol (HTTP for short, or HTTPS if you use the Secure version) allows you to request websites from a computer at an IP address.\n\nIf you type \"_URL_3_\" into your browser, first the browser will transform the URL into \"_URL_0_/\". The missing parts added are \"http://\" which specifies the protocol, and \"/\" which specifies a resource. It will ask your ISP's DNS for the address of \"_URL_3_\", be informed by the DNS server that the address is 10.234.56.7, then send an HTTP request to 10.234.56.7 for the \"/\" resource.\n\nIf you type \"_URL_2_\" into your browser, again the browser will transform the URL, this time into \"_URL_1_\". The resource part \"/mail\" was already there, so only the \"http://\" protocol part needed to be added this time. It will ask your ISP's DNS for the address of \"_URL_5_\", be informed by the DNS server that the address is 127.77.88.99, then send an HTTP request to 127.77.88.99 for the \"/mail\" resource.\n\nGoogle has programmed their servers to have both of these URLs do the same thing. For example, \"_URL_1_\" may use an HTTP redirect to tell your browser it should ask for \"_URL_0_\" instead (an HTTP redirect is a reply a website can send to your browser to tell it to request a different URL and change your address bar to match). Another common use for redirects is to redirect the HTTP version of the website to the HTTPS version.\n",
"provenance": null
},
{
"answer": "When you buy an internet name (domain name), you would buy \"_URL_3_\". Once you own that, you can use it for different servers, like _URL_2_, _URL_0_, _URL_1_ or whatever you want. www._URL_3_ would traditionally be used for your company's main web server, but in fact you can set it up any way you want.\n\nEverything after the slash indicates a different directory or application on that server. So maybe on your server you have a /mail folder, maybe a /games folder, or whatever you want.\n\nIf you want to go beyond ELI5, you can use technologies like URL rewriting, BigIP iRules, or host name bindings, where you can parse domain names however you want and the guidelines I wrote above can be bypassed.\n",
"provenance": null
},
{
"answer": "Breaking up the `_URL_0_/mail` URL a bit:\n\n- `_URL_0_` determines *which server you contact* (specifically, you're looking for the server that corresponds to the `www` subdomain registered under the `google` domain, which is registered under the `com` top-level domain).\n\n- `/mail` determines which document you ask the server for (and if this part isn't specified, you effectively just ask for `/`).\n\nOf course, since Google owns both `_URL_0_` and `_URL_2_`, they can make both URLs lead to the same page anyway, but this is the difference. One says \"Give me the `/mail` document from the server located at `_URL_0_`\", and the other says \"give me the `/` document from the server at `_URL_2_`\".",
"provenance": null
},
{
"answer": "As a web site creator, here is my understanding:\n\nImagine two servers. One is the main site server and the other is a dedicated mail server, both of which are wired together. A browser request for \"_URL_0_\" will route the request to the main server and redirect it to the mail server. Whereas the \"_URL_1_\" request goes directly to the mail server without the need for passing through the main server. Both methods will use the same numbered IP address.\n\nThe reason for the two variations is that we humans would not remember to enter the numbered IP address, and which of the two methods is used is determined by the person who programmed the link used to access it. If it was a Google staff member, it likely would be the direct method. But if it was programmed by another web site, they likely would use the main server redirect because they were unaware that a dedicated server exists.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "21796",
"title": "Namespace",
"section": "Section::::In programming languages.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 298,
"text": "As a rule, names in a namespace cannot have more than one meaning; that is, different meanings cannot share the same name in the same namespace. A namespace is also called a context, because the same name in different namespaces can have different meanings, each one appropriate for its namespace.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "15117923",
"title": ".рф",
"section": "Section::::Second level domains.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 468,
"text": "The second level domain names are registered directly with user defined names, such as company names. There are no standardized category names (such as com or org) used on the second level. The second level domain names are intended to have Cyrillic characters only, but some have Latin characters or digits instead. For the third level names, it is fairly common that \"www\" (Latin characters) are used, but most main company addresses don't use any third level name.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "32146",
"title": "Uniform Resource Identifier",
"section": "Section::::Relation to XML namespaces.\n",
"start_paragraph_id": 63,
"start_character": 0,
"end_paragraph_id": 63,
"end_character": 552,
"text": "In XML, a namespace is an abstract domain to which a collection of element and attribute names can be assigned. The namespace name is a character string which must adhere to the generic URI syntax. However, the name is generally not considered to be a URI, because the URI specification bases the decision not only on lexical components, but also on their intended use. A namespace name does not necessarily imply any of the semantics of URI schemes; for example, a namespace name beginning with \"http:\" may have no connotation to the use of the HTTP.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39878",
"title": "Domain name",
"section": "Section::::Domain name registration.:Technical requirements and process.\n",
"start_paragraph_id": 54,
"start_character": 0,
"end_paragraph_id": 54,
"end_character": 372,
"text": "A domain name consists of one or more labels, each of which is formed from the set of ASCII letters, digits, and hyphens (a-z, A-Z, 0-9, -), but not starting or ending with a hyphen. The labels are case-insensitive; for example, 'label' is equivalent to 'Label' or 'LABEL'. In the textual representation of a domain name, the labels are separated by a full stop (period).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39878",
"title": "Domain name",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 582,
"text": "A domain name is an identification string that defines a realm of administrative autonomy, authority or control within the Internet. Domain names are used in various networking contexts and for application-specific naming and addressing purposes. In general, a domain name identifies a network domain, or it represents an Internet Protocol (IP) resource, such as a personal computer used to access the Internet, a server computer hosting a web site, or the web site itself or any other service communicated via the Internet. In 2017, 330.6 million domain names had been registered.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "483205",
"title": ".name",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 255,
"text": "The domain name is a generic top-level domain (gTLD) in the Domain Name System of the Internet. It is intended for use by individuals for representation of their personal name, nicknames, screen names, pseudonyms, or other types of identification labels.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2292485",
"title": "Personal web page",
"section": "Section::::Domain names.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 1051,
"text": "Many people choose a domain name like \"FirstnameLastname.com\" to host their personal website on (e.g., patxhosa.com), whereas outside the English-speaking world, the home country's top level domain (TLD) is commonly used. People with common names may choose to add their middle name or initial in the URL if the primary choice of domain name is already in use. For example, a woman named \"Jane Doe\" will probably find that \"janedoe.com\" is already taken, so she may have to use a variant of her name, such as adding in a number (\"janedoe1.com\"), a birth year (\"janedoe1980.com\") or a middle initial (\"janeqdoe.com\"). The .name TLD is specifically intended to be used for personal web pages, but has not proven to be popular. Personal websites may instead use other generic TLDs like .me, .co, .net and .info, but also .com, .biz and .org, even though individuals rarely think of themselves as companies or (non-profit) organizations. Some people opt to find a TLD that forms a word when combined with the domain name; this is known as domain hacking.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
l1bnx
|
what exactly happened to gandalf after the snafu at moria?
|
[
{
"answer": "He slays the beast. He dies. While dead, he does things in the afterlife which he never talks about. He is reborn as the White, I imagine for his bravery, valor, and just doing the right thing.\n\nHe then gets taken away by the eagle and carried somewhere where he gives advice and such; I believe it was to the cliff of the birds.",
"provenance": null
},
{
"answer": "The fall did not kill him or the Balrog. When they landed, the Balrog fled from Gandalf, but he chased it through the myriad of twisting tunnels below ground. After a few days, he finally found and killed it, but not before suffering mortal wounds himself. His spirit went to \"*a place beyond space and time*\", but he was resurrected and returned to Middle Earth, ostensibly because he was needed to defeat Sauron. The books never tell exactly what happened...",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "53221",
"title": "Gandalf",
"section": "Section::::Internal biography.:Middle-earth.:The Fellowship of the Ring.\n",
"start_paragraph_id": 37,
"start_character": 0,
"end_paragraph_id": 37,
"end_character": 450,
"text": "After a long fall, Gandalf and the Balrog crashed into a deep subterranean lake in Moria's underworld. Gandalf pursued the Balrog through the tunnels for eight days until they climbed to the peak of Zirakzigil. Here they fought for two days and nights. In the end, the Balrog was defeated and cast down onto the mountainside. Gandalf himself died shortly afterwards, and his body lay on the peak while his spirit travelled \"out of thought and time\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "199002",
"title": "Isengard",
"section": "Section::::Literature.:Orthanc.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 569,
"text": "After his defeat, Saruman was confronted by Théoden King of Rohan, Gandalf and Aragorn, at which time Gríma Wormtongue, Saruman's servant, threw the \"palantír\" at the group in an attempt to kill them or possibly Gandalf. Saruman was then locked in Orthanc and guarded by Treebeard, but was later set free, turning the tower's keys over to Treebeard before leaving and taking Gríma with him. Treebeard's main reason for letting Saruman go was that he could not bear to see any living thing caged. Saruman exploited this weakness, most likely using his power with words.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "53221",
"title": "Gandalf",
"section": "Section::::Internal biography.:Middle-earth.:Gandalf the White.\n",
"start_paragraph_id": 39,
"start_character": 0,
"end_paragraph_id": 39,
"end_character": 427,
"text": "Gandalf was eventually \"sent back\" as Gandalf the White, and returned to life on the mountain top. Gwaihir, lord of eagles, carried him to Lórien, where he was healed of his injuries and re-clothed in white robes by Galadriel. He travelled to Fangorn Forest, where he encountered Aragorn, Gimli, and Legolas (who were tracking Merry and Pippin). They mistook him for Saruman, but he stopped their attacks and revealed himself.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29798",
"title": "The Lord of the Rings",
"section": "Section::::Plot summary.:The Two Towers.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 590,
"text": "Gandalf explains that he slew the Balrog. Darkness took him, but he was sent back to Middle-earth to complete his mission. He is clothed in white and is now Gandalf the White, for he has taken Saruman's place as the chief of the wizards. Gandalf assures his friends that Merry and Pippin are safe. Together they ride to Edoras, capital of Rohan. Gandalf frees Théoden, King of Rohan, from the influence of Saruman's spy Gríma Wormtongue. Théoden musters his fighting strength and rides with his men to the ancient fortress of Helm's Deep, while Gandalf departs to seek help from Treebeard.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "53221",
"title": "Gandalf",
"section": "Section::::Internal biography.:Middle-earth.:Gandalf the White.\n",
"start_paragraph_id": 41,
"start_character": 0,
"end_paragraph_id": 41,
"end_character": 770,
"text": "Gandalf arrived in time to help order the defences of Minas Tirith. His presence was resented by Denethor, the Steward of Gondor; but after Denethor's son Faramir was gravely wounded in battle, Denethor sank into despair and madness. Together with Prince Imrahil of Dol Amroth, Gandalf led the defenders during the siege of the city. When the forces of Mordor finally broke the main gate, Gandalf alone on Shadowfax confronted the Witch-king of Angmar, Lord of the Nazgûl. But at that moment the Rohirrim arrived, compelling the Witch-king to withdraw and engage them. Gandalf would have ridden to their aid, but he too was suddenly required elsewhere—to save Faramir from Denethor, who sought in desperation to burn himself and his son on a funeral pyre in Rath Dínen.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "63751",
"title": "The Return of the King",
"section": "Section::::Plot summary.:Book V: The War of the Ring.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 684,
"text": "Gandalf, Aragorn and the other Captains of the West lead an army to the Black Gate of Mordor and lay siege to Sauron's army. In a parley before the battle, the Mouth of Sauron, a messenger from the Black Gate, displays Frodo's \"mithril\" shirt, his elven-cloak and Sam's barrow-blade and then demands the surrender of the Captains and their obeisance to Sauron as conditions for Frodo's release. Despite the shock of seeing the objects and the complete loss of hope, Gandalf perceives that the emissary is lying, seizes the items, and rejects the terms. The battle begins and Pippin kills a Troll, which then falls onto him, and he loses consciousness just as the Great Eagles arrive.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "63750",
"title": "The Two Towers",
"section": "Section::::Plot summary.:Book III: The Treason of Isengard.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 890,
"text": "Gandalf and the entire company then go to Orthanc. Théoden rejects Saruman's offer of peace despite the wizard's cunning words. Gandalf then offers Saruman a chance to repent, but Saruman is too proud and refuses. So Gandalf casts Saruman out of the Order of Wizards and the White Council and breaks his staff. Gríma throws something from a window at Gandalf but misses, and it is picked up by Pippin. Gandalf quickly takes it from Pippin. This object turns out to be one of the \"palantíri\" (seeing-stones). Pippin, unable to resist the urge, looks into it and encounters the Eye of Sauron, but emerges unscathed from the ordeal. Gandalf then realizes at last the link between Isengard and Mordor and how Saruman fell at last into evil. By looking into the \"palantír\", Saruman became ensnared by the Dark Lord and made to do Sauron's bidding. Gandalf then gives the \"palantír\" to Aragorn. \n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
8mktot
|
Did ancient Israel abolish or prohibit slaveowning in any way after the Exodus from Egypt? Was it socially frowned upon to own slaves?
|
[
{
"answer": "EDIT: Just to be clear, not a historian, but I’ve been reading this book for the past ten years, so I think I know a little bit about it.\n\nThe Law actually says a lot about slaves and servants and how you were to treat them. How well they followed it, or if they followed it at all, is questionable whether you believe the Bible (that’s a heavy theme of the Old Testament) or not, but we can look at what they thought they were supposed to do.\n\nAmong fellow Israelites the law was explicit about keeping one another out of poverty, helping the poor, forgiving debts, etc. But not everyone cooperates. So if you could not afford to pay a debt, pay for your land, pay for a dowry, or if you couldn’t even take care of yourself, you could sell yourself, or your children, and become a servant.\n\nBut this was not permanent. The law gave you two outs. You could work for seven years, then you were free to go. If you married while in service and had kids, the owner kept those. And if you were female, you had to marry into the family, if you could. But other than that, you were free. You could even go back to your own land, or your husband’s land if you were working off a dowry.\n\nInterestingly, though, if you didn’t want to leave, you could become an indentured servant, from Exodus 21:\n\n“But if the servant declares, ‘I love my master and my wife and children and do not want to go free,’ then his master must take him before the judges. He shall take him to the door or the doorpost and pierce his ear with an awl. Then he will be his servant for life.”\n\nBasically, if you treated your slaves well, they could stay on and become basically a family member. It was a good incentive to be nice to slaves.\n(And there were laws in place to prevent people from forcing slaves into indentured servitude without consent, but that’s another story.)\n\nThe other way to be free was called the Year of Jubilee. The idea was that every fifty years, all debts just stopped. If you sold land to someone, you got it back. If you owed someone money, you were covered. And if you were an Israelite slave, you were free. Period. And as cool as it sounds (if completely impractical for a more advanced civilization), I can say with some certainty that I doubt this was ever observed, due to occupation and quarreling before the exile. And afterwards, land was fairly scarce.\n\nForeigners weren’t quite as lucky. If someone came to live in your land peacefully, you weren’t allowed to enslave them. The law said that you should incorporate foreigners into your land and basically make them Israelites. People you were fighting, though, were fair game. Though most of the time, the law said to kill everyone and take none for slaves, like in the invasion of the promised land, since they didn’t want the natives to breed with the Israelites and turn them to idols. The Old Testament acknowledges this basically didn’t happen at all. And foreign slaves were not under the seven-year rule, I don’t believe. Luckily, the law was pretty lax about becoming an Israelite: basically, be circumcised and respect Passover. So a foreign slave could become an Israelite and become free by extension.\n\nTreatment of slaves was pretty good according to the law. You were to respect slaves as people, feed them and give them space, and refrain from sexual relations. Indentured servants were to be treated as family. And punishment for mistreating slaves ran from hefty fines to freedom for the slave.\n\nI’ve been mostly pulling from Leviticus, Exodus, Joshua, and the all-important Spurgeon’s commentaries.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "6510049",
"title": "Ki Teitzei",
"section": "Section::::Readings.:Fourth reading — Deuteronomy 23:8–24.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 329,
"text": "In the continuation of the reading, Moses taught that if a slave sought refuge with the Israelites, the Israelites were not to turn the slave over to the slave's master, but were to let the former slave live in any place the former slave might choose and not ill-treat the former slave. A closed portion (, \"setumah\") ends here.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12666941",
"title": "Jewish views on slavery",
"section": "Section::::Talmudic era.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 367,
"text": "It is apparent that Jews still owned Jewish slaves in the Talmudic era because Talmudic authorities tried to denounce the practice that Jews could sell themselves into slavery if they were poverty-stricken. In particular, the Talmud said that Jews should not sell themselves to non-Jews, and if they did, the Jewish community was urged to ransom or redeem the slave.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "46767667",
"title": "Slavery in ancient Egypt",
"section": "Section::::Slave life.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 680,
"text": "Much of the research conducted on Egyptian enslavement has focused on the issue of payment to slaves. Masters did not commonly pay their slaves a regular wage for their service or loyalty. The slaves worked so that they could either enter Egypt and hope for a better life, receive compensation of living quarters and food, or be granted admittance to work in the Beyond. Although slaves were not “free” or rightfully independent, slaves in the New Kingdom were able to leave their master if they had a “justifiable grievance”. Historians have read documents about situations where this could be a possibility but it is still uncertain if independence from slavery was attainable.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12666941",
"title": "Jewish views on slavery",
"section": "Section::::Talmudic era.:Converting or circumcising non-Jewish slaves.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 575,
"text": "The Talmudic laws required Jewish slave owners to try to convert non-Jewish slaves to Judaism. Other laws required slaves, if not converted, to be circumcised and undergo ritual immersion in a bath (\"mikveh\"). A 4th century Roman law prevented the circumcision of non-Jewish slaves, so the practice may have declined at that time, but increased again after the 10th century. Jewish slave owners were not permitted to drink wine that had been touched by an uncircumcised person so there was always a practical need, in addition to the legal requirement, to circumcise slaves.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12666941",
"title": "Jewish views on slavery",
"section": "Section::::Biblical era.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 729,
"text": "Ancient Israelite society allowed slavery; however, total domination of one human being by another was not permitted. Rather, slavery in antiquity among the Israelites was closer to what would later be called indentured servitude. Slaves were seen as an essential part of a Hebrew household. In fact, there were cases in which, from a slave's point of view, the stability of servitude under a family in which the slave was well-treated would have been preferable to economic freedom. It is impossible for scholars to quantify the number of slaves that were owned by Hebrews in ancient Israelite society, or what percentage of households owned slaves, but it is possible to analyze social, legal, and economic impacts of slavery.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2171906",
"title": "Slavery and religion",
"section": "Section::::Judaism.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 886,
"text": "Jewish participation in the slave trade itself was also regulated by the Talmud. Fear of apostasy lead to the Talmudic discouragement of the sale of Jewish slaves to non-Jews, although loans were allowed; similarly slave trade with Tyre was only to be for the purpose of removing slaves from non-Jewish religion. Religious racism meant that the Talmudic writers completely forbade the sale or transfer of Canaanite slaves out from Palestine to elsewhere. Other types of trade were also discouraged: men selling themselves to women, and post-pubescent daughters being sold into slavery by their fathers. Pre-pubescent slave girls sold by their fathers had to be freed-then-married by their new owner, or his son, when she \"started\" puberty; slaves could not be allowed to marry free Jews, although masters were often granted access to the \"services\" of the wives of any of their slaves.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14098762",
"title": "History of Zionism",
"section": "Section::::Background: The historic and religious origins of Zionism.:Biblical precedents.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 753,
"text": "The precedence for Jews to return to their ancestral homeland, motivated by strong divine intervention, first appears in the Torah, and thus later adopted in the Christian Old Testament. After Jacob and his sons had gone down to Egypt to escape a drought, they were enslaved and became a nation. Later, as commanded by God, Moses went before Pharaoh, demanded, \"Let my people go!\" and foretold severe consequences, if this was not done. Torah describes the story of the plagues and the Exodus from Egypt, which is estimated at about 1400 BCE, and the beginning of the journey of the Jewish People toward the Land of Israel. These are celebrated annually during Passover, and the Passover meal traditionally ends with the words \"Next Year in Jerusalem.\"\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
rtfag
|
Someone I met got me curious... in the early days of America, mineral surveyors would cross the country looking for metal deposits. How exactly did they bore into rocks and test things without modern equipment?
|
[
{
"answer": "A lot of it boils down to understanding the geology itself. If you can recognise evidence of (for example) a large igneous porphyry body, you might take a good guess at there being workable amounts of copper mineralisation.\n\nAlternatively, simple techniques such as panning for heavy minerals in stream beds can tell you that there are economically viable mineral deposits somewhere upstream.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "39780642",
"title": "Treasure Hill (White Pine County, Nevada)",
"section": "Section::::Geology.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 468,
"text": "When detailed geological investigations were carried out by Geologists from the US and Britain, their finding was a dampner to the development of the mines in the hill. They inferred that the ores found were mere deposits only and not sourced by ore bearing veins in the rocks which could produce mineral ore for a long period. This revelation coupled with miners strikes and bad weather conditions resulted in a mass exodus of people, in 1870, from the mining belt. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "224230",
"title": "Philmont Scout Ranch",
"section": "Section::::History.:Private ownership.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 1023,
"text": "The history of mining at Philmont dates back to the years immediately after the Civil War. U.S. soldiers were stationed in the West after the war, as the U.S. Army was driving out the American Indians. The story is that one of these soldiers befriended an Indian, who happened to give him a shiny rock. The shiny material in the rock was found to be copper. According to the story, the soldier and two of his friends went up to investigate, and found gold. They could not stay to mine the gold and the area was overrun by miners by the time they returned the next year. Scores of gold mines were excavated in Philmont, and operated into the early 20th century. A large vein of gold is said to lie under Mount Baldy to this day, but extracting it has not been feasible. It is a common joke at Philmont that some day the mines under Baldy will collapse and Phillips will be the highest mountain in Philmont. The Contention Mine, located at Cyphers Mine, and the Aztec Mine, located at French Henry, are open to guided tours.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "48168600",
"title": "Tosham Hill range",
"section": "Section::::Geology of Tosham hill.:Scientific studies.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 734,
"text": "From 1894-96, Lt-General C.A. Mcmahon (1830-1904), who was also the president of British Geologists' Association, was the first modern geologist to study these rocks. He described the petrography of the rocks in 1884 and 1886 and published his work in the \"Records of Geological Survey of India\". During 1994-96, Khorana, Dhir and Jayapaul of Geological Survey of India carried out the first mineral survey and scout drilling of several hills in the Tosham range. During 2014-2016, Ravindra Singh and Dheerendra Singh of Banaras Hindu University undertook first ever Indus Valley Civilization archaeological excavations of the area to confirm the connection of ores mined from these hills with the smelting metallurgical work of IVS.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "52194371",
"title": "Marjorie Hooker",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 1017,
"text": "Marjorie Hooker (10 May 1908 – 4 May 1976) was an American geologist who worked to collect data on the make-up of igneous and metamorphic rocks as well as acted as a mineral specialist for the United States Department of State from 1943–1947. Her work on deciphering chemical data for granite rocks led her to collect and correspond information with geologists from all around the world. The multiple associations with which she worked include the American Association for the Advancement of Science, the Washington Academy of Sciences, the Geological Society of London, the Mineralogical Society of Great Britain and Ireland, the American Geophysical Union, the Geological Society of America, and the Mineralogical Association of Canada. She also worked as a delegate of the International Geological Congresses for their 19th, 20th, 23rd, and 24th meetings. Her contributions to Geology have been recognized with an award created in her name at Syracuse University to recognize and aid exceptional student research.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41752572",
"title": "Elizabeth F. Fisher",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 955,
"text": "Elizabeth Florette Fisher (November 26, 1873 – April 25, 1941) was one of the first field geologists in the United States. Born in Boston, Massachusetts, she attended and later taught at Massachusetts Institute of Technology (MIT). She was also the first woman to be sent out by an oil company for a survey, helping to locate oil wells in North-Central Texas during a nationwide oil shortage. During this same time, she not only continued her career as an instructor at Wellesley College, but also wrote an influential textbook for junior high students called \"Resources and Industries of the United States\". She stressed the need for conservation, and believed \"unclaimed\" land should be used for agriculture. She was a fellow of the American Association for the Advancement of Science and the American Geographical Society, and also was a member of the Appalachian Mountain Club and the Boston Society of Natural History. She died in 1941 from illness.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "615438",
"title": "General Mining Act of 1872",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 398,
"text": "All citizens of the United States of America 18 years or older have the right under the 1872 mining law to locate a lode (hard rock) or placer (gravel) mining claim on federal lands open to mineral entry. These claims may be located once a discovery of a locatable mineral is made. Locatable minerals include but are not limited to platinum, gold, silver, copper, lead, zinc, uranium and tungsten.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20441703",
"title": "Peavine Peak",
"section": "Section::::History.:Modern History and Development.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 316,
"text": "Mineral values were extracted from placer deposits as early as 1856 by prospectors returning to California. These small-scale operations lasted only as long as the water in the stream beds permitted, at which times the miners would clear off and move on. Such intermittent placer mining continued through the 1860s.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
66feuh
|
Cryptocurrency mining: what is the process, and why is a GPU required?
|
[
{
"answer": "Currency has value in part because it is rare. If you can get however much of it you want, it becomes worthless. Imagine you can just print off $100 bills from home and they count as real dollars. Why, then, would I sell you, I dunno, a used book for $5? I can just print off $100, so why do I want your $5? Or even $500? Or even $5000? I don't need your dollars, I can have as many as I want whenever I want.\n\nLikewise, cryptocurrency derives value in part because of its rarity. You can't just *have* bitcoins. But like real dollars, bitcoins still have to come from *somewhere*. You may be thinking \"dollars come from the US mint\" but that isn't really true. Physical paper dollars come from the US mint, but the underlying *value* of a dollar comes from the goods and services you can use the dollar to purchase. Those goods and services take time and resources to acquire, and the dollar value it takes to purchase them reflects that time and the cost of those resources. Take the simplest example: gold.\n\nYou want gold: it's shiny, it's malleable, it doesn't tarnish, it's an important part of electronics, etc. *I* want food, for obvious reasons. I don't know how to farm and I don't have the tools or land for farming. I *do* have the tools and expertise to find gold. The opposite is true for you: you have farming stuff, but no gold-getting stuff. So I will trade my time getting gold, which to me has less value than food, and trade you my gold for your food. Everyone wins, and it's not complicated until you start adding in a bunch of other people all trading for different resources and you need a way to keep track of who owes what to whom, and that's where currency is useful. Assuming nobody is just stamping out money, the amount of currency in the system depends on how much of the resources are available and how much time it takes people to get them. 
If there's a lot of gold to go around, you need more dollars to represent that gold. And getting gold takes *time* and *tools*.\n\nBack to bitcoin: you have to have a way for your cryptocurrency to enter the system. But you can't just dump it in, because then you'll have more currency in your system than you have absolute value in the system, and the currency will be worth less. You also can't just hand it out to people, because that isn't fair, and those people can hoard the cryptocurrency and create artificial scarcity, and control the currency such that it's a hassle to use and nobody wants it, which also makes it worth less (although in both cases, perhaps not worthless). You have to have a way for the currency to enter the system *slowly* to keep up with the demand for it, and control who gets it, and give the currency inherent value by making it - like gold - hard to obtain.\n\nThe solution is \"mining\" it. The cryptocurrency is obtained by having your computer \"mine\" it by solving very long, difficult math problems. This takes a lot of time - the problems aren't simple 1+2, they're incredibly complex functions that take even fast computers a very long time to complete. It also takes resources: you can solve more problems with a faster computer, but that means you have to invest in a faster computer. It solves the cryptocurrency dilemma perfectly, though, for those reasons: you are investing time and resources, which are inherently valuable, into the cryptocurrency, which makes *it* valuable. And anyone can do it.\n\nGPUs, or graphics processing units, are useful because they attack computing by using a *lot* of small, efficient processors rather than the traditional CPU (central processing unit) way of doing it, which is to have a few very powerful processors. CPUs solve problems by having a few core processors, like *maybe* eight, each doing thousands of operations per second. 
A GPU instead has thousands of processors, each doing a few operations at a time.\n\nGPUs are useful for crypto mining because you can work on many different functions simultaneously, and the functions can be broken down into smaller, easier problems that can be solved in parallel. Compare that to a normal CPU, which would solve one of the functions much faster, but has to solve *just that one* function, in its entirety, before moving on to the next one. It's the difference between having a tiny group of miners who work really fast and nonstop, but are all in the same mine, and having thousands of miners who are merely okay at mining, but with hundreds of them in each of hundreds of different mines.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "36662188",
"title": "Cryptocurrency",
"section": "Section::::Architecture.:Mining.\n",
"start_paragraph_id": 36,
"start_character": 0,
"end_paragraph_id": 36,
"end_character": 1105,
"text": "In cryptocurrency networks, \"mining\" is a validation of transactions. For this effort, successful miners obtain new cryptocurrency as a reward. The reward decreases transaction fees by creating a complementary incentive to contribute to the processing power of the network. The rate of generating hashes, which validate any transaction, has been increased by the use of specialized machines such as FPGAs and ASICs running complex hashing algorithms like SHA-256 and Scrypt. This arms race for cheaper-yet-efficient machines has been on since the day the first cryptocurrency, bitcoin, was introduced in 2009. With more people venturing into the world of virtual currency, generating hashes for this validation has become far more complex over the years, with miners having to invest large sums of money on employing multiple high performance ASICs. Thus the value of the currency obtained for finding a hash often does not justify the amount of money spent on setting up the machines, the cooling facilities to overcome the enormous amount of heat they produce, and the electricity required to run them.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "36662188",
"title": "Cryptocurrency",
"section": "Section::::Reception.\n",
"start_paragraph_id": 95,
"start_character": 0,
"end_paragraph_id": 95,
"end_character": 316,
"text": "The cryptocurrency community refers to pre-mining, hidden launches, ICO or extreme rewards for the altcoin founders as a deceptive practice. It can also be used as an inherent part of a cryptocurrency's design. Pre-mining means currency is generated by the currency's founders prior to being released to the public.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "36662188",
"title": "Cryptocurrency",
"section": "Section::::Reception.\n",
"start_paragraph_id": 93,
"start_character": 0,
"end_paragraph_id": 93,
"end_character": 203,
"text": "An enormous amount of energy goes into proof-of-work cryptocurrency mining, although cryptocurrency proponents claim it is important to compare it to the consumption of the traditional financial system.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "43006637",
"title": "Mining pool",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 728,
"text": "In the context of cryptocurrency mining, a mining pool is the pooling of resources by miners, who share their processing power over a network, to split the reward equally, according to the amount of work they contributed to the probability of finding a block. A \"share\" is awarded to members of the mining pool who present a valid partial proof-of-work. Mining in pools began when the difficulty for mining increased to the point where it could take centuries for slower miners to generate a block. The solution to this problem was for miners to pool their resources so they could generate blocks more quickly and therefore receive a portion of the block reward on a consistent basis, rather than randomly once every few years.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "56143781",
"title": "NiceHash",
"section": "Section::::Business model.:Hashing power buyers.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 447,
"text": "Buyers select the crypto-currency that they want to mine, a pool on which they want to mine, set the price that they are willing to pay for it, and place the order. Once the order is fulfilled by miners who are running NiceHash Miner on their machines, buyer gets the crypto-currency from the pool. This means that buyers aren't required to run complex mining operations themselves, and there is no capital investment in mining hardware required.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "52226949",
"title": "Zcoin",
"section": "Section::::History.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 535,
"text": "In December 2018, Zcoin implemented Merkle tree proof, a mining algorithm that deters the usage of Application-specific integrated circuit (ASIC) in mining coins by being more memory intensive for the miners. This allows ordinary users to use central processing unit (CPU) and graphics card for mining, so as to enable egalitarianism in coin mining. In the same month, Zcoin released an academic paper proposing the Lelantus protocol that remove the need of trusted setup and hides the origin and the amount of coins in a transaction.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "28249265",
"title": "Bitcoin",
"section": "Section::::Design.:Mining.\n",
"start_paragraph_id": 43,
"start_character": 0,
"end_paragraph_id": 43,
"end_character": 439,
"text": "\"Mining\" is a record-keeping service done through the use of computer processing power. Miners keep the blockchain consistent, complete, and unalterable by repeatedly grouping newly broadcast transactions into a \"block\", which is then broadcast to the network and verified by recipient nodes. Each block contains a SHA-256 cryptographic hash of the previous block, thus linking it to the previous block and giving the blockchain its name.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
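The "mining" described in the answer above boils down to a brute-force search for a hash with a rare property. Below is a minimal sketch in Python; the `mine` helper, the toy difficulty, and the single SHA-256 pass are all illustrative assumptions (real Bitcoin hashes an 80-byte block header twice against a far stricter target), not the actual protocol.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Find a nonce such that sha256(block_data + nonce) starts with
    `difficulty` zero hex digits. A toy stand-in for proof-of-work."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Each extra zero of difficulty multiplies the expected work by 16.
nonce = mine("block with some transactions", difficulty=4)
print(nonce)
```

Because every candidate nonce can be tried independently, this search parallelizes trivially, which is exactly why GPUs with thousands of small cores (and later ASICs) outperform CPUs at it.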
8u2m40
|
if a 213g potato has .2g of fat, 4.3g of protein, and 37g of carbs, what is the other 171.5g?
|
[
{
"answer": "Net carbs? If yes, most of the rest is fiber",
"provenance": null
},
{
"answer": "Water, fibre, and other things that humans don't digest into energy.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "843005",
"title": "Runts",
"section": "Section::::Ingredients and nutrition information.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 234,
"text": "Serv size: 12 pieces, servings: about 3.5, amount per serving calories: 60, total fat: 0 g (0% DV) Sodium: 0 mg (0% DV) total carb: 14 g (5% DV) sugars: 13 g protein: 0 g (Percent daily values (DV) are based on a 2,000 calorie diet.)\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31164570",
"title": "Integrated Child Development Services",
"section": "Section::::Implementation.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 229,
"text": "For nutritional purposes ICDS provides 500 kilocalories (with 12-15 gm grams of protein) every day to every child below 6 years of age. For adolescent girls it is up to 500 kilo calories with up to 25 grams of protein everyday.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1074264",
"title": "Glycemic load",
"section": "Section::::Description.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 544,
"text": "Glycemic load of a 100g serving of food can be calculated as its carbohydrate content measured in grams (g), multiplied by the food's GI, and divided by 100. For example, watermelon has a GI of 72. A 100 g serving of watermelon has 5 g of available carbohydrates (it contains a lot of water), making the calculation 5 × 72/100=3.6, so the GL is 4. A food with a GI of 90 and 8 g of available carbohydrates has a GL of 7.2 (8 × 90/100=7.2), while a food with a GI of just 6 and with 120 g of carbohydrate also has a GL of 7.2 (120 × 6/100=7.2).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1229612",
"title": "Mashed pumpkin",
"section": "Section::::Nutritional information.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 275,
"text": "A single cup of unseasoned mashed pumpkin contains only 49 calories, but has 564 mg of potassium, 5,000 mcg of beta-carotene, 853 mcg of alpha-carotene, 3,500 mcg of beta-cryptoxanthin, 2,400 mcg of lutein and zeaxanthin, 12,000 IUs of vitamin A, and 2.5 g of dietary fiber.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "147809",
"title": "Mochi",
"section": "Section::::Nutrition.\n",
"start_paragraph_id": 43,
"start_character": 0,
"end_paragraph_id": 43,
"end_character": 207,
"text": "A single serving of 44.0 g has 96 Calories (kilocalories), 1.0 g of fat, but no trans or saturated fat, 1.0 mg of sodium, 22.0 g of carbohydrates, 0 g of dietary fiber, 6.0 g of sugar, and 1.0 g of protein.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "228541",
"title": "Chocolate-coated marshmallow treats",
"section": "Section::::National varieties.:Krembo.:Nutritional information.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 337,
"text": "The average krembo weighs 25 grams (0.92 ounces) and has 115 calories. According to the fine print on packing foil, per 100 g of krembo there are 419 calories, 3.2 g protein, 64 g carbohydrates (of which 54 g are sugars); 16.7% Fats (of which 13.9% are poly-saturated fatty acids, less than 0.5% are trans fatty acids) and 67 mg sodium.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "46573",
"title": "Oat",
"section": "Section::::Health.:Protein.\n",
"start_paragraph_id": 34,
"start_character": 0,
"end_paragraph_id": 34,
"end_character": 260,
"text": "Oat protein is nearly equivalent in quality to soy protein, which World Health Organization research has shown to be equal to meat, milk and egg protein. The protein content of the hull-less oat kernel (groat) ranges from 12 to 24%, the highest among cereals.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
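The arithmetic behind the question above is just the total mass minus the listed macronutrients; per the answers, the remainder is mostly water (plus fiber and ash, depending on how the label counts carbs). A quick sketch, using only the figures from the question:

```python
total_g = 213.0                          # whole potato, grams
fat_g, protein_g, carbs_g = 0.2, 4.3, 37.0  # from the label

# Mass not accounted for by fat, protein, or carbohydrate:
remainder_g = total_g - (fat_g + protein_g + carbs_g)
print(remainder_g)  # 171.5 g, mostly water
```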
24pyyq
|
Gigantic black holes in galaxy centers keep 'devouring' matter. Why doesn't that eventually result in whole galaxies being consumed and merged into single immense 'holes' with all galactic mass inside them?
|
[
{
"answer": "The planets don't fall into the Sun because they basically \"keep missing\" when they fall towards it, hence going around in elliptical orbits. Matter around a black hole is essentially the same. The gas and dust orbiting a black hole carry angular momentum that must be conserved, and hence they orbit the hole in a disk, always missing the black hole and never falling into it. Of the entire accretion disk, only a small fraction eventually falls into the hole. Even if more matter fell toward the hole, accretion would eventually be capped once the luminosity created by the infall got so high that radiation pressure counteracted the infall motion. This limit is known as the Eddington luminosity. One of the big misconceptions about black holes is that they are cosmic vacuum cleaners that just go around a galaxy sucking up everything they encounter, but this is far from true. In fact, only very little mass actually enters a black hole.\n\nAlso, if the Sun's mass were somehow doubled, the periods of the planetary orbits would get a lot shorter according to Newton's law of universal gravitation and Kepler's laws, but the planets wouldn't necessarily fall into the Sun unless they could somehow lose their angular momentum, for example if the Earth were moving through a cloud of gas, which it isn't.",
"provenance": null
},
{
"answer": "A black hole of mass M has no more gravitational pull than any other object of mass M. So basically, whether the center of the galaxy has M kilograms of stars or M kilograms of black holes, the gravitational pull is the same. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "23614364",
"title": "Hypercompact stellar system",
"section": "Section::::Properties.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 503,
"text": "Astronomers believe that supermassive black holes (SMBHs) can be ejected from the centers of galaxies by gravitational wave recoil. This happens when two SMBHs in a binary system coalesce, after losing energy in the form of gravitational waves. Because the gravitational waves are not emitted isotropically, some momentum is imparted to the coalescing black holes, and they feel a recoil, or \"kick,\" at the moment of coalescence. Computer simulations suggest that the kick can be as large as formula_1,\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26930551",
"title": "Sigma (cosmology)",
"section": "Section::::Usage In Cosmology.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 205,
"text": "They saw that the clouds of gas before the galaxies compressed forming a black hole, the black hole would produce phenomenal amounts of energy which would force the rest of the galaxy away and form stars.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18478320",
"title": "Future of an expanding universe",
"section": "Section::::Timeline.:Degenerate Era.:Stellar remnants escape galaxies or fall into black holes.\n",
"start_paragraph_id": 36,
"start_character": 0,
"end_paragraph_id": 36,
"end_character": 598,
"text": "Because of dynamical relaxation, some objects will gain enough energy to reach galactic escape velocity and depart the galaxy, leaving behind a smaller, denser galaxy. Since encounters are more frequent in the denser galaxy, the process then accelerates. The end result is that most objects (90% to 99%) are ejected from the galaxy, leaving a small fraction (maybe 1% to 10%) which fall into the central supermassive black hole. It has been suggested that the matter of the fallen remnants will form an accretion disk around it that will create a quasar, as long as enough matter is present there.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "199940",
"title": "Massive compact halo object",
"section": "Section::::Types.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 614,
"text": "Cosmologists doubt they make up a majority of dark matter because the black holes are at isolated points of the galaxy. The largest contributor to the missing mass must be spread throughout the galaxy to balance the gravity. A minority of physicists, including Chapline and Laughlin, believe that the widely accepted model of the black hole is wrong and needs to be replaced by a new model, the dark-energy star; in the general case for the suggested new model, the cosmological distribution of dark energy would be slightly lumpy and dark-energy stars of primordial type might be a possible candidate for MACHOs.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "43948",
"title": "Star formation",
"section": "Section::::Stellar nurseries.:Cloud collapse.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 594,
"text": "A supermassive black hole at the core of a galaxy may serve to regulate the rate of star formation in a galactic nucleus. A black hole that is accreting infalling matter can become active, emitting a strong wind through a collimated relativistic jet. This can limit further star formation. Massive black holes ejecting radio-frequency-emitting particles at near-light speed can also block the formation of new stars in aging galaxies. However, the radio emissions around the jets may also trigger star formation. Likewise, a weaker jet may trigger star formation when it collides with a cloud.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17833105",
"title": "Weak gravitational lensing",
"section": "Section::::Weak lensing by clusters of galaxies.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 664,
"text": "Galaxy clusters are the largest gravitationally bound structures in the Universe with approximately 80% of cluster content in the form of dark matter. The gravitational fields of these clusters deflect light-rays traveling near them. As seen from Earth, this effect can cause dramatic distortions of a background source object detectable by eye such as multiple images, arcs, and rings (cluster strong lensing). More generally, the effect causes small, but statistically coherent, distortions of background sources on the order of 10% (cluster weak lensing). Abell 1689, CL0024+17, and the Bullet Cluster are among the most prominent examples of lensing clusters.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12579875",
"title": "The Dreaming Void",
"section": "Section::::Plot summary.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 362,
"text": "What was formerly believed to be a supermassive black hole at the centre of the Milky Way is revealed to be an artificial construct, known as the Void. Inside, there is a strange universe where the laws of physics are very different from those we know. It is slowly consuming the other stars of the galactic core—one day it will have devoured the entire galaxy.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
41klye
|
Not quite connecting the whole Franks, Alemanni, Charlemagne, Holy Roman Empire, and France thing.
|
[
{
"answer": "Charlemagne, at his death, ruled an Empire encompassing modern day France (sans Brittany), the Pyrenees, Austria, Switzerland, the low countries, the northern half of Italy, and the majority of modern Germany. After his death, his son Louis I came to power, then died leaving his three sons to divide up his Empire, which they did in 843 with the Treaty of Verdun. Charles the Bald was given West Francia (modern France without Provence or Brittany), Lothair (the eldest) was given a strip of land encompassing the low countries, Burgundy, Provence, and northern Italy, and Louis the German was given the eastern territories (East Francia). [Here](_URL_0_)'s a map to clarify things. East Francia and parts of Lotharingia developed into the Holy Roman Empire.\n\nThe traditional dislike between the French and the Germans comes from more recent sources (such as the Napoleonic wars, the Franco-Prussian war, and the two World Wars).",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "4602123",
"title": "Military history of the Netherlands",
"section": "Section::::The Franks.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 867,
"text": "The Franks or the Frankish people were one of several west Germanic federations. The confederation was formed out of Germanic tribes: Salians, Sugambri, Chamavi, Tencteri, Chattuarii, Bructeri, Usipetes, Ampsivarii, Chatti. They entered the late Roman Empire from the present day Netherlands and northern Germany and conquered northern Gaul where they were accepted as a \"foederati\" and established a lasting realm (sometimes referred to as Francia) in an area that covers most of modern-day France and the western regions of Germany (Franconia, Rhineland, Hesse) and the whole of the Low Countries, forming the historic kernel of the two modern countries. The conversion to Christianity of the pagan Frankish king Clovis was a crucial event in the history of Europe. Like the French and Germans, the Dutch also claim the military history of the Franks as their own.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3012559",
"title": "Bossong",
"section": "Section::::Cultural origins.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 298,
"text": "The Franks were the race of Charlemagne, the Pepins, Dagobert I, and Charles Martel. Their capital was at Aix-la-Chappell (Aachen) in present-day North Rhine-Westphalia, near the borders of Belgium and the Netherlands. The Bossong family seat was in the town of which lies in present-day Lorraine.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19985174",
"title": "Dutch language",
"section": "Section::::History.:Frankish (3rd–5th century).\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 521,
"text": "The Franks emerged in the southern Netherlands (Salian Franks) and central Germany (Ripuarian Franks), and later descended into Gaul. The name of their kingdom survives in that of France. Although they ruled the Gallo-Romans for nearly 300 years, their language, Frankish, became extinct in most of France and was replaced by later forms of the language throughout Luxembourg and Germany in around the 7th century. It was replaced in France by Old French (a Romance language with a considerable Old Frankish influence). \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13224",
"title": "History of Germany",
"section": "Section::::Germanic tribes, 750 BC – 768 AD.:Frankish Empire.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 763,
"text": "After the fall of the Western Roman Empire in the 5th century, the Franks, like other post-Roman Western Europeans, emerged as a tribal confederacy in the Middle Rhine-Weser region, among the territory soon to be called Austrasia (the \"eastern land\"), the northeastern portion of the future Kingdom of the Merovingian Franks. As a whole, Austrasia comprised parts of present-day France, Germany, Belgium, Luxembourg and the Netherlands. Unlike the Alamanni to their south in Swabia, they absorbed large swaths of former Roman territory as they spread west into Gaul, beginning in 250. Clovis I of the Merovingian dynasty conquered northern Gaul in 486 and in the Battle of Tolbiac in 496 the Alemanni tribe in Swabia, which eventually became the Duchy of Swabia.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1210359",
"title": "Frankish language",
"section": "Section::::Area.:Austrasia.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 663,
"text": "During the expansion into France and Germany, many Frankish people remained in the original core Frankish territories in the north (i.e. southern Netherlands, Flanders, a small part of northern France and the adjoining area in Germany centred on Cologne). The Franks united as a single group under Salian Frank leadership around 500 AD. Politically, the Ripuarian Franks existed as a separate group only until about 500 AD, after which they were subsumed into the Salian Franks. The Franks were united, but the various Frankish groups must have continued to live in the same areas, and speak the same dialects, although as a part of the growing Frankish Kingdom.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "962731",
"title": "French people",
"section": "Section::::History.:Frankish Kingdom.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 850,
"text": "With the decline of the Roman Empire in Western Europe, a federation of Germanic peoples entered the picture: the Franks, from which the word \"French\" derives. The Franks were Germanic pagans who began to settle in northern Gaul as \"laeti\" during the Roman era. They continued to filter across the Rhine River from present-day Netherlands and Germany between the 3rd and 7th centuries. Initially, they served in the Roman army and obtained important commands. Their language is still spoken as a kind of Dutch (Flemish - Low Frankish) in northern France (Westhoek) and Frankish (Central Franconian) in German speaking Lorraine. The Alamans, another Germanic people immigrated to Alsace, hence the Alemannic German now spoken there. The Alamans were competitors of the Franks, and their name is the origin of the French word for \"German\": \"Allemand\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13289",
"title": "History of the Netherlands",
"section": "Section::::Roman era (57 BC – 410 AD).:Emergence of the Franks.\n",
"start_paragraph_id": 71,
"start_character": 0,
"end_paragraph_id": 71,
"end_character": 264,
"text": "The Franks eventually were divided into two groups: the Ripuarian Franks (Latin: Ripuari), who were the Franks that lived along the middle-Rhine River during the Roman Era, and the Salian Franks, who were the Franks that originated in the area of the Netherlands.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
6sdq4x
|
Can bees tell the difference between their own hive's honey and another hive's honey?
|
[
{
"answer": "Well they certainly know the difference between their hive and other hives, even if there are several of them adjacent to each other. \n\nHoneybees will also rob other hives of their honey if food is scarce. This is intentional; they are not mistakenly at the wrong hive, and the host hive will try to repel the invaders. As far as whether they would \"know\" the difference between their own honey and another hive's if you placed these in containers near the hive, there is no way of knowing. Bees are pure genetics; I don't think there's really much going on in their brains besides the various primal instincts that allow the hive to survive. You would need one tiny MRI machine seeing how the brain lit up to answer this. ",
"provenance": null
},
{
"answer": "I feel like, as sensitive to smell as they are and as much as they rely on pheromones, they must have some way of identifying their own honey. That said, they may just be identifying their own hive and hive-mates, rather than the honey itself.\n\nAdditionally, I think bees are \"smarter\" than we give them credit for. Have you seen the bumblebees being taught to roll a ball to a specific place?\n\n_URL_0_",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "14361",
"title": "Honey",
"section": "Section::::Classification.:Floral source.:Monofloral.\n",
"start_paragraph_id": 68,
"start_character": 0,
"end_paragraph_id": 68,
"end_character": 1060,
"text": "Monofloral honey is made primarily from the nectar of one type of flower. Monofloral honeys have distinctive flavors and colors because of differences between their principal nectar sources. To produce monofloral honey, beekeepers keep beehives in an area where the bees have access to only one type of flower. In practice, because of the difficulties in containing bees, a small proportion of any honey will be from additional nectar from other flower types. Typical examples of North American monofloral honeys are clover, orange blossom, blueberry, sage, tupelo, buckwheat, fireweed, mesquite, and sourwood. Some typical European examples include thyme, thistle, heather, acacia, dandelion, sunflower, lavender, honeysuckle, and varieties from lime and chestnut trees. In North Africa (e.g. Egypt), examples include clover, cotton, and citrus (mainly orange blossoms). The unique flora of Australia yields a number of distinctive honeys, with some of the most popular being yellow box, blue gum, ironbark, bush mallee, Tasmanian leatherwood, and macadamia.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "35206866",
"title": "Tetragonisca angustula",
"section": "Section::::Human importance.:Honey.:Composition.\n",
"start_paragraph_id": 64,
"start_character": 0,
"end_paragraph_id": 64,
"end_character": 383,
"text": "Like most honey, \"T. angustula\" honey is made up of simple sugars, water, and ash. The specific ratio of these three components makes each honey unique however, and can be affected by season, climate, and other factors that affect flora availability. \"T. angustula\" honey contains more moisture than honey from typical honey bees and is also more acidic, giving it a complex flavor.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "188103",
"title": "Karl von Frisch",
"section": "Section::::Research.:Other work.\n",
"start_paragraph_id": 34,
"start_character": 0,
"end_paragraph_id": 34,
"end_character": 340,
"text": "Frisch's honey bee work included the study of the pheromones that are emitted by the queen bee and her daughters, which maintain the hive's very complex social order. Outside the hive, the pheromones cause the male bees, or drones, to become attracted to a queen and mate with her. Inside the hive, the drones are not affected by the odor.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9222621",
"title": "Calothamnus quadrifidus",
"section": "Section::::Ecology.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 411,
"text": "Research on the competition between honeybees (\"Apis mellifera\") and honeyeaters (especially the Brown honeyeater and White-cheeked honeyeater) for the nectar of \"Calothamnus quadrifidus\" has shown that honeyeaters consume more nectar early in the day. Honeybees, because of their much greater numbers consume a larger volume of nectar but nevertheless, honeyeaters were the more important in pollen dispersal.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "483630",
"title": "Horizontal top-bar hive",
"section": "Section::::Kenyan top-bar hive – KTBH.:Hive management.:Queen exclusion.\n",
"start_paragraph_id": 46,
"start_character": 0,
"end_paragraph_id": 46,
"end_character": 344,
"text": "Natural queen exclusion occurs more frequently in top-bar hives, because the brood nest is separated from the honey section by at least a full bar of honey comb, and not just a few centimetres of honey as may be the case in a multi-storey framed hive. And the more honey is gathered, the further the brood nest becomes from newly created comb.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "432932",
"title": "Honey bee race",
"section": "Section::::Description.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 289,
"text": "The races of the honey bee are classified into various named instances of an informal taxonomic rank of race—below that of subspecies—on the basis of shared genetic traits. The term \"honey bee\" means a bee of the species \"Apis mellifera\" which descend from bees that originated in Africa.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5939825",
"title": "Apis andreniformis",
"section": "Section::::Behavior.:Dominance hierarchy.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 531,
"text": "Unlike cavity-dwelling honey bees whose queen has a distinct chemical signal from that of the worker bees, \"A. andreniformis\" queens have similar chemical signals as their workers. Chemical signals secreted from the mandibular gland in \"A. andreniformis\" are not caste-determining like it is in other honey bees. As stated previously, the presence of royal jelly on young female larva produced the queen bee. Drones, or male bees, are not used for pollination or honey production, but are instead used only to mate with the queen.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
4z2wxw
|
if you sweat salt does that mean your body needs more salt or it already had too much?
|
[
{
"answer": "One of those pesky \"electrolytes\" all these sports drink companies are trying to sell us on buying, salt (or sodium, if you prefer) is necessary for proper bodily function.\n\nSea water has an average salt content of about 35 parts per thousand, so for every liter (1000ml) of seawater, you've got 35 grams of salt.\nThe reason it's harmful to drink seawater is because the human kidney is only capable of making urine that is LESS salty than 35 parts per thousand, so you'd have to urinate more liquid than you took in from the seawater. Your body literally dehydrates faster than you can drink it.\n\nYou can drink small amounts of seawater occasionally, so don't worry if you get a little in your mouth when you're swimming at the beach. Just don't make it a habit.\n\nWhen you add a little salt to some water, you're adding much much less than that, so it's not a problem. Neither is having some salt in your food, because we take in so much more water per day than is required to filter out the salt. Even your food has water in it!\n\nEDIT: I forgot to answer the main question!\nYes, your body needs salt. And yes, when you sweat, a small part of that is salt. \nYou get all you need from the foods you eat, even if you don't eat processed foods that are high in sodium, so don't worry about it.\nIt doesn't mean your body had \"excess salt\", nor should you worry about trying to put extra salt on your food later to compensate. ",
"provenance": null
},
{
"answer": "I don't know if this will help you out, but once I was training for a half marathon. I didn't hydrate well during a 20k run once and had sweat a lot during the run. At the end of it, I could brush off the salt crystals which had formed on my skin. About 30 minutes after that my leg muscles started cramping like crazy. The pain was more intense than just delayed onset muscle soreness or lactic acid build up. I limped back to the building I was training in and just knew I needed salt water. It was part instinct and part basic medical knowledge I had that told me I needed specifically salt water. I managed to get my mitts on a package of salt and mixed that into some water. 15-20 min later, the cramps eased off. The water tasted horrible, but was absolutely necessary.\n\nI'm sure pro athletes have a perfect routine that works for them with regards to electrolyte balance during their events, and I learned my lesson with regards to my requirements during long runs in a hot environment.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "1381306",
"title": "Sweat gland",
"section": "Section::::Sweat.:Mechanism.\n",
"start_paragraph_id": 38,
"start_character": 0,
"end_paragraph_id": 38,
"end_character": 396,
"text": "In both apocrine and eccrine sweat glands, the sweat is originally produced in the gland's coil, where it is isotonic with the blood plasma there. When the rate of sweating is low, salt is conserved and reabsorbed by the gland's duct; high sweat rates, on the other hand, lead to less salt reabsorption and allow more water to evaporate on the skin (via osmosis) to increase evaporative cooling.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "198725",
"title": "Drinking water",
"section": "Section::::Importance of access to safe drinking water.:Requirements.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 216,
"text": "Fluid balance is key. Profuse sweating can increase the need for electrolyte (salt) replacement. Water intoxication (which results in hyponatremia), the process of consuming too much water too quickly, can be fatal.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29825613",
"title": "Salt and cardiovascular disease",
"section": "Section::::Effect of salt on blood pressure.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 387,
"text": "The human body has evolved to balance salt intake with need through means such as the renin–angiotensin system. In humans, salt has important biological functions. Relevant to risk of cardiovascular disease, salt is highly involved with the maintenance of body fluid volume, including osmotic balance in the blood, extracellular and intracellular fluids, and resting membrane potential.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34900600",
"title": "Health effects of salt",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 478,
"text": "The health effects of salt are the conditions associated with the consumption of either too much or too little salt. Salt is a mineral composed primarily of sodium chloride (NaCl) and is used in food for both preservation and flavor. Sodium ions are needed in small quantities by most living things, as are chloride ions. Salt is involved in regulating the water content (fluid balance) of the body. The sodium ion itself is used for electrical signaling in the nervous system.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2429234",
"title": "Fluid balance",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 542,
"text": "Profuse sweating can increase the need for electrolyte replacement. Water-electrolyte imbalance produces headache and fatigue if mild; illness if moderate, and sometimes even death if severe. For example, water intoxication (which results in hyponatremia), the process of consuming too much water too quickly, can be fatal. Deficits to body water result in volume contraction and dehydration. Diarrhea is a threat to both body water volume and electrolyte levels, which is why diseases that cause diarrhea are great threats to fluid balance.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "647889",
"title": "Israel Hanukoglu",
"section": "Section::::Contributions to science.:Epithelial sodium channel (ENaC).\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 845,
"text": "Systemic pseudohypoaldosteronism patients with mutated ENaC subunits may lose significant amount salt in sweat especially at hot climates. To identify the sites of salt loss, Hanukoglu brothers examined the localization of ENaC in the human skin. In a comprehensive study examining all the layers of skin and epidermal appendages, they found a widespread distribution of ENaC in keratinocytes in the epidermal layers. Yet, in the eccrine sweat glands, ENaC was localized on the apical cell membrane exposed to the duct of these sweat glands. Based on additional observations, they concluded that the ENaC located on the eccrine gland sweat ducts is responsible for the uptake of Na ions from sweat secretions. This recycling of Na reduces the concentration of salt in perspiration and prevents the loss of salt at hot climates via perspiration.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34900600",
"title": "Health effects of salt",
"section": "Section::::Long-term effects.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 301,
"text": "Although many health organizations and recent reviews state that high consumption of salt increases the risk of several diseases in children and adults, the effect of high salt consumption on long term health is controversial. Some suggest that the effects of high salt consumption are insignificant.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1h1t5v
|
How does the body build tolerance to caffeine?
|
[
{
"answer": "By regulation of the number of receptors sensitive to caffeine on the cell membrane.\n\nCaffeine functions by inhibiting adenosine receptors in the brain, which we believe is involved in our biological clocks. After prolonged exposure to caffeine, the cell tries to return to homeostasis by *increasing* the number of adenosine receptors present on each cell's membrane to compensate for caffeine's effects. This results in the need for more caffeine to achieve the same effect on the brain as before, and can lead to withdrawal symptoms once the baseline caffeine level is removed.\n\nOf course, there's more to it than that, but that's the crux of the idea.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "6868",
"title": "Caffeine",
"section": "Section::::Use.:Enhancing performance.:Cognitive.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 541,
"text": "Caffeine is a central nervous system stimulant that reduces fatigue and drowsiness. At normal doses, caffeine has variable effects on learning and memory, but it generally improves reaction time, wakefulness, concentration, and motor coordination. The amount of caffeine needed to produce these effects varies from person to person, depending on body size and degree of tolerance. The desired effects arise approximately one hour after consumption, and the desired effects of a moderate dose usually subside after about three or four hours.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "45588500",
"title": "Caffeine-induced anxiety disorder",
"section": "Section::::Mechanism of caffeine action.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 406,
"text": "Caffeine acts in multiple ways within the brain and the rest of the body. However, due to the concentration of caffeine required, antagonism of adenosine receptors is the primary mode of action. The following mechanisms are ways in which caffeine may act within the body, but depending on necessary caffeine concentration and other factors may not be responsible for the clinical effects of the substance.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6868",
"title": "Caffeine",
"section": "Section::::Adverse effects.:Reinforcement disorders.:Dependence and withdrawal.\n",
"start_paragraph_id": 47,
"start_character": 0,
"end_paragraph_id": 47,
"end_character": 904,
"text": "Tolerance to the effects of caffeine occurs for caffeine induced elevations in blood pressure and the subjective feelings of nervousness. Sensitization, the process whereby effects become more prominent with use, occurs for positive effects such as feelings of alertness and well being. Tolerance varies for daily, regular caffeine users and high caffeine users. High doses of caffeine (750 to 1200 mg/day spread throughout the day) have been shown to produce complete tolerance to some, but not all of the effects of caffeine. Doses as low as 100 mg/day, such as a 6 oz cup of coffee or two to three 12 oz servings of caffeinated soft-drink, may continue to cause sleep disruption, among other intolerances. Non-regular caffeine users have the least caffeine tolerance for sleep disruption. Some coffee drinkers develop tolerance to its undesired sleep-disrupting effects, but others apparently do not.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39337193",
"title": "Stress in medical students",
"section": "Section::::Physical effects.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 732,
"text": "Medical students have been known to consume caffeinated beverages to be active and alert during time of studying. These students drink large quantities of coffee, tea, cola, and energy drinks. Though an increased intake of caffeine can increase the levels of adenosine, adrenaline, cortisol and dopamine in the blood, caffeine also inhibits the absorption of some nutrients, increasing the acidity of the gastrointestinal tract and depleting the levels of calcium, magnesium, iron and other trace minerals of the body through urinary excretion. Furthermore, caffeine decreases blood flow to the brain by as much as 30 percent, and it decreases the stimulation of insulin, a hormone that helps regulate the body's blood sugar level.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "166189",
"title": "Autonomic nervous system",
"section": "Section::::Caffeine effects.\n",
"start_paragraph_id": 72,
"start_character": 0,
"end_paragraph_id": 72,
"end_character": 667,
"text": "Caffeine is a bio-active ingredient found in commonly consumed beverages such as coffee, tea, and sodas. Short-term physiological effects of caffeine include increased blood pressure and sympathetic nerve outflow. Habitual consumption of caffeine may inhibit physiological short-term effects. Consumption of caffeinated espresso increases parasympathetic activity in habitual caffeine consumers; however, decaffeinated espresso inhibits parasympathetic activity in habitual caffeine consumers. It is possible that other bio-active ingredients in decaffeinated espresso may also contribute to the inhibition of parasympathetic activity in habitual caffeine consumers.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6868",
"title": "Caffeine",
"section": "Section::::Pharmacology.:Pharmacodynamics.:Enzyme targets.\n",
"start_paragraph_id": 78,
"start_character": 0,
"end_paragraph_id": 78,
"end_character": 387,
"text": "Caffeine, like other xanthines, also acts as a phosphodiesterase inhibitor. As a competitive nonselective phosphodiesterase inhibitor, caffeine raises intracellular cAMP, activates protein kinase A, inhibits TNF-alpha and leukotriene synthesis, and reduces inflammation and innate immunity. Caffeine also affects the cholinergic system where it inhibits the enzyme acetylcholinesterase.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "45588500",
"title": "Caffeine-induced anxiety disorder",
"section": "Section::::Genetics and variability of caffeine consumption.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 540,
"text": "While many factors contribute to individual differences in a person's response to caffeine, such as environmental and demographic factors (i.e. age, drug use, circadian factors, etc.), genetics play an important role in individual variability. This inconsistency in responses to caffeine can take place either at the metabolic or at the drug-receptor level. The effects of genetic factors can occur either directly by changing acute or chronic reactions to the drug or indirectly by altering other psychological or physiological processes.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
6su659
|
Why are there nuclear-powered subs and aircraft carriers but no nuclear-powered airplanes?
|
[
{
"answer": "This was experimented with a bit in the '50s by both the Americans and the Soviets. The main issue is that to protect the crew from radiation you need a lot of heavy shielding, and this makes it difficult to fly. \n\n_URL_0_",
"provenance": null
},
{
"answer": "Power to weight ratios. Naval vessels are massive, with displacements of thousands of tonnes. The reactors powering such vessels are themselves massive, with weights in the many hundreds of tonnes. That weight comes from the combination of the fuel, the reactor control and cooling components, and the heavy radiation shielding.\n\nNuclear powered aircraft have been researched however, with several different designs. The molten salt reactor design, for example, came out of US Air Force research on a nuclear reactor light enough to power an airplane. There's also the hair raising concept of a nuclear powered RAMJET, as in [project pluto](_URL_0_). The problem with all of these designs is that if you don't have the luxury to add enough weight for shielding and containment then you generally have a very \"dirty\" aircraft. Either one that radiates the crew a bit (MSR) or a drone aircraft that leaves a trail of nuclear fallout in its wake (SLAM/Pluto).",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "942255",
"title": "Nuclear navy",
"section": "Section::::Nuclear-powered aircraft carriers.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 811,
"text": "The United States Navy has by far the most nuclear-powered aircraft carriers, with ten Nimitz-class carriers and one Gerald R. Ford-class carrier in service. The last conventionally-powered aircraft carrier left the U.S. fleet as of 12 May 2009, when the USS \"Kitty Hawk\" (CV-63) was deactivated. France's latest aircraft carrier, the \"R91 Charles de Gaulle\", is nuclear-powered. The United Kingdom rejected nuclear power early in the development of its \"Queen Elizabeth\"-class aircraft carriers on cost grounds, as even several decades of fuel use costs less than a nuclear reactor. Since 1949 the Bettis Atomic Power Laboratory near Pittsburgh, Pennsylvania has been one of the lead laboratories in the development of the nuclear navy. The planned Indigenous Chinese Carriers also feature Nuclear Propulsion.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2219",
"title": "Aircraft carrier",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 1044,
"text": "As of , there are 41 active aircraft carriers in the world operated by thirteen navies. The United States Navy has 11 large nuclear-powered fleet carriers—carrying around 80 fighter jets each—the largest carriers in the world; the total combined deckspace is over twice that of all other nations combined. As well as the aircraft carrier fleet, the U.S. Navy has nine amphibious assault ships used primarily for helicopters, although these also carry up to 20 vertical or short take-off and landing (V/STOL) fighter jets and are similar in size to medium-sized fleet carriers. China, France, India, Russia, and the UK each operate a single large/medium-size carrier, with capacity from 30 to 60 fighter jets. Italy operates two light fleet carriers and Spain operates one. Helicopter carriers are operated by Japan (4), France (3), Australia (2), Egypt (2), Brazil (1), South Korea (1), and Thailand (1). Future aircraft carriers are under construction or in planning by Brazil, China, India, Russia, the United Kingdom, and the United States.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "194856",
"title": "USS Enterprise (CVN-65)",
"section": "Section::::Design.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 342,
"text": "\"Enterprise\" is also the only aircraft carrier to house more than two nuclear reactors, having an eight-reactor propulsion design, with each A2W reactor taking the place of one of the conventional boilers in earlier constructions. She is the only carrier with four rudders, two more than other classes, and features a more cruiser-like hull.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21355775",
"title": "History of the aircraft carrier",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 328,
"text": "Aircraft carriers are warships that evolved from balloon-carrying wooden vessels into nuclear-powered vessels carrying scores of fixed- and rotary-wing aircraft. Since their introduction they have allowed naval forces to project air power great distances without having to depend on local bases for staging aircraft operations.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "32022907",
"title": "United States Navy Nuclear Propulsion",
"section": "Section::::History.:Aircraft carriers.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 409,
"text": "The first production class of nuclear-powered aircraft carrier is the . Ten \"Nimitz\"-class aircraft carriers in total were produced with all remaining in active duty. This class of aircraft carrier is currently intended to be replaced with the . The \"Gerald R. Ford\"-class aircraft carriers are still in production, with three currently being produced. There are plans to produce an additional seven vessels.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "181160",
"title": "Nimitz-class aircraft carrier",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 559,
"text": "Instead of the gas turbines or diesel-electric systems used for propulsion on many modern warships, the carriers use two A4W pressurized water reactors which drive four propeller shafts and can produce a maximum speed of over and maximum power of around . As a result of the use of nuclear power, the ships are capable of operating for over 20 years without refueling and are predicted to have a service life of over 50 years. They are categorized as nuclear-powered aircraft carriers and are numbered with consecutive hull numbers between CVN-68 and CVN-77.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "496572",
"title": "Nuclear propulsion",
"section": "Section::::Surface ships, submarines, and torpedoes.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 331,
"text": "Nuclear-powered vessels are mainly military submarines, and aircraft carriers. Russia is the only country that currently has nuclear-powered civilian surface ships, mainly icebreakers. America currently (as of July 2018) has 11 aircraft carriers in service, and all are powered by nuclear reactors. For more detailed articles see:\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
kq793
|
Physics problem that has been plaguing me since high school.
|
[
{
"answer": "The question at hand is: What causes the speed limit on the motion of the missiles? Is it air resistance (as is the case for airborne missiles)? \n\nIf so, then firing a missile (limited to 200 m/s velocity in air) from a jet (traveling at 500 m/s in air) will cause the missile to fly backward at 300 m/s from the perspective of the jet pilot. Imagine throwing a beachball from the window of a rapidly moving car. The air catches it going \"above its speed limit,\" and drags it back down pretty quickly. \n\nIf all of this is happening in outer space, where there is no air resistance, whatever speed limit is imposed won't be with respect to the air, so the pilot will likely see the missile move forward in the expected fashion.",
"provenance": null
},
{
"answer": "Remember that the missile is not travelling at 200 m/s relative to the plane as it is released, or it would probably rip a wing off with it!\n\nThe missile will detach and fire. Its initial speed will be that of the plane, but if it cannot maintain this speed then it will slow down.\n\nObviously, in reality missiles are far more aerodynamic and have a much higher thrust/weight ratio than a plane, so the missile is likely to have a higher speed!\n\nEDIT: To add to this, consider 3 phases:\n\nPhase 1, Missile attached to plane, travelling at the same speed as plane\n\nPhase 2, Missile detaches and reduces in speed due to air resistance\n\nPhase 3, Missile fires and its speed increases up to and beyond that of the plane (for a real-world missile).",
"provenance": null
},
{
"answer": "With no air resistance:\n\nFighter travelling at a constant 100 m/s fires a missile. The missile gets up to 200 m/s relative to the fighter. So the missile looks like it's going 300 m/s, measured by someone on the ground.\n\nFighter travelling at a constant 500 m/s fires a missile. The missile gets up to 200 m/s relative to the fighter. So the missile looks like it's going 700 m/s, measured by someone on the ground.\n\nThis assumes that the fighter is not accelerating. If the fighter is accelerating, and there is no air resistance, then the fighter will go faster. Air resistance determines the \"maximum speed\" of fighter jets and missiles. So yes, **air resistance is of crucial importance**.\n\nIf there were no air resistance, the fighter could theoretically get up to 8,000 m/s, like the space shuttle (if it had enough jet fuel). And the missile would go even faster. I assume that the maximum speed of the missile is based on how much fuel is inside it.",
"provenance": null
},
{
"answer": "Thanks for the replies everyone! Love this subreddit. My mind can rest easy - I had forgotten that the velocities can just be added to each other (assuming no air resistance). If there's air resistance, the missile will lag behind the plane. Many future headaches avoided. ",
"provenance": null
},
{
"answer": "It will help your understanding of many physical systems to consider, in this case, that the top speed of a missile is not an intrinsic property of the missile, nor is the top speed of an aircraft. Consider instead more intrinsic properties: that a missile or jet aircraft has some intrinsic maximum thrust (force), and its shape endows it with certain drag and lift properties. A missile launched from an aircraft traveling at 100 m/s will accelerate if its intrinsic thrust force can overcome the drag at that speed. Maximum speed in these cases is reached when the drag force increases to match the thrust force, whatever speed that may be.",
"provenance": null
},
{
"answer": "The jet has the power to take the rocket to 500. Once dropped, the rocket engine kicks in to accelerate it toward 700, but the second it's dropped from the jet, it no longer has the added power of the jet's engine. Once the rocket's engine reaches max power, it slowly loses the momentum of the +500 boost from the jet. Since the rocket engine is only capable of 200, it is not capable of maintaining a speed above that. Once the momentum of the jet's initial 500 boost wears off, the rocket slows to 200.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "11522564",
"title": "Conceptual physics",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 410,
"text": "The spread of the conceptual approach to teaching physics broadened the range of students taking physics in high school. Enrollment in conceptual physics courses in high school grew from 25,000 students in 1987 to over 400,000 in 2009. In 2009, 37% of students took high school physics, and 31% of them were in Physics First, conceptual physics courses, or regular physics courses using a conceptual textbook.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2711029",
"title": "Physics education",
"section": "Section::::Physics education in American high schools.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 1305,
"text": "Physics First is a popular and relatively new movement in American high schools. In schools with this curriculum 9th grade students take a course with introductory physics education. This is meant to enrich students understanding of physics, and allow for more detail to be taught in subsequent high school biology, and chemistry classes; it also aims to increase the number of students who go on to take 12th grade physics or AP Physics (both of which are generally electives in American high schools.) But many scientists and educators argue that freshmen do not have an adequate background in mathematics to be able to fully comprehend a complete physics curriculum, and that therefore quality of a physics education is lost. While physics requires knowledge of vectors and some basic trigonometry, many students in the Physics First program take the course in conjunction with Geometry. They suggest that instead students first take biology and chemistry which are less mathematics-intensive so that by the time they are in their junior year, students will be advanced enough in mathematics with either an Algebra 2 or pre-calculus education to be able to fully grasp the concepts presented in physics. Some argue this even further, saying that at least calculus should be a prerequisite for physics.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33258408",
"title": "Problems in General Physics",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 516,
"text": "Problems in General Physics is a book written by Igor Irodov and first published in 1981. It is a collection of about 2000 physics problems in mechanics, thermodynamics, molecular physics, oscillations, and electromagnetism. In India, the book is widely used by high school students preparing for engineering entrance examinations such as the Joint Entrance Examination. Each problem in the book is really unique, especially mechanics problems, and the questions help students understand the basic laws of physics. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1579586",
"title": "Physics First",
"section": "Section::::Criticism.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 810,
"text": "In addition, many scientists and educators argue that freshmen do not have an adequate background in mathematics to be able to fully comprehend a complete physics curriculum, and that therefore quality of a physics education is lost. While physics requires knowledge of vectors and some basic trigonometry, many students in the Physics First program take the course in conjunction with geometry. They suggest that instead students first take biology and chemistry which are less mathematics-intensive so that by the time they are in their junior year, students will be advanced enough in mathematics with either an algebra 2 or pre-calculus education to be able to fully grasp the concepts presented in physics. Some argue this even further, saying that at least calculus should be a prerequisite for physics.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2711029",
"title": "Physics education",
"section": "Section::::Physics education research.:Major areas of research.\n",
"start_paragraph_id": 39,
"start_character": 0,
"end_paragraph_id": 39,
"end_character": 783,
"text": "BULLET::::2. Epistemology: Physics Education Research began as a trial-and-error approach to improve learning (something most teachers are familiar with). Because of the downsides of such an approach, theoretical bases for research were developed early on, most notable through the University of Maryland. The theoretical underpinnings of PER are mostly built around a Piagettean constructivism. Theories on cognition in physics learning were put forward by Redish, Hammer, Elby and Scherr, who built off of diSessa's “Knowledge in Pieces”. The Resources Framework, developed from this work, is notable, which builds off of research in neuroscience, sociology, linguistics, education and psychology. Additional frameworks are forthcoming, most recently the “Possibilities Framework”\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2880916",
"title": "The National College Entrance Examination",
"section": "Section::::Reform of the National College Entrance Examination.:\"3+3\" system.\n",
"start_paragraph_id": 98,
"start_character": 0,
"end_paragraph_id": 98,
"end_character": 266,
"text": "BULLET::::- Originally, the original intention of the reform was to let the students develop their strengths and avoid weaknesses, but the students were rushing to the high-scoring subjects. This has resulted in very few people in certain subjects, such as physics.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2711029",
"title": "Physics education",
"section": "Section::::Physics education in American high schools.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 372,
"text": "Physics is taught in high schools, college and graduate schools. In the US, it has traditionally not been introduced until junior or senior year (i.e. 11th/12th), and then only as an elective or optional science course, which the majority of American high school students have not taken. Recently in the past years, many students have been taking it their sophomore year.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
141c9n
|
what is the impact of palestine being promoted to "non-member observer status" in the un?
|
[
{
"answer": "It's not full UN membership, but it's a more symbolic move since it gives them more status than before. They are now on the same level as the Vatican (and Switzerland, until it became a full member a few years ago).\n\nIf they now try to join the International Criminal Court, this status of theirs will give them more 'points' in their favor. That's quite important.\n\nAlso, they can now go to many UN meetings. So mostly it's a step forward.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "24324",
"title": "Palestine Liberation Organization",
"section": "Section::::Political status.:Status at the United Nations.:′Non-member observer state′ status.\n",
"start_paragraph_id": 48,
"start_character": 0,
"end_paragraph_id": 48,
"end_character": 452,
"text": "By September 2012, with their application for full membership stalled due to the inability of Security Council members to 'make a unanimous recommendation', the PLO had decided to pursue an upgrade in status from \"observer entity\" to \"non-member observer state\". On 29 November 2012, Resolution 67/19 passed, upgrading Palestine to \"non-member observer State\" status in the United Nations. The new status equates Palestine's with that of the Holy See.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1595329",
"title": "United Nations General Assembly observers",
"section": "Section::::Non-member observer states.:Present non-member observer states.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 637,
"text": ", there are two permanent non-member observer states in the United Nations: the Holy See and Palestine. The Holy See uncontroversially obtained its non-member observer state status in 1964 and Palestine was so designated in 2012, following an application for full membership in 2011 which has not yet been put to a UN Security Council vote largely due to the U.S. pressure. Both the Holy See and the State of Palestine are described as \"Non-member States having received a standing invitation to participate as observers in the sessions and the work of the General Assembly and maintaining permanent observer missions at Headquarters\". \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1595329",
"title": "United Nations General Assembly observers",
"section": "Section::::Non-member observer states.:Present non-member observer states.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 478,
"text": "The change of Palestinian observer status in 2012 from \"non-member observer entity\" to \"non-member observer state\" was regarded as an \"upgrade\" of their status. Many called the change \"symbolic\", but which was regarded as providing new leverage to the Palestinians in their dealings with Israel. As a result, in the change in status, the United Nations Secretariat recognized Palestine's right to become a party to treaties for which the UN Secretary-General is the depositary.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39270413",
"title": "International recognition of the State of Palestine",
"section": "Section::::Palestine in the United Nations.:Non-member observer state status.\n",
"start_paragraph_id": 34,
"start_character": 0,
"end_paragraph_id": 34,
"end_character": 812,
"text": "During September 2012, Palestine decided to pursue an upgrade in status from \"observer entity\" to \"non-member observer state\". On 27 November of the same year, it was announced that the appeal had been officially made, and would be put to a vote in the General Assembly on 29 November, where their status upgrade was expected to be supported by a majority of states. In addition to granting Palestine \"non-member observer state status\", the draft resolution \"expresses the hope that the Security Council will consider favorably the application submitted on 23 September 2011 by the State of Palestine for admission to full membership in the United Nations, endorses the two state solution based on the pre-1967 borders, and stresses the need for an immediate resumption of negotiations between the two parties.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33037361",
"title": "Palestine 194",
"section": "Section::::Applications.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 501,
"text": "By the fall of 2012, the Palestinians had decided to suspend their application for full membership in favour of seeking an upgrade in status to \"non-member observer state\". However, their membership application was not abandoned and the UNGA resolution upgrading their status passed in November 2012 \"expresses the hope that the Security Council will consider favourably the application submitted on 23 September 2011 by the State of Palestine for admission to full membership in the United Nations\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "546440",
"title": "International Day of Solidarity with the Palestinian People",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 216,
"text": "In 2012, The General Assembly voted to grant Palestine non-member observer State status at the United Nations by a vote of 138 in favour to 9 against with 41 abstentions by the 193-member Assembly, Resolution 67/19.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "96665",
"title": "Palestinian territories",
"section": "Section::::Political status and sovereignty.\n",
"start_paragraph_id": 42,
"start_character": 0,
"end_paragraph_id": 42,
"end_character": 1152,
"text": "On Thursday, 29 November 2012, In a 138–9 vote (with 41 abstaining) General Assembly resolution 67/19 passed, upgrading Palestine to \"non-member observer state\" status in the United Nations. The new status equates Palestine's with that of the Holy See. The change in status was described by \"The Independent\" as \"de facto recognition of the sovereign state of Palestine\". The vote was a historic benchmark for the partially recognised State of Palestine and its citizens, whilst it was a diplomatic setback for Israel and the United States. Status as an observer state in the UN will allow the State of Palestine to join treaties and specialised UN agencies, including the International Civil Aviation Organisation, the International Criminal Court, and other organisations for recognised sovereign nations. It shall permit Palestine to claim legal rights over its territorial waters and air space as a sovereign state recognised by the UN, and allow the Palestinian people the right to sue for control of their claimed territory in the International Court of Justice and to bring war-crimes charges against Israel in the International Criminal Court.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1igbw4
|
What were Pinkerton agents duties, roles, etc. in 1890-1912?
|
[
{
"answer": "Pinkertons were part of a huge private detective agency that was essentially the Blackwater of the 19th century. One of the Pinkertons' specialties was strikebreaking. Employers would hire the company to provide thugs who would stop strikes in progress, and this is arguably what Pinkertons are most famous for. So yes, that is definitely something a Pinkerton would do.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "144856",
"title": "Pinkerton (detective agency)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 728,
"text": "Pinkerton, founded as the Pinkerton National Detective Agency, is a private security guard and detective agency established in the United States by Scotsman Allan Pinkerton in 1850 and currently a subsidiary of Securitas AB. Pinkerton became famous when he claimed to have foiled a plot to assassinate president-elect Abraham Lincoln, who later hired Pinkerton agents for his personal security during the Civil War. Pinkerton's agents performed services ranging from security guarding to private military contracting work. Notably, the Pinkerton Detective Agency hired women and minorities, a practice uncommon at the time. Pinkerton was the largest private law enforcement organization in the world at the height of its power.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "73374",
"title": "Mercenary",
"section": "Section::::Laws of war.:National laws.:United States.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 625,
"text": "The Anti-Pinkerton Act of 1893 () forbade the U.S. government from using Pinkerton National Detective Agency employees, or similar private police companies. In 1977, the United States Court of Appeals for the Fifth Circuit interpreted this statute as forbidding the U.S. government from employing companies offering \"mercenary, quasi-military forces\" for hire (United States ex rel. \"Weinberger v. Equifax\", 557 F.2d 456, 462 (5th Cir. 1977), cert. denied, 434 U.S. 1035 (1978)). There is a disagreement over whether or not this proscription is limited to the use of such forces as strikebreakers, because it is stated thus:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10194693",
"title": "Labor spying in the United States",
"section": "Section::::A historical overview.\n",
"start_paragraph_id": 51,
"start_character": 0,
"end_paragraph_id": 51,
"end_character": 472,
"text": "In 1944, historian J. Bernard Hogg, surveying the history of labor spying, observed that Pinkerton agents were secured \"by advertising, by visiting United States recruiting offices for rejectees, and by frequenting waterfronts where men were to be found going to sea as a last resort of employment,\" and that \"[to] labor they were a 'gang of toughs and ragtails and desperate men, mostly recruited by Pinkerton and his officers from the worst elements of the community.'\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1562175",
"title": "Alkali Act 1863",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 631,
"text": "In 1874, under the Alkali Act 1874, the Inspector became the Chief Inspector. The first Chief Inspector was Dr Robert Angus Smith, he was statutorily responsible for the standards set and maintained by the Inspectorate, and reported directly to the Permanent Secretary of his department. For the first sixty years of its existence, the Inspectorate was solely concerned with the heavy chemicals industry, but from the 1920s onwards, its responsibilities were expanded, culminating in the Alkali Order 1958. This placed all major heavy industries which emitted smoke, grit, dust and fumes under the supervision of the Inspectorate.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1855509",
"title": "United States Postal Inspection Service",
"section": "Section::::History.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 341,
"text": "In 1801, the title of \"surveyor\" was changed to \"Special Agent\". In 1830, the Special Agents were organized into the \"Office of Instructions and Mail Depredations\". The Postal Inspection Service was the first federal law enforcement agency to use the title Special Agent for its officers. Congress changed this title to \"Inspector\" in 1880.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2163674",
"title": "La Follette Committee",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 787,
"text": "The Committee investigated the five largest detective agencies: the Pinkerton National Detective Agency, the William J. Burns International Detective Agency, the National Corporation Service, the Railway Audit and Inspection Company, and the Corporations Auxiliary Company. Most of the agencies subpoenaed, including the Pinkerton Agency, attempted to destroy their records before receiving the subpoenas, but enough evidence remained to \"piece together a picture of intrigue\". It was revealed that Pinkerton had operatives \"in practically every union in the country\". Of 1,228 operatives, there were five in the United Mine Workers, nine in the United Rubber Workers, seventeen in the United Textile Workers, and fifty-five in the United Auto Workers that had organized General Motors.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9658993",
"title": "Morris Friedman",
"section": "",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 685,
"text": "Friedman described the Pinkertons as a secret police force. Under questioning by Clarence Darrow, defense attorney for Bill Haywood, Friedman identified Pinkerton agents who had infiltrated the Western Federation of Miners: Charlie Siringo, who became recording secretary of the miners' union in Burke, Idaho; A. H. Crane, secretary of the Cripple Creek, Colorado union; C. J. Connibear, president of the Florence, Colorado union; R. P. Bailey, a member of the Victor, Colorado union; and A. W. Gratias, president of the union at Globeville. Pinkerton Agent George W. Riddell, former president of the Eureka miners union in Utah, was forced to resign when Friedman published the book.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3e0noc
|
How would one derive electricity from post-fusion plasma?
|
[
{
"answer": "Boil water.\n\nThat is how pretty much every power plant makes electricity. It's easy to make heat with nuclear or fossil fuels. You use the heat to boil water, which makes a lot of pressure, and you use the pressure to turn a turbine, which is geared to a generator.",
"provenance": null
},
{
"answer": "It's my crude understanding that the Polywell systems (which get almost no funding, and are unlikely to progress any time soon because the tokamak projects have the bigger rice bowl and more aggressive protectors) do it directly.\n\nPolywell reactors, unlike tokamak reactors, are able to reach the temperatures needed for proton-boron fusion (in theory at least; see above, no money = no progress). The result would be pure alpha radiation, which is in effect high-kinetic-energy (very hot) helium ions traveling perpendicular to the magnetic containment. This would induce current into the containment field, so in practice all you would need to do is power up the fields, start the reactor, and once it's running draw power off the containment field.\n\nThe system would run much more thermally efficiently because it converts the particle energy directly into current instead of doing it via thermal transfer and steam turbines. It would also be a much smaller system, contained in the actual reactor core, coils, and whatever control circuitry it needed. I believe the 100 MW plant that was planned, but not funded, would be about 1 cubic meter, give or take some engineering constraints.\n\nPractically every other system is simply a steam engine.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "918877",
"title": "Levitated dipole",
"section": "Section::::History.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 837,
"text": "Adapting this concept to a fusion experiment was first proposed by Dr. Jay Kesner (MIT) and Dr. Michael Mauel (Columbia) in the mid to late nineties. The pair assembled a team and raised money to build the machine. They achieved first plasma on Friday, August 13, 2004 at 12:53 PM. First plasma was achieved by (1) successfully levitating the dipole magnet and (2) RF heating the plasma. The LDX team has since successfully conducted several levitation tests, including a 40-minute suspension of the superconducting coil on February 9, 2007. Shortly after, the coil was damaged in a control test in February 2007 and replaced in May 2007. The replacement coil was inferior: a copper-wound electromagnet that was also water-cooled. Scientific results, including the observation of an inward turbulent pinch, were reported in Nature Physics.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1690634",
"title": "Magnetic confinement fusion",
"section": "Section::::Magnetic fusion energy.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 883,
"text": "In 1997, scientists at the Joint European Torus (JET) facilities in the UK produced 16 megawatts of fusion power. Scientists can now exercise a measure of control over plasma turbulence and resultant energy leakage, long considered an unavoidable and intractable feature of plasmas. There is increased optimism that the plasma pressure above which the plasma disassembles can now be made large enough to sustain a fusion reaction rate acceptable for a power plant. Electromagnetic waves can be injected and steered to manipulate the paths of plasma particles and then to produce the large electrical currents necessary to produce the magnetic fields to confine the plasma. These and other control capabilities have come from advances in basic understanding of plasma science in such areas as plasma turbulence, plasma macroscopic stability, and plasma wave propagation. Much of this\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "261362",
"title": "ITER",
"section": "Section::::Reactor overview.\n",
"start_paragraph_id": 48,
"start_character": 0,
"end_paragraph_id": 48,
"end_character": 384,
"text": "Once fusion has begun, high energy neutrons will radiate from the reactive regions of the plasma, crossing magnetic field lines easily due to charge neutrality (see neutron flux). Since it is the neutrons that receive the majority of the energy, they will be ITER's primary source of energy output. Ideally, alpha particles will expend their energy in the plasma, further heating it.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "42943213",
"title": "TAE Technologies",
"section": "Section::::Projects.:Russian cooperation.\n",
"start_paragraph_id": 36,
"start_character": 0,
"end_paragraph_id": 36,
"end_character": 283,
"text": "The Budker Institute of Nuclear Physics, Novosibirsk, built a powerful plasma injector, shipped in late 2013 to the company's research facility. The device produces a neutral beam in the range of 5 to 20 MW, and injects energy inside the reactor to transfer it to the fusion plasma.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "15743956",
"title": "Magnetized target fusion",
"section": "Section::::Devices.:FRX-L.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 904,
"text": "In the pioneering experiment, Los Alamos National Laboratory's FRX-L, a plasma is first created at low density by transformer-coupling an electric current through a gas inside a quartz tube (generally a non-fuel gas for testing purposes). This heats the plasma to roughly 2.3 million degrees. External magnets confine fuel within the tube. Plasmas are electrically conducting, allowing a current to pass through them. This current generates a magnetic field that interacts with the current. The plasma is arranged so that the fields and current stabilize within the plasma once it is set up, self-confining the plasma. FRX-L uses the field-reversed configuration for this purpose. Since the temperature and confinement time are 100x lower than in MCF, the confinement is relatively easy to arrange and does not need the complex and expensive superconducting magnets used in most modern MCF experiments.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "47760134",
"title": "Leopoldo Soto Norambuena",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 1427,
"text": "When he arrived at the Comisión Chilena de Energía Nuclear, he started to work in plasmas driven by small transient electrical discharges and small pulsed power devices: z-pinch, capillary discharges and plasma focus. His work has contributed to understand that it is possible to scale the plasma focus in a wide range of energies and sizes, keeping the same value of ion density, magnetic field, plasma sheath velocity, Alfvén speed and the quantity of energy per particle. Therefore, fusion reactions are possible to be obtained in ultra-miniaturized devices (driven by generators of 0.1J for example), as well as they are obtained in bigger devices (driven by generators of 1MJ). However, the stability of the plasma pinch highly depends on the size and energy of the device. A rich plasma phenomenology it has been observed in the table-top plasma focus devices developed by Soto´s group: filamentary structures, toroidal singularities, plasma bursts and plasma jets generations. In addition, possible applications are explored using these kind of small plasma devices: development of portable generator as non-radioactive sources of neutrons and x-rays for field applications, pulsed radiation applied to biological studies, plasma focus as neutron source for nuclear fusion-fission hybrid reactors, and the use of plasma focus devices as plasma accelerators for studies of materials under intense fusion-relevant pulses.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2618773",
"title": "DEMOnstration Power Station",
"section": "Section::::Technical design.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 574,
"text": "Once fusion has begun, high-energy neutrons at about 160,000,000,000 kelvins will flood out of the plasma along with X-rays, neither being affected by the strong magnetic fields. Since neutrons receive the majority of the energy from the fusion, they will be the reactor's main source of thermal energy output. The ultra-hot helium product at roughly 40,000,000,000 kelvins will remain behind (temporarily) to heat the plasma, and must make up for all the loss mechanisms (mostly bremsstrahlung X-rays from electron collisions) which tend to cool the plasma rather quickly.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
zf2wr
|
Why does adding more components to a transistor increase clock speed?
|
[
{
"answer": "First, they aren't referring to adding more components to a transistor; rather, they are talking about increasing the number of transistors on a chip or the density of transistors on a chip.\n\nIncreased transistor count doesn't necessarily lead to a higher clock speed, but as we increase the number of transistors, we are also packing them tighter, which does allow for a higher clock speed. One factor which limits clock speed is the distance between the sections on a chip. As the chips get better, we can make those sections smaller, and we can increase the clock speed. We are also increasing the number of transistors.\n\nEDIT:\n\nIt should be noted that clock speed doesn't fully tell you how powerful a processor is. Clock speed is merely how many cycles the chip completes per second. Modern chips are multi-core and multi-threaded and occasionally have multiple adding units. More transistors mean that you can do more, even if clock speed isn't increased.",
"provenance": null
},
{
"answer": "At the level of fundamental physics, the speed of transistors is limited by physical effects. Among other effects, the smaller a transistor is, the less electrical charge you have to move around to make it turn on and off. That means you can turn it on and off more times per second when it's smaller.\n\nIn addition to faster speeds, having smaller transistors means that you can add new circuits to the integrated circuit 'chip'. This might be more memory (higher amounts of cache memory), or duplicate processing units (2, 4, 8 cores, etc). It can also be additional functionality that used to be outside the processor. Many current processors now incorporate graphics processing on the same die.\n\nHaving the extra functionality in one IC not only gives you more processing for the buck, but it reduces the number of chips and amount of system interconnects required to build a complete computer, which reduces cost. If you compare the number of chips on motherboards from ten years ago with the number now, you'll find fewer chips, and those old motherboards required add-on cards for graphics, USB, and more.\n[EDIT] for clarity",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "21306150",
"title": "Random-access memory",
"section": "Section::::Memory wall.\n",
"start_paragraph_id": 47,
"start_character": 0,
"end_paragraph_id": 47,
"end_character": 854,
"text": "First of all, as chip geometries shrink and clock frequencies rise, the transistor leakage current increases, leading to excess power consumption and heat... Secondly, the advantages of higher clock speeds are in part negated by memory latency, since memory access times have not been able to keep pace with increasing clock frequencies. Third, for certain applications, traditional serial architectures are becoming less efficient as processors get faster (due to the so-called Von Neumann bottleneck), further undercutting any gains that frequency increases might otherwise buy. In addition, partly due to limitations in the means of producing inductance within solid state devices, resistance-capacitance (RC) delays in signal transmission are growing as feature sizes shrink, imposing an additional bottleneck that frequency increases don't address.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "25652303",
"title": "Computer architecture",
"section": "Section::::Design goals.:Power efficiency.\n",
"start_paragraph_id": 59,
"start_character": 0,
"end_paragraph_id": 59,
"end_character": 753,
"text": "Modern circuits have less power required per transistor as the number of transistors per chip grows. This is because each transistor that is put in a new chip requires its own power supply and requires new pathways to be built to power it. However, the number of transistors per chip is starting to increase at a slower rate. Therefore, power efficiency is starting to become as important as, if not more important than, fitting more and more transistors into a single chip. Recent processor designs have shown this emphasis as they put more focus on power efficiency rather than cramming as many transistors into a single chip as possible. In the world of embedded computers, power efficiency has long been an important goal next to throughput and latency.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41723875",
"title": "Dennard scaling",
"section": "Section::::Breakdown of Dennard scaling around 2006.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 503,
"text": "Since around 2005–2007 Dennard scaling appears to have broken down. As of 2016, transistor counts in integrated circuits are still growing, but the resulting improvements in performance are more gradual than the speed-ups resulting from significant frequency increases. The primary reason cited for the breakdown is that at small sizes, current leakage poses greater challenges and also causes the chip to heat up, which creates a threat of thermal runaway and therefore further increases energy costs.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "238601",
"title": "Quiet PC",
"section": "Section::::Causes of noise.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 403,
"text": "Many of these sources increase with the power of the computer: more transistors of a given size use more power, which releases more heat, and increasing the rotation speed of fans to address this will (all things being equal) increase their noise. Similarly, increasing hard disk drives' and optical disc drives' rotation speeds increases performance, but generally also vibration and bearing friction.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26690448",
"title": "Semiconductor consolidation",
"section": "Section::::History.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 383,
"text": "As chips continued to get faster, so did the levels of sophistication within the circuitry. Companies were constantly updating machinery to be able to keep up with production demands and overhauls of newer circuits. Companies raced to make transistors smaller in order to pack more of them on the same size silicon and enable faster chips. This practice became known as \"shrinkage\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21884508",
"title": "Thermal simulations for integrated circuits",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 873,
"text": "Miniaturizing components has always been a primary goal in the semiconductor industry because it cuts production cost and lets companies build smaller computers and other devices. Miniaturization, however, has increased dissipated power per unit area and made it a key limiting factor in integrated circuit performance. Temperature increase becomes relevant for relatively small-cross-section wires, where it may affect normal semiconductor behavior. Besides, since the generation of heat is proportional to the frequency of operation for switching circuits, fast computers have larger heat generation than slow ones, an undesired effect for chip manufacturers. This article summarizes physical concepts that describe the generation and conduction of heat in an integrated circuit, and presents numerical methods that model heat transfer from a macroscopic point of view.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3503207",
"title": "Multi-core processor",
"section": "Section::::Development.:Commercial incentives.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 594,
"text": "Several business motives drive the development of multi-core architectures. For decades, it was possible to improve performance of a CPU by shrinking the area of the integrated circuit (IC), which reduced the cost per device on the IC. Alternatively, for the same circuit area, more transistors could be used in the design, which increased functionality, especially for complex instruction set computing (CISC) architectures. Clock rates also increased by orders of magnitude in the decades of the late 20th century, from several megahertz in the 1980s to several gigahertz in the early 2000s.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
j3e4d
|
why can't a state just print more banknotes to create more money? [li5]
|
[
{
"answer": "Think of a rare baseball card. If there's only 10 of them in existence, then everyone would want them and they would be willing to trade hundreds of chocolate bars for it. \nNow think if they printed 990 more of that rare baseball card. Now everyone has one, and no one is willing to trade a chocolate bar for it.",
"provenance": null
},
{
"answer": "Money stands for things. A dollar stands for a dollar's worth of bread, or gold, or land. We invented money so we don't have to swap chickens and stuff to buy things. It's a lot easier. \n\nNow. Paper money used to be backed up by gold. The U.S. had, say, a billion dollars of gold, and so it put out a billion dollars of paper money. The paper dollar was a promise that you could go get a dollar in gold. Today, there isn't enough gold to back up all the paper money in the world. So paper money is a promise that the government can pay you back. Using dollars says that you believe in the government. That you believe the people of the United States will be rich enough and hardworking enough - and lucky enough! - to always pay back their loans.\n\nNow, if you keep printing dollars, then it's like making too many promises. If you promise that you'll wash the dishes, you can't promise to mow the lawn at the same time. People start to think you're lying. And then they don't believe in your promises as much. The same thing happens to money. If you print a whole bunch of dollars, without making the government run better or showing that people are working harder and making more money, then that's the same as breaking a promise. People won't use dollars - or they'll say it's not worth as much. \n\nSo printing too many dollars won't make more money. It will make money worth less. ",
"provenance": null
},
{
"answer": "By printing more money, you devalue it.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "61048505",
"title": "The North and South Wales Bank",
"section": "",
"start_paragraph_id": 53,
"start_character": 0,
"end_paragraph_id": 53,
"end_character": 235,
"text": "Only seven banks still retain the rights to print their own notes, all of which are in Scotland and Northern Ireland. Lenders that print these notes must hold assets that are equivalent to the amount of notes they have in circulation.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1978363",
"title": "History of the United States dollar",
"section": "Section::::Fiat standard.\n",
"start_paragraph_id": 57,
"start_character": 0,
"end_paragraph_id": 57,
"end_character": 430,
"text": "All circulating notes, issued from 1861 to present, will be honored by the government at face value as legal tender. This means that the federal government will accept old notes as payments for debts owed to the federal government (taxes and fees), or exchange old notes for new ones, but will not redeem notes for gold or silver, even if the note states that it may be thus redeemed. Some bills may have a premium to collectors.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13681",
"title": "Hyperinflation",
"section": "Section::::Effects.:Currency.\n",
"start_paragraph_id": 54,
"start_character": 0,
"end_paragraph_id": 54,
"end_character": 241,
"text": "Some banknotes were stamped to indicate changes of denomination, as it would have taken too long to print new notes. By the time new notes were printed, they would be obsolete (that is, they would be of too low a denomination to be useful).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1297457",
"title": "Money creation",
"section": "Section::::Physical currency.\n",
"start_paragraph_id": 46,
"start_character": 0,
"end_paragraph_id": 46,
"end_character": 397,
"text": "The central bank, or other competent, state authorities (such as the Treasury), are typically empowered to create new, physical currency, i.e. paper notes and coins, in order to meet the needs of commercial banks for cash withdrawals, and to replace worn and/or destroyed currency. The process does not increase the money supply, as such; the term \"printing [new] money\" is considered a misnomer.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "180846",
"title": "Federal Reserve Note",
"section": "Section::::Criticisms.:Security.\n",
"start_paragraph_id": 35,
"start_character": 0,
"end_paragraph_id": 35,
"end_character": 1436,
"text": "Despite the relatively late addition of color and other anti-counterfeiting features to U.S. currency, critics hold that it is still a straightforward matter to counterfeit these bills. They point out that the ability to reproduce color images is well within the capabilities of modern color printers, most of which are affordable to many consumers. These critics suggest that the Federal Reserve should incorporate holographic features, as are used in most other major currencies, such as the pound sterling, Canadian dollar and euro banknotes, which are more difficult and expensive to forge. Another robust technology, the polymer banknote, has been developed for the Australian dollar and adopted for the New Zealand dollar, Romanian leu, Papua New Guinea kina, Canadian dollar, and other circulating, as well as commemorative, banknotes of a number of other countries. Polymer banknotes are a deterrent to the counterfeiter, as they are much more difficult and time consuming to reproduce. They are said to be more secure, cleaner and more durable than paper notes. One major issue with implementing these or any new counterfeiting countermeasures, however, is that (other than under Executive Order 6102) the United States has never demonetized or required a mandatory exchange of any existing currency. Consequently, would-be counterfeiters can easily circumvent any new security features simply by counterfeiting older designs.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "497752",
"title": "History of the United States Constitution",
"section": "Section::::Articles of Confederation.\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 569,
"text": "Without taxes the government could not pay its debt. Seven of the thirteen states printed large quantities of their own paper money, backed by gold, land, or nothing, so there was no fair exchange rate among them. State courts required state creditors to accept payments at face value with a fraction of real purchasing power. The same legislation that these states used to wipe out the Revolutionary debt to patriots was used to pay off promised veteran pensions. The measures were popular because they helped both small farmers and plantation owners pay off their debts.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "208286",
"title": "Banknote",
"section": "Section::::Advantages and disadvantages.\n",
"start_paragraph_id": 53,
"start_character": 0,
"end_paragraph_id": 53,
"end_character": 309,
"text": "BULLET::::4. Wear costs. Banknotes don't lose economic value by wear, since, even if they are in poor condition, they are still a legally valid claim on the issuing bank. However, banks of issue do have to pay the cost of replacing banknotes in poor condition and paper notes wear out much faster than coins.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3mvqaq
|
what is the "sharing economy"?
|
[
{
"answer": "Instead of being full time taxi drivers anyone with a car and a license can hop on and be an uber driver.\n\nSimilarly, Airbnb allows individuals to host people in their homes as a hotel would.\n\nThe \"sharing\" idea is that personal assets (cars, homes, etc.) are utilized to provide services for a fee instead of assets that are wholly dedicated to providing those services.",
"provenance": null
},
{
"answer": "Benjamen Walker's podcast 'Theory of Everything' did a great three-part series on this, called 'Instaserfs'. Opinions differ, but their angle is that it's startups convincing freelance casual workers that minimum-wage gigs are in some way empowering.\n\nPodcast here: _URL_0_",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "39502824",
"title": "Sharing economy",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 347,
"text": "Sharing economy is a term for a way of distributing goods and services, a way that differs from the traditional model of corporations hiring employees and selling products to consumers. In the sharing economy, individuals are said to rent or \"share\" things like their cars, homes and personal time to other individuals in a peer-to-peer fashion. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2041431",
"title": "Sharing",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 537,
"text": "Sharing is the joint use of a resource or space. It is also the process of dividing and distributing. In its narrow sense, it refers to joint or alternating use of inherently finite goods, such as a common pasture or a shared residence. Still more loosely, \"sharing\" can actually mean giving something as an outright gift: for example, to \"share\" one's food really means to give some of it as a gift. Sharing is a basic component of human interaction, and is responsible for strengthening social ties and ensuring a person’s well-being.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9933471",
"title": "Digital marketing",
"section": "Section::::Sharing economy.\n",
"start_paragraph_id": 124,
"start_character": 0,
"end_paragraph_id": 124,
"end_character": 494,
"text": "The \"sharing economy\" refers to an economic pattern that aims to make use of resources that are not fully utilized. Nowadays, the sharing economy has had an unprecedented effect on many traditional elements including labor, industry, and distribution systems. This effect is not negligible: some industries are clearly under threat. The sharing economy is influencing traditional marketing channels by changing the nature of specific concepts including ownership, assets, and recruitment.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39502824",
"title": "Sharing economy",
"section": "Section::::History.\n",
"start_paragraph_id": 47,
"start_character": 0,
"end_paragraph_id": 47,
"end_character": 610,
"text": "The term \"sharing economy\" began to appear around the time of the Great Recession, enabling social technologies, and an increasing sense of urgency around global population growth and resource depletion. Professor Lawrence Lessig was possibly first to use the term in 2008, though others claim the origin of the term is unknown. As of 2015, according to a Pew Research Center survey, only 27% of Americans had heard of the term \"sharing economy\". Survey respondents who had heard of the term had divergent views on what it meant, with many thinking it concerned \"sharing\" in the traditional sense of the term.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39502824",
"title": "Sharing economy",
"section": "Section::::Misnomer as \"Sharing economy\".\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 1474,
"text": "In an article in \"Harvard Business Review\", authors Giana M. Eckhardt and Fleura Bardhi argue that \"sharing economy\" is a misnomer, and that the correct term for this activity is access economy. The authors say, \"When \"sharing\" is market-mediated—when a company is an intermediary between consumers who don't know each other—it is no longer sharing at all. Rather, consumers are paying to access someone else's goods or services.\" The article states that companies (such as Uber) that understand this, and whose marketing highlights the financial benefits to participants, are successful, while companies (such as Lyft) whose marketing highlights the social benefits of the service are less successful. This insight − that it is an access economy rather than a sharing economy – has important implications for how companies in this space compete. It implies that consumers are more interested in lower costs and convenience than they are in fostering social relationships with the company or other consumers...The access economy is changing the structure of a variety of industries, and a new understanding of the consumer is needed to drive successful business models. A successful business model in the access economy will not be based on community, however, as a sharing orientation does not accurately depict the benefits consumers hope to receive. It is important to highlight the benefits that access provides in contrast to the disadvantages of ownership and sharing.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39502824",
"title": "Sharing economy",
"section": "Section::::Related Concepts.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 300,
"text": "The sharing economy is related to the circular economy, which aims to minimize waste and which includes co-operatives, co-creation, recycling, upcycling, re-distribution, and trading used goods. It is also closely related to collaborative consumption in which an item is consumed by multiple people.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39502824",
"title": "Sharing economy",
"section": "Section::::Misnomer as \"Sharing economy\".\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 366,
"text": "The notion of \"sharing economy\" has often been considered an oxymoron, and a misnomer for actual commercial exchanges. Arnould and Rose proposed to replace the misleading term \"sharing\" with mutualism or mutualization. A distinction can be made between free mutualization, such as genuine sharing, and for-profit mutualization, such as Uber, Airbnb, and Taskrabbit.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3557k4
|
is hemp realistically a great replacement for many materials (plastics and papers) or has this been over emphasized by those seeking legal marijuana?
|
[
{
"answer": "Hemp is a decent replacement for a broad range of things. You won't see paper companies switching from wood to hemp just because it's legalized. \n\nAlso hemp contains almost none of the active ingredients that recreational marijuana has (you can't get high off of it).",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "1232085",
"title": "Cannabidiol",
"section": "Section::::Society and culture.:Legal status.:United States.\n",
"start_paragraph_id": 60,
"start_character": 0,
"end_paragraph_id": 60,
"end_character": 709,
"text": "As of April 2019, CBD extracted from marijuana remains a Schedule I Controlled Substance, and is not approved as a prescription drug, dietary supplement, or allowed for interstate commerce in the United States. CBD derived from hemp (with 0.3% THC or lower) was delisted as a federally scheduled substance by the 2018 Farm Bill. FDA regulations still apply: hemp CBD is legal to sell as a cosmetics ingredient, but despite a common misconception, because it is an active ingredient in an FDA-approved drug, cannot be sold under federal law as an ingredient in food, dietary supplements, or animal food. It is a common misconception that the legal ability to sell hemp (which may contain CBD) makes CBD legal.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1442200",
"title": "Medicines and Healthcare products Regulatory Agency",
"section": "Section::::Criticism.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 212,
"text": "Recent reclassification of CBD oil and other hemp products as a medicine thus effectively banning it has been criticized as cruel and disproportionate to those using it given that it has few if any side effects.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22269056",
"title": "Industrial Hemp Farming Act of 2009",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 980,
"text": "Republican Ron Paul of Texas) and House Democrat Barney Frank of Massachusetts) on April 2, 2009. It sought to clarify the differences between marijuana and industrial hemp as well as repeal federal laws that prohibit cultivation of industrial, but only for research facilities of higher education from conducting research. Industrial hemp is the non-psychoactive, low-THC, oilseed and fibers varieties of the cannabis sativa plant. Hemp is a sustainable resource that can be used to create thousands of different products including fuel, fabrics, paper, household products, and food and has been used for hundreds of centuries by civilizations around the world. If H.R.1866 passes American farmers will be permitted to compete in global hemp markets. On March 10, 2009, both Paul and Frank wrote a letter to their Congressional colleagues urging them to support the legislation. This bill was previously introduced in 2005 under the title of Industrial Hemp Farming Act of 2005.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "963313",
"title": "Hemp",
"section": "Section::::History.:United States.\n",
"start_paragraph_id": 108,
"start_character": 0,
"end_paragraph_id": 108,
"end_character": 648,
"text": "Another claim is that Mellon, Secretary of the Treasury and the wealthiest man in America at that time, had invested heavily in DuPont's new synthetic fiber, nylon, and believed that the replacement of the traditional resource, hemp, was integral to the new product's success. The company DuPont and many industrial historians dispute a link between nylon and hemp, nylon became immediately a scarce commodity. Nylon had characteristics that could be used for toothbrushes (sold from 1938) and very thin nylon fiber could compete with silk and rayon in various textiles normally not produced from hemp fiber, such as very thin stockings for women.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "53901363",
"title": "Cannabis industry",
"section": "Section::::Market value.:United States.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 261,
"text": "The national (non-psychoactive) hemp market was $600 million in 2015, Accurate predictions of potential future legal markets for hemp are deemed impossible to predict due to \"the absence since the 1950s of any commercial and unrestricted hemp production in the\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22269056",
"title": "Industrial Hemp Farming Act of 2009",
"section": "Section::::History.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 553,
"text": "President Donald Trump signed the 2018 farm bill on Thursday December 20th 2018, which legalized hemp — a variety of cannabis that does not produce the psychoactive component of marijuana — paving the way to legitimacy for an agricultural sector that has been operating on the fringe of the law. Industrial hemp has made investors and executives swoon because of the potential multibillion-dollar market for cannabidiol, or CBD, a non-psychoactive compound that has started to turn up in beverages, health products and pet snacks, among other products.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16939254",
"title": "Race in the United States criminal justice system",
"section": "Section::::The war on drugs.\n",
"start_paragraph_id": 45,
"start_character": 0,
"end_paragraph_id": 45,
"end_character": 1133,
"text": "In 1937, the Marijuana Transfer Tax Act was passed. Several scholars have claimed that the goal was to destroy the hemp industry, largely as an effort of businessmen Andrew Mellon, Randolph Hearst, and the Du Pont family. These scholars argue that with the invention of the decorticator, hemp became a very cheap substitute for the paper pulp that was used in the newspaper industry. These scholars believe that Hearst felt that this was a threat to his extensive timber holdings. Mellon, United States Secretary of the Treasury and the wealthiest man in America, had invested heavily in the DuPont's new synthetic fiber, nylon, and considered its success to depend on its replacement of the traditional resource, hemp. However, there were circumstances that contradict these claims. One reason for doubts about those claims is that the new decorticators did not perform fully satisfactorily in commercial production. To produce fiber from hemp was a labor-intensive process if you include harvest, transport and processing. Technological developments decreased the labor with hemp but not sufficient to eliminate this disadvantage.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3k2yr9
|
what's a military formation?
|
[
{
"answer": "Any set of soldiers marching under one command, usually in a regular form.\n\nAlready Egyptians did so that the they had the less experienced at the front, and the more experienced at the back, to prevent retreat and fill in gaps. The ancient Greeks invented the phalanx. The enemy can do more damage if they can strike multiple soldiers or go behind them and hit them for the side or back. If the soldiers form an unbroken line, no one can get through and the enemy can face the formation only from the front, which is strongest. The shields are locked together, so there is one long armored wall. But, some of the soldiers are struck and killed, so there will be holes in the line. If there is already another line behind it, the gap is easily filled.\n\nThe formation is psychologically effective against unorganized fighters, but degenerates into a pushing match in phalanx-to-phalanx combat.\n\nClosed formations were used also even in the firearms era. But, he machine gun made them useless as an actual fighting formation. Still, soldiers are taught to walk and move about in formation in military drill. It creates a sense of an organized force and develops discipline.",
"provenance": null
},
{
"answer": "It can also apply to vehicles, such as tanks, planes, or ships.\n\nSimplified:\n\nThink of the \"line of battle\" used by British and French fleets during the Napoleonic wars; all the ships spaced out nose to tail, guns pointing sideways (a ship's sides could fit more guns than the front). If ships broke formation and became positioned side by side, one of them would get in the way of the other's guns.\n\nOr world war 2, when allied bombers over Europe would fly in squad formations, and several squads formed into bigger formations. This would allow the defensive machine gunners in each bomber to \"cover\" each other from Nazi fighter planes, and also reduce the time over hostile flak cannons. If you fly one at a time over hostile AA, it's gonna be easy to pick you off. If you all fly to the target independently, the lack o f coordination is going to cause a collision. Ergo, fly in a formation.\n\nBack to infantry: you grunts at Waterloo might be facing direct cannon fire, or cavalry charges. Best thing to do in case of cannon is to form a side-by-side skirmish line so that if cannon balls come at you they hit one guy at most. \n\nBut then what if you get charged by bojack horsemen? Each one of you standing unreinforced is going to probably get cut down by a fast-moving cavalry saber. Best thing to do in that case is form a square. That way, no matter what direction the very mobile cavalry charges from, it's repulsed by disciplined musketry. \n\nThe faster your unit can change formations, the likelier y'all are to survive on the battlefield.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "6714405",
"title": "Military organization",
"section": "Section::::Commands, formations, and units.\n",
"start_paragraph_id": 50,
"start_character": 0,
"end_paragraph_id": 50,
"end_character": 986,
"text": "A formation is defined by the US Department of Defense as \"two or more aircraft, ships, or units proceeding together under a commander\". Formin in the Great Soviet Encyclopedia emphasised its combined-arms nature: \"Formations are those military organisations which are formed from different speciality Arms and Services troop units to create a balanced, combined combat force. The formations only differ in their ability to achieve different scales of application of force to achieve different strategic, operational and tactical goals and mission objectives.\" It is a composite military organization that includes a mixture of integrated and operationally attached sub-units, and is usually combat-capable. Example of formations include: divisions, brigades, battalions, wings, etc. Formation may also refer to tactical formation, the physical arrangement or disposition of troops and weapons. Examples of formation in such usage include: pakfront, panzerkeil, testudo formation, etc.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6714405",
"title": "Military organization",
"section": "Section::::Commands, formations, and units.\n",
"start_paragraph_id": 51,
"start_character": 0,
"end_paragraph_id": 51,
"end_character": 706,
"text": "A typical unit is a homogeneous military organization (either combat, combat-support or non-combat in capability) that includes service personnel predominantly from a single arm of service, or a branch of service, and its administrative and command functions are self-contained. Any unit subordinate to another unit is considered its sub-unit or minor unit. It is not uncommon for unit and formation to be used synonymously in the United States. In Commonwealth practice, formation is not used for smaller organizations like battalions which are instead called \"units\", and their constituent platoons or companies are referred to as sub-units. In the Commonwealth, formations are divisions, brigades, etc.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2600388",
"title": "Battlegroup (army)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 517,
"text": "A battlegroup (British/Commonwealth term), or task force (U.S. term) in modern military theory is the basic building block of an army's fighting force. A battlegroup is formed around an infantry battalion or armoured regiment, which is usually commanded by a lieutenant colonel. The battalion or regiment also provides the command and staff element of a battlegroup, which is complemented with an appropriate mix of armour, infantry and support personnel and weaponry, relevant to the task it is expected to perform.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "92357",
"title": "Military",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 859,
"text": "A military is a heavily-armed, highly-organized force primarily intended for warfare, also known collectively as armed forces. It is typically officially authorized and maintained by a sovereign state, with its members identifiable by their distinct military uniform. It may consist of one or more military branches such as an Army, Navy, Air Force and in certain countries, Marines and Coast Guard. The main task of the military is usually defined as defense of the state and its interests against external armed threats. Beyond warfare, the military may be employed in additional sanctioned and non-sanctioned functions within the state, including internal security threats, population control, the promotion of a political agenda, emergency services and reconstruction, protecting corporate economic interests, social ceremonies and national honor guards.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8548286",
"title": "Brigade group",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 605,
"text": "It generally refers to a formation which includes three or four battlegroups, or an infantry brigade (three battalions), supported by armoured, artillery, field engineer, aviation and support units, and amounting to about 5,000 soldiers. A brigade group represents the smallest unit able to operate independently for extended periods on the battlefield. It is similar to the concept of a regimental combat team (RCT), which was once used by the United States Army, but which now uses the term \"brigade combat team\" (BCT). The United States Marine Corps continues to use the term \"regimental combat team\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1661434",
"title": "Tactical formation",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 411,
"text": "A tactical formation (or order) is the arrangement or deployment of moving military forces such as infantry, cavalry, AFVs, military aircraft, or naval vessels. Formations were found in tribal societies such as the \"pua rere\" of the Māori, and ancient or medieval formations which include shield walls (\"skjaldborg\" in Old Norse), phalanxes (lines of battle in close order), Testudo formation and skirmishers. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "555031",
"title": "Army group",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 310,
"text": "Army groups may be multi-national formations. For example, during World War II, the Southern Group of Armies (also known as the U.S. 6th Army Group) comprised the U.S. Seventh Army and the French First Army; the 21st Army Group comprised the British Second Army, the Canadian First Army and the US Ninth Army.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
5jbikq
|
Why was Dwight Eisenhower made Supreme Allied Commander during WWII despite the United States entering the war after other nations?
|
[
{
"answer": "I asked a similar question a few months ago that I think might get to the heart of what you're asking. Hopefully this helps? The response I got was fantastic! (all credit to u/goodmorningdave)\n\n_URL_0_",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "5065020",
"title": "Ike: Countdown to D-Day",
"section": "Section::::Noteworthy.:Errors.\n",
"start_paragraph_id": 39,
"start_character": 0,
"end_paragraph_id": 39,
"end_character": 629,
"text": "BULLET::::- The opening scene suggests that Great Britain and the United States had not seriously considered the possibility of a supreme allied commander prior to planning the D-Day invasion. In fact, appointing supreme commanders for the various theaters was seen as a given as it had proved beneficial in the last days of World War I with the appointment of Ferdinand Foch in 1918 over the Allied forces in Western Europe. The reason Eisenhower's appointment took some negotiation was the fact that the original supreme commander for the European Theater of Operations, Frank Maxwell Andrews, was killed in an airplane crash.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "46438687",
"title": "December 1950",
"section": "Section::::December 19, 1950 (Tuesday).\n",
"start_paragraph_id": 124,
"start_character": 0,
"end_paragraph_id": 124,
"end_character": 525,
"text": "BULLET::::- General Dwight D. Eisenhower, retired from the United States Army, was brought back into service by President Truman to serve as the first Supreme Allied Commander of Europe (SACEUR). The announcement was made on the same day that the NATO nations accepted the Pleven Plan, proposed by French Prime Minister Rene Pleven, for the gradual rearmament of Germany and its integration into the defense of Western Europe. Eisenhower had been serving as President of Columbia University after returning to civilian life.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "322063",
"title": "Supreme Allied Commander",
"section": "Section::::World War II.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 562,
"text": "General of the Army Dwight D. Eisenhower served in successive Supreme Allied Commander roles. Eisenhower was the Commander-in-Chief, Allied Force for the Mediterranean theatre. Eisenhower then served as Supreme Commander Allied Expeditionary Force (SCAEF) in the European theatre, starting in December 1943 with the creation of the command to execute Operation Overlord and ending in July 1945 shortly after the End of World War II in Europe. In 1951, Eisenhower would again be a Supreme Allied Commander, the first to hold the post for NATO (see next section).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "322469",
"title": "Battle of Kasserine Pass",
"section": "Section::::Aftermath.:Eisenhower.\n",
"start_paragraph_id": 42,
"start_character": 0,
"end_paragraph_id": 42,
"end_character": 725,
"text": "General Dwight D. Eisenhower began restructuring the Allied command, creating the 18th Army Group, commanded by General Sir Harold R. L. G. Alexander, to tighten the operational control of the three Allied nations involved and improve their coordination. Major General Lloyd Fredendall was relieved by Eisenhower and sent home. Training programmes at home had contributed to U.S. Army units in North Africa being saddled with disgraced commanders who had failed in battle and were reluctant to advocate radical changes. Eisenhower found through Major General Omar Bradley and others, that Fredendall's subordinates had lost confidence in him and Alexander told U.S. commanders, \"I'm sure you must have better men than that\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "27715261",
"title": "Presidents of the United States on U.S. postage stamps",
"section": "Section::::Dwight D. Eisenhower.\n",
"start_paragraph_id": 312,
"start_character": 0,
"end_paragraph_id": 312,
"end_character": 418,
"text": "Dwight David Eisenhower (October 14, 1890 – March 28, 1969) was a five-star general in the United States Army and the 34th President of the United States, serving from 1953 until 1961. During World War II, he served as Supreme Commander of the Allied forces in Europe and planned the successful invasion of France and Germany in 1944–45, from the Western Front. In 1951, he became the first supreme commander of NATO.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3835602",
"title": "Military career of Dwight D. Eisenhower",
"section": "Section::::Profile.\n",
"start_paragraph_id": 38,
"start_character": 0,
"end_paragraph_id": 38,
"end_character": 811,
"text": "General Dwight D. Eisenhower was never in combat on the battlefront. The majority of his military career (23 of 38 years) was at the rank of major or lieutenant colonel, mid-level field ranks. He spent a great deal of his military career in staff positions as a planner or trainer and not as a commander of combat army units. He was an aide to the legendary general Douglas MacArthur who was very difficult to deal with. General Eisenhower's skill at dealing with difficult personalities persuaded President Roosevelt to promote him to become the commanding general of the largest amphibious military invasion in history on the beaches of Normandy. This was a landing force of nine allied countries that required the overall commander to have great interpersonal skills and planning and coordination abilities.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41709910",
"title": "Armchair general",
"section": "Section::::Alternate usage.:Examples.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 373,
"text": "BULLET::::- Dwight D. Eisenhower, after enlisting in the U.S. Army in 1911, was assigned to the Army War College and graduated in 1928. He never served in combat, even during World War I, and held mostly administrative positions afterwards. During World War II, he was appointed Supreme Commander of the Allied Expeditionary Force, in spite of never having been in combat.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
45efsd
|
AskScience AMA Series: We study neutrinos made on earth and in space, hoping to discover brand-new particles and learn more about the mysteries of dark matter, dark radiation, and the evolution of the universe. Ask us anything!
|
[
{
"answer": "How exactly do you detect neutrinos? I was under the impression that while known they are one of the most elusive particles. ",
"provenance": null
},
{
"answer": "How exactly can neutrinos shed light on the nature of dark matter? Is there any hypothesis that scientists want to test regarding the connection between the two, or is the research more exploratory at this stage? ",
"provenance": null
},
{
"answer": "How can a sterile neutrino ever be detected, my understanding was that a normal neutrino only interacts via the weak force (and gravity) and a sterile neutrino doesn't interact via weak force. Also in wat way could a sterile neutrino be involved in dark matter? Thanks in advance for you time :)",
"provenance": null
},
{
"answer": "What is dark radiation, and how is it different from normal radiation.",
"provenance": null
},
{
"answer": "How dense does an object have to be such that neutrinos would interact with it? Like neutron star dense?",
"provenance": null
},
{
"answer": "The properties of neutrinos could shed light on the existence of dark matter, but how would this tie up with the news that gravity waves have been detected, which also promises to shed light on dark matter? \n\nHow does gravitational-wave astronomy intersect with the field of neutrino astronomy?\n\nEdit: I see also that Joseph Weber, whose earlier claims to have detected gravity waves were rejected, later worked on neutrino scattering.",
"provenance": null
},
{
"answer": "What is the difference between the \"flavors\" of neutrinos? Thanks!",
"provenance": null
},
{
"answer": "I read somewhere that a supernova releases a lot (90%+) of its radiant energy in a short burst of neutrinos. When a huge amount of energy like this is converted into neutrinos, would the sheer number of particles mean enough interaction to cause an effect noticeable without an advanced detector? (Assuming you could survive close enough to a supernova to observe any interactions).\n\nAlso how does this connect to gravity waves? Actually can you just put the gravity waves team on the phone please?\n\nPeople these scientists have been generous enough to come here andanswer your questions, please keep it related to their field.",
"provenance": null
},
{
"answer": "What is your day to day job like and how do you go about getting it?",
"provenance": null
},
{
"answer": "I was under the impression dark matter was only hypothesized, what discoveries has made its existence more than speculation? ",
"provenance": null
},
{
"answer": "It is currently unknown whether neutrinos have a regular or inverted mass hierarchy (e.g. do the electron, mu, and tau neutrinos get heavier with each generation, or not?). Do your expected results differ based on which hierarchy turns out to exist?",
"provenance": null
},
{
"answer": "Is there any significance of the recent findings of gravitational waves on your research?",
"provenance": null
},
{
"answer": "Will it one day be possible to create a walkie-talkie like device (even if it's directional) and beam data from NYC to Tokyo straight through the Earth using neutrinos? \n\nI'm assuming that if this were possible, banks and other financial centers would jump on it simply for the arbitrage opportunities by connecting computers together faster than can currently be done using around the world cables or satellites.\n\n",
"provenance": null
},
{
"answer": "My very basic understanding is that neutrinos are just one possible \"ingredient\" of dark matter. Is it the most plausible particle in the composition of dark matter or are there other, perhaps more abundant particles, that also contribute? \n\nAlso, if your kid is crying for no good reason, do you still call them a WIMP, or is that off-limits now?",
"provenance": null
},
{
"answer": "I've heard of Dark Matter and Dark Energy, but I've never heard of Dark Radiation before. What is Dark Radiation?",
"provenance": null
},
{
"answer": "Aside from the better understanding of the composition of our world, in what ways could this breakthrough affect the day to day lives of regular-non-astrophysicist individuals like myself?",
"provenance": null
},
{
"answer": "Hello! Thanks for posting. \n\nHow likely is it that dark matter is a whole zoo of new particles that we haven't or can't discover? How does it not interacting with much indicate that it is one type of particle if possible?",
"provenance": null
},
{
"answer": "I have read that neutrinos bring validity to the theory of the elastic Big Bang (don't know the correct name) but the theory is that the universe will eventually start to retract back into what people belive to be the singularity only neutrinos prevent such drastic collapse and are the driving force behind the universe switching from contraction to expansion. Do you have any input on this? Maybe can you elaborate on the role neutrinos play in such a theory? I'm sorry if I cannot better articulate my position but I am not well studied in physics.",
"provenance": null
},
{
"answer": "Experimental neutrino detectors generally involve large and elaborate detectors in tanks (except one, I believe sits on the sea bed) sitting down big holes.\n\nWill it ever get easier to detect such weakly interacting particles?",
"provenance": null
},
{
"answer": "What do you think of the possibility that you are largely on a wild-goose chase when you are hunting for dark energy, matter and perhaps even black holes?\n\nAre you open for the possibility that there are some fundamental building blocks in current understanding -- and thus taught curriculum -- of Physics that are sufficiently flawed to always produce errors when interated enough in the realm of micro and the realm of macrocosmos while still being accurate enough in mundane life to avoid suspicion?\n\nOr is entertaining such ideas the modern equivalent of heresy?",
"provenance": null
},
{
"answer": "What are your personal guesses for *δ^(CP)*?",
"provenance": null
},
{
"answer": "Thanks for doing the AMA! :) I have a few questions for you:\n\n1) We know that the flavour Neutrino states are superpositions of the mass states. But we can measure how much of each mass state the flavour states contain. Why can't we then say that the mass of the flavour states is simply the sum of the mass states? \n\n2) This is a bit philosophical, but do you have any theories as to why Nature only involves left chirality in the weak interaction? :)\n\n",
"provenance": null
},
{
"answer": "What explains the definite discovery of neutrinos, yet the elusiveness of Dark Matter in being discovered?",
"provenance": null
},
{
"answer": "Not a physicist, so excuse the layman's terminology:\n\n* Why does gravity interact with everything (neutrinos, dark matter, normal matter), whilst those things rarely interact with each other ?\n",
"provenance": null
},
{
"answer": "Would it be possible, if you had many, many neutrinos, to disrupt normal matter to the point it would be dangerous for life?\n\nI know neutrinos only interact with gravity and the weak force, but the weak force is responsible for changing the flavor of quarks. My understanding was that when a neutrino strikes a proton, the weak force turns the proton into a neutron and emits an electron.\n\nSo, theoretically, if you had an extraordinarily enormous number of neutrinos, could they be dangerous? I imagine if you had enough of these weak iterations, you could turn non-radioactive matter radioactive, or cause damage to life through beta decay.",
"provenance": null
},
{
"answer": "Hey actually watched the cosmos episode where they talk about neutrinos. And I didn't know earth made them how is that possible?",
"provenance": null
},
{
"answer": "Do you have any research relationship with the research facility in Soudan, MN?",
"provenance": null
},
{
"answer": "How does a neutrino compare to a lepton, inflaton, and other virtual particles? When neutrinos (or inflatons) change, how it conservation of energy preserved?",
"provenance": null
},
{
"answer": "How do neutrinos oscillate between flavours? It seems impossible if they have different masses, but if their mass was the same surely they'd be indistinguishable anyway.",
"provenance": null
},
{
"answer": "Do you ever use fractal math/geometry for finding smaller particles?",
"provenance": null
},
{
"answer": "What have you gleaned from your experiments and observations that could explain what dark matter is? What is your hypothesis as to why there is so much more dark matter in comparison to visible matter in the universe? Thank you.",
"provenance": null
},
{
"answer": "Is dark matter somewhere near of being detected? Or is it something that won't happen in the near future (from this lifetime to say, a few decades in the future) ",
"provenance": null
},
{
"answer": "Is it easier or harder to detect neutrinos from man-made sources versus natural sources? My understanding is that the universe is practically saturated in a flood of the little buggers - how can one distinguish man-made neutrinos from all the rest?",
"provenance": null
},
{
"answer": "What kind of mysteries could neutrinos solve?",
"provenance": null
},
{
"answer": "Hello. Thanks for doing the AMA. I come from a southern state of India where a recent proposal (The INO) for a neutrino observatory was almost cancelled due to opposition from the public. Ignorant poloticians fear mongering since it was nuclear and very illiterate public combined with some whacky conspiracy theories brought down everything. \n\nMy question is this. How do i explain the tech and why we need it in very simple terms to these people? These are people with no education and not even the most basic understanding of physics. What should be my approach as someone trying to answer the question,\n\n What good is it gonna do for us now. Better spend the money trying to solve our plight rather than doing this. \n\nAny suggestions on dealing with ignorant politicians and activists who hold a huge sway over the people will also help a ton.",
"provenance": null
},
{
"answer": "Something I've mused over for a little while and I thought you might find it interesting.\n\nIf we agree that form begets function(or vice-versa, take your pick): Then it strikes me as an interesting observation when I compare the structure of Dark matter distribution with the structure of mycelium, the nervous system, and, of all things, internet architecture.\n\nWhy this strikes me as interesting is that this type of web/node structure seems primarily to be employed when information(I'm using this loosely) is being passed from location to location.\n\nIs this a line of thinking your team has been down before? If yes/no would you care to share your thoughts on this despite being a little off topic?\n\nIt's one of those things that got it self lodged in my head and I find myself thinking about it a couple of times each year.\n\nref for dark matter structure assumptions: _URL_0_",
"provenance": null
},
{
"answer": "My Physics professor studied Neutrinos down in Antartica with a bunch of other scientists; my question is, why go down to Antartica to study them, what is the benefit?",
"provenance": null
},
{
"answer": "Thanks for a great AMA! We had a great time!",
"provenance": null
},
{
"answer": "Is there any evidence for neutrino oscillations where MORE (instead of fewer) neutrinos are measured than expected (without oscillations) or even better experiments that observe a full cycle?",
"provenance": null
},
{
"answer": "Physics Undergraduate at Arkansas Tech University here,\n\nWould you fly me to where ever you are so that I can do incredible research like you are doing now with you this summer? \n\n\"Yes\" you say?\n\nFantastic! Have your people call my people. \n\nIn all seriousness though, the opportunity to do undergraduate research with you all this summer would be incredible and I love and appreciate your work.",
"provenance": null
},
{
"answer": "So I'm Curious- if we as the scientific community are looking for another form of neutrino- a particle that makes up what we consider dark matter and gives off this dark radiation, have we officially agreed on what makes up dark matter (100%)? Have we done any experiments to place an object or instrument into a location we believe is dark matter to take a measure of what we find? Or is this all still based on observing and measuring the gravity and emitted energy from these dark matter locations? \n\nDark matter confuses and intrigues me, so I'm mostly curious as to what we actually know.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "2594468",
"title": "Frank Close",
"section": "Section::::Publications.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 317,
"text": "His 2010 book \"Neutrino\" discusses the tiny, difficult-to-detect particle emitted from radioactive transitions and generated by stars. Also discussed are the contributions of John Bahcall, Ray Davis, Bruno Pontecorvo, and others who made a scientific understanding of this fundamental building block of the universe.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "633325",
"title": "Neutrino astronomy",
"section": "Section::::Applications.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 215,
"text": "There are currently goals to detect neutrinos from other sources, such as active galactic nuclei (AGN), as well as gamma-ray bursts and starburst galaxies. Neutrino astronomy may also indirectly detect dark matter.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "230547",
"title": "Case Western Reserve University",
"section": "Section::::Research.\n",
"start_paragraph_id": 96,
"start_character": 0,
"end_paragraph_id": 96,
"end_character": 385,
"text": "BULLET::::- Frederick Reines, in 1965, first detected neutrinos created by cosmic ray collisions with the Earth's atmosphere and developed innovative particle detectors. Case Western Reserve had selected Prof. Reines as chair of the physics department based on Reines's work that first detected neutrinos emitted from a nuclear reactor—work for which Reines shared a 1995 Nobel Prize.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "50650",
"title": "Astronomy",
"section": "Section::::Observational astronomy.:Fields not based on the electromagnetic spectrum.\n",
"start_paragraph_id": 45,
"start_character": 0,
"end_paragraph_id": 45,
"end_character": 655,
"text": "In neutrino astronomy, astronomers use heavily shielded underground facilities such as SAGE, GALLEX, and Kamioka II/III for the detection of neutrinos. The vast majority of the neutrinos streaming through the Earth originate from the Sun, but 24 neutrinos were also detected from supernova 1987A. Cosmic rays, which consist of very high energy particles (atomic nuclei) that can decay or be absorbed when they enter the Earth's atmosphere, result in a cascade of secondary particles which can be detected by current observatories. Some future neutrino detectors may also be sensitive to the particles produced when cosmic rays hit the Earth's atmosphere.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2450202",
"title": "Kamioka Liquid Scintillator Antineutrino Detector",
"section": "Section::::Results.:Geological antineutrinos (geoneutrinos).\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 320,
"text": "KamLAND also published an investigation of geologically-produced antineutrinos (so-called geoneutrinos) in 2005. These neutrinos are produced in the decay of thorium and uranium in the Earth's crust and mantle. A few geoneutrinos were detected and this limited data were used to limit the U/Th radiopower to under 60TW.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12193372",
"title": "Sanford Underground Research Facility",
"section": "Section::::Scientific Research.:Experiments under development.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 412,
"text": "DUNE, LBNF/hosted by Fermilab: Scientists with the Deep Underground Neutrino Experiment (DUNE) hope to revolutionize our understanding of the role neutrinos play in the creation of the universe. Using the Long-Baseline Neutrino Facility (LBNF), they'll shoot a beam of neutrinos from Fermilab in Batavia, Illinois, 800 miles through the earth to detectors deep underground at Sanford Lab in Lead, South Dakota. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16597340",
"title": "Large Underground Xenon experiment",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 582,
"text": "The Large Underground Xenon experiment (LUX) aimed to directly detect weakly interacting massive particle (WIMP) dark matter interactions with ordinary matter on Earth. Despite the wealth of (gravitational) evidence supporting the existence of non-baryonic dark matter in the Universe, dark matter particles in our galaxy have never been directly detected in an experiment. LUX utilized a 370 kg liquid xenon detection mass in a time-projection chamber (TPC) to identify individual particle interactions, searching for faint dark matter interactions with unprecedented sensitivity.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
c84ty8
|
what do people who speak different languages hear when someone speaks english?
|
[
{
"answer": "It sounds just like what you hear when you hear someone speak in a language you don’t know. You can tell they are saying something that means something because their voice is controlled and their body language will tell you they are not just making up sounds like a crazy person. When I hear American English vs United Kingdom English, American English sounds like the mouth is more open so the words are more full sounding. There is a great video online that demonstrates what English sounds. The video uses a mix of English words and gibberish, but sounds like how an American English speaker would. This is what English would sounds to someone who doesn’t know English. The video can be found [here](_URL_0_) .",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "293583",
"title": "Nipawin",
"section": "Section::::Demographics.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 317,
"text": "While English is spoken by all residents, over 10% of the population also speak a second language with Cree, German, Ukrainian, French, Tagalog, Spanish, Afrikaans, Dutch, Chinese, Korean, Inuktitut, English, Albanian, Bantu languages, Bosnian, Greek, Hungarian, Lithuanian, Polish, Russian and Mandarin represented.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "165745",
"title": "Singapore English",
"section": "Section::::English language trends in Singapore.\n",
"start_paragraph_id": 42,
"start_character": 0,
"end_paragraph_id": 42,
"end_character": 249,
"text": "BULLET::::5. Those who learnt English as a native language (sometimes as a sole native language, but usually alongside other languages) and use it as their dominant language (many people, mostly children born after 1965 to highly educated parents).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "415406",
"title": "English as a second or foreign language",
"section": "Section::::Difficulties for learners.:Pronunciation.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 215,
"text": "English contains a number of sounds and sound distinctions not present in some other languages. Speakers of languages without these sounds may have problems both with hearing and with pronouncing them. For example:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5052508",
"title": "Moutfort",
"section": "",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 349,
"text": "English is on the school program latest in 7th class, and it sounds somehow familiar so that everyone believes he knows English at once. Anyhow, most people can communicate a bit in English. It is being said: If you don't know how to speak English, just take a hot potato in your mouth and speak a funny Luxembourgish, and they will understand you.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8569916",
"title": "English language",
"section": "Section::::Geographical distribution.:Three circles of English-speaking countries.\n",
"start_paragraph_id": 48,
"start_character": 0,
"end_paragraph_id": 48,
"end_character": 879,
"text": "Those countries have millions of native speakers of dialect continua ranging from an English-based creole to a more standard version of English. They have many more speakers of English who acquire English as they grow up through day-to-day use and listening to broadcasting, especially if they attend schools where English is the medium of instruction. Varieties of English learned by non-native speakers born to English-speaking parents may be influenced, especially in their grammar, by the other languages spoken by those learners. Most of those varieties of English include words little used by native speakers of English in the inner-circle countries, and they may show grammatical and phonological differences from inner-circle varieties as well. The standard English of the inner-circle countries is often taken as a norm for use of English in the outer-circle countries.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21514416",
"title": "COLA (software architecture)",
"section": "Section::::Description.:Natural language analogy.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 499,
"text": "Also, anyone can become an English speaker simply by having this base translated into their native tongue (a more tractable problem than translating the whole of English). Once they know this subset then they know enough English to understand other statements like the giraffe one, and thus grow their knowledge to the whole language through English sentences (which can be reused by everyone, regardless of their first language). This is analogous to the bootstrapping and compatibility of a COLA.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "28904321",
"title": "Yolmo language",
"section": "Section::::Language contact.\n",
"start_paragraph_id": 34,
"start_character": 0,
"end_paragraph_id": 34,
"end_character": 265,
"text": "Individuals may also have other languages in their personal repertoire, through marriage to someone form a different language group, international work or engagement with tourists from different countries. English is increasingly common as a language of education.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2eu1oj
|
why do people donate to different cancers (breast, prostate, etc), won't one cure lead to cures for all the others?
|
[
{
"answer": "Cancer is actually a term that broadly describes a group of diseases, not just one, where there is uncontrollable cell growth that invades other parts of the body. What this means is that there are a number of different causes as to *why* the cells become cancerous, requiring different types of research to find different causes (i.e. caused by DNA damage in gene A, as opposed to being damaged in gene B, etc). \n\nFor your second question, there is not a most 'important' one to uncover to cure the others. It's like comparing apples to sheep, cancers are different and can even vary between people. While it's possible that the cures may be connected and have the potential to shed light on other types of cancer, this is absolutely not a 100% guarantee. ",
"provenance": null
},
{
"answer": "[This comic sort of sums it up](_URL_0_).",
"provenance": null
},
{
"answer": "Firstly, you have to understand that Susan G Komen is a scam-they aren't raising money for a cure, they're raising money for \"awareness,\" which is a fancy way of saying you're paying them to tell people that breast cancer sucks. Oh yeah, and they're aggressive with lawsuits and very corrupt, that too. ",
"provenance": null
},
{
"answer": "My understanding is that cancer, being that it is an uncontrollable and often random outburst of cell growth, is like a mutation. The cells mutate, get confused, and start screwing things up in other areas of your body. Because of this cancer is not a one-size-fits-all sort of ailment, and therefor has no one-shot quick cure. As /u/TheSeventhCircle said, within the different types of cancer its specific characteristics can change on a person to person basis. That's why finding a \"cure for cancer\" is actually not accurate at all because it isn't a disease or virus, it's your own body attacking itself. Granted it may be possible to find specific causes of it outside of things like exposure to harmful elements (IE a smoker getting lung cancer, an outdoors man getting skin cancer, etc.) which would lead into a more comprehensive understanding of what it is, and therefor a potential cure for its varying forms. However since it is so varied and diverse, I doubt say focusing on curing breast cancer would lead to a breakthrough in the curing of prostate cancer. They're just too different.\n\n\nIn regards to that chart, it is very curious that cancer receives the highest amount of awareness and charity funding where heart disease is forgotten. Perhaps instead of the heartwarming (lol) cancer story in the movies they should show heart disease instead, as that may actually be curable (even though, like cancer, it is an umbrella term). ",
"provenance": null
},
{
"answer": "To put it simply, different cancers are caused by different things. You wouldn't ask is disease just disease? Why won't one cure for disease cure all other disease?",
"provenance": null
},
{
"answer": "Asking why there's no cure for cancer is like asking why there's no cure for having allergies. There's more than one thing that people can be/are allergic to, just like there's more than one thing that might cause cancer.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "327995",
"title": "Orthomolecular medicine",
"section": "Section::::Prevalence.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 435,
"text": "Even though the health benefits are not established, the use of high doses of vitamins is also common in people who have been diagnosed with cancer. According to Cancer Research UK, cancer patients should always seek professional advice before taking such supplements, and using them as a substitute for conventional treatment \"could be harmful to [their] health and greatly reduce the chance of curing or controlling [their] cancer\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "105219",
"title": "Cancer",
"section": "Section::::Research.\n",
"start_paragraph_id": 191,
"start_character": 0,
"end_paragraph_id": 191,
"end_character": 440,
"text": "Because cancer is a class of diseases, it is unlikely that there will ever be a single \"cure for cancer\" any more than there will be a single treatment for all infectious diseases. Angiogenesis inhibitors were once incorrectly thought to have potential as a \"silver bullet\" treatment applicable to many types of cancer. Angiogenesis inhibitors and other cancer therapeutics are used in combination to reduce cancer morbidity and mortality.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1490789",
"title": "Tenovus Cancer Care",
"section": "Section::::Research.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 349,
"text": "The charity has funded scientists who have contributed to discoveries that have helped treat and care for millions of cancer patients all around the world. For example, in 1975 researchers showed that a contraceptive pill could halt the growth of breast cancers, leading to the birth of Tamoxifen, which is now taken by millions of women worldwide.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4826527",
"title": "Lorraine Day",
"section": "Section::::Alternative cancer treatment.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 392,
"text": "As a promoter of alternative medicine she claims to have discovered the cause and cure of cancer, as a result of God showing her how to recover from her own cancer with a 10 step plan. According to her theory, all cancers are due to weakness of the immune system which must be cured by diet. \"All diseases are caused by a combination of three factors: malnutrition, dehydration, and stress.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20528341",
"title": "INCTR Challenge Fund",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 532,
"text": "There are several reasons for this high death toll from cancer in developing countries. Due to poverty, lack of resources and vast distances, public access to treatment maybe difficult or non-existent. There is also not enough awareness (public or professional) about cancer to help either prevent the disease developing or to support early diagnosis. As a result, 80% of cancer patients present with advanced/incurable cancers. Unfortunately, in many cases, palliative care will not be available to them at the end of their lives.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7172",
"title": "Chemotherapy",
"section": "Section::::Limitations.\n",
"start_paragraph_id": 110,
"start_character": 0,
"end_paragraph_id": 110,
"end_character": 416,
"text": "Chemotherapy does not always work, and even when it is useful, it may not completely destroy the cancer. People frequently fail to understand its limitations. In one study of people who had been newly diagnosed with incurable, stage 4 cancer, more than two-thirds of people with lung cancer and more than four-fifths of people with colorectal cancer still believed that chemotherapy was likely to cure their cancer.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "60103727",
"title": "Personalized onco-genomics",
"section": "Section::::Rationale.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 573,
"text": "As cancer can arise from countless different genetic mutations and a combination of mutations, it is challenging to develop a cure that is appropriate for all cancers, given the genetic diversity of the human population. To provide and develop the most appropriate genetic therapy for one’s cancer, personalized oncogenomics has been developed. By sequencing a cancer patient’s genome, a clinical scientist can better understand which genes/part of the genome have been mutated specifically in that patient and a personalized treatment plan can potentially be implemented.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
21gngu
|
Effects Of Shakespearean/Elizabethan Theater on the London Society?
|
[
{
"answer": "**Clothing**\n\nThe major theaters served as fashion runways, setting popular trends in clothing. Costumes were elaborate, expensive, and often borrowed from or donated by local tailors, cobblers, and jewelers. These clothiers could then advertise that they were producing the clothes being worn in the most fashionable theaters in town. And in late 1500s - early 1600s England, fashion was serious business:\n\n > In these days a wondrous excess of apparel had spread itself all over England, and the habit of our own country, though a peculiar vice incident to our apish nation, grew into such contempt, that men by their new fangled garments, and too gaudy apparel, discovered a certain deformity and arrogancy of mind whilst they jetted up and down in their silks glittering with gold and silver, either imbroidered or laced. The Queen, observing that, to maintain this excess, a great quantity of money was carried yearly out of the land, to buy silks and other outlandish wares, to the impoverishing of the commonwealth; and that many of the nobility which might be of great service to the commonwealth and others that they might seem of noble extraction, did, to their own undoing, not only waste their estates, but also run so far in debt, that of necessity they came within the danger of law thereby, and attempted to raise troubles and commotions when they had wasted their own patrimonies\n\n* [A Complete History of England: IV. The history of Queen Elizabeth I](_URL_1_), written by Edward, lord Herbert of Cherbury in 1706, page 452.\n\nSee also:\n\nStephenson, Henry Thew. The Elizabethan People. New York, Henry Holt and Company, 1910. Shakespeare Online. 20 Feb. 2010. (accessed March 27, 2014) _URL_3_\n\n**Language**\n\nThere was no dictionary of the English language prior to 1755. In the Elizabethan period, London was teeming with foreign trade and with it came linguistic influences from many far-flung cultures. 
English was a fluid, dynamic language and the plays of the period are famous for their wordplay. Theaters of the day became laboratories for language with new words being adopted, adapted, or invented to convey the emotions of the characters. Shakespeare alone is believed to have invented (or at least been the first to write down) some 1,300 common words.\n\nSee [The Development of Early Modern English](_URL_2_), by Marta Zapala-Kraj, 2009.\n\n**Thought/Society**\n\nIn *Hamlet* Act 3, Scene 2, Shakespeare describes acting (and by extension, the purpose of theater) as being an art \"whose end, both at the first and now, was and is to hold, as ’twere, the mirror up to nature, to show virtue her own feature, scorn her own image, and the very age and body of the time his form and pressure\".\n\nShakespeare's plays reflect the society they were written for. England was in a period of transition between its Medieval past and its Renaissance future and the growth pangs of that transition were being played out on stage. Among Shakespeare's greatest influences on the art of theatrical storytelling is the heavy use of the soliloquy as a means of allowing the audience to listen to a character's most intimate thoughts. As we listen, we meet people who are simultaneously progressive and old fashioned. They have complex, multi-faceted personalities and think of themselves as unique individuals defined as much by merit and personality as by social class. 
We hear superstition wrestling with science, urban sophistication clashing with provincial wisdom, and numerous variations on the eternal human question: \"Given the knowledge of our own mortality, what should we do with the time that we have to be alive?\"\n\nSee:\n\n[Shakespeare's Philosophy](_URL_4_), by Colin McGinn, 2009.\n\n[From Shakespeare to Existentialism: An Original Study : Essays on Shakespeare and Goethe, Hegel and Kierkegaard, Nietzsche, Rilke, and Freud, Jaspers, Heidegger, and Toynbee](_URL_0_), by Walter Arnold Kaufmann, 1980",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "11474756",
"title": "Tudor London",
"section": "Section::::Elizabethan London.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 1209,
"text": "The late 16th century, when William Shakespeare and his contemporaries lived and worked in London, was one of the most notable periods in the city's cultural history. There was considerable hostility to the development of the theatre, however. Public entertainments produced crowds, and crowds were feared by the authorities because they might become mobs, and by many ordinary citizens who dreaded that large gatherings might contribute to the spread of plague. Theatre itself was discountenanced by the increasingly influential Puritan strand in the nation. However, Queen Elizabeth loved plays, which were performed for her privately at Court, and approved of public performances of \"such plays only as were fitted to yield honest recreation and no example of evil\". On April 11, 1582, the Lords of the Council wrote to the Lord Mayor to the effect that, as \"her Majesty sometimes took delight in those pastimes, it had been thought not unfit, having regard to the season of the year and the clearance of the city from infection, to allow of certain companies of players in London, partly that they might thereby attain more dexterity and perfection in that profession, the better to content her Majesty\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3923277",
"title": "German Reed Entertainments",
"section": "Section::::The German Reed theatrical revolution.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 366,
"text": "This form of entertainment consisted of musical plays \"of a refined nature\". During the early Victorian era, visiting the theatre was considered distasteful to the respectable public. Shakespeare and classic British plays were presented, but the London stage became dominated by risque farces, burlesques and bad adaptations of French operettas. Jessie Bond wrote, \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14020",
"title": "History of London",
"section": "Section::::Modern history.:Tudor London (1485–1603).\n",
"start_paragraph_id": 47,
"start_character": 0,
"end_paragraph_id": 47,
"end_character": 537,
"text": "The late 16th and early 17th century saw the great flourishing of drama in London whose preeminent figure was William Shakespeare. During the mostly calm later years of Elizabeth's reign, some of her courtiers and some of the wealthier citizens of London built themselves country residences in Middlesex, Essex and Surrey. This was an early stirring of the villa movement, the taste for residences which were neither of the city nor on an agricultural estate, but at the time of Elizabeth's death in 1603, London was still very compact.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1943904",
"title": "Tamburlaine",
"section": "Section::::Performance history.\n",
"start_paragraph_id": 75,
"start_character": 0,
"end_paragraph_id": 75,
"end_character": 419,
"text": "The stratification of London audiences in the early Jacobean period changed the fortunes of the play. For the sophisticated audiences of private theatres such as Blackfriars and (by the early 1610s) the Globe Theatre, Tamburlaine's \"high astounding terms\" were a relic of a simpler dramatic age. Satiric playwrights occasionally mimicked Marlowe's style, as John Marston does in the induction to \"Antonio and Mellida\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39566",
"title": "English Renaissance theatre",
"section": "Section::::Playwrights.\n",
"start_paragraph_id": 39,
"start_character": 0,
"end_paragraph_id": 39,
"end_character": 270,
"text": "The growing population of London, the growing wealth of its people, and their fondness for spectacle produced a dramatic literature of remarkable variety, quality, and extent. Although most of the plays written for the Elizabethan stage have been lost, over 600 remain.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23477542",
"title": "History of theatre",
"section": "Section::::European theatre.:English Elizabethan theatre.\n",
"start_paragraph_id": 37,
"start_character": 0,
"end_paragraph_id": 37,
"end_character": 584,
"text": "The City of London authorities were generally hostile to public performances, but its hostility was overmatched by the Queen's taste for plays and the Privy Council's support. Theatres sprang up in suburbs, especially in the liberty of Southwark, accessible across the Thames to city dwellers but beyond the authority's control. The companies maintained the pretence that their public performances were mere rehearsals for the frequent performances before the Queen, but while the latter did grant prestige, the former were the real source of the income for the professional players.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "143284",
"title": "Restoration comedy",
"section": "Section::::Theatre companies.:War of the theatres, 1695–1700.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 591,
"text": "London again had two competing companies. Their dash to attract audiences briefly revitalised Restoration drama, but also set it on a fatal downhill slope to the lowest common denominator of public taste. Rich's company notoriously offered Bartholomew Fair-type attractions – high kickers, jugglers, ropedancers, performing animals – while the co-operating actors, even as they appealed to snobbery by setting themselves up as the only legitimate theatre company in London, were not above retaliating with \"prologues recited by boys of five, and epilogues declaimed by ladies on horseback\".\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
28f7b4
|
why do they need so much money for cancer research?
|
[
{
"answer": "Scientific research costs money - hiring scientists, buying highly specialized equipment, buying the supplies, chemicals, etc... It can cost thousands of dollars to buy a milligram of a single antibody you need for your experiments.\n\nSuccess is also not guaranteed in research, so you can spend millions of dollars over several years and fail in what you're trying to do. Much of research is exploratory, and so lots of scientists are researching a bunch of different things all at the same time.\n\nAssuming your research is \"successful\", turning a research discovery into a marketable product (it needs to make more money than it costs to produce, it needs to be produced on a way larger scale than what was developed in the research lab) is an incredibly hard and costly endeavor. ",
"provenance": null
},
{
"answer": "For some charities, the money is directed almost entirely to research. For example, the Multiple Myeloma Research Foundation has received an A+ rating from _URL_0_ judging by [these criteria](http://www._URL_0_/criteria.html). \n\nOther charities aren't so dedicated to actual research, and instead often are categorized as 'awareness' charities. Now, that's not to say that they weren't started in an attempt to do some good, or that they aren't doing good now, but they're not really helping. Everyone is aware of breast cancer now, but the goal of many of these is to make sure that people get mammograms and do self checks. Why these things aren't part of regular health care exams is an entire other kettle of fish. Groups like the American Breast Cancer Foundation are these sort of groups. Unfortunately only about 25% of their raised funds go to actually help people get exams. The rest is spent on continued fund raising, and that includes pay for their employees and the board.",
"provenance": null
},
{
"answer": "Because the human body is stupidly complex and research on how to keep it from killing itself is really finnicky.\n\nAlso, \"Cancer\" isn't just one disease. There are like, a bajillion different types of cancer with a scrillion different genetic and chemical causes and preventors.",
"provenance": null
},
{
"answer": "Not to cancer research but to the next years marketing campaign.\n\nTo the artists who are hosting the events and who are getting credit for 'their help beating cancer'.\n\nTo the TV commercials and marketing companies. (taking away the biggest chunk)\n\nTo events for cancer survivers. (money must get spent)\n\nTo pay for poor patients chemo and treatment. (okay for me)\n\nTo psychologic assistance for patients and familie. (still a bit okay)\n\nTo everything but cancer research. (not okay)\n\nEdit: but this is an unpopular reality so will get downvoted",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "30271975",
"title": "Breast cancer awareness",
"section": "Section::::Achievements of the breast cancer movement.:Social progress.:Increased resources for treatment and research.\n",
"start_paragraph_id": 58,
"start_character": 0,
"end_paragraph_id": 58,
"end_character": 445,
"text": "Breast cancer advocates have successfully increased the amount of public money being spent on cancer research and shifted the research focus away from other diseases and towards breast cancer Breast cancer advocates also raise millions of dollars for research into cures each year, although most of the funds they raise is spent on screening programs, education and treatment (; ). Most breast cancer research is funded by government agencies .\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "615793",
"title": "Cancer Research UK",
"section": "Section::::Research.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 602,
"text": "Around 40% of the charity's research expenditure goes on basic laboratory research relevant to all types of cancer into the molecular basis of cancer. The research is intended to improve understanding of how cancer develops and spreads and thus provide a foundation for other research. The rest of its funding is used to support research into over 100 specific cancer types, focusing on key areas such as drug discovery and development; prevention, early detection and imaging; surgery and radiotherapy; and cancers where survival rates are still low, such as oesophageal, lung and pancreatic cancers.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1334379",
"title": "Cancer research",
"section": "Section::::Research funding.\n",
"start_paragraph_id": 36,
"start_character": 0,
"end_paragraph_id": 36,
"end_character": 701,
"text": "In the early 2000s, most funding for cancer research came from taxpayers and charities, rather than from corporations. In the US, less than 30% of all cancer research was funded by commercial researchers such as pharmaceutical companies. Per capita, public spending on cancer research by taxpayers and charities in the US was five times as much in 2002–03 as public spending by taxpayers and charities in the 15 countries that were full members of the European Union. As a percentage of GDP, the non-commercial funding of cancer research in the US was four times the amount dedicated to cancer research in Europe. Half of Europe's non-commercial cancer research is funded by charitable organizations.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "615793",
"title": "Cancer Research UK",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 221,
"text": "Cancer Research UK's work is almost entirely funded by the public. It raises money through donations, legacies, community fundraising, events, retail and corporate partnerships. Over 40,000 people are regular volunteers.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1490545",
"title": "Worldwide Cancer Research",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 206,
"text": "The team of 40 currently works to fund £4 million of cancer research around the world every year – raised entirely from donations. Its stated vision is to see a world where no life is cut short by cancer. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18558900",
"title": "Yale Cancer Center",
"section": "Section::::Research.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 409,
"text": "Basic research in cancer is a hallmark of Yale Cancer Center, which draws approximately $96 million in cancer research funding to Yale every year. Yale is home to some of the world’s leading investigators in cancer, who have provided a steady stream of advances in a number of disciplines, contributing to the basic understanding of cancer and to the development of new therapeutic and diagnostic approaches.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1334379",
"title": "Cancer research",
"section": "Section::::Difficulties.\n",
"start_paragraph_id": 40,
"start_character": 0,
"end_paragraph_id": 40,
"end_character": 309,
"text": "Cancer research processes have been criticised. These include, especially in the US, for the financial resources and positions required to conduct research. Other consequences of competition for research resources appear to be a substantial number of research publications whose results cannot be replicated.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2hmvoj
|
Unlike in Europe, where the tradition is still to build in brick and concrete, why does the US construct buildings using cheap building materials?
|
[
{
"answer": "What cheap building materials are you talking about in particular?",
"provenance": null
},
{
"answer": "I'm assuming you are asking about why houses in tornado or hurricane zones are constructed out of wood. Hurricanes and tornadoes are extremely destructive and no building material can hold together in the path of a F4 or F5 tornado. In fact concrete and brick blocks can cause even more damage if they end up flying. So instead, buildings in Tornado Alley are required to have underground tornado shelters that can keep its occupants safe in a tornado. \n\nI don't know where you got the idea that buildings in the US don't have strict building codes. ",
"provenance": null
},
{
"answer": "(On my phone, so will add sources later.) In California, at least, unreinforced masonry construction has been banned since 1933. This is because wood buildings flex in earthquakes but stay standing; unreinforced masonry, like brick or stone, falls down. Because of this, most new buildings less than six stories are wood-framed, and everything taller is steel framed or made of reinforced masonry or reinforced concrete. \n\nedit: it's the Field Act of 1933 and Garrison Act of 1939.",
"provenance": null
},
{
"answer": " > Unlike in Europe, where the tradition is still to build in brick and concrete\n\nWhat do you mean by \"in Europe\"? I'm from a European country where pretty much all houses are made of wood. ",
"provenance": null
},
{
"answer": "It's also a lot about which materials are abundant in a region. \nWhen you live in a wooded region like Canada it's only logical that many buildings are made of wood. \nBut, when you take the Netherlands as an example, there aren't a lot of forests left but there is an abundance of clay which can be made into bricks. \nOn a side note it also has a lot to do with the popularity of the Chicago school in the USA. If I recall correctly this style focused a lot more on concrete instead of bricks. \nThe Chicago school was however not widely adopted in Europe. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "5666707",
"title": "Architectural terracotta",
"section": "Section::::History.:Western terracotta.:1930s–present.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 308,
"text": "Post-World War II the industry had to face the decline of buildings built during the heyday of the material, 1910–1940. Structural problems resulting from incomplete waterproofing, improper installation, poor maintenance, and interior corroding mild steel made the material unpopular in newer constructions.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "464779",
"title": "Building material",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 613,
"text": "Building material is any material which is used for construction purposes. Many naturally occurring substances, such as clay, rocks, sand, and wood, even twigs and leaves, have been used to construct buildings. Apart from naturally occurring materials, many man-made products are in use, some more and some less synthetic. The manufacturing of building materials is an established industry in many countries and the use of these materials is typically segmented into specific specialty trades, such as carpentry, insulation, plumbing, and roofing work. They provide the make-up of and structures including homes.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31686418",
"title": "Green building and wood",
"section": "Section::::Reducing waste.\n",
"start_paragraph_id": 43,
"start_character": 0,
"end_paragraph_id": 43,
"end_character": 321,
"text": "When used properly, wood, concrete and steel can last for decades or centuries. In North America, most structures are demolished because of external forces such as zoning changes and rising land values. Designing for flexibility and adaptability secures the greatest value for the embodied energy in building materials. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "52686",
"title": "Romanesque architecture",
"section": "Section::::Characteristics.:Walls.\n",
"start_paragraph_id": 36,
"start_character": 0,
"end_paragraph_id": 36,
"end_character": 547,
"text": "The building material differs greatly across Europe, depending upon the local stone and building traditions. In Italy, Poland, much of Germany and parts of the Netherlands, brick is generally used. Other areas saw extensive use of limestone, granite and flint. The building stone was often used in comparatively small and irregular pieces, bedded in thick mortar. Smooth ashlar masonry was not a distinguishing feature of the style, particularly in the earlier part of the period, but occurred chiefly where easily worked limestone was available.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2111048",
"title": "Earthquake engineering",
"section": "Section::::Earthquake-resistant construction.:Adobe structures.\n",
"start_paragraph_id": 129,
"start_character": 0,
"end_paragraph_id": 129,
"end_character": 382,
"text": "Around thirty percent of the world's population lives or works in earth-made construction. Adobe type of mud bricks is one of the oldest and most widely used building materials. The use of adobe is very common in some of the world's most hazard-prone regions, traditionally across Latin America, Africa, Indian subcontinent and other parts of Asia, Middle East and Southern Europe.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4526",
"title": "Brick",
"section": "Section::::History.:Europe.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 419,
"text": "During the Early Middle Ages the use of bricks in construction became popular in Northern Europe, after being introduced there from Northern-Western Italy. An independent style of brick architecture, known as brick Gothic (similar to Gothic architecture) flourished in places that lacked indigenous sources of rocks. Examples of this architectural style can be found in modern-day Denmark, Germany, Poland, and Russia.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4526",
"title": "Brick",
"section": "Section::::History.:Industrial era.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 454,
"text": "Production of bricks increased massively with the onset of the Industrial Revolution and the rise in factory building in England. For reasons of speed and economy, bricks were increasingly preferred as building material to stone, even in areas where the stone was readily available. It was at this time in London that bright red brick was chosen for construction to make the buildings more visible in the heavy fog and to help prevent traffic accidents.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
5x3m2p
|
what is a car engine really doing when it is "warming up"?
|
[
{
"answer": " Well, first off you don't need to warm up the engine of a modern car. They are designed and built to such a fine tolerance you can simply turn them on and drive under normal operating conditions year round.\n\n As to what they are doing they are literally warming up, or getting hotter. Since the engine is made of metal which expands slightly when heated the parts of the engine will expand a bit, and the main engine components (the block and head, or lower and upper part of the engine) will expand enough to float off each other a bit when they get fully heated. \n\n This isn't a worry because the parts have a gasket between them designed to make a proper seal so no oil or radiator fluid leak out. This has been a problem in the past, some engines from the 70's and 80's that leaked notoriously did because newer materials expanded at unpredictable rates. We are well past the days of those exotic (for the time) alloys and early aluminum head/cast iron block hybrids. ",
"provenance": null
},
{
"answer": "they are \"warming\"!\n\nChemical reactions inside the motors are more efficient when happen in a range of temperatures. this is usually especially true in diesel motors.\nMoreover there are other fluids (oil for example) which are less viscous when warmer than ambient temperature, and when it happens motors work better.",
"provenance": null
},
{
"answer": "You are letting the temperature of the engine increase.\n\nIn very cold temperatures, this allows the lubricant to heat to a point it flows more freely before the engine is operated at higher RPMs. \n\nBut mostly it is so the engine will be warm enough to transfer heat to the heater, and warm up the rest of the car.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "2622651",
"title": "Autogas",
"section": "Section::::Converter-and-mixer system operation.\n",
"start_paragraph_id": 186,
"start_character": 0,
"end_paragraph_id": 186,
"end_character": 853,
"text": "Cold start enrichment is achieved by the fact that the engine coolant is cold when the engine is cold. This causes denser vapour to be delivered to the mixer. As the engine warms up, the coolant temperature rises until the engine is at operating temperature and the mixture has leaned off to the normal running mixture. Depending on the system, the throttle may need to be held open further when the engine is cold in the same manner as with a petrol carburettor. On others, the normal mixture is intended to be somewhat lean and no cold-start throttle increase is needed. Because of the way enrichment is achieved, no additional \"choke\" butterfly is required for cold starting with LPG. Some evaporators have an electric choke valve, energising this valve, before starting the engine, will spray some LPG vapour into the carburetor to help cold start.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "49209",
"title": "Carburetor",
"section": "Section::::Fuel supply.:Float chamber.\n",
"start_paragraph_id": 72,
"start_character": 0,
"end_paragraph_id": 72,
"end_character": 564,
"text": "The fuel stored in the chamber (bowl) can be a problem in hot climates. If the engine is shut off while hot, the temperature of the fuel will increase, sometimes boiling (\"percolation\"). This can result in flooding and difficult or impossible restarts while the engine is still warm, a phenomenon known as \"heat soak\". Heat deflectors and insulating gaskets attempt to minimize this effect. The Carter Thermo-Quad carburetor has float chambers manufactured of insulating plastic (phenolic), said to keep the fuel 20 degrees Fahrenheit (11 degrees Celsius) cooler.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1310044",
"title": "Block heater",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 453,
"text": "Heaters are also available for engine oil so that warm oil can immediately circulate throughout the engine during start up. The easier starting results from warmer, less viscous engine oil and less condensation of fuel on cold metal surfaces inside the engine; thus an engine block heater reduces a vehicle's emission of unburned hydrocarbons and carbon monoxide; also heat is available more instantly for the passenger compartment and glass defogging.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5326405",
"title": "Hot-bulb engine",
"section": "Section::::Operation and working cycle.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 814,
"text": "Once the engine is running, the heat of compression and ignition maintains the hot bulb at the necessary temperature, and the blow-lamp or other heat source can be removed. Thereafter, the engine requires no external heat and requires only a supply of air, fuel oil and lubricating oil to run. However, under low power the bulb could cool off too much, and a throttle can cut down the cold fresh air supply. Also, as the engine's load is increased, so does the temperature of the bulb, causing the ignition period to advance; to counteract pre-ignition, water is dripped into the air intake. Equally, if the load on the engine is low, combustion temperatures may not be sufficient to maintain the temperature of the hot bulb. Many hot-bulb engines cannot be run off-load without auxiliary heating for this reason.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "708520",
"title": "Internal combustion engine cooling",
"section": "Section::::Overview.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 368,
"text": "Cooling is also needed because high temperatures damage engine materials and lubricants and becomes even more important in hot climates. Internal-combustion engines burn fuel hotter than the melting temperature of engine materials, and hot enough to set fire to lubricants. Engine cooling removes energy fast enough to keep temperatures low so the engine can survive.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7126735",
"title": "Heater core",
"section": "Section::::Control.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 541,
"text": "Once the engine has warmed up, the coolant is kept at a more or less constant temperature by the thermostat. The temperature of the air entering the vehicle's interior can be controlled by using a valve limiting the amount of coolant that goes through the heater core. Another method is blocking off the heater core with a door, directing part (or all) of the incoming air around the heater core completely, so it does not get heated (or re-heated if the air conditioning compressor is active). Some cars use a combination of these systems.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21600868",
"title": "Radiator (engine cooling)",
"section": "Section::::Automobiles and motorcycles.:Temperature control.:Waterflow control.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 1410,
"text": "Once at optimum temperature, the thermostat controls the flow of engine coolant to the radiator so that the engine continues to operate at optimum temperature. Under peak load conditions, such as driving slowly up a steep hill whilst heavily laden on a hot day, the thermostat will be approaching fully open because the engine will be producing near to maximum power while the velocity of air flow across the radiator is low. (The velocity of air flow across the radiator has a major effect on its ability to dissipate heat.) Conversely, when cruising fast downhill on a motorway on a cold night on a light throttle, the thermostat will be nearly closed because the engine is producing little power, and the radiator is able to dissipate much more heat than the engine is producing. Allowing too much flow of coolant to the radiator would result in the engine being over cooled and operating at lower than optimum temperature, resulting in decreased fuel efficiency and increased exhaust emissions. Furthermore, engine durability, reliability, and longevity are sometimes compromised, if any components (such as the crankshaft bearings) are engineered to take thermal expansion into account to fit together with the correct clearances. Another side effect of over-cooling is reduced performance of the cabin heater, though in typical cases it still blows air at a considerably higher temperature than ambient.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3d3ywm
|
how can i avoid mosquito bites?
|
[
{
"answer": "Source: I live in the south, and I have visited north east Arkansas which has more mosquitos than air at times. In Texas we have mosquitos that are so large they look like baby wasps. You can clearly see they have black and grey stripes on them. When you smack them on your arm it's a bloody mess. I have a friend that contracted West Nile while at work and I hate itching so I like to avoid them. In order of most to least effective or important:\n\n* Stay indoors around sunset. You will learn that there is a peak time where it's time to seek shelter, just as it starts to cool off. No amount of Deet is going to ward them all off during that time.\n\n* Keep some Deep Woods Off nearby at all times. You don't necessarily have to wear it all the time, but when you get your first bite go ahead and hose down with it. Don't spray it in your face, but be sure and get your back, and the backs of your legs and arms. They love behind the knees. Be careful of overly powerful deet products. I once was handed a DEET wipe that claimed \"maximum strength\". Putting it on my skin made me sick almost instantly and I was not in a place where I could wash it off or remove it effectively.\n\n* Never allow anything to collect water around your living space. Outdoor standing water = mosquito breeding ground. They look like tadpoles in the water but in reality they are satan's spawn. turn over all buckets, or other things collecting water.\n\n* Citronella products can be effective to a point but you have to be close to them. ThermaCELL appliances seem to kinda work but they are expensive and cumbersome. When I go camping I have four cheap hurricane lamps ($5.00 each at Walmart) that I power with citronella lamp oil (~$10.00 for 64oz Walmart). The lamps burn very efficiently if you keep the wik short and they do an ok job of creating a bug free zone. I also just like the look of old timey lamps.\n\n* Wind is your friend. 
I don't have two ceiling fans on my back porch just for cooling. They are mostly to keep the bugs away. If they can't fly, they can't get you. If you can get in front of a good fan you are not going to be getting bit by mosquitos (as much). \n\nHurricane Lamp:\n_URL_0_\n\nCitronella Lamp Oil:\n_URL_1_",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "368736",
"title": "Misconceptions about HIV/AIDS",
"section": "Section::::HIV infection.:HIV is transmitted by mosquitoes.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 624,
"text": "When mosquitoes bite a person, they do not inject the blood of a previous victim into the person they bite next. Mosquitoes do, however, inject their saliva into their victims, which may carry diseases such as dengue fever, malaria, yellow fever, or West Nile virus and can infect a bitten person with these diseases. HIV is not transmitted in this manner. On the other hand, a mosquito may have HIV-infected blood in its gut, and if swatted on the skin of a human who then scratches it, transmission is hypothetically possible, though this risk is extremely small, and no cases have yet been identified through this route.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21054623",
"title": "Mosquito-borne disease",
"section": "Section::::Transmission.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 963,
"text": "A mosquito's period of feeding is often undetected; the bite only becomes apparent because of the immune reaction it provokes. When a mosquito bites a human, it injects saliva and anti-coagulants. For any given individual, with the initial bite there is no reaction but with subsequent bites the body's immune system develops antibodies and a bite becomes inflamed and itchy within 24 hours. This is the usual reaction in young children. With more bites, the sensitivity of the human immune system increases, and an itchy red hive appears in minutes where the immune response has broken capillary blood vessels and fluid has collected under the skin. This type of reaction is common in older children and adults. Some adults can become desensitized to mosquitoes and have little or no reaction to their bites, while others can become hyper-sensitive with bites causing blistering, bruising, and large inflammatory reactions, a response known as skeeter syndrome.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "59441924",
"title": "Mosquito bite allergy",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 810,
"text": "Mosquito bite allergies (MBA), also termed hypersensitivity to mosquito bites (HMB), are excessive reactions of varying severity to mosquito bites. MBA are not caused by any toxin or pathogen in the saliva injected by a female mosquito at the time it takes its blood-meal. (Male mosquitos do not take blood-meals.) Rather, they are allergic hypersensitivity reactions caused by the non-toxic allergenic proteins contained in the mosquito's saliva. By general agreement, mosquito bite allergies do not include the ordinary wheal and flare responses to these bites although these reactions are also allergic in nature. Ordinary mosquito bite allergies are nonetheless detailed here because they are the best understood reactions to mosquito bites and provide a basis for describing what is understood about MBA.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "37789",
"title": "Mosquito",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 463,
"text": "The saliva of the mosquito transmitted to the host with the bite can cause itching and a rash. In addition, many species of mosquitoes inject or ingest (or both) disease-causing organisms with the bite and are thus vectors of diseases such as malaria, yellow fever, Chikungunya, West Nile virus, dengue fever, filariasis, Zika virus and other arboviruses. By transmitting diseases, mosquitoes kill more people than any other animal taxon: over 700,000 each year.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "37789",
"title": "Mosquito",
"section": "Section::::Bites.\n",
"start_paragraph_id": 160,
"start_character": 0,
"end_paragraph_id": 160,
"end_character": 1164,
"text": "Mosquito bites lead to a variety of mild, serious, and, rarely, life-threatening allergic reactions. These include ordinary wheal and flare reactions and mosquito bite allergies (MBA). The MBA, also termed hypersensitivity to mosquito bites (HMB), are excessive reactions to mosquito bites that are not caused by any toxin or pathogen in the saliva injected by a mosquito at the time it takes its blood-meal. Rather, they are allergic hypersensitivity reactions caused by the non-toxic allergenic proteins contained in the mosquito's saliva. Studies have shown or suggest that numerous species of mosquitoes can trigger ordinary reactions as well as MBA. These include \"Aedes aegypti, Aedes vexans, Aedes albopictus, Anopheles sinensis, Culex pipiens\", \"Aedes communis, Anopheles stephensi\", \"Culex quinquefasciatus, Ochlerotatus triseriatus\", and \"Culex tritaeniorhynchus\". Furthermore, there is considerable cross-reactivity between the salivary proteins of mosquitoes in the same family and, to a lesser extent, different families. It is therefore assumed that these allergic responses may be caused by virtually any mosquito species (or other biting insect). \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "59441924",
"title": "Mosquito bite allergy",
"section": "Section::::Systemic allergic reactions.:Presentation.\n",
"start_paragraph_id": 37,
"start_character": 0,
"end_paragraph_id": 37,
"end_character": 1180,
"text": "Individuals with systemic mosquito bite allergies respond to mosquito bites with intense local skin reactions (e.g. blisters, ulcers, necrosis, scarring) and concurrent or subsequent systemic symptoms (high-grade fever and/or malaise; less commonly, muscle cramps, bloody diarrhea, bloody urine, proteinuria, and/or wheezing; or very rarely, symptoms of overt anaphylaxis such as hives, angioedema (i.e. skin swelling in non-mosquito bite areas), shortness of breath, rapid heart rate, and low blood pressure]]. There are very rare reports of death due to anaphylaxis following mosquito bites. Individual with an increased risk of developing severe mosquito bite reactions include those experiencing a particularly large number of mosquito bites, those with no previous exposure to the species of mosquito causing the bites, and those with a not fully developed immune system such as infants and young children. Individuals with certain Epstein-Barr virus-associated lymphoproliferative, non-Epstein-Barr virus malignant lymphoid, or other predisposing disease also have an increased rate of systemic mosquito bite reactions but are considered in a separate category (see below).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21054623",
"title": "Mosquito-borne disease",
"section": "Section::::Prevention.:Personal protection methods.\n",
"start_paragraph_id": 50,
"start_character": 0,
"end_paragraph_id": 50,
"end_character": 744,
"text": "There are other methods that an individual can use to protect themselves from mosquito bites. Limiting exposure to mosquitoes from dusk to dawn when the majority of mosquitoes are active and wearing long sleeves and long pants during the period mosquitoes are most active. Placing screens on windows and doors is a simple and effective means of reducing the number of mosquitoes indoors. Anticipating mosquito contact and using a topical mosquito repellant with DEET or icaridin is also recommended. Draining or covering water receptacles, both indoor and outdoors, is also a simple but effective prevention method. Removing debris and tires, cleaning drains, and cleaning gutters help larval control and reduce the number of adult mosquitoes.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3mpskz
|
The German Schlieffen Plan was in development for years prior to the breakout of WWI... how well was the plan kept secret? Was the invasion of Belgium genuinely a surprise to the other European powers?
|
[
{
"answer": "Debates over whether or not the idea of a \"Schlieffen plan\" actually existing aside, the Plan itself was kept fairly secret, with wargaming of scenarios based on it being few in number, and knowledge of the actual plan being restricted to high ranking war ministry and General Staff members.\n\nThat being said, to an extent the French and the Russians had guessed German intentions before. The French formulated Plan XVII with the expectation that the Germans would invade through southern Belgium (ie south of the Meuse) and Luxembourg, avoiding the bulk of the country while also avoiding most of the French fortress line. Hence Plan XVII envisioned placing two French armies inside Alsace-Lorraine via offensives to threaten the German advance from the south, while three armies to the north parried and reversed the main German attack. Likewise, the Russians promised to mobilize '800 000 men' to be sent against presumably weak German opposition, in support of the French, while two thirds of Russia's mobilized forces would move against Austria-Hungary. \n\nWhen the German invasion actually came, it was certainly a surprise for France, Britain and Belgium. The French did not expect an invasion of Belgium on such a wide front, and with all of Germany's reserve divisions committed. The British were shocked considering that the invasion encompassed the whole country; had the invasion taken place as the French believed it would, the chances of British involvement would have been greatly reduced. Few were also prepared for the ferocity of the German attack: in spite of Belgian civilians having been told to avoid altercations and largely heeding this advice from their government, the invading Germans lashed out at 'francs-tireurs' real or largely imagined, and c. 5600 Belgian and c. 900 French civilians were murdered, and tens of thousands of homes destroyed. Dinant and Lueven (including it's university library) were almost completely raised. 
\n\n* *War Planning in 1914*, Holger Herwig and Richard Hamilton\n* *Helmuth von Moltke and the Origins of the First World War*, Annika Mombauer\n* *Catastrophe*, Max Hastings\n* *Belgian Atrocities 1914: A history of denial*, John Horne and Alan Kramer\n* *The War that Ended Peace*, Margaret MacMillan",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "42522359",
"title": "Belgium in \"the long nineteenth century\"",
"section": "Section::::Periods.:Reign of Albert I (to 1914).:Prelude to World War I.\n",
"start_paragraph_id": 77,
"start_character": 0,
"end_paragraph_id": 77,
"end_character": 947,
"text": "From as early as 1904, Alfred von Schlieffen of the German General Staff began to draw up a military strategy, known as the Schlieffen Plan, which could be put into action if Germany found itself involved in a two-front war against France and Russia. The core of the plan was a rapid attack on France on the outbreak of war, forcing a quick victory in the west before the Russians had time to fully mobilize their forces. The Schlieffen Plan took advantage of the French military's concentration and fortifications along the Franco-German border by prescribing an invasion of neutral Belgium and Luxembourg. According to the plan, the German army would rapidly overwhelm the Belgian military and then move quickly through the country and then towards Paris. The general staff believed that none of the signatories would be willing to honor their commitments from the 1839 Treaty of London, which a German diplomat dismissed as a \"scrap of paper\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23180963",
"title": "Battle of Belgium",
"section": "Section::::Pre-battle plans.:Belgian place in Allied strategy.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 584,
"text": "The weakness of the plan was that, politically at least, it abandoned most of eastern Belgium to the Germans. Militarily it would put the Allied rear at right angles to the French frontier defences; while for the British, their communications located at the Bay of Biscay ports, would be parallel to their front. Despite the risk of committing forces to central Belgium and an advance to the Scheldt or Dyle lines, which would be vulnerable to an outflanking move, Maurice Gamelin, the French commander, approved the plan and it remained the Allied strategy upon the outbreak of war.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1461804",
"title": "Timeline of the Battle of France",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 526,
"text": "Nazi Germany planned to use forces that would distract the Allies that would enter Belgium which would make French and British troops leave their current position. Germany would also use a second force to navigate the Ardennes Forest and move around the Maginot Line. Germany had a very simple and strategic plan take the Netherlands and Luxembourg before invading France and Belgium. The plan focused on eliminating any resistance that remained, capturing Paris, crossing the English Channel and then invading Great Britain.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13289",
"title": "History of the Netherlands",
"section": "Section::::1900 to 1940.:Neutrality during the First World War.\n",
"start_paragraph_id": 300,
"start_character": 0,
"end_paragraph_id": 300,
"end_character": 1132,
"text": "The German war plan (the Schlieffen Plan) of 1905 was modified in 1908 to invade Belgium on the way to Paris but not the Netherlands. It supplied many essential raw materials to Germany such as rubber, tin, quinine, oil and food. The British used its blockade to limit supplies that the Dutch could pass on. There were other factors that made it expedient for both the Allies and the Central Powers for the Netherlands to remain neutral. The Netherlands controlled the mouths of the Scheldt, the Rhine and the Meuse Rivers. Germany had an interest in the Rhine since it ran through the industrial areas of the Ruhr and connected it with the Dutch port of Rotterdam. Britain had an interest in the Scheldt River and the Meuse flowed from France. All countries had an interest in keeping the others out of the Netherlands so that no one's interests could be taken away or be changed. If one country were to have invaded the Netherlands, another would certainly have counterattacked to defend their own interest in the rivers. It was too big a risk for any of the belligerent nations and none wanted to risk fighting on another front.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13854",
"title": "History of France",
"section": "Section::::1914-1945.:World War I.\n",
"start_paragraph_id": 388,
"start_character": 0,
"end_paragraph_id": 388,
"end_character": 440,
"text": "Germany's \"Schlieffen Plan\" was to quickly defeat the French. They captured Brussels, Belgium by 20 August and soon had captured a large portion of northern France. The original plan was to continue southwest and attack Paris from the west. By early September they were within of Paris, and the French government had relocated to Bordeaux. The Allies finally stopped the advance northeast of Paris at the Marne River (5–12 September 1914).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "51751618",
"title": "Historiography of the Battle of France",
"section": "Section::::Recent analyses.:Tooze (2006).\n",
"start_paragraph_id": 35,
"start_character": 0,
"end_paragraph_id": 35,
"end_character": 1079,
"text": "Tooze wrote that the plan failed to offer the possibility of a decisive victory in the west desired by Hitler but lasted until the Mechelen Incident of February 1940. The incident was the catalyst for an alternative plan for an encircling move through the Ardennes proposed by Manstein but it came too late to change the armaments programme. The swift victory in France was not the consequence of a thoughtful strategic synthesis but a lucky gamble, an improvisation to resolve the strategic problems that the generals and Hitler had failed to resolve by February 1940. The Allies and the Germans were equally reluctant to reveal the casual way that the Germans gained their biggest victory. The blitzkrieg myth suited the Allies, because it did not refer to their military incompetence; it was expedient to exaggerate the excellence of German equipment. The Germans avoided an analysis based on technical determinism, since this contradicted Nazi ideology and \"OKW\" attributed the victory to the \"revolutionary dynamic of the Third Reich and its National Socialist leadership\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "161133",
"title": "Gerd von Rundstedt",
"section": "Section::::World War II.:Invasion of France and the Low Countries.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 1062,
"text": "Hitler's original plan was to attack in late November, before the French and British had time fully to deploy along their front. The plan, devised by Hitler, was essentially for a re-run of the invasion of 1914, with the main assault to come in the north, through Belgium and the Netherlands, then wheeling south to capture Paris, leaving the French Army anchored on the Maginot Line. Most senior officers were opposed to both the timing and the plan. Rundstedt, Manstein, Reichenau (commanding 6th Army in Army Group B), List and Brauchitsch remonstrated with Hitler in a series of meetings in October and November. They were opposed to an offensive so close to the onset of winter, and they were opposed to launching the main attack through Belgium, where the many rivers and canals would hamper armoured operations. Manstein in particular, supported by Rundstedt, argued for an armoured assault by Army Group A, across the Ardennes to the sea, cutting the British and French off in Belgium. This \"Manstein Plan\" was the genesis of the blitzkrieg of May 1940.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1a0fgj
|
Was WWI a true good guy vs bad guy war?
|
[
{
"answer": "Okay, no. No, no, no, no. No war is ever a \"good guy vs. bad guy\" sort of thing from a historical perspective, though individual nations by and large choose to cast the war they're fighting in that sense. You can bet World War II was seen by citizens of the Axis powers as a \"good vs. evil\" affair, just as certainly as you can bet that they didn't see themselves on the \"evil\" side.\n\nThe truth of the matter is that history really is, to raise the old cliche, \"written by the victors\". Or at least by those left alive to write it. Retrospect allows us to see the horrors of the Holocaust or the unspeakably brutal after-effects of the Eastern or Chinese or Philippine Fronts, but it's probably safe to say that the majority of German conscripts fighting at, say, Kursk, were not there so that their leaders could continue to exterminate millions of innocent civilians in frighteningly efficient fashion.\n\n**tl;dr: No war is ever \"good vs. evil\"; individuals and nations just choose to cast it that way.**",
"provenance": null
},
{
"answer": "The truth is that WW1 wasn't fought over anything in particular. Perhaps the only belligerents that had any real reason to go to war were the Serbians who wanted to remain free of direct Austro-Hungarian control and the Austro-Hungarians who wanted to control the Serbians (in a manner of speaking). The other parties had a variety of reasons which I'll cover below.\n\nWar had been coming for a long time before the actual event. The arms race between Great Britain and Germany resulted in the development of the Dreadnought and increased tensions between the two nations. The intricate treaties and alliances saw Germany surrounded by Triple Entente powers. Obviously this left Germany feeling threatened and coupled with the existing German military tradition, preparations for war were inevitable. \n\nFrance had territorial ambitions. It had been humiliated by Germany in the Franco-Prussian War and was forced to cede the territories of Alsace-Lorraine to the Prussians. Russia is probably the more interesting case. It had no real interest in joining the war and only did so because of Serbia's plea that Russia aid its Slavic brethren. Following German unification, Russia feared German military intentions and so agreed to an alliance with France, encircling Germany. \n\nItaly really had no reason whatsoever. It had initially been aligned with Germany and the Austro-Hungarians but withdrew when hostilities broke out. It eventually joined the Entente and spent most of the war getting its butt handed to it. The British were part of the Entente but only entered the war after Germany violated Belgian neutrality. 
I don't recall reading anything about it, but I suspect the British were also suspicious of German colonial ambitions and with the High Seas Fleet, Germany had the naval power to threaten British colonial possessions abroad.\n\nThe Ottoman Empire only entered the conflict after initial hostilities had broken out and only did so because war would have provided a nice distraction to its domestic issues of which it had many. The Empire also didn't really care who it allied with, it approached the Russians but refused Russian demands that would have effectively placed parts of the Ottoman Empire under Russian control, the British also refused. Germany on the other hand needed an ally in the Middle East. The Ottomans could threaten British interests which would tie up British forces (which it did) and also had the potential to threaten Russia (which it did to some extent).\n\ntl;dr: the reasons that the different belligerents entered the war were varied and all served national interests. None can be labeled good or evil.\n\nEdit: a word",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "4465983",
"title": "Goodbyeee",
"section": "Section::::Themes.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 766,
"text": "In his book \"The Great War\", Ian F. W. Beckett also cited Sheffield: the latter commented that \"Blackadder Goes Forth\" was successful because \"the characters and situations needed no explanation, so familiar was the audience with the received version of the war\". Beckett noted the popularity of the episode's final scene, and compared it to a similarly popular scene from \"Dad's Army\". He said that this comparison demonstrates the observation made by historian A. J. P. Taylor that the Second World War has been regarded as a \"good war\" in comparison to the first; he opined that \"television producers...have much to answer for in the perpetuation of the image of the Great War as one in which a generation of 'lions' were needlessly sacrificed by the 'donkeys'\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20198488",
"title": "Call of Duty: World at War",
"section": "Section::::Reception.:Critical response.\n",
"start_paragraph_id": 62,
"start_character": 0,
"end_paragraph_id": 62,
"end_character": 587,
"text": "GameSpot praised the darker, grittier portrayal of the World War II settings. 1UP.com noted the significantly increased graphic violence and gore (even over the M-rated \"Call of Duty 4\") as a positive improvement in realism saying, \"While enemies died en masse in previous installments, dismemberment and gore were essentially nonexistent. That's no longer the case — here, legs are severed, men cry out in agony as they reach for lost body parts, and gouts of blood fly as bullets pierce flesh.\" and that \"\"World at War\" portrays the horror of WWII more accurately than ever before...\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2740181",
"title": "I'll Remember April (1999 film)",
"section": "Section::::Plot.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 938,
"text": "Although the horrors of World War II are far removed from the Pacific Coast community where adolescent Duke Cooper (Trevor Morgan) and his three best chums play soldier, experiment with swearing, and earnestly patrol the beach for Japanese submarines, the realities of the war are about to come crashing down around them. Not when a Japanese soldier, stranded and wounded when his sub quickly dived, washes ashore; his capture by the foursome merely allows for more playtime and thoughts of becoming heroes. It's coming because Duke's older brother is on some island awaiting combat and the black sedans with military tags have already begun rolling through town to deliver their grim announcements. And Duke's Japanese American pal Willie Tanaka (Yuki Tokuhiro), all three feet and 55 pounds of him, has suddenly become a threat to national security, so he, his mother, and grandfather are soon to be shipped away to an internment camp.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "46188299",
"title": "History of propaganda",
"section": "Section::::Propaganda films.:Interwar period.\n",
"start_paragraph_id": 103,
"start_character": 0,
"end_paragraph_id": 103,
"end_character": 270,
"text": "Between the Great Wars American films celebrated the bravery of the American soldiers while depicting war as an existential nightmare. Films such as \"The Big Parade\" depicted the horrors of trench warfare, the brutal destruction of villages, and the lack of provisions.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "65975",
"title": "Other Losses",
"section": "Section::::External links.\n",
"start_paragraph_id": 122,
"start_character": 0,
"end_paragraph_id": 122,
"end_character": 214,
"text": "BULLET::::- Richard Drayton: \"\"An ethical blank cheque\"\" British and U.S. mythology about the second world war ignores our own crimes and legitimises Anglo-American war making, \"The Guardian\", Tuesday May 10, 2005\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "57986126",
"title": "Good Times Bad Times (film)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 375,
"text": "Good Times Bad Times is a Canadian documentary film, directed by Donald Shebib and released in 1969. A depiction of military veterans, the film juxtaposed original BBC documentary footage from World War I and World War II with contemporary footage of veterans recalling their experiences at Royal Canadian Legion halls, Remembrance Day commemorations and veterans hospitals.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19982766",
"title": "The Best War Ever",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 550,
"text": "The Best War Ever: America and World War II is a revisionist history book written by Dr. Michael C. C. Adams (professor of history at Northern Kentucky University). The book was and first published by the Johns Hopkins University press in 1993 as part of its \"American Moment\" series, edited by University of Wisconsin–Madison history professor Stanley I. Kutler. In a 2004 survey of American college history instructors, the book was voted #2 in the \"most likely to plagiarize\" category, finishing just behind \"Amusing the Million\" by John Kasson. \n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
5xjmhj
|
Would it be right to say that semiconductors are produced by doping?
|
[
{
"answer": "No, not all semiconductors are doped. There are also [intrinsic semiconductors](_URL_0_), where pure materials act as semiconductors. For example, a pure chunk of silicon is an important example of such an intrinsic semiconductor. ",
"provenance": null
},
{
"answer": "Doping is a necessary process for many *applications* of semiconductors, but the materials are semiconductors without doping as well.",
"provenance": null
},
{
"answer": "What doping does in those systems is create conditions under which a junction between two semiconductors can be made to conduct electricity. In other words, it can be switched between a zero state, or a one state. But without the doping, the materials used, like silicon, are semiconductors on their own. Being a semiconductor means simply that the electrons living inside those materials can never have certain values of their energy. Imagine they can have any value between 0 and 1 (in some unit of energy), or any value between 2 and 3, but they can't have values between 1 and 2. That part of the energy spectrum is known as the energy gap. Basically, if you put electrons in all states from 0 to 1, you cannot put another one in a state of energy 1.0001, for example. The next available energy for that extra electron is 2, so you have to provide one entire unit of energy to bring that electron into your material.\n\nThat's precisely what happens when you transport electricity: you connect metallic cables to your sample, and you apply a voltage to bring electrons from your battery, through one cable, into the sample, then out through the other cable and back to the battery. But if you need to provide 1 unit of energy, and your battery can only provide, say, half a unit, then you cannot bring electrons into the sample (there are no states for those energies!), and current can't pass through it. That's what being a semiconductor is all about.",
"provenance": null
},
{
"answer": "It doesn't necessarily have to be doped. Doping allows the creation of diodes, certain types of transistors and devices. A semiconductor is defined by having a specific bandgap. Insulators are materials that have a large separation between the valence and conduction band so it would be difficult for electrons to get excited and move into the conduction band. Conductors usually have lots of electrons in the conduction band and the valence band can overlap the conduction band meaning it'll easily be conductive. For semiconductors it's somewhere in between and a few electrons can reach the conduction band. \nThe most basic semiconductor is silicon based and you can manipulate its behavior by adding dopants such as in the diode or silicon mosfet. \nOne example of a transistor that doesn't have to use doped material is a Gallium Nitride High electron mobility transistor (HEMT). Essentially you have a small layer of AlGaN on top of GaN and due to the lattice mismatch it causes a stress on the interface which causes a piezoelectric polarization thereby creating a 2 dimensional electron gas channel like with regular silicon mosfets. ",
"provenance": null
},
{
"answer": "Semiconductors are simply materials which *can* be doped to make Electronic Devices based off of those materials. Usually doping (in Silicon based devices) involves using Silicon which has been laced with impurities to produce the pn junctions you speak of. Silicon by itself is an intrinsic semiconductor and it's not capable of being used as is in devices. \n\nNowadays, for MOSFETs (Metal Oxide Semiconductor Field Effect Transistors), we use Compound Semiconductors which are made out of different materials altogether. For example Gallium Arsenide (GaAs) is used because it can give us better conduction characteristics and produce faster devices. Another one, as has been pointed out below, is AlGaN (Aluminum Gallium Nitride) which is used in cases where we need extremely high mobility (speed of movement) of electrons.\n\nKeep in mind that MOSFETs form the basis of Very Large Scale Integrated circuits which are the basis of chip fabrication technology today.\n\n\n\nSo what does it mean to be an intrinsic semiconductor?\n\nYou can take a pure silicon wafer and it's still considered a semiconductor. Mostly this regular silicon can't be used as it is because we need a charge imbalance to induce current flow. This is done by doping two pieces of silicon and joining them to form the pn junction you talked about. I hope the internal process of doping is something you've understood well already.\n\n\nA device with only 1 pn junction is a diode. We can alter the doping concentrations of the p and n type substrates (surfaces) to produce different devices. For example Zener diodes use imbalances in doping concentrations to produce the reverse breakdown characteristics they use to control voltage.\n\n\nWe can keep adding these junctions and tweaking their doping to form more intricate and useful devices. A common purpose device with 2 pn junctions is a Bipolar Junction Transistor (BJT). An SCR (silicon controlled rectifier) is a device which has 3 pn junctions. 
These are old devices compared to the technology we have today. I'm currently doing my undergrad in EE so these are the devices I'm mostly familiar with.",
"provenance": null
},
{
"answer": "Most of the time you actually say semiconductors are \"grown\". It's weird, I know - when I started working in a lab a few years ago and started describing the process to people I knew, everybody was confused because the word \"grow\" seems to imply something alive, but it's actually also the technical term for creating semiconductor crystals.\n\nIt's because semiconductors are created by very slowly adding material to a crystal so it appears to slowly increase in size, or grow. In the case of silicon crystals (the most well-known semiconductor in everything from desktop computers to solar cells) there's literally a huge (like 30cm diameter) cylinder of solid silicon crystal slowly being pulled from a vat of liquid silicon and cooled, causing the liquid to freeze to the crystal. It's like making salt crystallize out of water but really really precise.\n\n\"Doping\" is basically the process of mixing in some trace impurities during growth so they get mixed into the semiconductor and modify the material's electronic properties. You can even dope a semiconductor after growth by shooting impurity atoms at it at such a high speed they embed themselves in the surface of the crystal.\n\nIf you don't dope the semiconductor (if it's pure), it's known as \"intrinsic\", because all the mobile charge carriers (electrons AND holes) are thermally excited from the material itself, rather than in doped \"extrinsic\" semiconductors, where the mobile charge carriers (either electrons OR holes) are excited from the dopant impurity atoms which are kind of \"external\" - not part of the base material.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "9512766",
"title": "Extrinsic semiconductor",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 766,
"text": "Doping is the key to the extraordinarily wide range of electrical behavior that semiconductors can exhibit, and extrinsic semiconductors are used to make semiconductor electronic devices such as diodes, transistors, integrated circuits, semiconductor lasers, LEDs, and photovoltaic cells. Sophisticated semiconductor fabrication processes like photolithography can implant different dopant elements in different regions of the same semiconductor crystal wafer, creating semiconductor devices on the wafer's surface. For example a common type of transistor, the n-p-n bipolar transistor, consists of an extrinsic semiconductor crystal with two regions of n-type semiconductor, separated by a region of p-type semiconductor, with metal contacts attached to each part.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "775808",
"title": "Shallow donor",
"section": "Section::::Overview.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 429,
"text": "Introducing impurities in a semiconductor which are used to set free additional electrons in its conduction band is called doping with donors. In a group IV semiconductor like silicon these are most often group V elements like arsenic or antimony. However, these impurities introduce new energy levels in the band gap affecting the band structure which may alter the electronic properties of the semiconductor to a great extent.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "40344",
"title": "Semiconductor device",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 1189,
"text": "Semiconductor materials are useful because their behavior can be easily manipulated by the deliberate addition of impurities, known as doping. Semiconductor conductivity can be controlled by the introduction of an electric or magnetic field, by exposure to light or heat, or by the mechanical deformation of a doped monocrystalline silicon grid; thus, semiconductors can make excellent sensors. Current conduction in a semiconductor occurs due to mobile or \"free\" electrons and electron holes, collectively known as charge carriers. Doping a semiconductor with a small proportion of an atomic impurity, such as phosphorus or boron, greatly increases the number of free electrons or holes within the semiconductor. When a doped semiconductor contains excess holes, it is called a p-type semiconductor (\"p\" for positive electric charge); when it contains excess free electrons, it is called an n-type semiconductor (\"n\" for negative electric charge). A majority of mobile charge carriers have negative charge. The manufacture of semiconductors controls precisely the location and concentration of p- and n-type dopants. The connection of n-type and p-type semiconductors form p–n junctions.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9512766",
"title": "Extrinsic semiconductor",
"section": "Section::::Conduction in semiconductors.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 659,
"text": "Unlike in metals, the atoms that make up the bulk semiconductor crystal do not provide the electrons which are responsible for conduction. In semiconductors, electrical conduction is due to the mobile charge carriers, electrons or holes which are provided by impurities or dopant atoms in the crystal. In an extrinsic semiconductor, the concentration of doping atoms in the crystal largely determines the density of charge carriers, which determines its electrical conductivity, as well as a great many other electrical properties. This is the key to semiconductors' versatility; their conductivity can be manipulated over many orders of magnitude by doping.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "53954",
"title": "Work function",
"section": "Section::::Physical factors that determine the work function.:Doping and electric field effect (semiconductors).\n",
"start_paragraph_id": 69,
"start_character": 0,
"end_paragraph_id": 69,
"end_character": 399,
"text": "From this one might expect that by doping the bulk of the semiconductor, the work function can be tuned. In reality, however, the energies of the bands near the surface are often pinned to the Fermi level, due to the influence of surface states. If there is a large density of surface states, then the work function of the semiconductor will show a very weak dependence on doping or electric field.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "27709",
"title": "Semiconductor",
"section": "Section::::Physics of semiconductors.:Doping.\n",
"start_paragraph_id": 59,
"start_character": 0,
"end_paragraph_id": 59,
"end_character": 490,
"text": "The conductivity of semiconductors may easily be modified by introducing impurities into their crystal lattice. The process of adding controlled impurities to a semiconductor is known as \"doping\". The amount of impurity, or dopant, added to an \"intrinsic\" (pure) semiconductor varies its level of conductivity. Doped semiconductors are referred to as \"extrinsic\". By adding impurity to the pure semiconductors, the electrical conductivity may be varied by factors of thousands or millions.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "860861",
"title": "Doping (semiconductor)",
"section": "Section::::Dopant elements.:Group IV semiconductors.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 603,
"text": "By doping pure silicon with Group V elements such as phosphorus, extra valence electrons are added that become unbonded from individual atoms and allow the compound to be an electrically conductive n-type semiconductor. Doping with Group III elements, which are missing the fourth valence electron, creates \"broken bonds\" (holes) in the silicon lattice that are free to move. The result is an electrically conductive p-type semiconductor. In this context, a Group V element is said to behave as an electron donor, and a group III element as an acceptor. This is a key concept in the physics of a diode.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
4isofh
|
what are the "loudness wars", why are they happening, and why should anyone care that music is getting louder?
|
[
{
"answer": "Music is getting compressed so it sounds louder. Before this you'd set your volume to your preferred level and would hear everything from quiet notes to very loud and distinct drum hits. Now the quiet notes are louder, the mid range is louder, and consequently the formerly loud and distinct drum hits are just barely louder than everything else.\n\n[This](_URL_0_) video demonstrates it better than any written description really can.",
"provenance": null
},
{
"answer": "There isn't as much of an issue with it today, so we might be able to say that the war is over, or at the very least, a truce.\n\nLouder tends to sound better. Why? Not really sure, but it is probably just something to do with biology.\n\nThis fact is very important in mixing music because a big issue that is run into constantly is that you will tend to find things to be better the louder they get. It is very easy to trick yourself into thinking that you skillfully EQ'd a track, but in reality, all you did was increase the volume.\n\nNow, the big factor in the loudness wars is compression. The sort of compression we are talking about in essence will make softer sounds louder. If you compress a signal enough, you can make a whisper the same loudness as a yell. This is what is called reducing the dynamic range, as in the range of loudness is reduced. It may have been previously from -50 to 0 dB, but after some heavy compression it is now -20 to 0 dB.\n\nYou can see this in the photo below. Where there used to be peaks and valleys, it is just a straight line.\n\n_URL_0_\n\nSo, we can certainly come up with many negatives as to why this is bad. An obvious one is the philosophical question of \"compared to what?\", in that if everything in a song is the same volume, isn't the song neither loud nor soft?\n\nThat is honestly my biggest issue with poorly compressed music. There is never a point where it just hits you. What should be an epic buildup or a sudden spike reduces itself to be less of a surprise. The soft parts are never soft, the loud parts are never loud.\n\nBut, more importantly, there is a very big reason as to why music needs more of this sort of compression now compared to before, and that reason is that we listen to music everywhere. \n\nPortable devices are relatively new, and the idea of listening to music on bike rides, on the train, when shopping, when working out, when doing work, riding the lawn mower, and so on was soon to follow. 
\n\nBut a big problem arose, and that was that while you were at the gym listening to your favorite song and a soft part came on, you couldn't quite hear it because of that noisy elderly couple chatting in the corner, so you turned it up to hear it... and then the loud part came on and you were frantically looking for the volume knob before you blew your eardrums.\n\nThis is a big problem because not only is it annoying to have to constantly adjust the volume, but it can actually do harm to your ears. A decent deal of compression can help this a lot by reducing the dynamic range just enough so that the soft parts are loud enough to hear over that noisy elderly couple, but soft enough to be distinguishable from the loud part. The goal is essentially to have it all audible, retain dynamics, and not have the listener have to touch the volume knob.\n\nYou might be wondering why portable music changed this. Well, it is because you used to have to listen to music in spaces where there wasn't much noise to overcome. You might pop in a record in the silence of your home. With advances in technology, we now listen to music in less suitable places to hear all the details.\n\nWith that said, there is an interesting selection process in how certain genres of music tend to be selected for their venue. Rock music tends to work for hockey stadiums because it is loud, simple, and punchy; whereas classical music in the same stadium would be reduced to a soft garble.",
"provenance": null
},
{
"answer": "People who make the music think that YOU think everything sounds better louder. And they keep trying to outdo each other to sell records. Because of this, they are sacrificing dynamics (highs and lows) for something that's consistently \"loud\". To me, it's also boring and rather tiring, especially when it's done obnoxiously. \n\nI have heard it said that the loudness wars are almost over. Since most music is streaming through YouTube and Spotify and they control the loudness... the actual loudness of the recording doesn't matter as much anymore, except if you're listening to a CD. And people think their music should probably be loud if they want to get it on the radio, but that's not true because the radio compresses and limits and EQs the music beforehand anyway. \n\nSource: The Mastering Show (podcast). ",
"provenance": null
},
{
"answer": "It has ruined every single band ever... From Metallica, to my favorite, Parkway Drive. \n\nI saw Parkway Drive perform their new album live and damn near shit myself. There is NO reason a band should sound THAT much better live. \n\nTheir new album has a dynamic range of 5. Fucking embarrassing. ",
"provenance": null
},
{
"answer": "Go listen to an old recording of money for nothing. The dynamics are great. Listen to it loud. If your sound system is bad, it's going to sound bad. If it's good it's going to sound good.\n\nWith the terrible and compressed mixes of today, and compressed in the sense that the dynamics are compressed, a bad sound system will sound a little better and a good one a lot worse than it could. Your ears will get tired faster and stuff like snare drums will sound weak and disappear into the mix.",
"provenance": null
},
{
"answer": "Loudness Wars: The war part. People discover that the audience remembers \"louder\" as \"better\". Some things get louder (like the average volume of Television Commercials).\n\nSo the producers started mixing their tracks \"hotter\" so they'd stand out in play. Most stations mixing on CDs at the time wouldn't spend much time tweaking levels for every song played.\n\nAlso then at parties those tracks would \"pop\".\n\nNow why \"louder is worse\". In both encoding and electronics performance you get lower quality output. The long explanation is skipped here but basically if you run components near their limits they kind of just don't do as well.\n\nSo anyway, If you take, say a classic Police CD and anything modern and play the CDs back-to-back the total difference is amazing.\n\nSo the same song recorded at the \"natural volumes\" will be in the meaty part of the performance and output curves for the decoders and such. (hence the other stuff about compressing signals and such.)\n\nAlso, if people stopped being dicks about the volume, then it would be easier to play music.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "3576625",
"title": "Loudness war",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 661,
"text": "The loudness war (or loudness race) refers to the trend of increasing audio levels in recorded music which reduces audio fidelity and, according to many critics, listener enjoyment. Increasing loudness was first reported as early as the 1940s, with respect to mastering practices for 7\" singles. The maximum peak level of analog recordings such as these is limited by varying specifications of electronic equipment along the chain from source to listener, including vinyl and Compact Cassette players. The issue garnered renewed attention starting in the 1990s with the introduction of digital signal processing capable of producing further loudness increases.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4358939",
"title": "Botellón",
"section": "Section::::Controversy.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 288,
"text": "BULLET::::1. Noise: Because participants gather in the streets and other public areas, the noise can disturb surrounding residents and citizens. Also, loud music contributes to the amount of noise, which is one reason why participants have begun moving to less populated areas in cities.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "35332840",
"title": "Noise in music",
"section": "Section::::Noise as excessive volume.\n",
"start_paragraph_id": 73,
"start_character": 0,
"end_paragraph_id": 73,
"end_character": 206,
"text": "Music played at excessive volumes is often considered a form of noise pollution. Governments such as that of the United Kingdom have local procedures for dealing with noise pollution, including loud music.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18080825",
"title": "Loud music",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 287,
"text": "Loud music is music that is played at a high volume, often to the point where it disturbs others and/or causes hearing damage. It may include music that is sung live with one or more voices, played with instruments, or broadcast with electronic media, such as radio, CD, or MP3 players.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33381677",
"title": "Noise Chaos War",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 275,
"text": "Noise Chaos War is a compilation album by the American thrash metal band, Hirax. It contains three EP's in remastered format; \"Barrage of Noise\" (2000), \"Chaos and Brutality\" and \"Assassins of War\" (2007). \"Bombs of Death\" is a live recorded video from a 2009 show in Japan.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "87595",
"title": "Noise music",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 875,
"text": "Contemporary noise music is often associated with extreme volume and distortion. In the domain of experimental rock, examples include Lou Reed's \"Metal Machine Music\", and Sonic Youth. Other examples of music that contain noise-based features include works by Iannis Xenakis, Karlheinz Stockhausen, Helmut Lachenmann, Cornelius Cardew, Theatre of Eternal Music, Glenn Branca, Rhys Chatham, Ryoji Ikeda, Survival Research Laboratories, Whitehouse, Ramleh, Coil, Brighter Death Now, Merzbow, Dror Feiler, Cabaret Voltaire, Psychic TV, Blackhouse, Jean Tinguely's recordings of his sound sculpture (specifically \"Bascule VII\"), the music of Hermann Nitsch's \"Orgien Mysterien Theater\", and La Monte Young's bowed gong works from the late 1960s. Genres such as industrial, industrial techno, lo-fi music, black metal, sludge metal, and glitch music employ noise-based materials.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "47801945",
"title": "Louder Than War (website)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 352,
"text": "Louder Than War is a music and culture website and magazine focusing on mainly alternative arts news, reviews, and features. The site is an editorially independent publication that was started by journalist John Robb in 2010 and is now run by a team of other journalists with a worldwide team of freelancers. There has been a print edition since 2015.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2dodx1
|
How would a Christian church service have differed in Rome in 500, 1000, and 1500 CE?
|
[
{
"answer": "I can only give some descriptions of the changes throughout history. The essence basically remains the same throughout history: the mass of the catechumens and the eucharistic liturgy proper. Initially Mass was offered only on Sundays and feast days, but as more feast days were inserted, daily masses began to be offered.\n\nChanges were made throughout history: additional prayers, or changes in the order of prayers. But the Canon of the Mass, the core eucharistic prayer in the eucharistic liturgy, has more or less been the same in Rome from Pope Gregory I (600 AD) to 1500 AD. Elsewhere there is more variation, only unified by Pope Pius V in 1570 after the Council of Trent.\n\nThe liturgy in 500 AD compared to 1000 and 1500 AD was simpler, with fewer prayers, fewer graduals, and no Credo, just to mention a few differences. It should therefore be shorter, but since medieval masses sometimes did not have sermons, the length could be about the same. A Tridentine mass (after the 1500s) could run three hours in high mass form with a sermon.\n\n",
"provenance": null
},
{
"answer": "If you're curious, the liturgy of John Chrysostom (4th century) is still performed in Eastern Orthodox churches today every Sunday morning with very little difference, except a few: Pews wouldn't have been there; catechumens would be standing in the back, in the narthex, and would leave after the catechumen prayers mid-way through; non-Christians would also leave mid-way through, before the eucharist; men and women would be on separate sides of the aisle. Besides these minor differences though, attend pretty much any Eastern Orthodox Sunday morning liturgy and it's word-for-word (translated) the same one done in the fourth century as written by John Chrysostom.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "1027957",
"title": "Early Christianity",
"section": "Section::::Ante-Nicene Period (c. 100–325).:Diversity and proto-orthodoxy.:Proto-orthodoxy.:Important Church centers.\n",
"start_paragraph_id": 103,
"start_character": 0,
"end_paragraph_id": 103,
"end_character": 209,
"text": "By the end of the early Christian period, the church within the Roman Empire had hundreds of bishops, some of them (Rome, Alexandria, Antioch, \"other provinces\") holding some form of jurisdiction over others.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12841324",
"title": "History of early Christianity",
"section": "Section::::Ante-Nicene Period (c.100-325).:Diversity and proto-orthodoxy.:Proto-orthodoxy.:Important Church centers.\n",
"start_paragraph_id": 63,
"start_character": 0,
"end_paragraph_id": 63,
"end_character": 209,
"text": "By the end of the early Christian period, the church within the Roman Empire had hundreds of bishops, some of them (Rome, Alexandria, Antioch, \"other provinces\") holding some form of jurisdiction over others.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22765761",
"title": "Ante-Nicene period",
"section": "Section::::Diversity and proto-orthodoxy.:Proto-orthodoxy.:Rome and the Papacy.\n",
"start_paragraph_id": 60,
"start_character": 0,
"end_paragraph_id": 60,
"end_character": 209,
"text": "By the end of the early Christian period, the church within the Roman Empire had hundreds of bishops, some of them (Rome, Alexandria, Antioch, \"other provinces\") holding some form of jurisdiction over others.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "715089",
"title": "Church service",
"section": "Section::::From Jewish to Christian services.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 1045,
"text": "The real evolution of the Christian service in the first century is shrouded in mystery. By the second and third centuries, such Church Fathers as Clement of Alexandria, Origen, and Tertullian wrote of formalised, regular services: the practice of Morning and Evening Prayer, and prayers at the third hour of the day (terce), the sixth hour of the day (sext), and the ninth hour of the day (none). With reference to the Jewish practices, it is surely no coincidence that these major hours of prayer correspond to the first and last hour of the conventional day, and that on Sundays (corresponding to the Sabbath in Christianity), the services are more complex and longer (involving twice as many services if one counts the Eucharist and the afternoon service). Similarly, the liturgical year from Christmas via Easter to Pentecost covers roughly five months, the other seven having no major services linked to the work of Christ. However, this is not to say that the Jewish services were copied or deliberately substituted, see Supersessionism.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1680963",
"title": "Christian worship",
"section": "Section::::Early Church Fathers.:From Jewish to Christian services.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 1045,
"text": "The real evolution of the Christian service in the first century is shrouded in mystery. By the second and third centuries, such Church Fathers as Clement of Alexandria, Origen, and Tertullian wrote of formalised, regular services: the practice of Morning and Evening Prayer, and prayers at the third hour of the day (terce), the sixth hour of the day (sext), and the ninth hour of the day (none). With reference to the Jewish practices, it is surely no coincidence that these major hours of prayer correspond to the first and last hour of the conventional day, and that on Sundays (corresponding to the Sabbath in Christianity), the services are more complex and longer (involving twice as many services if one counts the Eucharist and the afternoon service). Similarly, the liturgical year from Christmas via Easter to Pentecost covers roughly five months, the other seven having no major services linked to the work of Christ. However, this is not to say that the Jewish services were copied or deliberately substituted, see Supersessionism.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "54387838",
"title": "Caesarea, Numidia",
"section": "Section::::History.:Romanization and Christianity center.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 369,
"text": "From an early stage, the city had a small but growing population of Christians, Roman and Berber and was noted for the religious debates and tumults which featured the hostility of Roman public religion toward Christians. By the 4th century, the conversion of the population from pagan to Christian beliefs resulted in nearly all of the population being Christianised.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41063388",
"title": "The Schola Cantorum of Rome",
"section": "Section::::Early Christian Church.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 946,
"text": "Peace between the Church and the Roman Empire greatly effected the liturgical life and musical practice of Christians. In the fourth century AD, Constantine became the first Roman emperor to convert to Christianity. This conversion led to the proclamation of the Edict of Milan, which decreed religious tolerance throughout the empire. With more and more converts, it was clear that services could no longer be conducted in the informal manner of the early days. This freedom in religion allowed the church to build for large basilicas which made it possible for public worship and for Christians to finally assume a new dignity. Music, in particular had its own place in these newly constructed basilicas. As the early church of Jerusalem spread westward to Western Europe, it brought along musical elements from diverse areas. It was during this time that the Schola Cantorum made its first appearance at the service of liturgical celebration.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
28sb8s
|
Why can hospitals charge $50 a pill for Tylenol, but I can buy a whole bottle at the store for $5?
|
[
{
"answer": "My local hospital charged $25,000 (to insurance) for the birth of my daughter, a three-day affair. That's a lot, but my daughter was severely breech, and my wife needed an emergency C-section. They'd both likely be dead if we didn't go to a hospital.\n\nSo if the choice is ludicrous prices or death, what choice do you have?",
"provenance": null
},
{
"answer": "I'm sure the large amount of people that use the ER as a doctors office and don't pay have something to do with it.",
"provenance": null
},
{
"answer": "You're not just paying for the tylenol. You're paying for the nurse to give it to you, to make sure it's the right thing, for the diagnosis from the doctor to give you the tylenol, for anything any of the techs have to do for you, for the bed you're sitting in, for the air conditioning, for the TV, for the electricity, for the hospital administration and record keeping. You're paying for all of it. Going to the hospital for a tylenol is a very silly thing to do.\n\nIt's the exact same reason you get charged $1.99 for a coke at a restaurant, when you can get a whole two liter for $0.99 at the store. You're not just paying for the soda, you're paying to have it brought to you, with ice and a straw, in a social setting, with food available, by a server. Same idea.",
"provenance": null
},
{
"answer": "Because doctors are struggling and don't make enough.",
"provenance": null
},
{
"answer": "Because they can.",
"provenance": null
},
{
"answer": "Because of greed.",
"provenance": null
},
{
"answer": "Because insurers haggle over bills. Starting way over your target price and negotiating down to where you wanted to be initially is Haggling 101.",
"provenance": null
},
{
"answer": "In Denmark... it would cost you about $0 if it was given to you in the hospital and you did not have to go buy it at a pharmacy... although the treatment, the bed, the electricity, the nurse, the doctor... would add up to the huge sum of $0. And really, I don't have a problem with paying for others, because I know that if I (God forbid) need treatment someday, others will pay for me. I just hope America starts to realise this, both for healthcare and the school system.",
"provenance": null
},
{
"answer": "The rationale is that in a hospital, at least two highly trained professionals have to review whether you should be given that pill, and a third needs to double-check that it's the proper pill, given at the proper time. Given the rate of 'medication error' problems, this isn't as ridiculous as it appears on the surface.\n\nFor Tylenol, this is ridiculous most of the time (but appropriate in some instances).\n\nStill, I agree: it's bill padding.",
"provenance": null
},
{
"answer": "Insurance companies, Medicaid, Medicare, and people who visit the ER without insurance don't pay $50 a pill. Insurance pays a reasonable amount, Medicaid and Medicare pay about 70% of the pill's cost, and uninsured ER patients pay nothing. Hospitals have to make up the difference somewhere or they'd go out of business.",
"provenance": null
},
{
"answer": "They have to make up the cost from people that don't pay.",
"provenance": null
},
{
"answer": "I am a nurse and I have worked both in a hospital setting as well as a doctor's office setting. This is my understanding of why things cost so darn much when billed through a hospital or clinic.\n\nWhen you pay what seems like an exorbitant amount for something like a Tylenol, what you are really paying for are things that can't be added to your bill but that you use while in the hospital. Things like electricity, water, staff time, the cost of maintaining and upgrading the building/infrastructure, and to help offset things whose actual cost we can't bill because it is too ginormously high.\n\nYou are also paying to offset the cost of what insurance companies write off. For example, when I bill a specific code it costs $20. Insurance A's contract with my hospital reimburses us $10 and requires that we write off the remaining $10. Insurance B's contract pays $15 and we write off $5. Insurance C's contract pays $5 and we write off $15, and so on and so forth. \n\nSince we can't very well call the electric company and say sorry, we didn't get as much reimbursement as we expected this month, what happens is that every so often (usually once a year) someone/some committee from the finance department looks at what we need to charge to A) remain competitive with other area hospitals, and B) still be able to pay our bills.",
"provenance": null
},
{
"answer": "Hospital finances are complicated. Bills can vary wildly depending on who is paying the bill. For example, Medicare sets the \"reasonable and customary\" fee, and then insurance companies generally try to negotiate something like that for their members. The only people who would ever see a $50 charge for Tylenol would be the uninsured, mostly because the charge is built once and just not adjusted.\n\nWhy would you get an insane number like that? Well, insurance companies may reduce the bill by 80%, so if the hospital wants $10 for that Tylenol they charge $50. Generally, though, a bill is tied to a Diagnosis Related Group (DRG), which more or less says if you have X problem the hospital gets $5000 because it should take 2.1 days. This is just like the mechanic who says a transmission costs $1500 and takes 6 hours. If it takes 3, then yay; if it takes 7, then boo. \n\nA big part of hospital costs are hidden expenses. Compliance with the myriad of regulations isn't free. Electronic medical records cost plenty and most hospitals gain nothing from them. A safety officer isn't cheap, and infectious disease nurses, dietary counseling, diabetes counseling, housekeeping, executive salaries, and IT are big costs. Certain procedures are big dollar losers (mid 5 figures), but physicians have patients who need them, so the hospital takes a fat loss to keep those physicians happy so they'll do their more profitable procedures there. \n\nSo hidden but very real expenses and a complicated billing system make a pretty opaque process. An opaque process can produce crazy bills. ",
"provenance": null
},
{
"answer": "Even different pharmacies have different pricing on prescriptions within the same city. From what I learned, Costco ALWAYS has the lowest pricing compared to Walgreens/other drug stores. I almost had to pay 500 dollars a month for 30 pills, luckily at the end I managed to get insurance and the cost went down to 5 dollars.",
"provenance": null
},
{
"answer": "The average cost of a stay in the ICU for somebody who needs to be ventilated is about $31,000 - $42,000 (over a typical 14-day stay). Poor/homeless/uninsured people receive these treatments, some of whom make multiple emergency room visits per year, and have no way of making their payments. Hospitals inflate prices for insured people and people who can pay for healthcare in order to turn a profit. If you use a hospital, your payment plan is paying for your care AND the care of the people who had no way of making payment.\n\nEDIT: [source](_URL_0_)",
"provenance": null
},
{
"answer": "They're free in Canada",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "2295370",
"title": "Carbidopa/levodopa",
"section": "Section::::Society and culture.:Cost.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 212,
"text": "It is available as a generic medication and is moderately expensive. Globally, the wholesale price of the medication is about US$1.80 to $3.00 a month. In the United States a month's supply is about $50 to $150.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "359238",
"title": "Prescription drug",
"section": "Section::::Regulation.:United States.\n",
"start_paragraph_id": 39,
"start_character": 0,
"end_paragraph_id": 39,
"end_character": 510,
"text": "Large US retailers that operate pharmacies and pharmacy chains use inexpensive generic drugs as a way to attract customers into stores. Several chains, including Walmart, Kroger (including subsidiaries such as Dillons), Target, and others, offer $4 monthly prescriptions on select generic drugs as a customer draw. Publix Supermarkets, which has pharmacies in many of their stores, offers free prescriptions on a few older but still effective medications to their customers. The maximum supply is for 30 days.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "722415",
"title": "Dantrolene",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 359,
"text": "It is marketed by Par Pharmaceuticals LLC as Dantrium (in North America) and by Norgine BV as Dantrium, Dantamacrin, or Dantrolen (in Europe). A hospital is recommended to keep a minimum stock of 36 dantrolene vials totaling 720 mg, sufficient for a 70-kg person. As of 2015 the cost for a typical course of medication in the United States is 100 to 200 USD.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1128211",
"title": "Amiloride",
"section": "Section::::Society and culture.\n",
"start_paragraph_id": 51,
"start_character": 0,
"end_paragraph_id": 51,
"end_character": 331,
"text": "It is on the World Health Organization's List of Essential Medicines, the most important medications needed in a basic health system. In the United States the wholesale price of a month's supply at the usual daily dose of the medication is about US$20.10. In the United Kingdom a month of medication costs the NHS about 24 pounds.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22405807",
"title": "Caphosol",
"section": "Section::::Price.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 211,
"text": "It is a prescription drug. Cash price in California or Oregon is $279.99 as of September 28, 2016 for a week's supply totalling 900 ml of two-part solution. According to the manufacturer, solution consists of: \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "402923",
"title": "Bottled water",
"section": "Section::::Concerns.:Pricing.\n",
"start_paragraph_id": 114,
"start_character": 0,
"end_paragraph_id": 114,
"end_character": 552,
"text": "The Beverage Marketing Corporation (BMC) states that in 2013, the average wholesale price per gallon of domestic non-sparkling bottled water was $1.21. BMC's research also shows that consumers actually tend to buy bottled water in bulk from supermarkets (25.3%) or large discount retailers (57.9%) because it costs significantly less. Convenience stores are likely to have higher prices (4.5%), as do drug stores (2.8%). The remaining 9.5% is accounted for through vending machines, cafeterias and other food service outlets, and other types of sales.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3466131",
"title": "Pyrimethamine",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 412,
"text": "It is on the World Health Organization's List of Essential Medicines, the most effective and safe medicines needed in a health system. In the United States in 2015, it was not available as a generic medication and the price was increased from US$13.50 to $750 a tablet ($75,000 for a course of treatment). In other areas of the world, it is available as a generic and costs as little as $0.05 to $0.10 per dose.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1j0x0z
|
How long does it take for a nerve signal to travel from a blue whale's tail to its brain?
|
[
{
"answer": "Assuming an optimally myelinated fibre, nerve impulse speeds in humans are circa 100 m/s. So for a single fibre travelling the roughly 30 m length of a blue whale, about 0.3 s. \n\nI don't know much about whale physiology, but I very much doubt there is a single axon travelling the whole whale. Instead it will be punctuated with synapses, which dramatically increase the total time taken to traverse the whale.\n\nReflex arcs are faster due to fewer synapses, but they will also go via the spinal column rather than the full distance to the brain - so it's an unfair comparison!",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "563392",
"title": "Neural pathway",
"section": "Section::::Functional aspects.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 377,
"text": "Some neurons are responsible for conveying information over long distances. For example, motor neurons, which travel from the spinal cord to the muscle, can have axons up to a meter in length in humans. The longest axon in the human body belongs to the Sciatic Nerve and runs from the great toe to the base of the spinal cord. These are archetypal examples of neural pathways.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31479566",
"title": "Nociception assay",
"section": "Section::::Thermal assays.:Tail flick.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 757,
"text": "The tail flick assay or tail flick test uses a high-intensity beam of light aimed at a rodent's tail to detect nociception. In normal rodents, the noxious heat sensation induced by the beam of light causes a prototypical movement of the tail via the flexor withdrawal reflex. An investigator normally measures the time it takes for the reflex to be induced, a factor influenced by a rodent's sex, age and body weight. The most critical parameter for the tail flick assay is the beam intensity; stimuli producing latencies of larger than 3–4 seconds generally create more variable results. Another important factor to consider is the level of restraint used; rodents held too tightly may exhibit greater tail flick latencies due to heightened stress levels.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20880",
"title": "Mira",
"section": "Section::::Stellar system.:Component A.:Mass loss.\n",
"start_paragraph_id": 53,
"start_character": 0,
"end_paragraph_id": 53,
"end_character": 829,
"text": "Ultraviolet studies of Mira by NASA's Galaxy Evolution Explorer (GALEX) space telescope have revealed that it sheds a trail of material from the outer envelope, leaving a tail 13 light-years in length, formed over tens of thousands of years. It is thought that a hot bow-wave of compressed plasma/gas is the cause of the tail; the bow-wave is a result of the interaction of the stellar wind from Mira A with gas in interstellar space, through which Mira is moving at an extremely high speed of 130 kilometres/second (291,000 miles per hour). The tail consists of material stripped from the head of the bow-wave, which is also visible in ultraviolet observations. Mira's bow-shock will eventually evolve into a planetary nebula, the form of which will be considerably affected by the motion through the interstellar medium (ISM).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "46425008",
"title": "Outline of the human brain",
"section": "Section::::Structure of the human brain.:Visible anatomy.:Isolating the brain from other structures.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 316,
"text": "BULLET::::- Neurons – vary in length from less than a millimeter to over a meter. The longest single human neuron currently identified extends from the tip of a toe, well over a meter, up to the spinal cord at L1. Neurons that both originate and terminate inside the brain itself can measure less than a millimeter.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "15641117",
"title": "Ferguson reflex",
"section": "Section::::Mechanism.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 924,
"text": "Sensory information regarding mechanical stretch of the cervix is carried in a sensory neuron, which synapses in the dorsal horn before ascending to the brain in the anterolateral columns (ipsilateral and contralateral routes). Via the median forebrain bundle, the efferent reaches the PVN and SON of the hypothalamus. The posterior pituitary releases oxytocin due to increased firing in the hypothalamo-hypophyseal tract. Oxytocin acts on the myometrium, on receptors which have been upregulated by a functional increase of the estrogen-progesterone ratio. This functional ratio change is mediated by a decrease in myometrial sensitivity to progesterone, due to an increase (surely this should be decrease) in progesterone receptor A, and a concurrent increase in myometrial sensitivity to estrogen, due to an increase in estrogen receptor α. This causes myometrial contraction and further positive feedback on the reflex.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "749853",
"title": "Fovea centralis",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 306,
"text": "Approximately half of the nerve fibers in the optic nerve carry information from the fovea, while the remaining half carry information from the rest of the retina. The parafovea extends to a radius of 1.25 mm from the central fovea, and the perifovea is found at a 2.75 mm radius from the fovea centralis.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4019580",
"title": "Lateral giant interneuron",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 324,
"text": "When the sensory hairs of the tail fan of crayfish are stimulated, the LG activates the motor neurons that control flexion movements of the abdomen in a way that propels the crayfish away from the source of the stimulation. The LG bypasses the main neural system that controls locomotion, thus shortening the reaction time.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1jp59x
|
why would someone want to jam gps receivers?
|
[
{
"answer": "The usual reason is people who are commercial drivers who want to do something they aren't allowed to do (take a detour to visit a relative, for example) but their commercial vehicle logs or transmits the vehicle's information to their boss. The vehicle usually gets its location, time, and current speed from GPS, and either logs this information periodically to a recording device or transmits it to the boss/company via the cellular network.\n\nJamming the GPS intermittently as well as when \"needed\" for clandestine activity, and it looks like mechanical GPS / equipment failure.",
"provenance": null
},
{
"answer": "In addition to location data, GPS also provides precise timing data for some applications. If this timing is disrupted, it can usually make whatever it's supporting useless. This can include things from automated toll collection systems to stock exchange trading algorithms. Generally, the people disrupting them do so because they have something to gain by the system's failure.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "27616141",
"title": "Error analysis for the Global Positioning System",
"section": "Section::::Artificial sources of interference.\n",
"start_paragraph_id": 98,
"start_character": 0,
"end_paragraph_id": 98,
"end_character": 553,
"text": "Man-made EMI (electromagnetic interference) can also disrupt or jam GPS signals. In one well-documented case it was impossible to receive GPS signals in the entire harbor of Moss Landing, California due to unintentional jamming caused by malfunctioning TV antenna preamplifiers. Intentional jamming is also possible. Generally, stronger signals can interfere with GPS receivers when they are within radio range or line of sight. In 2002 a detailed description of how to build a short-range GPS L1 C/A jammer was published in the online magazine Phrack.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "27683",
"title": "Satellite",
"section": "Section::::Attacks on satellites.:Jamming.\n",
"start_paragraph_id": 172,
"start_character": 0,
"end_paragraph_id": 172,
"end_character": 333,
"text": "Due to the low received signal strength of satellite transmissions, they are prone to jamming by land-based transmitters. Such jamming is limited to the geographical area within the transmitter's range. GPS satellites are potential targets for jamming, but satellite phone and television signals have also been subjected to jamming.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "862061",
"title": "Wide Area Augmentation System",
"section": "Section::::History and development.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 531,
"text": "This inaccuracy in GPS is mostly due to large \"billows\" in the ionosphere, which slow the radio signal from the satellites by a random amount. Since GPS relies on timing the signals to measure distances, this slowing of the signal makes the satellite appear farther away. The billows move slowly, and can be characterized using a variety of methods from the ground, or by examining the GPS signals themselves. By broadcasting this information to GPS receivers every minute or so, this source of error can be significantly reduced.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17800413",
"title": "GPS navigation device",
"section": "Section::::Sensitivity.\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 549,
"text": "GPS devices vary in sensitivity, speed, vulnerability to multipath propagation, and other performance parameters. High Sensitivity GPS receivers use large banks of correlators and digital signal processing to search for GPS signals very quickly. This results in very fast times to first fix when the signals are at their normal levels, for example outdoors. When GPS signals are weak, for example indoors, the extra processing power can be used to integrate weak signals to the point where they can be used to provide a position or timing solution.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "27616141",
"title": "Error analysis for the Global Positioning System",
"section": "Section::::Natural sources of interference.\n",
"start_paragraph_id": 94,
"start_character": 0,
"end_paragraph_id": 94,
"end_character": 233,
"text": "Since GPS signals at terrestrial receivers tend to be relatively weak, natural radio signals or scattering of the GPS signals can desensitize the receiver, making acquiring and tracking the satellite signals difficult or impossible.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "503209",
"title": "Spoofing attack",
"section": "Section::::GPS spoofing.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 1532,
"text": "A GPS spoofing attack attempts to deceive a GPS receiver by broadcasting incorrect GPS signals, structured to resemble a set of normal GPS signals, or by rebroadcasting genuine signals captured elsewhere or at a different time. These spoofed signals may be modified in such a way as to cause the receiver to estimate its position to be somewhere other than where it actually is, or to be located where it is but at a different time, as determined by the attacker. One common form of a GPS spoofing attack, commonly termed a carry-off attack, begins by broadcasting signals synchronized with the genuine signals observed by the target receiver. The power of the counterfeit signals is then gradually increased and drawn away from the genuine signals. It has been suggested that the capture of a Lockheed RQ-170 drone aircraft in northeastern Iran in December, 2011 was the result of such an attack. GPS spoofing attacks had been predicted and discussed in the GPS community previously, but no known example of a malicious spoofing attack has yet been confirmed. A \"proof-of-concept\" attack was successfully performed in June, 2013, when the luxury yacht \"White Rose of Drachs\" was misdirected with spoofed GPS signals by a group of aerospace engineering students from the Cockrell School of Engineering at the University of Texas in Austin. The students were aboard the yacht, allowing their spoofing equipment to gradually overpower the signal strengths of the actual GPS constellation satellites, altering the course of the yacht.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1325302",
"title": "Radar detector",
"section": "Section::::Description.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 782,
"text": "In recent years, some radar detectors have added GPS technology. This allows users to manually store the locations where police frequently monitor traffic, with the detector sounding an alarm when approaching that location in the future (this is accomplished by pushing a button and doesn't require coordinates to be entered). These detectors also allow users to manually store the coordinates of sites of frequent false alarms, which the GPS enabled detector will then ignore. The detector can also be programmed to mute alerts when traveling below a preset speed, limiting unnecessary alerts. Some GPS enabled detectors can download the GPS coordinates of speed monitoring cameras and red-light cameras from the Internet, alerting the driver that they are approaching the camera.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
4adjfs
|
is there a fixed amount of money/assets in the world?
|
[
{
"answer": "no it wouldn't. because you can create money out of nothing. aka interest. the amount of money is always going up basically due to interest. ",
"provenance": null
},
{
"answer": "Money roughly approximates the total amount of wealth in the world.\n\nEvery time someone pulls a rock out of the ground, encourages a plant to grow, or composes a hit single, wealth is created, and eventually, money will be created to reflect this.",
"provenance": null
},
{
"answer": "Money is an arbitrary abstraction of wealth. There is no limit to how much money is in the world.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "3473088",
"title": "Tendency of the rate of profit to fall",
"section": "Section::::21st century Marxist controversies.:Production capital/total capital.:Financialization.\n",
"start_paragraph_id": 255,
"start_character": 0,
"end_paragraph_id": 255,
"end_character": 803,
"text": "In 2008, the world's total tradeable financial assets (stocks, debt securities and bank deposits) were estimated at $178 trillion, more than three times the value of what the whole world produces in a year. In June 2017, the world's total public and private debt was estimated at US$217 trillion, again more than three times the value of what the whole world produces in a year. If one assumes a grand-average net profit rate of 5% on this global debt, the profit made from global debt is roughly equal in value to the GDP of China. This has created a world in industrialized countries that is very different from the orthodox classical revolutionary Marxist analysis of the commodity, where workers simply exchange their commodity labor power for a wage to buy a bundle of consumable commodities with.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2116304",
"title": "Gross fixed capital formation",
"section": "Section::::Second-hand fixed assets.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 517,
"text": "The fixed assets purchased may nowadays include substantial used assets traded on second-hand markets, the quantitatively most significant items being road vehicles, planes, and industrial machinery. Worldwide, this growing trade is worth hundreds of billions of dollars, and countries in Eastern Europe and Latin America, Russia, China, India and Morocco use large quantities of second-hand machinery. Often it is bought from Europe, North America and Japan, where fixed assets are on average scrapped more quickly.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6350415",
"title": "International financial institutions",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 347,
"text": "Today, the world's largest IFI is the European Investment Bank, with a balance sheet size of €573 billion in 2016. This compares to the two components of the World Bank, the IBRD (assets of $358 billion in 2014) and the IDA (assets of $183 billion in 2014). For comparison, the largest commercial banks each have assets of c.$2,000-3,000 billion.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "370092",
"title": "Offshore bank",
"section": "Section::::Banking services.:Ownership.\n",
"start_paragraph_id": 55,
"start_character": 0,
"end_paragraph_id": 55,
"end_character": 570,
"text": "According to Merrill Lynch and Capgemini's “World Wealth Report” for 2000, one third of the wealth of the world's “high-net-worth individuals” — nearly $6 trillion out of $17.5 trillion — may now be held offshore. A large portion, £6.3tn, of offshore assets, is owned by only a tiny sliver, 0.001% (around 92,000 super wealthy individuals) of the world's population. In simple terms, this reflects the inconvenience associated with establishing these accounts, not that these accounts are only for the wealthy. Most all individuals can take advantage of these accounts.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21978283",
"title": "Financial position of the United States",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 226,
"text": "The financial position of the United States includes assets of at least $269.6 trillion (1576% of GDP) and debts of $145.8 trillion (852% of GDP) to produce a net worth of at least $123.8 trillion (723% of GDP) as of Q1 2014.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "370432",
"title": "Economic inequality",
"section": "Section::::Measurements.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 1145,
"text": "A study by the World Institute for Development Economics Research at United Nations University reports that the richest 1% of adults alone owned 40% of global assets in the year 2000. The \"three\" richest people in the world possess more financial assets than the lowest 48 nations combined. The combined wealth of the \"10 million dollar millionaires\" grew to nearly $41 trillion in 2008. A January 2014 report by Oxfam claims that the 85 wealthiest individuals in the world have a combined wealth equal to that of the bottom 50% of the world's population, or about 3.5 billion people. According to a \"Los Angeles Times\" analysis of the report, the wealthiest 1% owns 46% of the world's wealth; the 85 richest people, a small part of the wealthiest 1%, own about 0.7% of the human population's wealth, which is the same as the bottom half of the population. In January 2015, Oxfam reported that the wealthiest 1 percent will own more than half of the global wealth by 2016. An October 2014 study by Credit Suisse also claims that the top 1% now own nearly half of the world's wealth and that the accelerating disparity could trigger a recession.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22156522",
"title": "Fiat money",
"section": "Section::::Money creation and regulation.\n",
"start_paragraph_id": 41,
"start_character": 0,
"end_paragraph_id": 41,
"end_character": 397,
"text": "In modern economies, relatively little of the supply of broad money is in physical currency. For example, in December 2010 in the U.S., of the $8,853.4 billion in broad money supply (M2), only $915.7 billion (about 10%) consisted of physical coins and paper money. The manufacturing of new physical money is usually the responsibility of the central bank, or sometimes, the government's treasury.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3tfjyc
|
how do processors work? how is a simple silicon chip able to perform calculations?
|
[
{
"answer": "This is a complicated topic built on very simple ideas. If you go step by step you should be able to wrap your head around it.\n\n**What is a semi-conductor?** Starting all the way down at the atomic level. In pure silicon crystals, the atoms are neatly arranged, and all have their outer electron shell full, so it won't conduct electricity because all the electrons are nice and cosy. By adding impurities into the crystal, we make two types of semiconductors,: P and N. N (negative) has more electrons, so it's willing to give them away and P has fewer electrons (positive) so it's glad to take in electrons.\n\nYou [put a chunk of N next to P](_URL_4_) and you have a diode. Pass electric current (aka a flow of electrons, but in the other direction) through it in the P-N direction, and the electrons will flow freely. however if you pass current in the N-P direction, the part where the two semiconducting materials meet will become \"full\" of electrons, like natural silicon crystals, building a wall between the N and P where electrons don't want to move, and any new electrons will hit that wall and won't be able to move forward, instead they'll just keep building that cosy wall.\n\nSo we can force electricity to only pass one way through a circuit, pretty cool...now what?\n\n**What is a transistor?** By [sandwiching semiconductors](_URL_3_) in an N-P-N or P-N-P way, and attaching electrodes (wires), we have a component that will behave differently depending where the electron flow comes from. This was first used as an amplifier (like a transistor radio), But can also be [used as a switch](_URL_1_).\n\nDepending which part you put current in, what comes out of the transistor will either be current or no current.\n\n**Logic circuits.**\nNow that we have this little thing that can switch depending on if it has current or not, we can string a bunch of them together in various ways to make boolean logic circuits. 
boolean just means you either have yes or no, or, in binary, 1 or 0.\n\nHere's a [NAND gate](_URL_0_), meaning *not and*, as you can see it pretty much looks like a transistor, because it is! You have two inputs, and if A has current (A=1) and B has current (B=1), it will put out a 0 (because it's a *not and*).\n\n[Here's a basic XOR, *exclusive or* gate](_URL_2_), meaning that A need to be 1 or B needs to be 1 for Q to be 1, but if A and B are both 0 or both 1, Q will be 0. This is just one way basic AND or NAND gates can be strung together.\n\nNow slap a few billion of these together in a CPU and you have a logic machine that can do all kinds of calculations.\n(sorry for the brief ending, I ran out of time, hope you learned something)\n\nEDIT: thanks to all the other people explaining boolean arithmetic on a higher level. Teamwork, yay!\n\nEDIT2: Fixed some links and hopefully cleared up the confusion between electron flow and current.",
"provenance": null
},
{
"answer": "You have transistors the size of about 70x70x70 atoms. So even in a tiny chip you can have a fucktonbazillion of the basic elements.",
"provenance": null
},
{
"answer": "There is nothing simple about it. Processors are quite possibly the most complicated thing mankind has ever invented. Learning how one works is a semester long 300 level EE class that's not a lot of fun. \n\nTo understand how a processor works you have to understand what its purpose is. The processor's job is to take data from memory (RAM), storage (hard drive), user inputs (mouse/keyboard) and then perform an operation on that data and output new data to memory, storage, or output devices (screen, speakers, etc). And at the end of the day, it's mostly just moving data from one place in memory to another. \n\nThe operation that the processor performs is a list of \"instructions\" called a program or algorithm. This is not high level code, every single instruction corresponds to a direct action taken by the processor circuitry. It's important to understand this, because at the instruction (or \"machine\") level, you are forcing a bunch of switches into position directly with a sequence of high and low voltages. \n\nNow when I say \"a bunch of switches\" I really mean a couple million logic gates. Logic gates are the simplest digital circuits and implement the boolean expressions AND, OR, and NOT. We use these operations to define and build more complex ones like addition and negation, even bit shifting (move all the bits to the left or right). Once we have those operations, we have subtraction (negation then addition), and multiplication (repeated additions and shifting). We can build up and implement all these operations in a special circuit that forms the basis of the processor called the **ALU** or Arithmetic Logic Unit. (We also have a thing called an FPU or Floating Point Unit, the ALU works on fixed point numbers, where we need an FPU for the floating point numbers). \n\nThe ALU usually has three inputs and one output. Two data inputs and an instruction input. Think of it like a calculator. 
It takes two inputs, does the operation you tell it, then gives you the output.\n\nSo the question is, where do the data inputs come from, where does the output go, and what is telling the ALU which operation to perform? The answer is the **registers** and **control unit.** Registers are tiny chunks of memory that hold onto a single \"word.\" If you have a 32 bit processor that means the registers are 32 bits \"wide.\" If you have a 64 bit processor then the registers hold 64 bits. Most processors have 16 registers. \n\nThe control unit is a bit more complex because it's the \"brains\" of the processor. It takes the instruction from the program and controls switches between the registers and the ALU. A simple code example might help you understand this:\n\n ADD $r0, $r1, $r2\n\nThis is an assembly instruction that tells the control unit to switch the data pathway so the ALU inputs are registers 0 and 1, the ALU instruction is \"ADD\" and the ALU output is switched to register 2. \n\nSo to recap, we have the ALU, registers, and control unit. The control unit handles the internal \"data path\" or the routing from the registers to the ALU, and takes instructions from the program to send instructions to the ALU. \n\nThis is all well and good, but we're still only inside the processor. We haven't talked about how the processor accesses data from the user, memory or hard drive, or how the program is treated by the control unit. \n\nThis all has to do with memory. The processor generally has a special register called the \"stack pointer.\" It stores an \"address\" or location in memory where data lies (which is why it's called a \"pointer\": it \"points\" to data). It's the responsibility of the program to keep track of memory. Usually all the data the processor will ever need is in memory. \n\nIt also has a special layer of memory called the \"cache\" of very fast access memory.
It's like RAM but costs a couple hundred bucks a gigabyte so you're lucky to have a few megabytes in the chip. In the cache we store \"program\" memory, which is the list of instructions to be executed. It is important to understand that the program memory holds the instructions as machine code *in order.* The processor has another pointer called the \"program counter\" which points to the location in program memory where the current instruction is stored. At the end of an instruction execution the processor increments the program counter so it then accesses the next instruction in program memory. What's really cool though is that like the stack pointer, the processor has direct access to the program counter which means it can execute instructions that changes its value. This is how we do things like loops in programming, you just reset the program counter to the start of the loop. You can also skip around in program memory, which is how you do things like functional programming. And because you're using very fast, random access memory for this there's usually no performance hit. \n\nLastly, I'm sure you're wondering how the processor knows to go to the next instruction. It's pretty simple, you use a clock. Each time the clock ticks, the program counter increments and this forces the control unit to execute the next instruction. You might be wondering what happens if it gets messed up and the control unit doesn't successfully execute the instruction in that clock cycle. It's called a \"hazard\" and it's really bad so people put lots of effort into writing code that's free of hazards and processor architectures that make them near impossible to happen. \n\nYou may also be wondering, is that really it? It's just a super complicated way of moving data from one place to another and *maybe* doing some math with it? And the answer is yes, that's it. 
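(Illustrative aside, not from the original post: the clock / program-counter / control-unit cycle just described, boiled down to a toy Python interpreter with a made-up two-instruction ISA. Real control units are hard-wired circuits, not code.)

```python
# Toy sketch (hypothetical ISA): a clock-driven loop with a program counter,
# a small register file, and two made-up instructions.
regs = [2, 3, 0, 0]                # register file
program = [
    ("ADD", 0, 1, 2),              # regs[2] = regs[0] + regs[1]
    ("ADD", 2, 2, 3),              # regs[3] = regs[2] + regs[2]
    ("HALT",),
]

pc = 0                             # program counter
while True:                        # each iteration = one clock tick
    op = program[pc]
    pc += 1                        # default: fall through to next instruction
    if op[0] == "HALT":
        break
    if op[0] == "ADD":
        _, a, b, dst = op
        regs[dst] = regs[a] + regs[b]
    # a loop or jump would simply be an instruction that overwrites pc

print(regs)                        # [2, 3, 5, 10]
```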
Think about it, when I'm typing this the motherboard firmware is moving the data of the last key pressed and the action of the key being pressed (two data words) into memory, and Chrome is looking for the \"Key is pressed\" word, then the processor jumps to the instruction saying \"load last key pressed\" and moves it into the \"display\" program to show \"these bits at these locations.\" The whole thing is just moving data around. \n\n",
"provenance": null
},
{
"answer": "I think the OP meant how they fundamentally compute things, not how they are made.\n\nThey are not complicated in how they work, their engineering is but the actual mechanism for calculation has been known since about 400 BC apparently. Doing it quickly was the problem. Since the time of the clock, mechanical computation has been known but the capabilities were held back until the advent of the transistor.\n\nSo how a processor works:\n\nA processor is a collection of light switches (or gears and values in a mechanical system) that are arranged into logical gates like AND, OR, XOR, NOT, etc. If you have ever played with Minecraft with mods like RedPower you know about those cool redstone gates.\n\nUsing those gates you can load information into registers. Think of them as little bowls you put marbles in representing bits.\n\nYou then dump the contents of those bowls into a channel that runs the marbles into those cool little gates and you get a result.\n\nYou can make a basic 'computer' with nothing more then Legos and a few marbles. Clockwork computations have been around since the 12th century. The big advancement, attributed to Babbage was the idea of a programmable calculating device. (See also Turing Complete Systems) but that is another discussion.\n\nSo back to the marbles. Depending on what marbles you pick depend on the route the remaining marbles take.\n\nIf the first marble is red, turn left. If the first marble is blue, turn right. Swallow that marble and pass the rest on.\n\nNow if we go left we take the marbles in the first bowl say (1 for red, and 0 for blue) 101101 and run them to a XOR gate. The next bowl (register) also goes 101001. At this point we now have 01101 and 01001 at the XOR gate. As the marbles pass the XOR gate the output becomes 00100 and those marbles roll into the third bowl.\n\nThis is basic computation. 
Turing, Babbage, Von Neumann, and others built modern processing to give us the ability to design complex routines that allow us to route those marbles around to a bunch of different gates, run some gates and routes concurrently, and even more complex stuff, but at the very core, it is very basic fundamentals. A few specialized components like addition, subtraction, multiplication, and division are collections of gates. More complex systems are usually built up from fundamental gates grouped and working together. The deepest principle is Boolean Logic or Binary Logic. Flipping light switches, more or less.\n\nThe cool thing is we act this out every day as part of a process already. When you go to the grocery store there are two lanes for groceries: the cashier rings up your stuff and it goes to one lane, but if there is a special code (e.g. that bar that separates your stuff from the next person in line) then a switch is thrown and the remaining stuff gets processed down a different path. \n\nBy chaining tens of millions of those fundamental gates you can do complex calculations, but again, the fundamentals are so basic you can build the basic parts on your own. It's shrinking them down and linking them together that is the hard part.\n\nAny basic book on Logic Circuits, a basic breadboard, and an electronics kit for < $100, and you can build nearly all the fundamental circuits used in a modern processor, with a few exceptions.\n\nFor fun look up the 8086 processor (which has been recreated in minecraft on several occasions) as a good look at how modern processors work.\n",
"provenance": null
},
{
"answer": "These answers are good, but let me try relating it to an actual 5 year old.\n\nImagine the processor is like a type of giant choose your own adventure book. It's a gigantic book, and though you might only have 32 or 64 choices to make at the beginning of the story, each unique set of choices you make at the start results in a different ending. All the billions of possible endings are already written into the processor when it's made, so all the hard work is done for you. You just choose a beginning and it (nearly) instantly gives you the ending. \n\nNow, the way the story is actually written into the processor is like a maze with special gates. When a part of the story reaches a gate, the gate automatically decides which way that part of the story will go, sometimes adding more stories to send through the maze. What comes out of the maze is the final ending, or the result of what you asked the processor to do.\n\nIn reality, each choice you make at the beginning is really an on or off electrical switch (like a light switch). So for instance, you can give the processor two numbers (say 2 and 4) and ask it to add them. This means your story choices would be 2, 4 and add. The trick here is, there is a special language/code you have to know, that lets you write any number, letter or request with just the on/off switches. This is called binary, and the book/processor is already written to understand it. \n\nSo you tell the processor 2, 4 and 'add' in its special on/off switch language and it tells you how that story ends, or in this case, the answer to your problem, 6.\n\nEdit: Rereading the question, I think and explanation of the silicon base was asked for.\n\nThe book/processor is written onto a silicon chip. The silicon is special because a maze can be drawn onto it, where electricity/stories can only travel through where the maze is drawn. 
The special gates are created with silicon, which has a special ability to only let electricity through under certain circumstances. These circumstances are what we use to build the book and decide which story parts go where next in the maze.",
"provenance": null
},
{
"answer": "Everyone in this thread might enjoy reading [this](_URL_0_).\n\nA thread from /tg/ on 4chan where an anon suggests making a pocket computer out of shrunken zombies acting as simple AND/OR gates. We lovingly termed it Skeletron A.I.\n\nHilarity ensues.",
"provenance": null
},
{
"answer": "Suffice it to say that binary numbers have exact analogies to all your usual mathematical operations. And it has been rigorously proven that any arithmetic operation with binary numbers can be expressed in some sequence of basic logic operations on the individual bits of the numbers.\n\nThe complexity of a CPU comes from organizing and synchronizing multiple circuits that all do relatively simple things.\n\nUnderstanding how a CPU adds two numbers is easy. But when you think about some instruction stored on a hard drive that is then loaded and inserted into the CPU, and then how the result is written... The CPU turns into a giant switchboard of circuits that can appear almost completely beyond comprehension.\n\n\n\n\n**Micro Architecture**\n\nOn the most basic level, a CPU reads and stores data from storage, it reads instructions (code) from memory, and it executes those instructions.\n\n\nYou can think of an instruction as a key that plugs into a slot: the instruction turns the slot and, like one of those coin smushers at tourist attractions, a result is returned on the other side. The CPU has multiple hard-wired circuits that compute specific instructions and each one has its own slot. For every instruction in your program there is a dedicated electronic circuit that will do the work. 
For efficiency, many of the actual physical circuits are shared between these \"slots\" and countless other little tricks to optimize as much as possible.\n\n The instruction itself consists of an operation code (*opcode*) that selects which slot the key should engage with, and it may pass additional information in the instruction through the door to the circuit inside.\n\nA very simple CPU could only have instructions that do mathematical operations with integers, typically then the instruction processing element is called the [Arithmetic Logic Unit](_URL_0_) and you can read the following page on how logic is used to implement [Addition](_URL_5_ )\n\n\nThis is the architecture view of the CPU\n\n**Architecture**\n\nAround this basic kernel of digital computation are hundreds of additional supporting electronic circuits that make this puppet show run.\n\nThe CPU/ALU discussed above has a clock connected to it, every fraction of a nanosecond it reads the next operation and then the logic executes it. How 32/64 bits of information representing an instruction gets into the CPU core is a very complex process involving multiple circuits that are responsible for: tracking which instructions are loaded, which are about to be loaded, loading instructions from memory.\n\n Most modern CPU's have a tiny amount of memory located on board called the [cache](_URL_10_) that stores instructions and data directly on the CPU silicon. This lets us \"ignore\" how the data actually got there (e.g. exactly how you communicate to a hard drive) because the [Memory Management Unit](_URL_7_), the [DMA Controller](_URL_1_), along with other supporting circuits take care of the logistics of getting data to the core.\n\nThousands of human-years have been put into optimizing each individual subcircuit as well as optimizing combinations of them to perform work faster. 
\n\n**Silicon**\n\nSo far these are all abstract concepts, the final connection to the physical comes in understanding that the most basic logic elements can be implemented as electronic circuits, and that the electrical properties of silicon based transistors make them very reliable and very fast and therefore a natural choice for implementing a massive synchronized computer. \n\nMost modern processors are implemented using the [CMOS](_URL_6_) Process, which uses a specific type of transistor called the [MOSFET](_URL_3_) to perform the electrical lifting to concretely represent this abstract logical machine. Earlier logic systems using resistor and bipolar transistors includes [TTL](_URL_2_) and [RTL](_URL_8_). \n\nCMOS has been the de facto standard for the lowest level of the electronic design since the 70's and is entirely responsible for the exponential progress of microprocessors, primarily because the logic circuits are simple, consume low power, can switch very fast (compared to other logic families), and can be made very very small. MOSFET transistors are also very simple to make using the deposition-etching paradigm used in manufacturing crystals (effectively a MOSFET is 2 overlapping bits of silicon)\n\nIt may surprise you to find out that it is mathematically proven that *every* logical operation can be rewritten to use a (more complicated sequence) of a single logical operation. In CMOS it is very common to express all logic in terms of [NAND](_URL_11_) gates, using what is called [NAND Logic](_URL_4_). 
\n\nTo see why NAND is so important we need to look at the details of CMOS technology, where, using MOSFETs, the NAND gate is the most compact (only 4 transistors) and lowest-power of all the logic gates you can create: [CMOS NAND](_URL_9_).\n\n\n**Press a button and see it blink**\n\nThe complexity of the CPU comes from absolute synchronicity across thousands of different subcomponents and subcircuits, but at the lowest level most circuits are relatively simple, accomplishing a single task and then stacked together like legos. The genius of a computer architecture comes in organizing a collection of well understood circuits into something that works synchronously together.",
"provenance": null
},
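The NAND-universality point in the answer above can be sketched in Python: every other gate rewritten in terms of NAND alone. The function names and 0/1 integer encoding are just for illustration.

```python
# NAND logic: build NOT, AND, OR and XOR from nothing but NAND gates.

def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):        # NOT from a single NAND with both inputs tied together
    return nand(a, a)

def and_(a, b):     # AND = NOT(NAND)
    return not_(nand(a, b))

def or_(a, b):      # OR via De Morgan: a OR b = NAND(NOT a, NOT b)
    return nand(not_(a), not_(b))

def xor(a, b):      # XOR from four NANDs
    m = nand(a, b)
    return nand(nand(a, m), nand(b, m))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, and_(a, b), or_(a, b), xor(a, b))
```

Running the loop prints the full truth tables, confirming that the NAND-only constructions behave like the named gates.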
{
"answer": "Processors are made up of millions of transistors.\n\nTransistors are just miniature electronic relays (a switch that is either on or off when power is either applied or removed).\n\nWith one relay you can make a 1 bit storage device (1 = turn the relay on, 0 = turn the relay off) for code and data.\n\nWith two relays you can make a 1 bit adder circuit.\n\nBasically that is it; the first processors were made out of relays, and the most complicated calculation you can think of can be implemented with millions of 1 bit adders and 1 bit storage devices and a list of instructions (stored in the 1 bit storage devices, which connect the adders to specific data in a certain sequence).\n\nSubtract: you invert the input. Multiply is a bunch of adds, divide is a bunch of adds and subtracts, sqrt(x) is just a bunch of multiplies and divides, same for cos(x) (uses a Taylor series), etc.\n\nEverything else (pipelines, caches, floating point units, etc.) is just made to speed things up (and is made out of transistors).\n\n1 or 0: that is all you need",
"provenance": null
},
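The claim above, that everything reduces to chained 1-bit adders plus inverted inputs, can be sketched in Python. The helper names and the 8-bit width are illustrative, not taken from any real hardware.

```python
# Ripple-carry addition from 1-bit full adders, plus subtraction by
# inverting the second input and adding 1 (two's complement).

def full_adder(a, b, cin):
    """One-bit adder: two input bits plus a carry in -> sum bit, carry out."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def add(x, y, width=8):
    """Chain `width` one-bit adders, feeding each carry into the next bit."""
    result, carry = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

def sub(x, y, width=8):
    """Subtract by inverting y and adding 1, exactly as the answer says."""
    inverted = y ^ ((1 << width) - 1)
    return add(add(x, inverted, width), 1, width)

print(add(5, 7))   # 12
print(sub(12, 5))  # 7
```

Note that results wrap at 8 bits (e.g. `add(200, 100)` gives 44), just as a fixed-width hardware adder would.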
{
"answer": "If you're looking for a thorough but easy to follow explanation I'd recommend \"Code\" by Charles Petzold. It's a fantastic book and gives you a good basic understanding of computer architecture.",
"provenance": null
},
{
"answer": "The best explanation I ever found was this: _URL_0_\n\n",
"provenance": null
},
{
"answer": "Computerphile: How computers do math demonstrated with dominoes. \n\n_URL_0_\n",
"provenance": null
},
{
"answer": "OP if you want a book that breaks it all down and is easy to understand check out the book [Code by Charles Petzold](_URL_0_)",
"provenance": null
},
{
"answer": "Don't know if it has been posted before, but numberphile has made an [awesome video]( _URL_0_) that explains the inner workings of a processor with dominoes. ",
"provenance": null
},
{
"answer": "Apparently this isn't an ELI5 question. I'm a CS major and it would still be hard to describe in a few sentences. \n\nThe bottom line is there are 1-2 billion transistors in a cpu, which results in a practically infinite amount of combinations of on or off. When a computer looks at all of these on and off switches it is able to do cool things, which answers the first part of your question. Translating binary into cool things is more complicated.",
"provenance": null
},
{
"answer": "On the silicon chip are millions of tiny transistors - like tiny switches. These are arranged in configurations to make logic gates, which the manufacturers \"wire\" together to make more complex logic circuits which can follow 'instructions' held in the memory (more chips attached to the CPU). \n\nThink of it like the way pinball machines work - when the steel ball hits switches or goes through gates, it causes lights to flash, buzzers and bells and actuators etc. to be triggered, and maybe increments your score. Each switch or gate is input into a logic circuit, and the output might be to flash a light or activate a buzzer.\n\nInside the CPU, the \"wiring together\" of these logic circuits allows operations like arithmetic, or tests, to be done, and also, most important, the \"decoding\" of instructions stored in memory and then operation on data, also stored in memory.\n\nEach instruction is stored in memory as a number, encoded in binary (number base 2), and each number, known as an opcode, specifies a particular operation for the CPU to carry out. There are instructions for loading data from memory into special memory on the CPU called registers, and for loading data from registers into normal memory; instructions for making decisions, and instructions for performing arithmetic, and so on. Today's CPUs can perform billions of these instructions per second.\n\nExamples of instructions (not real ones, just for illustration). The pretend opcodes and their data are the numbers to the left. 
Apologies for the formatting:\n\n 5510 0001 load memory location 1 into register a\n 5511 0002 load memory location 2 into register b\n 0212 add register a to register b\n 3112 0100 compare register a and 100\n 9801 if greater, jump to end\n 5611 0003 store register b in memory location 3\n end:\n ....\n ....\n\nUsually these instructions are not written directly by humans, because they're pretty hard to write this way - we use more human friendly languages, called \"high level languages\" instead. However, people can and do write these directly sometimes, and they usually use \"mnemonics\" - short abbreviations that describe the operation required, instead. We call this low-level language \"assembly code\".\n\n\n\n",
"provenance": null
},
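The pretend instruction set above can be run by a toy interpreter. The opcode numbers and their semantics follow the answer's example program; everything here is invented for illustration and does not match any real instruction set.

```python
# Toy interpreter for the answer's made-up opcodes:
# 5510 addr -> load memory into register a, 5511 addr -> load into b,
# 0212 -> add a to b, 3112 n -> compare a with n, 9801 dst -> jump if
# greater, 5611 addr -> store b into memory.

def run(program, memory):
    regs = {"a": 0, "b": 0}
    greater = False
    pc = 0
    while pc < len(program):
        instr = program[pc]
        op = instr[0]
        if op == 5510:            # load memory -> register a
            regs["a"] = memory[instr[1]]
        elif op == 5511:          # load memory -> register b
            regs["b"] = memory[instr[1]]
        elif op == 212:           # add register a to register b
            regs["b"] += regs["a"]
        elif op == 3112:          # compare register a with a constant
            greater = regs["a"] > instr[1]
        elif op == 9801:          # if greater, jump to instruction index
            if greater:
                pc = instr[1]
                continue
        elif op == 5611:          # store register b -> memory
            memory[instr[1]] = regs["b"]
        pc += 1
    return memory

prog = [
    (5510, 1),    # load memory location 1 into register a
    (5511, 2),    # load memory location 2 into register b
    (212,),       # add register a to register b
    (3112, 100),  # compare register a and 100
    (9801, 6),    # if greater, jump past the store
    (5611, 3),    # store register b in memory location 3
]
print(run(prog, {1: 40, 2: 2, 3: 0})[3])  # 42
```

With 40 in location 1, the compare fails and the store runs; with a value over 100 there, the jump skips the store and location 3 stays untouched.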
{
"answer": "The answer is that they do not perform calculations in the way you think. That is what we see as being the \"result\" of what happens. what a processor does is route electrons through various types of logic gates. You provide input, example, your fingers touching the number pad on a calculator. That input is taken and broken down into logical steps that are performed by the processor. The processor generates output. The output is then translated into numbers that you see on the display. ",
"provenance": null
},
{
"answer": "All of them miss very important levels.\n\n1. **Level 1: The Silicon.**\n\n A semiconductor is a material which can either pass an electrical signal or not. The simplest semiconductor devices are \"yes or no\" devices (diodes), but you can place semiconducting material together in a way that gives you pins: an input, an output and a control. This is called a transistor, and you can control the level of electrical signal that passes through it. For our purposes, there is a kind that operates in the \"switching zone\", called a MOSFET. This is a voltage-controlled device, and turns on or off.\n\n By placing two of these in series between a voltage source and ground, the midpoint can be controlled to be switched to voltage source, ground, floating, or short the source and ground together.\n\n We arrange so that the input to one of these two MOSFETs is the opposite of the input to the other, so one input switches the output to high, or low voltage.\n\n This is the most simple logic gate, either a Buffer (delay) or Inverter (turns a high signal into a low signal).\n\n2. **Level 2: Logic Gates**\n\n A logic gate is a combination of MOSFETs in a manner that has a predictable and known response to input. Multiple inputs can be added. There are SIX kinds of commonly named non-trivial gates able to be made with two inputs and one output: AND, OR, XOR, NAND, NOR and NXOR. These gates' outputs depend on whether inputs A AND B are high, A OR B is high, or exactly one of A and B is high (exclusive OR). The outputs are inverted for NAND, NOR and NXOR.\n\n3. **Level 3: Metastable circuits.**\n\n This is slightly complicated. In short, if you have two NOR gates, and wire the output of each to one of the inputs of the other, you get something that looks like [this](_URL_1_). It doesn't look like that at all. That is an abstraction. The real circuit diagram is more like [this](_URL_0_)\n\n Abstractions are important here. Real silicon devices are not perfect ideal switches, but we moved up from level 1 to 2 by imagining they were. 
We moved from level 2 to 3 by ignoring that the switches need resistors and power and ground and various other electronic bits.\n\n But at our current level of abstraction, we have a latch. You can set it on, or reset it off. This output state will stay constant, but not always reliably: it can get confused and get into bad states. So by taking two latches and putting them one after the other, we can get something called a D FLIP FLOP. The mechanics of this flip flop get rather complicated, but it basically breaks down to this:\n\n A D Flip Flop is a collection of logic gates with one input and one output. The Flip Flop also takes a \"CLOCK\" signal, a square wave. When the clock signal rises from low to high signal, the signal present on the input is then expressed as the output. At all times OTHER than the rising edge, the output is independent of the input.\n\n We have a one bit memory cell.\n\n4. **Level 4: Registers and computation.**\n\n A register is a collection of single bit memory cells that can be read out in parallel or sequential fashion. Sequential registers are not really needed atm, so we'll use parallel. Easiest way to imagine them is as a bunch of coins, heads up or down in a line.\n\n See the abstractions? We went from silicon to switches, to logic gates, to memory cells, and now we're operating completely independently of the actual material of the computer. If you had enough coins, the right rules and time, you could run hello world on your kitchen table.\n\n So, a register is a memory element of fixed size, usually 8 bits. This memory element can be connected to the input of a computational block depending on various signal switching.\n\n Let's talk about computational blocks and binary representation.\n\n Binary is a base two numerical system. It has the representation 0b00000111, which is '7' in decimal. Binary maths looks a bit weird, but behaves like normal maths.\n\n 0b00000111\n +0b00000101\n ----------\n 0b00001100\n\n 1+1 = 10. 
10+10+10 = 110.\n\n Starting at the least significant bit, you go, \"Bitn input A, XOR Bitn input B\", and put that in the same spot in the output. You then go \"Bitn input A, AND Bitn input B\" and if that is true, that is your carry signal, and that goes to your next bit. The next bit up is very similar, except you XOR the output with the previous carry, and you decide if any 2 or more of the previous carry and your two input bits are one to form the next carry.\n\n So now we have an adder. It can add two binary represented numbers. We can also build a subtracter or multiplier with much more complicated logic gates. But the important thing is while it is complicated, we have the abstractions and tools that let us do it.\n\n5. **ALU, Control Unit, General Purpose Registers, Special Purpose Registers**\n\n Now the big boys come in. The Arithmetic Logic Unit is a configurable computational block that can perform a number of operations. It takes one input which tells it what to do, and two inputs to operate on, which are also loaded into General Purpose Registers. It executes the known operation and that is placed in the output General Purpose Register.\n\n So where does the 'what operation to do' come from? The Control unit is a large collection of logic which takes an encoded instruction. This instruction might say \"Add Register 1 and 2 and put it in 1.\" The control unit will then switch so that registers 1 and 2 are connected to the input to the ALU, and register 1 is connected to the output. It will also set signals so that the ALU does an addition.\n\n How does the Control Unit know what to do? It reads an instruction from what is known as \"Program Memory\" (usually loaded from disk into RAM). Where to read from in the memory is controlled by the instruction count register, one of the special purpose registers, which increments whenever the Control Unit has finished its previous instruction. 
The new memory location is then loaded into another of the Special Purpose registers for execution.\n\nSo, here we are.\n\nFive levels in, and we have:\n\n* Silicon can allow or deny electrical signals to pass through it.\n* These switches can be used to create basic logic.\n* This logic can be combined into persistent memory.\n* This memory can be fed into special logic to operate on the contents of the memory.\n* Memory contents can be operated on with special logic to control the operations on memory and how they are executed and stored.\n\nWHEW!\n\nBUT WE'RE NOT DONE.\n\nWe're not at MACHINE CODE level. This is the most basic level. I haven't covered data memory, buses, program memory, programming, or a swathe of other, equally complex things.\n\nBut, as someone who has done an Electrical Engineering degree, and built transistors on doped silicon, and used logic gates to build adders, and then used VHDL with FPGAs to make a basic, rudimentary CPU, then learnt C to use on an ATMEGA 8, and now works for a multinational software company....\n\nGood question.\n\n**There is no explaining like you were five. Best I can give is \"explain like you're a high school graduate\".**",
"provenance": null
},
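The cross-coupled NOR latch from Level 3 can be simulated by iterating the two gate equations until they settle. This is a sketch only: real latches settle in continuous time, not discrete steps, and the helper names are made up.

```python
# SR latch from two cross-coupled NOR gates: Q = NOR(R, Qbar),
# Qbar = NOR(S, Q). Iterate until the pair of outputs stops changing.

def nor(a, b):
    return 0 if (a or b) else 1

def latch(s, r, q=0, qbar=1):
    """Feed Set/Reset into the cross-coupled NOR pair until stable."""
    for _ in range(4):  # a few iterations are enough to settle here
        q_new = nor(r, qbar)
        qbar_new = nor(s, q)
        if (q_new, qbar_new) == (q, qbar):
            break
        q, qbar = q_new, qbar_new
    return q, qbar

q, qbar = latch(s=1, r=0)                   # set the latch on
print(q, qbar)                              # 1 0
q, qbar = latch(s=0, r=0, q=q, qbar=qbar)   # hold: output stays on
print(q, qbar)                              # 1 0
q, qbar = latch(s=0, r=1, q=q, qbar=qbar)   # reset it off
print(q, qbar)                              # 0 1
```

The hold case (S=0, R=0) is the point of the circuit: the output feeds back into itself and remembers one bit.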
{
"answer": "Very hard to ELI5 it, but essentially it works like a lot of switches that are really really tiny. It's not a simple silicon chip; it's almost as complex or more complex than most factories. Here's one zoomed in: _URL_2_\n\nAt this scale it's still not easy to understand because the pieces are still too small to see; if you get close enough this is what you see:\n\n_URL_1_\n\nIn the second image there are about 5 or 6 transistors showing in the picture. Everything is made of silicon, but it's kind of carved out in an intricate way, step by step.\n\nIn the first image you can see about 42,000,000 transistors.\n\nThe calculations are made just like any other machine with switches to change modes and values: _URL_0_\n\nThe advantage of the processor is that it's easily reprogrammable to do different stuff",
"provenance": null
},
{
"answer": "Found [this video](_URL_0_) some time ago and I think the topic is very well explained",
"provenance": null
},
{
"answer": "I'd like to make a book recommendation to anyone interested in an approachable in-depth explanation of simple logic processing: Code\n\nIt's seriously the best book I've read on the subject.\n\n_URL_0_",
"provenance": null
},
{
"answer": "This isn't really a subject that can be covered in a way that an actual 5 year old can understand because the fact of the matter is that a lot of very smart people who make processors maybe don't understand every single bit of it either.\n\nBut to do my best anyway, I will explain here the basic workings of a Register Transfer Machine, which contains some of the basic functionality of a computer and lots of the basic components.\n\nYou need only one component to start off with but you will eventually need lots of this component strung together in somewhat complicated ways to do complicated things. This component can be made of whatever you like but when it is 'on' an electrical current will pass through it, and when it is 'off' no electric current will pass through. Generally we use transistors for this job nowadays but before that, we used vacuum tubes. I will say transistor in the next bit when I mean this component.\n\nLogic gates are the next level of complexity. Generally, they are made of between 2 and 8 transistors. The job here is that I will put in electrical signals, which will either be a high voltage or a low voltage, from outside the logic gate and the circuit should put out a high or a low voltage depending on the input voltages and the arrangement of the transistors. The simplest logic gate is the 'not' gate which takes one input: if the input voltage is high, the output voltage should be low; if the input voltage is low, the output voltage should be high.\n\nTwo other important logic gates are 'and' gates and 'or' gates; these both take two inputs. An 'and' gate will output a high voltage only when both its inputs are high. 
On the other hand, an 'or' gate will output a low voltage only when both its inputs are low.\n\nThe reason these gates are called logic gates are because they are analogous to logical statements where high voltage is analogous to saying something is true and low voltage is analogous to false.\n\nAt the next level are basic composite components. These are made up of ~5 logic gates(so between 10 and 40 transistors) and 'adder' circuits (circuits that can 'add' the value of two inputs) will come under this heading. I won't explain an adder circuit as these are readily available to see online elsewhere but these are important to the working of a processor and good examples of how we can combine logic gates to make more useful composite components.\n\nOne really important composite component is the multiplexor. The goal of the multiplexor is that I have two input voltages and I want to filter one of them out and simply look at the value of the other. but I want to be able to change which input I am looking at. To do this, I have a third input called a control. When the control is low, the multiplexor will match the value of the first input. When the control is high, the multiplexor will match the value of the second input. It might not be immediately obvious why this is useful but eventually, this will be important in something called addressing in the finished machine. This is also a component that goes into something called registers.\n\nBefore going into registers, I will explain that there exist components called flip-flops. Simply, these components have an input and a connection to a clock. When I say clock, I just mean an electrical input that changes value regularly at precise time intervals. When the clock \"ticks\"(changes value), the output of the flip flop may change to match the input. 
Until the clock ticks, however, the output of the flip-flop will remain the same, no matter if the input changes.\n\nThe goal of a register is to be able to give the signal for this register to accept a high or low voltage and then, until I give the signal again, constantly output the high or low voltage given before. Registers are made up of a flip-flop and a multiplexor. The flip-flop stores the value that we want the register to put out. The multiplexor decides whether we should update the value of the register or keep the signal the same. The way it does this is that we connect the output of the flip-flop into the first input of the multiplexor and leave the second input of the multiplexor open to accept new values. This way, while the control on the multiplexor is low, the flip-flop's value will feed-back into itself. When the control is high, the flip-flop(and so the register circuit as a whole) will take the value of the second input.\n\nRegisters are really important in the operation of processors, they are used to store values used in calculations as well as the results of those calculations.\n\nThe final thing to explain is that a Register Transfer Machine is made up of three components: a register file, containing a bank of register units to contain values to be used in calculations and the results of said calculations; an arithmetic or logic unit, collections of circuits to be used to calculate results (including adder circuits); and a series of connecting wires called 'buses'; there are three buses: data bus, address bus, and control bus. Data buses, unsurprisingly, carry the actual values of the registers and calculations. Address buses indicate which registers to use to get input values and which register to store the result in. 
Control buses decide which operation to perform.\n\nIn early machines, I might manually change values on the control and address buses to achieve the desired calculations but this quickly becomes tedious and highly inefficient. So, instead, I create a control unit which generates these signals for me.\n\nAnd fundamentally, that's how a processor works.",
"provenance": null
},
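The Register Transfer Machine described above can be sketched in Python: a register file, an ALU, and "bus" values choosing the source registers, destination register and operation. The names and the small operation set are illustrative.

```python
# One step of a toy Register Transfer Machine: the address bus picks
# src1/src2/dest registers, the control bus picks the ALU operation,
# and the data bus carries the values between them.
import operator

ALU_OPS = {"add": operator.add, "sub": operator.sub, "and": operator.and_}

def step(registers, control, src1, src2, dest):
    """Read two registers, run one ALU operation, write the result back."""
    result = ALU_OPS[control](registers[src1], registers[src2])
    registers[dest] = result
    return registers

regs = [0] * 4
regs[0], regs[1] = 9, 5
step(regs, "add", 0, 1, 2)   # r2 = r0 + r1
step(regs, "sub", 2, 1, 3)   # r3 = r2 - r1
print(regs)                  # [9, 5, 14, 9]
```

A control unit would generate the `(control, src1, src2, dest)` tuples automatically from encoded instructions, exactly as the answer describes.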
{
"answer": "\"simple\" silicon chip\n\nMicroprocessors are the most complex manufactured item in the history of mankind",
"provenance": null
},
{
"answer": "Simple answer: By giving it electricity.\n\nImagine a CPU is a bowl of spaghetti, but starts with one strand and splits into millions at the other end like a tree.\n\nBut it's a metal tree so you can put electricity through it. It has 2 states, powered and unpowered, or on and off, or 1 and 0. \n\nNow, imagine that when this tree makes a branch, each branch splits into a 1 and a 0 branch (one sends power and the other doesn't). \n\nWhen you look at the other end of the strand (millions of ends) you see tons of 1's and 0's. Because not all the power got all the way to the end of each path.\n\nNow imagine that this end with millions was a specific pattern, not random. So that when you applied power, the million ends created a system of 1's and 0's that actually meant something in a language you made up.\n\nLike the first 8 branch endings was 01001010 and that translated into a letter 'H' or something. \n\nNow attach a bunch of these trees together, powering them off and on really fast with a timing switch. \n\nYou then have a bunch of 'places' to hold 'data' in the form of characters. Just by powering something on and off really fast.\n\nSo when you look at the millions of trees with millions of branches, imagine some are looped up in a way that they continue to hold the same 1's or 0's no matter what the state of the original power timing. That is sort of like memory.\n\nNote: This is simplified to the point of probably being wrong. But the concept is there.\n\nEdit: \n\nWhoops forgot the obvious at the end. Memory means you can store numbers or data, and if you can store data (like a 5 and a 7) and a character (like a + sign). You can also connect the loop branches so that 00000001 goes up by 1 binary calculation (to 00000010) each time it loops. 
Also you can connect the loop branches to 'send' a pattern of 1's and 0's to another bunch of places (it can then 'load' and 'save' data to other places).\n\nSo in this example you input 5 and 7 (or load them from some place you used earlier) and send 5 into the 'addition' loop for 7 loops, and each time 5 goes up by one because that is all those branches do when the power is on. So when 5 is looped 7 times, you unload its new value to another place and have 12 stored. Thus performing a calculation. ",
"provenance": null
},
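The "addition loop" at the end of the answer, adding 5 and 7 by running an increment circuit seven times, can be sketched in Python. This is a toy under the answer's own simplification, not how real ALUs add.

```python
# Binary +1 as a ripple of flips: trailing 1s become 0s until a 0
# becomes 1, then addition as repeated increment.

def increment(n, width=8):
    """Flip trailing 1 bits to 0 until a 0 bit flips to 1 (carry ripple)."""
    mask = 1
    while n & mask:
        n &= ~mask
        mask <<= 1
    return (n | mask) & ((1 << width) - 1)

def add_by_looping(a, b):
    """Add b to a by running the increment 'circuit' b times."""
    for _ in range(b):
        a = increment(a)
    return a

print(add_by_looping(5, 7))  # 12
```

Note the 8-bit wrap: incrementing 255 gives 0, just like a fixed-width counter overflowing.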
{
"answer": "You could have just googled it \nBut you had to get 2000 upvotes didn't you? \nActually, that seems to be the whole point of this sub",
"provenance": null
},
{
"answer": "Lets dive right into the magical land of data.\n\nWhats the symbol for five? 5. Whats the symbol for ten? 10. But wait, isn't that the symbol for one and zero? Right, so in our numbering system, when we get to the number ten, we write the symbol for one and zero. There is no symbol for ten, we simply recycle the ones we already have. Because of this, we call our numbering system \"base-ten\", or \"decimal\". \n\n\"Ones and zeros\",\"true and false\", and \"on or off\" are all terms you have probably heard before. What these all are referring to is a *different* kind of numbering system. For our decimal system, we write a '10' when we get to ten, but for binary, we write a '10' when we get to two. There is no symbol for two in binary, exactly how there is no symbol for ten in decimal. \"On\" or \"off\" simply refers to '1' or '0' in binary.\n\nJust to make sure that makes sense (as its super important):\n\n01 = one;\n\n10 = two;\n\n11 = three;\n\nMake sense? Cool (if not google \"binary\").\n\nOk, now for something completely different, but related.\n\nTheres something in computer theory called a \"logic gate\". It's a device. It has two inputs, and one output. The only input it accepts is \"on\" or \"off\", and the output is the same, \"on\" or \"off\". You might see the relation to binary.\n\nA logic gates output is based on its input. An example of a logic gate is a \"AND\" gate. When both of the inputs are on, the output is on. Otherwise, the output is off. \n\nYou still with me? Don't worry, the cool stuff is coming soon.\n\nAnother logic gate is the \"NOT\" gate. The NOT gate has one input. If the input is off, the output is on, and vice versa. The output is *not* the input. Get it? \n\nNow, if we put the input of a NOT gate on the output of an AND gate, we get a NAND gate. Creative, I know. We nerds don't get out much. 
Anyways, try to figure out what the output would be for all the four different possible combinations of the two inputs for the NAND gate.\n\n[Anyways, here's what a NAND gate looks like drawn.](_URL_1_) \n\nNow, you have probably heard of computer memory right? [**ta da!**](_URL_0_)\n\nIt's not going to make total sense at first, but that diagram shows a memory-holder-thingamajig. Look at it for a while and try to figure out what it does. Basically it holds a \"bit\" of memory. You could say that a bit is like one digit of a binary number. You line a bunch of these in a row, and you can start holding numbers.\n\nBut what do you *do* with those numbers?\n\nThis is where it gets cool. You do math with those numbers. This next device is called an \"[*adder*](_URL_2_)\". \n\nThe gate on top is called an XOR gate; its output is on if only one of its inputs is on. If they're both on or both off, then the output is off.\n\nNow, make it a [little more complex](_URL_3_) and you can add multiple bits at the same time, by linking the last one's \"Cout\" to the next one's \"Cin\".\n\nCool, now we have a basic calculator. How can we turn this up to 11 and make a computer?\n\nCode. \n\nNow, you know what data is, and so code is easy to explain. It's just data. That's all it is. Really. \n\nThe reason why it's different from other data, though, is because the CPU interprets it as *instructions.*\n\nIf we wanted to do math for example, and we got to decide the instruction definitions, we could use a system like;\n\n 00000001 = *add* a number to another number;\n\n 00000010 = *subtract* a number from another number;\n\nWith this, we can set what logic gates are being used based on data. \n\nNow, real quick, memory is organized on a computer by something called memory addresses; basically they just allow the CPU to ask for memory at a specific location.\nGenerally speaking the addresses are sized by \"bytes\", which is just another word for \"eight bits\". 
So if we wanted to access memory location five or whatever we could store that as '00000101'.\n\nLets go back and add some more to our table;\n\n00000011 = move this data into some location;\n\nCool, now we can say something like:\n\n\"add the number at location #5 in memory to the other number at location #7 in memory.\"\n\nBy breaking it down into:\n\n (add) (memory address #5) (memory address #7)\n\nWhich is really just\n\n 00000001 00000101 00000111\n\nPretty sweet right?\n\nBut hold on, how does the CPU know where to get its instructions?\n\nOn the cpu, Theres a tiny amount of memory, it does various things, such as hold something called the \"instruction pointer\". The instruction pointer holds the address of the next instruction, and increments itself after every instruction. So basically, the cpu reads the instruction pointer, fetches the next instruction, does it, adds one to the instruction pointer, and then goes back to step one.\n\nBut what happens when it runs out of instructions?\n\nLets go back to our table. Last time, I promise:\n\n 00000100 = set instruction pointer to address\n\nBasically, all this instruction does is set the instruction pointer to a number. You ever wonder what an infinite loop is on a computer? Thats what happens when an instruction pointer is set to instructions that keep telling the instruction pointer to set itself to that same set of instructions.\n\nThats computers in a nutshell.\n\n**tl;dr** I need to get laid.",
"provenance": null
},
{
"answer": "I know I'm late to the party, but [I learned about computers from this 90s cartoon, The Magic School Bus Gets Programmed.](_URL_0_)",
"provenance": null
},
{
"answer": "Imagine you and your spouse agree to turn on the light switch near your front door when either of you gets home, so that if one of you comes home and sees that the switch is on, that means that the other is already home.\n\nOn = home, off = not home.\n\nThis is a binary state. In a computer, if a circuit is on, it is represented as a 1. If it is off, it is represented as a 0. The circuit being represented as 1 or 0 is therefore called a \"bit\" (**bi**nary digi**t**).\n\nNow imagine you replace that 1 switch with a switchboard that has 8 switches in a row. Your spouse and you can communicate a lot more information now. In fact, you can now communicate **256** different combinations. Instead of just 1 and 0, you now have 8 switches.\n\nSo information can range from\n\n[00000000] (your spouse is not home) to\n\n[00000001] (Your spouse is home) to\n\n[11111111] (Your spouse has been kidnapped by ninjas and needs you to be a bad enough dude to rescue them).\n\n256 configurations.\n\nIn a computer, when 8 bits are clustered together like this, they are called a byte. This is a basic computing concept. You can apply this concept in the context of storage, or of a CPU, and so on.\n\nNow as for how these are used to make calculations, remember the basic concept of binary; either a switch is on or off. There is a nifty little type of device called a logic gate that uses binary in a very clever way. In concept, a logic gate is a little circuit that has two inputs and one output, and the output depends on the input.\n\nThere are 7 types of basic logic gates (AND, NAND, OR, NOR, XOR, XNOR, NOT). To explain the underlying idea of what their function is, I will explain only one, the AND gate. The AND gate's function is to output \"on\" if both inputs are \"on\" i.e. if both inputs are \"1\", the output will be 1, but if either or both of them are \"off\" i.e. 0, then it outputs \"off\", 0.\n\nHow can the AND gate tell if both are on? 
Because it is physically wired that way. Here's an MSPaint example of how that effect can basically be achieved:\n\n_URL_0_\n\nIf either is \"off\" (because the circuit is broken) then current can't flow through the output, meaning the output will show as off. So the AND gate will be 1 if the first **and** the second inputs are on.\n\nThe other types of logic gates are (in terms of how they output 1)\n\nNAND: **N**ot **AND** (Both inputs are not simultaneously 1)\n\nOR (one **or** both inputs are 1)\n\nNOR (neither input is 1)\n\nXOR: e**X**clusive **OR** (only one of the two inputs is 1)\n\nXNOR (either both are 1 or both are 0)\n\nNOT, also known as an inverter, is a special one that gives the opposite output to its input. It only has 1 input (usually the output from one of the other types of logic gates).\n\nWith a combination of logic gates, you can create more complicated circuits called adders, and these can then be used to output the answers to mathematical inputs.",
"provenance": null
},
{
"answer": "I'm taking a class on this and it's complicated, and it's still not explaining what we use today, just what was used and the basics of a simple processor implementation. There are still jumps in logic from electric signals to actual code that I don't understand. This is a complex topic and you might not be OK with the simple answers here; I wasn't when I asked these questions when I was 5. So, these answers are just the beginning.",
"provenance": null
},
{
"answer": "imagine a gigantic plinko game... but one that's rigged so that if you drop balls at the top in a specific pattern, they'll switch a bunch of gates along the way down and always come out the bottom in another pattern. \n\nit's like that. you drop 0110 and 1100 into the top, 10010 pops out the bottom. expand that to millions of switches per second and getting \"hello world\" to pop out in 11010110010 form gets to be quite simple :)\n\na processor is a tiny box full of tiny switches that turn inputs into outputs like a microscopic sized epically huge plinko board.\n\nso... \"magic\" :) ",
"provenance": null
},
{
"answer": "Eli5: imagine if you had a hundred light switches, and a hundred people, one person on each switch. All these people are standing randomly across Earth. At one end, there is a light bulb, and at the other, there’s a power source. Every time the power source gets to a switch, the person will decide to flip the switch or not. If a certain combination of people switch their switches to ON, the light will turn on. Now instead of just having Earth, one light, and a hundred switches, imagine there is a whole galaxy of Earths: millions of lights, billions of switches. Now shrink that down to the size of a penny. That’s a processor. It takes the different power sources, puts them through a few switches, and a light comes on, or turns off. That’s it. ",
"provenance": null
},
{
"answer": "I have a midterm on this exact concept tomorrow! Reddit just made me study! There might be a god! As the ultimate procrastinator who uses reddit to distract himself I find this pleasantly ironic. ",
"provenance": null
},
{
"answer": "There is no way to ELI5 this. The closest thing is maybe to show one of those water droplet based \"computers\" that I can't seem to find a link to right now. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "2247927",
"title": "Data structure alignment",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 429,
"text": "The CPU in modern computer hardware performs reads and writes to memory most efficiently when the data is \"naturally aligned\", which generally means that the data address is a multiple of the data size. \"Data alignment\" refers to aligning elements according to their natural alignment. To ensure natural alignment, it may be necessary to insert some \"padding\" between structure elements or after the last element of a structure.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "127509",
"title": "Microsequencer",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 201,
"text": "Since CPUs implement an instruction set, it's very useful to be able to decode the instruction's bits directly into the sequencer, to select a set of microinstructions to perform a CPU's instructions.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5218",
"title": "Central processing unit",
"section": "Section::::Structure and implementation.:Address generation unit.\n",
"start_paragraph_id": 49,
"start_character": 0,
"end_paragraph_id": 49,
"end_character": 808,
"text": "While performing various operations, CPUs need to calculate memory addresses required for fetching data from the memory; for example, in-memory positions of array elements must be calculated before the CPU can fetch the data from actual memory locations. Those address-generation calculations involve different integer arithmetic operations, such as addition, subtraction, modulo operations, or bit shifts. Often, calculating a memory address involves more than one general-purpose machine instruction, which do not necessarily decode and execute quickly. By incorporating an AGU into a CPU design, together with introducing specialized instructions that use the AGU, various address-generation calculations can be offloaded from the rest of the CPU, and can often be executed quickly in a single CPU cycle.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "44634351",
"title": "Address generation unit",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 808,
"text": "While performing various operations, CPUs need to calculate memory addresses required for fetching data from the memory; for example, in-memory positions of array elements must be calculated before the CPU can fetch the data from actual memory locations. Those address-generation calculations involve different integer arithmetic operations, such as addition, subtraction, modulo operations, or bit shifts. Often, calculating a memory address involves more than one general-purpose machine instruction, which do not necessarily decode and execute quickly. By incorporating an AGU into a CPU design, together with introducing specialized instructions that use the AGU, various address-generation calculations can be offloaded from the rest of the CPU, and can often be executed quickly in a single CPU cycle.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "58205",
"title": "Vector processor",
"section": "Section::::Description.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 777,
"text": "In general terms, CPUs are able to manipulate one or two pieces of data at a time. For instance, most CPUs have an instruction that essentially says \"add A to B and put the result in C\". The data for A, B and C could be—in theory at least—encoded directly into the instruction. However, in efficient implementation things are rarely that simple. The data is rarely sent in raw form, and is instead \"pointed to\" by passing in an address to a memory location that holds the data. Decoding this address and getting the data out of the memory takes some time, during which the CPU traditionally would sit idle waiting for the requested data to show up. As CPU speeds have increased, this \"memory latency\" has historically become a large impediment to performance; see Memory wall.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10890306",
"title": "Explicit data graph execution",
"section": "Section::::Traditional designs.:Internal parallelism.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 621,
"text": "In the 1990s the chip design and fabrication process grew to the point where it was possible to build a commodity processor with every potential feature built into it. To improve performance, CPU designs started adding internal parallelism, becoming \"superscalar\". In any program there are instructions that work on unrelated data, so by adding more functional units these instructions can be run at the same time. A new portion of the CPU, the \"scheduler\", looks for these independent instructions and feeds them into the units, taking their outputs and re-ordering them so externally it appears they ran in succession.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "284528",
"title": "Bitboard",
"section": "Section::::General technical advantages and disadvantages.:Processor use.:Pros.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 503,
"text": "The advantage of the bitboard representation is that it takes advantage of the essential logical bitwise operations available on nearly all CPUs that complete in one cycle and are fully pipelined and cached etc. Nearly all CPUs have AND, OR, NOR, and XOR. Many CPUs have additional bit instructions, such as finding the \"first\" bit, that make bitboard operations even more efficient. If they do not have instructions well known algorithms can perform some \"magic\" transformations that do these quickly.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2b229e
|
why do my teeth hurt when i eat sugary candy (taffy, tootsie rolls...)
|
[
{
"answer": "Sounds like cavity creeps",
"provenance": null
},
{
"answer": "Almost certainly a cavity. Your saliva dissolves the sugar and the liquid sugar mix can get into the smallest of places, so it may only be a really really tiny cavity, but still something to get checked out.",
"provenance": null
},
{
"answer": "Hey sugar tits!",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "42259610",
"title": "Candy making",
"section": "Section::::Occupational hazards.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 396,
"text": "Making candy can be hazardous due to the use of boiled sugar and melted chocolate. Boiling sugar often exceeds —hotter than most cooked foods—and the sugar tends to stick to the skin, causing burns and blisters upon skin contact. Worker safety programs focus on reducing contact between workers and hot food or hot equipment, and reducing splashing, because even small splashes can cause burns. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2347159",
"title": "Divinity (confectionery)",
"section": "Section::::Weather and altitude.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 257,
"text": "Due to the high amounts of sugar, divinity acts like a sponge. If the environment is very humid (over 50%) the candy will absorb moisture from the air, remaining gooey. Under the right conditions, it is a soft, white candy which should be dry to the touch.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "61230",
"title": "Candy",
"section": "Section::::Health effects.:Glycemic index.\n",
"start_paragraph_id": 56,
"start_character": 0,
"end_paragraph_id": 56,
"end_character": 276,
"text": "Most candy, particularly low-fat and fat-free candy, has a high glycemic index (GI), which means that it causes a rapid rise in blood sugar levels after ingestion. This is chiefly a concern for people with diabetes, but could also be dangerous to the health of non-diabetics.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "42307510",
"title": "Confetti candy",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 324,
"text": "Confetti candy is a confectionery food product that is prepared with cooked sugar and corn syrup that is formed into sheets, cooled, and then cracked or broken into pieces. It has a hard, brittle texture. To add eye appeal, colored sugar is sometimes sprinkled atop after the cooking and shaping process has been performed.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "507776",
"title": "Cotton candy",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 305,
"text": "The candy is made by heating and liquefying sugar, spinning it centrifugally through minute holes — and finally allowing the sugar to rapidly cool and re-solidify into fine strands. It is often sold at fairs, circuses, carnivals, and festivals — served on either a stick, paper cone or in a plastic bag. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31247676",
"title": "Hard candy",
"section": "Section::::Medicinal use.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 287,
"text": "Hard candies are historically associated with cough drops. The extended flavor release of lozenge-type candy, which mirrors the properties of modern cough drops, had long been appreciated. Many apothecaries used sugar candy to make their prescriptions more palatable to their customers.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "61230",
"title": "Candy",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 328,
"text": "Candy, also called sweets or lollies, is a confection that features sugar as a principal ingredient. The category, called \"sugar confectionery\", encompasses any sweet confection, including chocolate, chewing gum, and sugar candy. Vegetables, fruit, or nuts which have been glazed and coated with sugar are said to be \"candied\".\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
ainsgc
|
Intelligence documents from WWII and after from my Grandfather
|
[
{
"answer": "First and foremost, it's amazing that they've been kept. Please do what you can to make sure they don't get damaged and to minimize any existing damage. Original documents are often a wonderful treasure trove to many historians. Often even mundane documents can lead to some critical insight that nobody would have suspected at the time or even a century later. So please make sure they stay safe until they can be properly inventoried/scanned/saved.\n\nIt is also possible that they are of little actual value outside of the connection to your grandfather. Still, better safe than sorry. Scanning the documents (provided the scanner itself does no damage) would probably be a good place to start, as it preserves what is there.\n\nNot sure about the UK; if this were the US, I would probably look at whatever preservation society/museum the particular unit he served in might have. I know that several army, navy, and air force units in the US have museums and archives at their home bases. That might be a place to start in the UK as well.\n\n ",
"provenance": null
},
{
"answer": "If there's a local museum or historical society it might be worth contacting them to see if they can help or direct you somewhere. I'd definitely advise giving them a quick read through to see if there's anything particularly interesting in there that you can mention.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "12966709",
"title": "Defense Technical Information Center",
"section": "Section::::History.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 385,
"text": "Established in June 1945 as the Air Documents Research Center (ADRC), the agency's first mission was to collect German air documents. The documents collected were divided into three categories: documents that would assist the war in the Pacific theater, documents of immediate intelligence interest to the United States or British forces and documents of interest for future research.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5107147",
"title": "Black Dispatches",
"section": "",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 530,
"text": "In Washington, the War Department turned over portions of its intelligence files to many of the participants involved. Most of these records were subsequently destroyed or lost. Thus, accounts by individuals of their parts in the war, or official papers focusing on larger subjects, such as military official correspondence, have become important sources of information on intelligence activities. Due to the lack of supporting documents, much of this information is difficult to substantiate or place in perspective and context.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "91171",
"title": "Hitler Diaries",
"section": "Section::::Initial testing and verification; steps towards publication.\n",
"start_paragraph_id": 44,
"start_character": 0,
"end_paragraph_id": 44,
"end_character": 700,
"text": "The first historian to examine the diaries was Hugh Trevor-Roper, who was cautious, but impressed with the volume of documentation in front of him. As the background to the acquisition was explained to him he became less doubtful; he was falsely informed that the paper had been chemically tested and been shown to be pre-war, and he was told that \"Stern\" knew the identity of the officer who had rescued the documents from the plane and had stored them ever since. By the end of the meeting he was convinced that the diaries were genuine, and later said \"who, I asked myself, would forge sixty volumes when six would have served his purpose?\" In an article in \"The Times\" on 23 April 1983 he wrote:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31748",
"title": "Ultra",
"section": "Section::::Bibliography.\n",
"start_paragraph_id": 123,
"start_character": 0,
"end_paragraph_id": 123,
"end_character": 273,
"text": "BULLET::::- The first published account of the previously secret wartime operation, concentrating mainly on distribution of intelligence. It was written from memory and has been shown by subsequent authors, who had access to official records, to contain some inaccuracies.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "42139371",
"title": "British intelligence agencies",
"section": "Section::::References.:Bibliography.\n",
"start_paragraph_id": 33,
"start_character": 0,
"end_paragraph_id": 33,
"end_character": 273,
"text": "BULLET::::- The first published account of the previously secret wartime operation, concentrating mainly on distribution of intelligence. It was written from memory and has been shown by subsequent authors, who had access to official records, to contain some inaccuracies.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4689823",
"title": "The History of the Counter Intelligence Corps",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 286,
"text": "An 18-part series of declassified documents edited by John Mendelsohn and titled \"Covert Warfare: Intelligence, counterintelligence, and military deception during the World War II era\" was published in 1989. Part 11 was also named \"The History of the Counter Intelligence Corps (CIC)\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17103",
"title": "Kurt Waldheim",
"section": "Section::::Presidency of Austria.:Election and Waldheim Affair.\n",
"start_paragraph_id": 39,
"start_character": 0,
"end_paragraph_id": 39,
"end_character": 475,
"text": "Declassified documents from the U.S. Central Intelligence Agency show that the CIA had been aware of some details of his wartime past since 1945. Information about Waldheim's wartime past was also previously published by a pro-German Austrian newspaper, \"Salzburger Volksblatt\", during the 1971 presidential election campaign, including the claim of an SS membership, but the matter was supposedly regarded as unimportant or even advantageous for the candidate at that time.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1pi2xf
|
what does it mean when someone is "in shock"?
|
[
{
"answer": "The term \"shock\" is used very frequently in the medical world. It is when the organs within a person's body aren't getting enough oxygen. If a person is in shock for too long, the organs lacking oxygen will be permanently damaged, possibly leading to death.\n\nEdit: Just to be clear, I haven't seen the video mentioned by the OP. I'm referring to the term \"shock\" strictly as it is used in a medical environment. ",
"provenance": null
},
{
"answer": "Shock is a legitimate problem and can be life threatening if not treated properly and quickly.\n\nThere are a number of causes that can trigger it, but it is basically a significant drop in blood pressure. Anxiety, chest pain, confusion, heavy sweating, shallow breathing, passing out, lack of response to trauma... all can be symptoms of shock.",
"provenance": null
},
{
"answer": "Nobody has given a unified answer so I will try. Basically there are two kinds of shock that somebody could be referring to.\n\n/u/upvoter222 and /u/someanonymousaccnt have alluded to **physiological shock**, a condition in which you are unable to maintain blood pressure, to the extent that it becomes life-threatening. There's a huge list of causes but the familiar terms **septic shock** and **toxic shock syndrome** fit into this category.\n\nWhen you talk about somebody appearing calm after suffering some calamity, this probably refers to some form of **[dissociative episode](_URL_1_)**, which is a psychological response to overwhelming pain / emotional stress / other psychological stress. Some people describe the experience as being *dream-like* or as if they were watching it *happen to somebody else* and not to themselves. Dissociative episodes are one of the symptoms of **[PTSD](_URL_0_)**, though having an episode does not necessarily mean you have PTSD. It's also an effect of certain medications. This is what I think /u/redleadereu and /u/rafflecopter are referring to.\n\nIn your example, he could very well have both: physiologic shock from blood loss and psychological shock from pain.\n\nEDIT: grammar",
"provenance": null
},
{
"answer": "Shock is best defined as a state of hypoperfusion. Basically, in a normal state, your body moves blood around, through its network of arteries and veins, and delivers oxygen. Heart, Veins/Arteries, and Blood equal Pump, Pipes, and Pepsi. There is a problem with one of these three when a person is in shock... \nHypovolemic shock: Not enough blood. You are bleeding out. \nSeptic Shock: Systemic infection causes massive dilation of pipes, leading to relative hypovolemia. \nCardiogenic Shock: For some reason (there is a bunch) your heart isn't pumping effectively. \nNeurogenic Shock: A severe injury to the brain causes the vessels in the body to dilate (usually below a certain point, as related to injury location). This dilation again causes a relative hypovolemic state. \nAnaphylactic Shock: This is a severe allergic reaction, which causes a massive dilation of vessels, leading to, again, a relative hypovolemic state. \nPsychogenic Shock: A shock of various causes; this can occur when people are scared or witness something they deem horrible or whatever. This is only temporary. Usually these people may feel dizzy, lightheaded, faint, or feel the need to vomit. Best guess is it results from a rapid increase in sympathetic tone (basically, your fight-or-flight system just kicked into overdrive). \nAlong with that, true shock can be either \"compensated\" or \"decompensated\". Compensated shock is when your body has these things happen, you are in a state of hypoperfusion, but your body is holding its own. For example, your heart rate increases, your body shunts blood to the core, you basically are dealing with the situation. At this point, a person's blood pressure is usually maintained. Decompensated shock is when these mechanisms start to fail. Your body can no longer keep up with the demands. Your heart can only beat so fast for so long. It is at this point that people's blood pressure usually begins to decline. This is when shit gets real. 
\n\nAs for people dealing with pain, it is mainly due to the compensatory mechanism of releasing catecholamines. Your body says \"oh, shit, this is bad...\" and you get flooded with adrenaline and things. \n\nEDIT: I spelled shit wrong, and my grammar sucks. Sorry. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "146311",
"title": "Shock (circulatory)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 332,
"text": "Shock is the state of not enough blood flow to the tissues of the body as a result of problems with the circulatory system. Initial symptoms may include weakness, fast heart rate, fast breathing, sweating, anxiety, and increased thirst. This may be followed by confusion, unconsciousness, or cardiac arrest as complications worsen.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4903581",
"title": "Nervous shock",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 580,
"text": "In English law, a nervous shock is a psychiatric / mental illness or injury inflicted upon a person by intentional or negligent actions or omissions of another. Often it is a psychiatric disorder triggered by witnessing an accident, for example an injury caused to one's parents or spouse. Although the term \"nervous shock\" has been described as \"inaccurate\" and \"misleading\", it continues to be applied as a useful abbreviation for a complex concept. The possibility of recovering damages for nervous shock, particularly caused by negligence, is strongly limited in English law.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1585862",
"title": "Acute stress reaction",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 494,
"text": "Acute stress reaction (also called acute stress disorder, psychological shock, mental shock, or simply shock) is a normal psychological response to a terrifying, traumatic, or surprising experience. It should not be confused with the unrelated circulatory condition of shock/hypoperfusion. Acute Stress Reaction is never fatal, but Acute stress reaction (ASR) may develop into delayed stress reaction (better known as Posttraumatic stress disorder, or PTSD) if stress is not correctly managed.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "146311",
"title": "Shock (circulatory)",
"section": "Section::::Cause.\n",
"start_paragraph_id": 33,
"start_character": 0,
"end_paragraph_id": 33,
"end_character": 363,
"text": "Shock is a common end point of many medical conditions. Shock itself is a life-threatening condition as a result of compromised body circulation. It has been divided into four main types based on the underlying cause: hypovolemic, distributive, cardiogenic, and obstructive. A few additional classifications are occasionally used including: endocrinologic shock.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1591617",
"title": "Stressor",
"section": "Section::::Biological Responses To Stressors.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 881,
"text": "Traumatic events or any type of shock to the body can cause an acute stress response disorder (ASD). The extent to which one experiences ASD depends on the extent of the shock. If the shock was pushed past a certain extreme after a particular period in time ASD can develop into what is commonly known as Post-traumatic stress disorder (PTSD). There are two ways that the body responds biologically in order to reduce the amount of stress an individual is experiencing. One thing that the body does to combat stressors is to create stress hormones, which in turn create energy reservoirs that are there in case a stressful event were to occur. The second way our biological components respond is through an individuals cells. Depending on the situation our cells obtain more energy in order to combat any negative stressor and any other activity those cells are involved in seize.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1888708",
"title": "Shock value",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 206,
"text": "Shock value is the potential of an image, text, action, or other form of communication, such as a public execution, to provoke a reaction of sharp disgust, shock, anger, fear, or similar negative emotions.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "394976",
"title": "Culture shock",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 589,
"text": "Culture shock is an experience a person may have when one moves to a cultural environment which is different from one's own; it is also the personal disorientation a person may feel when experiencing an unfamiliar way of life due to immigration or a visit to a new country, a move between social environments, or simply transition to another type of life. One of the most common causes of culture shock involves individuals in a foreign environment. Culture shock can be described as consisting of at least one of four distinct phases: honeymoon, negotiation, adjustment, and adaptation. \n",
"bleu_score": null,
"meta": null
}
]
}
] | null |