Columns: id: string (length 5-6); input: string (length 3-301); output: list; meta: null
8qo2nl
why does the connection strength between a phone and a wifi router fluctuate, even when neither is being touched?
[ { "answer": "Your connection strength isn't just determined by the strength of the signal between you and your phone, it is also impacted by the noise in the environment\n\nWhile the signal strength may remain constant, if the noise increases because of a leaky microwave or increased WiFi traffic from your neighbor's router then the signal to noise ratio drops and you router may need to send messages slower to ensure that they get through to your phone", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1618265", "title": "Cellular traffic", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 656, "text": "A mobile handset which is moving in a cell will record a signal strength that varies. Signal strength is subject to slow fading, fast fading and interference from other signals, resulting in degradation of the carrier-to-interference ratio (C/I). A high C/I ratio yields quality communication. A good C/I ratio is achieved in cellular systems by using optimum power levels through the power control of most links. When carrier power is too high, excessive interference is created, degrading the C/I ratio for other traffic and reducing the traffic capacity of the radio subsystem. When carrier power is too low, C/I is too low and QoS targets are not met.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19108933", "title": "2.4 GHz radio use", "section": "Section::::Resolving interference.:Adding base stations.\n", "start_paragraph_id": 60, "start_character": 0, "end_paragraph_id": 60, "end_character": 735, "text": "All of the base stations in a wireless network should be set to the same SSID (which must be unique to all other networks within range) and plugged into the same logical Ethernet segment (one or more hubs or switches directly connected without IP routers). Wireless clients then automatically select the strongest access point from all those with the specified SSID, handing off from one to another as their relative signal strengths change. On many hardware and software implementations, this hand off can result in a short disruption in data transmission while the client and the new base station establish a connection. This potential disruption should be factored in when designing a network for low-latency services such as VoIP.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "46892412", "title": "Node deletion", "section": "Section::::Random deletion.:Erdős-Rényi model.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 382, "text": "The effect on the network connectedness is measured with the diameter of the network (the length of the longest shortest path between two nodes). When we remove a fraction f of nodes, the diameter of the network increases monotonically with f. This is because each node has approximately the same degree and thus contributes to the interconnectedness by relatively the same amount.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17282821", "title": "Mobile IPTV", "section": "Section::::Technical obstacles.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 372, "text": "The characteristics of the wireless link can vary due to a variety of causes, and the rate of change can be very abrupt. For example, vertical handover can quickly change the path between the source and sink, bandwidth, physical MAC address, IP address. 
Therefore, some solutions devised for the relatively static wired computer network environment may not work properly.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "45199965", "title": "Energy proportional computing", "section": "Section::::Research in energy proportional computing.:Networks.\n", "start_paragraph_id": 38, "start_character": 0, "end_paragraph_id": 38, "end_character": 1094, "text": "Networks are emphasized as a key component that are very energy disproportional and contribute to poor cluster and datacenter-level energy proportionality, especially as other components in a server and datacenter become more energy proportional. The main reason they are not energy proportional is because networking elements are conventionally always on due to the way routing protocols are designed, and the unpredictability of message traffic. Clearly, links cannot be shut down entirely when not in use due to the adverse impact this would make on routing algorithms (the links would be seen as faulty or missing, causing bandwidth and load balancing issues in the larger network). Furthermore, the latency and energy penalties that are typically incurred from switching hardware to low power modes would likely degrade both overall network performance and perhaps energy. Thus, like in other systems, energy proportionality of networks will require the development of active performance scaling features, that do not require idle power-down states to save energy when utilization is low.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13028025", "title": "Flat network", "section": "Section::::Drawbacks.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 471, "text": "BULLET::::- Scalability and speed – Connecting all the devices to one central switch, either directly or through hubs, increases the potential for collisions (due to hubs), reduced speed at which the data can be transmitted and additional time for the central switch to process the data. It also scales badly and increases the chance of the network failing if excessive hubs are used and there are not enough switches to control the flow of the data through the network.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "51724903", "title": "Progetto neco", "section": "Section::::Technology.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 240, "text": "Every node in the network is made up by two or more radio interfaces, so that is possible receiving the signal and replaying it to one or more node on a different radio frequency, in order to decrease interferences and increase throughput.\n", "bleu_score": null, "meta": null } ] } ]
null
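The first answer's point (a constant signal can still yield a slower link when the noise floor rises) can be made concrete with the Shannon-Hartley capacity bound, C = B * log2(1 + S/N). Below is a minimal sketch; the bandwidth and power figures are illustrative assumptions, not measurements from any real router.

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, signal_mw: float, noise_mw: float) -> float:
    """Upper bound on achievable data rate for a given bandwidth and SNR
    (Shannon-Hartley theorem): C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + signal_mw / noise_mw)

# Illustrative numbers only: a 20 MHz Wi-Fi channel with constant received
# signal power but a rising noise floor (e.g. a neighbour's router or a
# leaky microwave). The signal never changes, yet the achievable rate drops.
bandwidth = 20e6          # 20 MHz channel
signal = 1e-6             # received signal power, arbitrary units
for noise in (1e-9, 1e-8, 1e-7):
    capacity = shannon_capacity_bps(bandwidth, signal, noise)
    print(f"noise={noise:.0e}  SNR={signal/noise:.0f}  "
          f"capacity≈{capacity/1e6:.1f} Mbit/s")
```

Running it shows the capacity falling from roughly 200 Mbit/s to roughly 70 Mbit/s as the SNR drops from 1000 to 10, which is the "router sends messages slower so they still get through" behaviour described above.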
oki57
Would taking a cellulase supplement, the way lactose-intolerant people take lactase, allow a human to eat grass and other non-digestible plant matter? If so, could this be a way of addressing world hunger?
[ { "answer": "Potentially. It's questionable whether it could provide complete nutrition on its own though, there are limited studies of use of cellulase in the diet of farm animals which so far have only been mildly exploratory.\n\nHowever, world hunger in the modern age is rarely a problem of production but rather of distribution (typically due to the lack of rule of law in an area, or due to intentionally created starvation). If you had the means to ship a bottle of cellulase supplements to a starving population then you would also have the capability to ship them rice, so it's rather a moot point whether or not it would be possible.", "provenance": null }, { "answer": "No. There is much more to getting nutrition from cellulose than just having the right enzymes.\n\nThe digestive systems of animals are quite variable and a system is based on what that animal eats. A cow digests nutrient-poor grass, and has 4 specialized compartments in its large stomach. The last stomach is the \"true stomach\" with the acids and enzymes, and the stomachs before it mostly serve to break down the fiber. (There is also a process where the cow pukes up partially digested grass, chews it, and swallows it back down). The intestines of a cow are very long to extract more nutrients. The multi-compartmented stomach is common to grazing animals, and they are called \"ruminants.\"\n\nHumans are not ruminants, so if we were to digest grass, we'd have to have the anatomy of the non-ruminants that can digest grass, like horses and rabbits. These animals have an enlarged cecum (part of the colon) where bacteria ferment and break down their food. For all intents and purposes, humans do not have a cecum.\n\nLactose is a simpler compound than cellulose, and breaks down more easily. (With the right enzymes). Cellulose is complex and durable, because it's basically the skeleton of a plant. (It forms the cell wall). Digesting it requires a much greater degree of adaptation than digesting lactose. Even if humans could break down the grass, our system wouldn't convert it to energy effectively enough for it sustain us.\n\nEDIT: And I never addressed the premise of your question-if it's even possible for a cellulase supplement to cause the human digestive system to produce the specific enzymes needed to break down cellulose. I don't know enough about enzymes to really give a good answer for this, but I do have my suspicions it wouldn't work. Does anybody more qualified have an answer to this?\n\n\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "165423", "title": "Digestion", "section": "Section::::Breakdown into nutrients.:Carbohydrate digestion.\n", "start_paragraph_id": 74, "start_character": 0, "end_paragraph_id": 74, "end_character": 562, "text": "Lactase is an enzyme that breaks down the disaccharide lactose to its component parts, glucose and galactose. Glucose and galactose can be absorbed by the small intestine. Approximately 65 percent of the adult population produce only small amounts of lactase and are unable to eat unfermented milk-based foods. This is commonly known as lactose intolerance. 
Lactose intolerance varies widely by genetic heritage; more than 90 percent of peoples of east Asian descent are lactose intolerant, in contrast to about 5 percent of people of northern European descent.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "342457", "title": "Cellulase", "section": "Section::::Types and action.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 377, "text": "Cellulase action is considered to be synergistic as all three classes of cellulase can yield much more sugar than the addition of all three separately. Aside from ruminants, most animals (including humans) do not produce cellulase in their bodies and can only partially break down cellulose through fermentation, limiting their ability to use energy in fibrous plant material.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2644987", "title": "Lactase persistence", "section": "Section::::Evolutionary advantages.:Benefits of being lactase persistent in adulthood.\n", "start_paragraph_id": 34, "start_character": 0, "end_paragraph_id": 34, "end_character": 1022, "text": "The consumption of lactose has been shown to benefit humans with lactase persistence through adulthood. For example, the 2009 British Women's Heart and Health Study investigated the effects on women's health of the alleles that coded for lactase persistence. Where the C allele indicated lactase nonpersistence and the T allele indicated lactase persistence, the study found that women who were homozygous for the C allele exhibited worse health than women with a C and a T allele and women with two T alleles. Women who were CC reported more hip and wrist fractures, more osteoporosis, and more cataracts than the other groups. They also were on average 4–6 mm shorter than the other women, as well as slightly lighter in weight. In addition, factors such as metabolic traits, socioeconomic status, lifestyle, and fertility were found to be unrelated to the findings, thus it can be concluded that the lactase persistence benefited the health of these women who consumed dairy products and exhibited lactase persistence.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "736674", "title": "Enterocyte", "section": "Section::::Disorders.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 513, "text": "BULLET::::- Lactose intolerance is the most common problem of carbohydrate digestion and occurs when the human body doesn't produce a sufficient amount of lactase (a disaccharidase) enzyme to break down the sugar lactose found in dairy. As a result of this deficiency, undigested lactose is not absorbed and is instead passed on to the colon. There bacteria metabolize the lactose and in doing so release gas and metabolic products that enhance colonic motility. This causes gas and other uncomfortable symptoms.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "56873", "title": "Lactose intolerance", "section": "Section::::Management.\n", "start_paragraph_id": 42, "start_character": 0, "end_paragraph_id": 42, "end_character": 654, "text": "People with primary lactase deficiency cannot modify their body’s ability to produce lactase. In societies where lactose intolerance is the norm, it is not considered a condition that requires treatment. However, where dairy is a larger component of the normal diet, a number of efforts may be useful. 
There are four general principles in dealing with lactose intolerance: avoidance of dietary lactose, substitution to maintain nutrient intake, regulation of calcium intake, and use of enzyme substitute. Regular consumption of dairy food by lactase deficient individuals may also reduce symptoms of intolerance by promoting colonic bacteria adaptation.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "342457", "title": "Cellulase", "section": "Section::::Uses.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 461, "text": "Cellulase is used in the fermentation of biomass into biofuels, although this process is relatively experimental at present. Medically, Cellulase is used as a treatment for phytobezoars, a form of cellulose bezoar found in the human stomach, and it has exhibited efficacy in degrading polymicrobial bacterial biofilms by hydrolyzing the β(1-4) glycosidic linkages within the structural, matrix exopolysaccharides of the extracellular polymeric substance (EPS).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "45438729", "title": "Feeding Everyone No Matter What", "section": "Section::::Claims.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 423, "text": "Cellulosic biofuel production typically already creates sugar as an intermediate product. There are edible calories in leaves, but there is too much dietary fiber, so solutions include making tea, chewing and not swallowing the solids, and making leaf protein concentrate. Biomass can be predigested by bacteria so that animals that are poor at digesting cellulose can derive nutrition, such as rats and possibly chickens.\n", "bleu_score": null, "meta": null } ] } ]
null
9wr413
how does a smart phone respond to a touch?
[ { "answer": "The inside of the screen carries a small charge. Placing an object, like a finger, on the outside that can act as the second half of a capacitor and distorts the charge pattern. The position of the distortion can be traced by the circuitry to know what action to hairdressing on its location\n\nThink about those toys with the plasma inside where the arc follows your finger as you move it on the outside of the glass globe.\n\nEDIT - Damn you autocorrect and touchscreen tablet! I'll leave it there for the giggles. Should have been \" action to take depending on its location\" or something like that.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "8859863", "title": "Multi-touch", "section": "Section::::Implementations.\n", "start_paragraph_id": 34, "start_character": 0, "end_paragraph_id": 34, "end_character": 302, "text": "Handheld technologies use a panel that carries an electrical charge. When a finger touches the screen, the touch disrupts the panel's electrical field. The disruption is registered as a computer event (gesture) and may be sent to the software, which may then initiates a response to the gesture event.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8841749", "title": "IPhone", "section": "Section::::Hardware.:Sensors.:Proximity sensor.\n", "start_paragraph_id": 68, "start_character": 0, "end_paragraph_id": 68, "end_character": 215, "text": "A proximity sensor deactivates the display and touchscreen when the device is brought near the face during a call. This is done to save battery power and to prevent inadvertent inputs from the user's face and ears.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9672072", "title": "Mobile campaign", "section": "", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 292, "text": "BULLET::::- An sms, email or app alert is sent to a mobile user, generally with a telephone number to make a one touch connection point possible. Advertisers find this a great way to inform customers about their products or to enter into a conversation about the list of products they offer.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "47198412", "title": "Force Touch", "section": "Section::::Mechanics.\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 1157, "text": "The touch sensitive interface could either be a trackpad or a touch screen. Multiple actuators are mechanically connected to the back of the input surface. The actuators are distributed along the surface, each at a separate contact location, to provide localised haptic feedback to the user. Piezoelectricity is used by actuators to convert the physically-induced vibration or the input displacement into an electrical signal. A controller is configured to activate the actuators in and around the point of contact. The actuators at the point of contact induces waveforms to produce vibration. However, since there are multiple actuators around the point of contact, the vibration can propagate to other locations, thus limiting the localisation effect. This is why a second set of actuators induce waveforms to suppress the vibratory crosstalk produced by the first set of actuators. This maybe achieved by producing waveforms that provides interference in terms of amplitude, frequency or both. 
The masking waveforms could also alter the vibration at contact locations by providing a user experience other than just suppressing the propagated vibrations.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25997913", "title": "Tactile sensor", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 539, "text": "A tactile sensor is a device that measures information arising from physical interaction with its environment. Tactile sensors are generally modeled after the biological sense of cutaneous touch which is capable of detecting stimuli resulting from mechanical stimulation, temperature, and pain (although pain sensing is not common in artificial tactile sensors). Tactile sensors are used in robotics, computer hardware and security systems. A common application of tactile sensors is in touchscreen devices on mobile phones and computing.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5314955", "title": "Body capacitance", "section": "Section::::Touch sensors.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 466, "text": "A capacitive touch sensor responds to close approach (but not force of touch) of a part of a human body, usually a fingertip. The capacitance between the device itself and the fingertip is sensed. Capacitive touch screens don't require applying any force to their surfaces, which makes them easier to use and design in some respects. Furthermore, because of body capacitance, people act as good antennas, and some small televisions use people to enhance reception. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12958553", "title": "Eimer's organ", "section": "Section::::Function.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 386, "text": "Today it is still not understood precisely how these receptors convert touch into the electrical signals that the nerve fibres transmit to the brain. Interesting are the properties of touch, e.g. frequency and force, to which the receptors respond and how their responsiveness changes with prolonged stimulation. The receptors can be functionally distinguished based on these features:\n", "bleu_score": null, "meta": null } ] } ]
null
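The accepted answer describes a charged panel whose field a fingertip distorts, with circuitry locating the distortion. A hedged sketch of how touch-controller firmware might do that on a mutual-capacitance grid: scan the cells, subtract a no-touch baseline, and take the weighted centroid of cells whose deviation exceeds a threshold. The function name, grid size, and counts below are hypothetical, not any vendor's API.

```python
def locate_touch(readings, baseline, threshold=5):
    """Very simplified touch-location logic for a capacitance grid.

    `readings` and `baseline` are row-major 2-D lists of raw capacitance
    counts; a finger shifts the count at nearby electrodes, so we look at the
    deviation from the no-touch baseline and return the centroid of cells
    whose deviation exceeds `threshold`, or None if nothing is touched.
    """
    total = sum_r = sum_c = 0.0
    for r, row in enumerate(readings):
        for c, value in enumerate(row):
            delta = abs(value - baseline[r][c])
            if delta > threshold:
                total += delta
                sum_r += r * delta
                sum_c += c * delta
    if total == 0:
        return None
    return (sum_r / total, sum_c / total)   # weighted (row, col) position

# Hypothetical 4x4 sensor: flat baseline, one bump in the readings near (1, 2).
baseline = [[100] * 4 for _ in range(4)]
readings = [[100, 100, 100, 100],
            [100, 104, 118, 106],
            [100, 102, 109, 101],
            [100, 100, 100, 100]]
print(locate_touch(readings, baseline))     # roughly (1.3, 2.2)
```

Real controllers add filtering, multi-touch peak separation, and palm rejection, but the core idea, locating the largest disturbance in the field, is the same.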
48h5gn
the relation between inflation, bill denomination and printing money.
[ { "answer": "All money only exists as a matter of 'trust' by us, of it's ability to buy a 'thing' we want. It is the intermediary between two barters. It has no intrinsic value whatsoever. All money is 'printed' or 'created' out of fresh air by a statutory source (central bank or whomever) and leaked out into the economy by loaning it to other banks, who loan it out to you, your employers and other financial institutions. Inflation is the thing that occurs when our bartering is inequitable. This is when Mr. X sells his car to Mrs. Y, for more than it's actual worth.... Mr.X makes a 'profit' from Mrs.Y, who is in turn less well off than she should be if the deal was 'fair' Mr.X will use this extra potential to lessen the value of all other money, by now using it on other items he would not have been able to afford if he had made a 'fair trade'. This goes on all the time, millions of times over and the value of each unit of money is thus devalued. The Nation is not 'worth' less though, so the creators of the barter intermediary component, money, need to put more of the stuff back into the system to 'top it back up' to reflect the estimated 'worth' of Everything . OK, maybe a bit over simplistic, but it pretty much explains it. In the Old Days, when things were simpler and Gold was the Standard which determined the value of everything, because of it's rarity and almost constant quantity, money remained worth it's original value far longer !", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "38286", "title": "Inflation", "section": "Section::::Related definitions.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 551, "text": "Conceptually, inflation refers to the general trend of prices, not changes in any specific price. For example, if people choose to buy more cucumbers than tomatoes, cucumbers consequently become more expensive and tomatoes cheaper. These changes are not related to inflation; they reflect a shift in tastes. Inflation is related to the value of currency itself. When currency was linked with gold, if new gold deposits were found, the price of gold and the value of currency would fall, and consequently prices of all other goods would become higher.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "38286", "title": "Inflation", "section": "Section::::History.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 882, "text": "By the nineteenth century, economists categorized three separate factors that cause a rise or fall in the price of goods: a change in the \"value\" or production costs of the good, a change in the \"price of money\" which then was usually a fluctuation in the commodity price of the metallic content in the currency, and \"currency depreciation\" resulting from an increased supply of currency relative to the quantity of redeemable metal backing the currency. Following the proliferation of private banknote currency printed during the American Civil War, the term \"inflation\" started to appear as a direct reference to the \"currency depreciation\" that occurred as the quantity of redeemable banknotes outstripped the quantity of metal available for their redemption. 
At that time, the term inflation referred to the devaluation of the currency, and not to a rise in the price of goods.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23587022", "title": "List of taxes", "section": "Section::::Effective taxes.\n", "start_paragraph_id": 89, "start_character": 0, "end_paragraph_id": 89, "end_character": 290, "text": "BULLET::::- Inflation tax is the value lost by inflation, by holders of cash and those on fixed incomes. Inflation causes those holding cash to lose money by reducing its real value, but at the same time, reduces the amount owed by debtors because the real value of the debt has decreased.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "38286", "title": "Inflation", "section": "Section::::Causes.:Keynesian view.\n", "start_paragraph_id": 56, "start_character": 0, "end_paragraph_id": 56, "end_character": 723, "text": "The effect of money on inflation is most obvious when governments finance spending in a crisis, such as a civil war, by printing money excessively. This sometimes leads to hyperinflation, a condition where prices can double in a month or less. The money supply is also thought to play a major role in determining moderate levels of inflation, although there are differences of opinion on how important it is. For example, monetarist economists believe that the link is very strong; Keynesian economists, by contrast, typically emphasize the role of aggregate demand in the economy rather than the money supply in determining inflation. That is, for Keynesians, the money supply is only one determinant of aggregate demand.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12229474", "title": "Inflation accounting", "section": "Section::::Historical cost basis in financial statements.:Measuring unit principle.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 728, "text": "Under a historical cost-based system of accounting, inflation leads to two basic problems. First, many of the historical numbers appearing on financial statements are not economically relevant because prices have changed since they were incurred. Second, since the numbers on financial statements represent dollars expended at different points of time and, in turn, embody different amounts of purchasing power, they are simply not additive. Hence, adding cash of $10,000 held on December 31, 2002, with $10,000 representing the cost of land acquired in 1955 (when the price level was significantly lower) is a dubious operation because of the significantly different amount of purchasing power represented by the two numbers. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "38286", "title": "Inflation", "section": "Section::::Related definitions.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 658, "text": "The term \"inflation\" originally referred to a rise in the general price level caused by an imbalance between the quantity of money and trade needs. However, it is common for economists today to use the term \"inflation\" to refer to a rise in the price level. An increase in the money supply may be called monetary inflation, to distinguish it from rising prices, which may also for clarity be called \"price inflation\". Economists generally agree that in the long run, inflation is caused by increases in the money supply. Federal Reserve Board's semiannual Monetary Policy Report to the Congress. 
Introductory statement by Jean-Claude Trichet on July 1, 2004\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "33237661", "title": "Statement of changes in financial position", "section": "Section::::Significant.\n", "start_paragraph_id": 50, "start_character": 0, "end_paragraph_id": 50, "end_character": 493, "text": "When financial information is presented in nominal (low inflation), the change in the balance sheet of monetary equivalent to the cash flow generated or invested in such items, however, when inflation is significant and requires the expression of the financial statements in pesos of purchasing power, the change in constant pesos of monetary balance involves not only the cash flow or profit erosion that inflation (monetary effect) resulted in the effect created or invested in these items.\n", "bleu_score": null, "meta": null } ] } ]
null
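The record above repeatedly ties inflation to the size of the money supply. A worked example using the textbook quantity-of-money identity M*V = P*Q (money stock times velocity equals price level times real output) makes the proportionality explicit. This is a deliberately simplified identity with invented numbers, not a forecasting model.

```python
def price_level(money_supply, velocity, real_output):
    """Quantity-of-money identity M*V = P*Q, solved for the price level P.
    A simplified textbook relation with illustrative inputs only."""
    return money_supply * velocity / real_output

# Doubling M while V and Q stay fixed doubles P, i.e. each unit of
# currency buys half as much as before, regardless of bill denomination.
v, q = 2.0, 1000.0
for m in (500.0, 1000.0):
    print(f"M={m:>6.0f}  ->  P={price_level(m, v, q):.2f}")
```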
2brpff
can a planet without conditions similar to earth support any life at all, or have we just not found organisms that require other conditions?
[ { "answer": "It's the latter. We only know life on earth -- life as we know it. It's possible there could be exotic forms of life elsewhere, but we have no idea what it would look like or how to find it. So we focus on the life we understand, because that's what we're best at finding.\n\nTo some degree though, we know the chemicals our life is based on are probably some of the best for \"life\". Carbon has specific properties that make it very nice for replication and other life functions. Silicon is very similar to carbon, which is why you often hear about people looking for \"silicon based life\" -- it's the most likely other element that could produce organisms that follow similar mechanics as ours.", "provenance": null }, { "answer": "Life manages very well without oxygen, evolving into flourishing communities of anaerobes. Acidity... presents no problem, as sulphur bacteria and their co-habitants illustrate, nor does a considerable degree of alkalinity bother alkophiles.... Water purity is a trivial matter: saturated salt brines support abundant bacterial life. And pressure is quite irrelevant, with bacteria growing happily in a near vacuum or at the huge hydrostatic pressure of deep ocean trenches. Temperature, too, presents little problem: boiling hot springs support bacterial life, and bacteria have been found growing at 112 C in superheated geothermal water under hydrostatic pressure; conversely, other types of bacteria thrive at well below zero, provided the water is salty enough not to freeze. And even if they do get frozen, many bacteria revive when their habitat thaws. Even organic food is not a prerequisite....", "provenance": null }, { "answer": "\"Uninhabitable\" generally means \"humans can't live there\" because we're pretty anthropocentric that way. It's entirely possible that there are life-forms out there who would find the planet Earth uninhabitable.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "6410946", "title": "Atmosphere of Venus", "section": "Section::::Possibility of life.\n", "start_paragraph_id": 45, "start_character": 0, "end_paragraph_id": 45, "end_character": 631, "text": "Due to the harsh conditions on the surface, little of the planet has been explored; in addition to the fact that life as currently understood may not necessarily be the same in other parts of the universe, the extent of the tenacity of life on Earth itself has not yet been shown. Creatures known as extremophiles exist on Earth, preferring extreme habitats. Thermophiles and hyperthermophiles thrive at temperatures reaching above the boiling point of water, acidophiles thrive at a pH level of 3 or below, polyextremophiles can survive a varied number of extreme conditions, and many other types of extremophiles exist on Earth.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25395805", "title": "Monsteca Corral", "section": "Section::::Gameplay.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 294, "text": "Lifeforms do not live just anywhere on the surface of a planet randomly. Each species occupies a definite set of surroundings, or environment, to which it is adapted. 
It cannot survive for long outside the limits of that environment because it can no longer obtain what it requires to survive.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9228", "title": "Earth", "section": "Section::::Habitability.\n", "start_paragraph_id": 91, "start_character": 0, "end_paragraph_id": 91, "end_character": 468, "text": "A planet that can sustain life is termed habitable, even if life did not originate there. Earth provides liquid water—an environment where complex organic molecules can assemble and interact, and sufficient energy to sustain metabolism. The distance of Earth from the Sun, as well as its orbital eccentricity, rate of rotation, axial tilt, geological history, sustaining atmosphere, and magnetic field all contribute to the current climatic conditions at the surface.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1072751", "title": "Circumstellar habitable zone", "section": "Section::::Significance for complex and intelligent life.\n", "start_paragraph_id": 76, "start_character": 0, "end_paragraph_id": 76, "end_character": 667, "text": "Species, including humans, known to possess animal cognition require large amounts of energy, and have adapted to specific conditions, including an abundance of atmospheric oxygen and the availability of large quantities of chemical energy synthesized from radiant energy. If humans are to colonize other planets, true Earth analogs in the CHZ are most likely to provide the closest natural habitat; this concept was the basis of Stephen H. Dole's 1964 study. With suitable temperature, gravity, atmospheric pressure and the presence of water, the necessity of spacesuits or space habitat analogues on the surface may be eliminated and complex Earth life can thrive.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "16953152", "title": "Extreme environment", "section": "Section::::Beyond Earth.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 435, "text": "Most of the moons and planets in the Solar System are also extreme environments. Astrobiologists have not yet found life in any environments beyond Earth, though experiments have shown that tardigrades can survive the harsh vacuum and intense radiation of outer space. The conceptual modification of conditions in locations beyond Earth, to make them more habitable by humans and other terrestrial organisms, is known as terraforming.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "827792", "title": "Rare Earth hypothesis", "section": "Section::::Requirements for complex life.:With the right arrangement of planets.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 360, "text": "Rare Earth proponents argue that a planetary system capable of sustaining complex life must be structured more or less like the Solar System, with small and rocky inner planets and outer gas giants. Without the protection of 'celestial vacuum cleaner' planets with strong gravitational pull, a planet would be subject to more catastrophic asteroid collisions.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "53010729", "title": "Mirror life", "section": "Section::::The concept.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 329, "text": "Hypothetically, it is possible to recreate an entire ecosystem from the bottom up, in chiral form. In this way, the creation of an Earth ecosystem without microbial diseases might be possible. 
In some distant future, mirror life could be employed to create robust, effective and disease-free ecosystems for use on other planets.\n", "bleu_score": null, "meta": null } ] } ]
null
1xsqbi
What were the economic structures of Mediterranean cities during the classical era?
[ { "answer": "The Roman Empire doesn't seem to have had a formal legal control of business in the same way Medieval cities did with guilds. The question of market fairs is a bit more complex. We know they existed, and were probably controlled through local town governing authorities, but it is difficult to know how this impacted business outside of market days. The undeniable existence of permanent streetfront shops argues against an idea of mercantile activity being overly restricted outside of those days.\n\nThe closest thing to a guild structure you will find are the *collegia*. I'll paste in a discussion I gave of them a few days ago:\n\n > The most well known labor organization of sorts in the Roman world was the collegium, which became prominent and important seemingly everywhere across the empire, although the specific modes of organization seem to have differed. For example, merchants in the East seem to have primarily organized themselves along communal lines (religious, ethnic, familial etc) while evidence from Lyon seems to point towards merchants organizing themselves along specific goods carried. This, of course, is not easily applicable to other professions, but it shows some of the diversity.\n\n > Anyway, collegia seem to have begun as religious and burial organizations, but they quickly acquired commercial and social characters. This is all rather difficult to untangle and requires using a lot of varied evidence. I'll just give three, to give an idea: In Egypt, the environmental conditions allow for the survival of documentary papyrus and so we know an awful lot about collegia there, and we see a great deal of market organization and negotiation in the documents. In Asia Minor, literary evidence allows us to see examples of certain workmen organizations opposing the activity of the imperial elite (specifically the orator Dio Chrysostom) and prevailing, their collective economic interests defeating a very well connected person's political interests. In Pompeii, we see graffiti showing the prominent social role of collegia, and we even have wall paintings of something like festival floats. The problem with integrating is that these are fundamentally different types of evidence—we have no grafitti and wall paintings in Egypt, no papyri in Asia Minor, no literary descriptions for Pompeii. So are the role of collegia the same everywhere, but we just have different sets of evidence? Or are they actually very different?\n\nThe *collegia* do not seem to have had the same sort of legal control over business as guilds did, but they may have wielded extensive social control, and they could directly control the activities of its members. For example, we have papyri from Egypt that describe how a particular *collegium* sold the right to engage in certain types of economic activity to one of its members, and the punishment for those who attempted to engage in that economic activity in competition with the person who bought the contract.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "45222463", "title": "History of the city", "section": "Section::::Middle Ages.\n", "start_paragraph_id": 42, "start_character": 0, "end_paragraph_id": 42, "end_character": 980, "text": "By the thirteenth and fourteenth centuries some cities become powerful states, taking surrounding areas under their control or establishing extensive maritime empires. In Italy medieval communes developed into city-states including the Republic of Venice and the Republic of Genoa. 
These cities, with populations in the tens of thousands, amassed enormous wealth by means of extensive trade in eastern luxury goods such as spices and silk, as well as iron, timber, and slaves. Venice introduced the \"ghetto\", a specially regulated neighborhood for Jews only. In Northern Europe, cities including Lübeck and Bruges formed the Hanseatic League for collective defense and commerce. Their power was later challenged and eclipsed by the Dutch commercial cities of Ghent, Ypres, and Amsterdam. (City rights were granted by nobility.) The city's central function was commerce, enabled by waterways and ports; the cities themselves were heavily fortified with walls and sometimes moats. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13584", "title": "History of the Mediterranean region", "section": "Section::::Middle Ages.:Islamic Golden Age.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 362, "text": "Between 831 and 1071, the Emirate of Sicily was one of the major centres of Islamic culture in the Mediterranean. After its conquest by the Christian Normans, the island developed its own distinct culture with the fusion of Latin and Byzantine influences. Palermo remained a leading artistic and commercial centre of the Mediterranean well into the Middle Ages.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "385155", "title": "Italians", "section": "Section::::History.:Rise of the city-states and the Renaissance.\n", "start_paragraph_id": 35, "start_character": 0, "end_paragraph_id": 35, "end_character": 469, "text": "During the 14th and 15th centuries, some Italian city-states ranked among the most important powers of Europe. Venice, in particular, had become a major maritime power, and the city-states as a group acted as a conduit for goods from the Byzantine and Islamic empires. In this capacity, they provided great impetus to the developing Renaissance, began in Florence in the 14th century, and led to an unparalleled flourishing of the arts, literature, music, and science.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "583117", "title": "Trade route", "section": "Section::::Historic trade routes.:Predominantly maritime routes.:Maritime republics' Mediterranean trade.\n", "start_paragraph_id": 60, "start_character": 0, "end_paragraph_id": 60, "end_character": 460, "text": "The economic growth of Europe around the year 1000, together with the lack of safety on the mainland trading routes, eased the development of major commercial routes along the coast of the Mediterranean. 
The growing independence of some coastal cities gave them a leading role in this commerce: Maritime Republics (Italian \"Repubbliche Marinare\") of Venice, Genoa, Amalfi, Pisa and Republic of Ragusa developed their own \"empires\" in the Mediterranean shores.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3513024", "title": "Castro culture", "section": "Section::::Economy and arts.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 379, "text": "In the southern coastal areas the presence of Mediterranean merchants from the 6th century BC onward, would have occasioned an increase in social inequality, bringing a large number of importations (fine pottery, fibulae, wine, glass and other products) and technological innovations, such as round granite millstones, which would have merged with the Atlantic local traditions.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3513024", "title": "Castro culture", "section": "Section::::History.:Second Iron Age.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 890, "text": "Although most of the communities of this period had mostly self-sufficient isolated economies, one important change was the return of trade with the Mediterranean by the now independent Carthage, a thriving Western Mediterranean power. Carthaginian merchants brought imports of wine, glass, pottery and other goods through a series of emporia, commercial post which sometimes included temples and other installations. At the same time, the archaeological register shows, through the finding of large quantities of fibulae, pins, pincers for hair extraction, pendants, earrings, torcs, bracelets, and other personal objects, the ongoing importance of the individual and his or her physical appearance. While the archaeological record of the Castro Iron Age suggests a very egalitarian society, these findings imply the development of a privileged class with better access to prestige items.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "246701", "title": "Thalassocracy", "section": "Section::::History and examples of thalassocracies.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 581, "text": "The Early Middle Ages ( 500 – 1000 AD) saw many of the coastal cities of the Mezzogiorno develop into minor thalassocracies whose chief powers lay in their ports and their ability to sail navies to defend friendly coasts and ravage enemy ones. These include the variously Greek and Lombard duchies of Gaeta, Naples, Salerno and Amalfi. Later, northern Italy developed its own trade empires based on Pisa and especially the powerful Republic of Genoa, that rivaled with Venice (these three, along with Amalfi, were to be called the \"Repubbliche marinare\", i.e. Maritime Republics).\n", "bleu_score": null, "meta": null } ] } ]
null
w507a
how come the train tracks don't blow up when it rains?
[ { "answer": " > Why hasn't a fuse blown/breaker tripped like it does when I throw my toaster in the bath.\n\nActually this does happen from time to time. It's pretty rare thanks to good engineering and the fact that most train systems have been around long enough for weak spots that are likely to cause this to have been identified and fixed.\n\nThe breakers in your house are designed with home living in mind, not with trains. The breakers that a train system would use will tolerate a huge amount of current flow before they trip. When it rains, some current does get grounded, but it's usually not enough for the electrical system to be affected. Remember, the third rail can throw off enough current to kill a person without the breakers ever tripping.\n\nAll that said, the breakers on a train system will likely also be much fancier that the ones in your home. There are breakers that can monitor for current leak that is not a train and lower the voltage to lessen the loss of current without opening the circuit. If there is a train going by, then it will bring the voltage back up and just tolerate the loss of current until the train is gone. Some systems can shut off entire sections of track if there won't be a train coming by anytime soon.\n\nIf this seems a bit wasteful to you... you are right it is. But there is a point of diminishing returns for fixing such a minor problem.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "32423465", "title": "Eurostar 9410 derailment", "section": "Section::::Weather over Apulia.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 273, "text": "The rain concerned railway officials, who feared that landslides could hit the hillside line serving the city. Local officers organized a patrol to check conditions of the railway, and drove a service vehicle along part of the track, returning minutes before the disaster.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1055777", "title": "Gandy dancer", "section": "Section::::History.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 407, "text": "Though rail tracks were held in place by wooden ties (\"sleepers\" outside the U.S. and Canada) and the mass of the crushed rock (\"ballast\") beneath them, each pass of a train around a curve would, through centripetal force and vibration, produce a tiny shift in the tracks, requiring that work crews periodically realign the track. If allowed to accumulate, such shifts could eventually cause a derailment. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23729028", "title": "Rudine derailment", "section": "Section::::Details.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 356, "text": "According to Croatian news reports, the cause of the derailment was slippery fire retardant that was just sprayed on a steep downhill section of the track, a normal practice in extreme summer heat but executed improperly using a new chemical. With brakes ineffective, the train gained a speed higher than the track configuration could handle and derailed.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "52233603", "title": "Woerden train disaster", "section": "Section::::Incident.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 475, "text": "The tracks were blocked in both directions. The overhead wire was destroyed and the overhead supports had fallen on the tracks. 
The cleanup work was continued through the whole night and in the early morning, after the day of the crash, train traffic was resumed on one track. Train cranes were able to lift some carriages back on the track in the morning. During the day after the crash the president of the Nederlandse Spoorwegen, ir. J. Lohman, visited the disaster site.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "52248341", "title": "Houten train accident", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 271, "text": "The cause of the derailment was heat : the outside temperature was about 30 degrees Celsius causing expansion of the rails. It was named a \"slap in the track\". During the evening and night, soldiers who were stationed in the area helped to release one of the two tracks.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "27444083", "title": "2010 Jiangxi derailment", "section": "Section::::Incident.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 513, "text": "According to the Ministry of Railways, the train was derailed by flaws in the track caused by a landslide. The landslide was caused by previous days of heavy rain and flooding in the region, with recent storms in the area having led to the evacuation of 44,600 people and a further people having been affected by the storms. The incident caused eight of 17 carriages of the train to disconnect from the rails. Some carriages were found overturned, and one carriage was said to have \"twisted and crushed another\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1597422", "title": "Streatham Common railway station", "section": "Section::::History.:Fatal accident.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 338, "text": "As the train entered the curved track leading into the station complex it derailed, causing the carriages to catapult over the locomotive and its boiler to explode with such force that the driver and fireman were thrown into a nearby field. The locomotive and carriages came to rest at the bottom of the embankment adjacent to the track.\n", "bleu_score": null, "meta": null } ] } ]
null
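The answer above explains that traction-power protection tolerates far more leakage than a household breaker, and that some systems simply watch for current that is not a train and lower the voltage instead of tripping. A toy sketch of that decision logic follows; the thresholds and function name are invented for illustration and do not reflect any real railway's settings.

```python
def substation_action(measured_current_a, train_in_section,
                      leak_warning_a=50.0, trip_a=4000.0):
    """Toy decision logic for an electrified-track power section, loosely
    following the answer above: household-style sensitivity would trip every
    time it rained, so the section only opens the breaker on genuinely huge
    fault currents, reduces voltage when there is leakage but no train
    drawing power, and otherwise tolerates the loss."""
    if measured_current_a >= trip_a:
        return "open breaker"              # dead-short / fault-level current
    if not train_in_section and measured_current_a >= leak_warning_a:
        return "reduce voltage"            # rain leakage with nothing to feed
    return "carry on"                      # normal running or tolerable leak

print(substation_action(30, train_in_section=False))    # carry on
print(substation_action(300, train_in_section=False))   # reduce voltage
print(substation_action(300, train_in_section=True))    # carry on
print(substation_action(6000, train_in_section=True))   # open breaker
```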
1ivtiv
Why is it so hard to program an effective anti cheat system in online games?
[ { "answer": "Dota 2 works around these issues. The server will not send the enemies location or data until they are in vision, making it impossible to hack. There was one instance of a hack in it, but that was found to be using the enemies camera location, which has since been fixed.", "provenance": null }, { "answer": " > My question now is, why it is apperently not possible for the game to detect that this kind of information is displayed when it shouldnt.\n\nSimply put, because the \"hack\" is usually a separate piece of software running on its own, that is telling the game that it is allowed. At least for FPS games in general, hacks usually insert themselves between the \"mod\" part of the game, and the \"engine\" part; the engine does all the communication with the server and changing/rendering whatever it is told to, it does all the heavy lifting. The \"mod\" controls what the rules of the game are, and determines what to tell the engine to do. If you tell the engine to spawn a player at some point, it will do it; but the mod determines whether this is allowed or not. Most FPS hacks work by \"faking\" both the mod and engine parts, so that the mod thinks it's communicating with the engine, and vice versa, but in reality both of them are communicating with the hack, and the hack is able to tell the engine to do things that the mod wouldn't normally tell it to do -- such as making walls invisible, rendering players off-screen, or in an RTS, something like revealing fog of war or spawning units for free.\n\nIn general, programs are not able to detect or modify what other programs are doing unless they are designed for it. They can be designed to detect and modify a program's behaviour in a specific way (the way a hack targets a specific game's code; note that hacks are specific to each game, and the same hacks won't work for different games), or they can do so in a general way (the way anti-virus software scans code to detect malicious code patterns). It is always much eaiser to do it for specific software than for general software -- this is why making hacks is much easier than detecting them, and also why anti-virus scanners have databases of viruses; so they can catch many specific viruses that are already known. This is in addition to the \"heuristic\" scan which attempts to detect unknown viruses.\n\nAn anti-cheat program would need to work similarly to an anti-virus program -- it would need to be able to detect known cheats through a specific way of detecting them (this requires being familiar with various cheat software), and also it would need to be able to detect unknown cheats in a general way, which is usually very difficult if not impossible. Also, whenever an anti-cheat program comes out with a way to detect a specific hack, usually the hack developer will release a new version of the hack that can't be detected the same way, so it ends up being a back-and-forth race; the honest gamer/developer can never win.\n\nSo the question is, how does one detect whether hacks are active, in general? It's very difficult and it depends on the game and on the specific hack. 
Many hacks also are designed specifically to hide themselves from the more common detection methods, making the task even more difficult.\n\n > Why can't the game not detect that this kind of information should not be displayed when there is no unit or spell to reveal the fog of war in this area?\n\nBecause the \"mod\" part of the game isn't actually in control of the engine anymore -- the hack has inserted itself between the mod and the engine, and is able to change the interaction between them, so that the engine thinks it's okay to do things that the mod wouldn't do. The mod has to *trust* that what the engine tells it is correct (for example, if the engine comes back and says \"player X moved to location B,\" the mod has to accept this information from the engine or else there may not be data synchronization between the server and the players. Some mods are designed to detect certain engine changes that shouldn't be allowed, and can take measures to preserve the game's integrity -- usually this results in one or all players being \"dropped\" or otherwise disconnected. Sometimes it also manifests as sudden lag or as packet loss. This is especially common in RTS games.\n\n > Couldnt you just programm a code that logs what has been displayed at what time to the player and what units where near this area so he could have had vision of it and then when an area is displayed by the client but there was no mean of units nearby it prooves the maphacking.\n\nYes, you can easily prove that someone is cheating by observing the behaviour of their game. But this is something that a human can do, not a computer. Just because you can log what the engine (or mod) is doing, doesn't mean that you can easily detect that what it's doing is wrong. And good anti-cheat software often will be able to detect these things, by doing exactly this. To give an example, anti-cheat software for the FPS game Counter-Strike often *also* inserts itself between the mod and the engine, and then scrutinizes the engine to look for prohibited or unusual changes.\n\nAlso, this type of thinking doesn't work for all hacks. For example, consider an FPS game \"aimbot\" that automatically aims at people's heads. Since mouse input is a human-controlled thing, and humans can give all kinds of input (fast, jerky, erratic input, or slow, smooth input, etc.) it's usually not possible to verify whether a human or a bot is in control of the input. You might be able to detect very \"shaky\" movement, but a good bot would be programmed to have less shaking and more smoothness, so as to avoid detection that way. This is also true for things like \"turbo buttons\" or \"button macros\" on a console game or emulator.\n\nHope that helps!", "provenance": null }, { "answer": "There's a couple issues at play here, but I'm going to focus mainly on the big one, client side vulnerability.\n\nAnything on the client computer is fair game to hackers. If you setup code that's supposed to check if anything is hook into memory, that code can be tampered with. If you setup code to monitor what programs are running, that code can be tampered with. Any aspect of the game client can also be tampered with.\n\nThis is an issue that'll never go away, and is why you see the sort of back and for escalation between game devs and hackers. Devs will change something to catch more hackers, hackers will change something to hide more effectively. The moral of the story is that you cannot prevent someone from hacking on the client side. 
Your question, specifically this part...\n\n > My question now is, why it is apperently not possible for the game to detect that this kind of information is displayed when it shouldnt.\n\n...comes into play with this problem. The game CAN sometimes detect it... but only if the detection method isn't ALSO tampered with (assuming what's being displayed is being shown in game, and not ON TOP of the game). This is why games like Dota 2, which filter what information you have from the SERVER side, are really the way to go with anti-cheat.\n\nThat said... there are reasons to not use this method for every game. There are benefits and losses to everything in coding, and pre-preparing resources is important for some games. There's also the complexity of tracking every player and what every player can see in relation to every other player. \n\nDota 2 does this very well, but it has a low player count and the line of sight is fairly static. If you tried filtering based on only LOS for a game like PlanetSide (which does almost everything client side), you'd have insanely complex server problems due to having to track thousands of players' cones of vision and customize the data sent to them. You also have to factor in movement speed and changes in viewing angle compared to how fast data can be sent. Since the view isn't static, the user doing something as simple as turning around can cause problems. Most games allow you to turn around faster than data can be sent from client to server and then back again. You'd have really weird problems with people warping, or just appearing out of nowhere. Being shot at by anyone you're not looking at would be terrible, and ultimately this would likely ruin a game.\n\nBasically, there are limits to what's actually doable while maintaining quality of service. These limits are the main reason why work is offloaded to the client, which in turn leaves clients highly vulnerable to anyone who has the time to break the system. (A minimal sketch of server-side visibility filtering appears after this entry.)\n\nTL;DR: Anything clientside is fair game to hackers, but we cannot do everything server side (yet).", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "855364", "title": "Cheating in online games", "section": "Section::::Anti-cheating methods and limitations.\n", "start_paragraph_id": 63, "start_character": 0, "end_paragraph_id": 63, "end_character": 463, "text": "There are many facets of cheating in online games which make the creation of a system to stop cheating very difficult; however, game developers and third party software developers have created or are developing technologies that attempt to prevent cheating. Such countermeasures are commonly used in video games, with notable anti-cheat software being GameGuard, PunkBuster, Valve Anti-Cheat (specifically used on games on the Steam platform), and EasyAntiCheat.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8544406", "title": "Cheating in video games", "section": "Section::::Cheating in online games.\n", "start_paragraph_id": 74, "start_character": 0, "end_paragraph_id": 74, "end_character": 439, "text": "Cheating in online games is common on public game servers. Some online games, such as \"Battlefield 1942\", include specific features to counter cheating exploits, by incorporating tools such as PunkBuster, nProtect GameGuard, or Valve Anti-Cheat. 
However, much like anti-virus companies, some anti-cheat tools are constantly and consistently bypassed until further updates force cheat creators to find new methods to bypass the protection.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8544406", "title": "Cheating in video games", "section": "Section::::Unusual effects.\n", "start_paragraph_id": 38, "start_character": 0, "end_paragraph_id": 38, "end_character": 1091, "text": "Cheat codes may sometimes produce unusual or interesting effects which don't necessarily make the game easier to play. For example, one cheat in \"\" makes dinosaurs appear \"undead\". Another example occurs in the game \"Dungeon Siege\", where activating the cheat to extend the range of a bow also allows the enemies to fire at the same distance, thereby eliminating the advantage the cheat would have given. A cheat may even make the game harder to play; for instance, one could give the enemy special abilities, increase general difficulty, make neutral bystanders attack the player or grant the player a disadvantage such as low health points. Cheats in \"Grand Theft Auto\" games can make NPCs start rioting or wield weapons. In \"Grand Theft Auto III\", the player can activate a cheat to enable blowing off the limbs of NPCs, a feature originally included in the game. Recently, however, Rockstar Games has not included such violent or unusual cheat codes in its games, instead choosing to focus on cheats such as vehicle spawns, player effects (for example, invincibility) and weapon spawns.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5363", "title": "Video game", "section": "Section::::Development.:Cheating.\n", "start_paragraph_id": 57, "start_character": 0, "end_paragraph_id": 57, "end_character": 613, "text": "Cheating in computer games may involve cheat codes and hidden spots implemented by the game developers, modification of game code by third parties, or players exploiting a software glitch. Modifications are facilitated by either cheat cartridge hardware or a software trainer. Cheats usually make the game easier by providing an unlimited amount of some resource; for example weapons, health, or ammunition; or perhaps the ability to walk through walls. Other cheats might give access to otherwise unplayable levels or provide unusual or amusing features, like altered game colors or other graphical appearances.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1654769", "title": "Artificial intelligence in video games", "section": "Section::::Cheating AI.\n", "start_paragraph_id": 51, "start_character": 0, "end_paragraph_id": 51, "end_character": 557, "text": "In the context of artificial intelligence in video games, cheating refers to the programmer giving agents actions and access to information that would be unavailable to the player in the same situation. Believing that the Atari 8-bit could not compete against a human player, Chris Crawford did not fix a bug in \"Eastern Front (1941)\" that benefited the computer-controlled Russian side. 
\"Computer Gaming World\" in 1994 reported that \"It is a well-known fact that many AIs 'cheat' (or, at least, 'fudge') in order to be able to keep up with human players\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8544406", "title": "Cheating in video games", "section": "Section::::History.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 560, "text": "Many modern games have removed cheat codes entirely, except when used to unlock certain secret bonuses. The usage of real-time achievement tracking made it unfair for any one player to cheat. In online multiplayer games, cheating is frowned upon and disallowed, often leading to a ban. However, certain games may unlock single-player cheats if the player fulfills a certain condition. Yet other games, such as those using the Source engine, allow developer consoles to be used to activate a wide variety of cheats in single-player or by server administrators.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "855364", "title": "Cheating in online games", "section": "Section::::Implementation of cheats.:Game code modification.\n", "start_paragraph_id": 57, "start_character": 0, "end_paragraph_id": 57, "end_character": 346, "text": "Many cheats are implemented by modifying game software, despite EULAs which forbid modification. While game software distributed in binary-only versions makes it harder to modify code, reverse engineering is possible. Also game data files can be edited separately from the main program and thereby circumvent protections implemented in software.\n", "bleu_score": null, "meta": null } ] } ]
null
2w6fce
why cant heat/any kind of energy be used to create matter when matter can create heat
[ { "answer": "Theoretically it could be, although from the equation E=mc^2 where E is energy, m is mass and c is the speed of light, you can see that for each small amount of mass you create needs an astronomically huge amount of energy to create it. At the moment this is just totally impractical to do.\n\nTo get 1kg of mass, you would need to 90,000,000,000,000,000 joules of energy. When you consider that just 1 joule is the energy required to lift a small apple one meter in the air, then you can see how much that is.\n\nYou would need enough energy to lift 90,000 *trillion* apples or 189,653,355 Titanics (the ship) by one meter.", "provenance": null }, { "answer": "I think the point you're getting at is less to do with Einstein's equation (E=mc^2) than it is to do with thing's like combustion?\n\nIn the case of combustion and similar effects, the important factor is entropy. The second law of thermodynamics says that in a closed system (free from outside influences) the level of disorder will never decrease. This is observed on a daily basis - things won't spontaneously order themselves.\n\nMatter is a very ordered form of energy. Heat is very disordered. Therefore **matter - > heat** is fine but **heat - > matter** won't occur without without something working to make it happen.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2021419", "title": "Relativistic mechanics", "section": "Section::::Relativistic dynamics.:Closed (isolated) systems.:Center of momentum frame.\n", "start_paragraph_id": 53, "start_character": 0, "end_paragraph_id": 53, "end_character": 469, "text": "Historically, confusion about mass being \"converted\" to energy has been aided by confusion between mass and \"matter\", where matter is defined as fermion particles. In such a definition, electromagnetic radiation and kinetic energy (or heat) are not considered \"matter\". In some situations, matter may indeed be converted to non-matter forms of energy (see above), but in all these situations, the matter and non-matter forms of energy still retain their original mass.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2362494", "title": "Matter creation", "section": "Section::::Photon pair production.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 351, "text": "Because of momentum conservation laws, the creation of a pair of fermions (matter particles) out of a single photon cannot occur. However, matter creation is allowed by these laws when in the presence of another particle (another boson, or even a fermion) which can share the primary photon's momentum. Thus, matter can be created out of two photons.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2254029", "title": "Astrophysical plasma", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 607, "text": "When matter becomes sufficiently hot, it becomes ionized and forms a plasma. This process breaks matter into its constituent particles which includes negatively-charged electrons and positively-charged ions. These electrically-charged particles are susceptible to influences by local electromagnetic fields. This includes strong fields generated by stars, and weak fields which exist in star forming regions, in interstellar space, and in intergalactic space. 
Similarly, electric fields are observed in some stellar astrophysical phenomena, but they are inconsequential in very low-density gaseous mediums.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9426", "title": "Electromagnetic radiation", "section": "Section::::Thermal and electromagnetic radiation as a form of heat.\n", "start_paragraph_id": 83, "start_character": 0, "end_paragraph_id": 83, "end_character": 1078, "text": "The basic structure of matter involves charged particles bound together. When electromagnetic radiation impinges on matter, it causes the charged particles to oscillate and gain energy. The ultimate fate of this energy depends on the context. It could be immediately re-radiated and appear as scattered, reflected, or transmitted radiation. It may get dissipated into other microscopic motions within the matter, coming to thermal equilibrium and manifesting itself as thermal energy, or even kinetic energy, in the material. With a few exceptions related to high-energy photons (such as fluorescence, harmonic generation, photochemical reactions, the photovoltaic effect for ionizing radiations at far ultraviolet, X-ray and gamma radiation), absorbed electromagnetic radiation simply deposits its energy by heating the material. This happens for infrared, microwave and radio wave radiation. Intense radio waves can thermally burn living tissue and can cook food. In addition to infrared lasers, sufficiently intense visible and ultraviolet lasers can easily set paper afire.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19376", "title": "Materialism", "section": "Section::::Defining matter.\n", "start_paragraph_id": 36, "start_character": 0, "end_paragraph_id": 36, "end_character": 556, "text": "One challenge to the traditional concept of matter as tangible \"stuff\" came with the rise of field physics in the 19th century. Relativity shows that matter and energy (including the spatially distributed energy of fields) are interchangeable. This enables the ontological view that energy is prima materia and matter is one of its forms. On the other hand, the Standard Model of particle physics uses quantum field theory to describe all interactions. On this view it could be said that fields are prima materia and the energy is a property of the field.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2839", "title": "Angular momentum", "section": "Section::::In classical mechanics.:Discussion.\n", "start_paragraph_id": 33, "start_character": 0, "end_paragraph_id": 33, "end_character": 488, "text": "Many problems in physics involve matter in motion about some certain point in space, be it in actual rotation about it, or simply moving past it, where it is desired to know what effect the moving matter has on the point—can it exert energy upon it or perform work about it? Energy, the ability to do work, can be stored in matter by setting it in motion—a combination of its inertia and its displacement. Inertia is measured by its mass, and displacement by its velocity. 
Their product,\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "422481", "title": "Mass–energy equivalence", "section": "Section::::Efficiency.\n", "start_paragraph_id": 78, "start_character": 0, "end_paragraph_id": 78, "end_character": 1195, "text": "Although mass cannot be converted to energy, in some reactions matter particles (which contain a form of rest energy) can be destroyed and the energy released can be converted to other types of energy that are more usable and obvious as forms of energy—such as light and energy of motion (heat, etc.). However, the total amount of energy and mass does not change in such a transformation. Even when particles are not destroyed, a certain fraction of the ill-defined \"matter\" in ordinary objects can be destroyed, and its associated energy liberated and made available as the more dramatic energies of light and heat, even though no identifiable real particles are destroyed, and even though (again) the total energy is unchanged (as also the total mass). Such conversions between types of energy (resting to active energy) happen in nuclear weapons, in which the protons and neutrons in atomic nuclei lose a small fraction of their average mass, but this mass loss is not due to the destruction of any protons or neutrons (or even, in general, lighter particles like electrons). Also the mass is not destroyed, but simply removed from the system in the form of heat and light from the reaction.\n", "bleu_score": null, "meta": null } ] } ]
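A rough check of the figures quoted in the first answer of the entry above, using only $E = mc^2$ with $c \approx 3 \times 10^8$ m/s and taking roughly 1 J as the energy to lift a 100 g apple by one metre:

$$E = mc^2 \approx 1\,\mathrm{kg} \times \left(3\times 10^{8}\,\tfrac{\mathrm{m}}{\mathrm{s}}\right)^{2} = 9\times 10^{16}\,\mathrm{J},$$

$$\frac{9\times 10^{16}\,\mathrm{J}}{\approx 1\,\mathrm{J}\ \text{per apple-metre}} \approx 9\times 10^{16} = 90{,}000\ \text{trillion apple-lifts}.$$

The Titanic figure follows the same way: dividing $9\times 10^{16}$ J by the work needed to lift a ship of very roughly 50,000 tonnes by one metre (about $5\times 10^{8}$ J) gives a number in the hundreds of millions, consistent with the answer's estimate.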
null
9g3v3u
why do doctors move the stethoscope around, rather than just place it on your heart?
[ { "answer": "Each area that they place the stethoscope allows them to listen to the 4 different valves in your heart. If a murmur (abnormal sound) is heard at a specific area, it can tell the doctor which valve or part of the heart may be affected. ", "provenance": null }, { "answer": "There are 4 points on the chest for the 4 heart valves and 6 points in the back for the lungs. ", "provenance": null }, { "answer": "Not just your heart. They can use it to check your lungs, your intestines, your diaphragm, your stomach. All make sounds when they move, and make different sounds when something is wrong.", "provenance": null }, { "answer": "This listen to a lot more than just your heart. They moved it all over my back and chest when my lung collapsed, to hear my breathing, and how much I could inhale. ", "provenance": null }, { "answer": "They are listening to different parts of your heart, as well as your lung function. They will even use it to listen to intestines at times. ", "provenance": null }, { "answer": "There are 4 valves in your heart, and there are four places on your chest where you can listen to each one to hear for any narrowing of the valve area or backflow happening in a specific valve (these are called murmurs). There are six places in the back that you listen to for lung sounds, and this is because our lungs are made of upper, middle, and lower lobes (basically) and if there's any fluid buildup, wheezing, or other bad sounds in a specific lobe, you can localize the problem to help you figure out what's wrong. On both sides of the neck, you can listen to the carotid arteries, basically a main artery that supplies blood to the head/brain. If someone has narrowing of the artery (like from plaque buildup), you can hear a whooshing sound (and this would be concerning because parts of the plaque could break off and cause a stroke). In the abdomen, there are places where you can listen for the arteries that supply the kidney. Again, any whooshing here would mean narrowing of these arteries and usually warrant further testing (because it can cause someone to have high blood pressure or cause damage to the kidneys due to lack of blood supply). You can also listen for bowel sounds just with the stethoscope anywhere in the belly region, and normally they should happen pretty frequently. No bowel sounds or hyperactive bowel sounds could mean there's an obstruction in your gut or some other underlying disorder (but listening for bowel sounds isn't a great test for detecting these things accurately so usually this ones not that important). And that's most of it, hope this makes sense and was interesting! \n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "7714070", "title": "Acoustic transmission", "section": "Section::::Stethoscope.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 242, "text": "Stethoscopes roughly match the acoustical impedance of the human body, so they transmit sounds from a patient's chest to the doctor's ear much more effectively than the air does. Putting an ear to someone's chest would have a similar effect.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8088660", "title": "Cardiac myxoma", "section": "Section::::Diagnosis.\n", "start_paragraph_id": 38, "start_character": 0, "end_paragraph_id": 38, "end_character": 274, "text": "A doctor will listen to the heart with stethoscope. 
A \"tumor plop\" (a sound related to movement of the tumor), abnormal heart sounds, or a murmur similar to the mid-diastolic rumble of mitral stenosis may be heard. These sounds may change when the patient changes position.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31096465", "title": "Cardiovascular examination", "section": "Section::::Cardiac examination.:Percussion and Auscultation.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 992, "text": "BULLET::::- For the best cardiac examination, it is important to have the patient both sit up and lay down at a 30-45˚ angle. Tapping with the fingertips (also known as percussion) can be used to estimate the size of the heart, though palpation is more accurate. From the left side of the chest, the doctor can tap the spaces between the ribs with the tips of their middle finger to listen for the dullness that will be present over the heart. Listening with a stethoscope (also known as auscultation) to all four areas of the heart: aortic, pulmonic, tricuspid and mitral. Any murmurs, rubs or gallops should be noted. Gallops are also known as a third (S3) or fourth (S4) heart sound. The absence of abnormalities (normal) may be recorded as \"no m/r/g\". The ACC and the AHA have called cardiac auscultation \"the most widely used method of screening for valvular heart disease.\" Because of its importance to the cardiac examination, cardiac auscultation has been covered in-depth elsewhere.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14832193", "title": "Necker-Enfants Malades Hospital", "section": "Section::::Famous Physicians.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 1070, "text": "French physician René Laennec invented the stethoscope in 1816 while he was working at the Hôpital Necker. Previously, doctors placed their heads directly on their patient's chest and listened for any irregular sounds to aid in diagnosis. But when a large young woman came to the hospital, he realized that this method would be less effective given her size. Instead, he used a tightly rolled up piece of paper to press against the patient's chest, which made the heartbeat much clearer than ever before. Further experimentation yielded Laennec's famous hollow wooden tube, the forerunner of today's stethoscopes. His invention's ability to magnify the internal sounds of the body advanced the medical practice of auscultation, and proved beneficial to the Hôpital Necker, which had a high fatality rate for Phthisis pulmonalis. This was because Laennec discovered with his stethoscope that patients who developed the disease first displayed a particular irregularity how their voices were manifested within their bodies, thus allowing patients to be diagnosed earlier.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "154664", "title": "Turbulence", "section": "Section::::Examples of turbulence.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 499, "text": "BULLET::::- In the medical field of cardiology, a stethoscope is used to detect heart sounds and bruits, which are due to turbulent blood flow. In normal individuals, heart sounds are a product of turbulent flow as heart valves close. However, in some conditions turbulent flow can be audible due to other reasons, some of them pathological. 
For example, in advanced atherosclerosis, bruits (and therefore turbulent flow) can be heard in some vessels that have been narrowed by the disease process.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "234803", "title": "Heart valve", "section": "Section::::Physiology.\n", "start_paragraph_id": 33, "start_character": 0, "end_paragraph_id": 33, "end_character": 209, "text": "The motion of the heart valves is used as a boundary condition in the Navier–Stokes equation in determining the fluid dynamics of blood ejection from the left and right ventricles into the aorta and the lung.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36808", "title": "Heart", "section": "Section::::Clinical significance.:Diagnosis.:Examination.\n", "start_paragraph_id": 96, "start_character": 0, "end_paragraph_id": 96, "end_character": 708, "text": "The cardiac examination includes inspection, feeling the chest with the hands (palpation) and listening with a stethoscope (auscultation). It involves assessment of signs that may be visible on a person's hands (such as splinter haemorrhages), joints and other areas. A person's pulse is taken, usually at the radial artery near the wrist, in order to assess for the rhythm and strength of the pulse. The blood pressure is taken, using either a manual or automatic sphygmomanometer or using a more invasive measurement from within the artery. Any elevation of the jugular venous pulse is noted. A person's chest is felt for any transmitted vibrations from the heart, and then listened to with a stethoscope.\n", "bleu_score": null, "meta": null } ] } ]
null
4g3zw8
In the US during WWII, how did rationing work on the homefront if people went to a restaurant? Did people have to give the ration coupons/cards to the restaurant in addition to payment?
[ { "answer": "The restaurants had to collect points in order to buy more stock for their kitchens. They took the points to the local ration board and exchanged them for vouchers that allowed them to buy quantities of food at a time. \n\nLots more here: _URL_0_", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "43364352", "title": "Rationing in the United States", "section": "Section::::World War II.:Food and consumer goods.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 540, "text": "By the end of 1942, ration coupons were used for nine other items. Typewriters, gasoline, bicycles, footwear, silk, nylon, fuel oil, stoves, meat, lard, shortening and food oils, cheese, butter, margarine, processed foods (canned, bottled, and frozen), dried fruits, canned milk, firewood and coal, jams, jellies, and fruit butter were rationed by November 1943. Many retailers welcomed rationing because they were already experiencing shortages of many items due to rumors and panics, such as flashlights and batteries after Pearl Harbor.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15910747", "title": "Minties", "section": "Section::::Depression, then wartime shortages.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 261, "text": "During World War II and until 1946, supply of confectionery was restricted; what output there was went to serving troops. Advertising resumed after cessation of hostilities, anticipating eventual availability. Rationing may have been on a state-by-state basis.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "204043", "title": "C-ration", "section": "Section::::Background and development.:\"Reserve ration\" (1917–1937).\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 490, "text": "The Reserve Ration was issued during the later part of World War I to feed troops who were away from a garrison or field kitchen. It originally consisted of of bacon or of meat (usually canned corned beef), two cans of hard bread or hardtack biscuits, a packet of of pre-ground coffee, a packet of of granulated sugar, and a packet of of salt. There was also a separate \"tobacco ration\" of of tobacco and 10 cigarette rolling papers, later replaced by brand-name machine-rolled cigarettes.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "250130", "title": "Rationing", "section": "Section::::Civilian rationing.:Second World War.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 489, "text": "Rationing became common during the Second World War. Ration stamps were often used. These were redeemable stamps or coupons, and every family was issued a set number of each kind of stamp based on the size of the family, ages of children and income. The British Ministry of Food refined the rationing process in the early 1940s to ensure the population did not starve when food imports were severely restricted and local production limited due to the large number of men fighting the war.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "202437", "title": "Meal, Ready-to-Eat", "section": "Section::::History.:Background.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 1116, "text": "The first U.S. 
soldier ration established by a Congressional Resolution, during the Revolutionary War, consisted of enough food to feed a man for one day, mostly beef, peas, and rice. During the Civil War, the U.S. military moved toward canned goods. Later, self-contained kits were issued as a whole ration and contained canned meat, bread, coffee, sugar and salt. During the First World War, canned meats were replaced with lightweight preserved meats (salted or dried) to save weight and allow more rations to be carried by soldiers carrying their supplies on foot. At the beginning of World War II, a number of new field rations were introduced, including the Mountain ration and the Jungle ration. However, cost-cutting measures by Quartermaster Command officials during the latter part of World War II and the Korean War again saw the predominance of heavy canned C rations issued to troops, regardless of operating environment or mission. During WWII, over 100 million cans of Spam were sent to the Pacific. The use of canned wet rations continued through the Vietnam War, with the improved MCI field ration.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "43364352", "title": "Rationing in the United States", "section": "Section::::World War II.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 509, "text": "In the summer of 1941, the British appealed to Americans to conserve food to provide more to go to Britain's fighting men in World War II. The Office of Price Administration warned Americans of potential gasoline, steel, aluminum, and electricity shortages. It believed that with factories converting to military production and consuming many critical supplies, rationing would become necessary if the country entered the war. It established a rationing system after the attack on Pearl Harbor on 7 December.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5219904", "title": "Ration stamp", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 636, "text": "A ration stamp or ration card is a stamp or card issued by a government to allow the holder to obtain food or other commodities that are in short supply during wartime or in other emergency situations when rationing is in force. Ration stamps were widely used during World War II by both sides after hostilities caused interruption to the normal supply of goods. They were also used after the end of the war while the economies of the belligerents gradually returned to normal. Ration stamps were also used to help maintain the amount of food one could hold at a time. This was so that one person would not have more food than another.\n", "bleu_score": null, "meta": null } ] } ]
null
21luex
cell cycle and mitosis?
[ { "answer": "The cell grows and grows through the G1 phase. When it gets big enough, it enters the S phase, where the DNA replicates. Then it goes to the M phase, mitosis. This is where it splits. The two daughter cells repeat this process. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "20369", "title": "Mitosis", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 720, "text": "In cell biology, mitosis () is a part of the cell cycle when replicated chromosomes are separated into two new nuclei. Cell division gives rise to genetically identical cells in which the number of chromosomes is maintained. In general, mitosis (division of the nucleus) is preceded by the S stage of interphase (during which the DNA is replicated) and is often accompanied or followed by cytokinesis, which divides the cytoplasm, organelles and cell membrane into two new cells containing roughly equal shares of these cellular components. Mitosis and cytokinesis together define the mitotic (M) phase of an animal cell cycle—the division of the mother cell into two daughter cells genetically identical to each other.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "59402306", "title": "Neuronal cell cycle", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 1068, "text": "The Neuronal cell cycle represents the life cycle of the biological cell, its creation, reproduction and eventual death. The process by which cells divide into two daughter cells is called mitosis. Once these cells are formed they enter G1, the phase in which many of the proteins needed to replicate DNA are made. After G1, the cells enter S phase during which the DNA is replicated. After S, the cell will enter G2 where the proteins required for mitosis to occur are synthesized. Unlike most cell types however, neurons are generally considered incapable of proliferating once they are differentiated, as they are in the adult nervous system. Nevertheless, it remains plausible that neurons may re-enter the cell cycle under certain circumstances. Sympathetic and cortical neurons, for example, try to reactivate the cell cycle when subjected to acute insults such as DNA damage, oxidative stress, and excitotoxicity. This process is referred to as “abortive cell cycle re-entry” because the cells usually die in the G1/S checkpoint before DNA has been replicated.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7252", "title": "Cell cycle", "section": "Section::::Phases.:Cytokinesis phase (separation of all cell components).\n", "start_paragraph_id": 33, "start_character": 0, "end_paragraph_id": 33, "end_character": 405, "text": "Mitosis is immediately followed by cytokinesis, which divides the nuclei, cytoplasm, organelles and cell membrane into two cells containing roughly equal shares of these cellular components. Mitosis and cytokinesis together define the division of the mother cell into two daughter cells, genetically identical to each other and to their parent cell. 
This accounts for approximately 10% of the cell cycle.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "491067", "title": "Anaphase-promoting complex", "section": "Section::::M to G transition.\n", "start_paragraph_id": 31, "start_character": 0, "end_paragraph_id": 31, "end_character": 529, "text": "Upon completion of mitosis, it is important that cells (except for embryonic ones) go through a growth period, known as G phase, to grow and produce factors necessary for the next cell cycle. Entry into another round of mitosis is prevented by inhibiting Cdk activity. While different processes are responsible for this inhibition, an important one is activation of the APC/C by Cdh1. This continued activation prevents the accumulation of cyclin that would trigger another round of mitosis and instead drives exit from mitosis.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25181469", "title": "Meiotic recombination checkpoint", "section": "Section::::Overview.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 821, "text": "Generally speaking, the cell cycle regulation of meiosis is similar to that of mitosis. As in the mitotic cycle, these transitions are regulated by combinations of different gene regulatory factors, the cyclin-Cdk complex and the anaphase-promoting complex (APC). The first major regulatory transition occurs in late G1, when the start of meiotic cycle is activated by Ime1 instead of Cln3/Cdk1 in mitosis. The second major transition occurs at the entry into metaphase I. The main purpose of this step is to make sure that DNA replication has completed without error so that spindle pole bodies can separate. This event is triggered by the activation of M-Cdk in late prophase I. Then the spindle assembly checkpoint examines the attachment of microtubules at kinetochores, followed by initiation of metaphase I by APC.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "222320", "title": "Interphase", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 336, "text": "Interphase is the phase of the cell cycle in which a typical cell spends most of its life. If, we consider that the total event (interphase and mitotic cell division) take place about 24 hrs. then the interphase is of 23 hrs. Interphase can also be thought of as lasting for 90% of the cell's life, while Mitosis usually lasts for 10%.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20369", "title": "Mitosis", "section": "Section::::Phases.:Interphase.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 1237, "text": "The mitotic phase is a relatively short period of the cell cycle. It alternates with the much longer \"interphase\", where the cell prepares itself for the process of cell division. Interphase is divided into three phases: G (first gap), S (synthesis), and G (second gap). During all three parts of interphase, the cell grows by producing proteins and cytoplasmic organelles. However, chromosomes are replicated only during the S phase. Thus, a cell grows (G), continues to grow as it duplicates its chromosomes (S), grows more and prepares for mitosis (G), and finally divides (M) before restarting the cycle. All these phases in the cell cycle are highly regulated by cyclins, cyclin-dependent kinases, and other cell cycle proteins. 
The phases follow one another in strict order and there are \"checkpoints\" that give the cell cues to proceed from one phase to another. Cells may also temporarily or permanently leave the cell cycle and enter G phase to stop dividing. This can occur when cells become overcrowded (density-dependent inhibition) or when they differentiate to carry out specific functions for the organism, as is the case for human heart muscle cells and neurons. Some G cells have the ability to re-enter the cell cycle.\n", "bleu_score": null, "meta": null } ] } ]
null
3yx35z
the impossible trinity of economics
[ { "answer": "Free movement of capital allows you to participate in the international economy, particularly the financial aspects of international trade. Restricting it will isolate your economy, reducing its potential growth by restricting the ability to work with investors and companies outside your country.\n\nYou also asked about the current US exchange rate. It is not stable, and fluctuates constantly on the international currency market.", "provenance": null }, { "answer": "Stable should more accurately be 'fixed foreign exchange rate.' \n\nIf a state wants to keep its exchange rate stable while maintaining free capital movement, it needs to commit to buying/selling its currency at a fixed rate of exchange with other currencies. To do this would mean giving up independent monetary policy, as monetary policy is now dedicated solely to maintaining the exchange rate.\n\nAn alternative would be to impose capital controls so that foreigners cannot buy domestic currency (which affects the exchange rate).\n\nThe free movement of capital is basically part of the neoliberal package of ideas about how to optimally organize the global economy that has been dominant from the 1980s, and is codified in various international economic institutions. Having to impose capital controls goes against free-market ideas and is seen to be a problem of developing countries. So, most developed countries pick (2) & (3) with monetary policy used to target 2% inflation.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1307966", "title": "Impossible trinity", "section": "Section::::Trilemma in practice.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 664, "text": "The idea of the impossible trinity went from theoretical curiosity to becoming the foundation of open economy macroeconomics in the 1980s, by which time capital controls had broken down in many countries, and conflicts were visible between pegged exchange rates and monetary policy autonomy. While one version of the impossible trinity is focused on the extreme case with a perfectly fixed exchange rate and a perfectly open capital account, a country has absolutely no autonomous monetary policy the real world has thrown up repeated examples where the capital controls are loosened, resulting in greater exchange rate rigidity and less monetary-policy autonomy.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "488382", "title": "Trilemma", "section": "Section::::In economics.:The \"Impossible trinity\".\n", "start_paragraph_id": 41, "start_character": 0, "end_paragraph_id": 41, "end_character": 572, "text": "In 1962 and 1963, a trilemma (or \"impossible trinity\") was introduced by the economists Robert Mundell and Marcus Fleming in articles discussing the problems with creating a stable international financial system. It refers to the trade-offs among the following three goals: a fixed exchange rate, national independence in monetary policy, and capital mobility. According to the Mundell–Fleming model of 1962 and 1963, a small, open economy cannot achieve all three of these policy goals at the same time: in pursuing any two of these goals, a nation must forgo the third.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "42204267", "title": "Ronald H. 
Nash", "section": "Section::::Thought.:The mixed economy.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 389, "text": "Sometimes called interventionism, the mixed economy is a compromise or mix, between capitalism and socialism. Most countries that we identify as \"capitalist\" and \"socialist\" are really different degrees of a mixed economy. Nash argues that there can never be a sustainable mixed economy. Any economy that tries to mix socialism and capitalism will inevitably collapse into one of the two.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3473088", "title": "Tendency of the rate of profit to fall", "section": "Section::::20th century Marxist controversies.:Transformation problem.:Value theory as add-on.:Eclecticism.\n", "start_paragraph_id": 133, "start_character": 0, "end_paragraph_id": 133, "end_character": 498, "text": "An integral, consistent economic theory becomes impossible with the add-on approach, since market prices, product-values and social relations are always in different baskets. This theoretical eclecticism results in low explanatory power, and low predictive power – there exist multiple different theories and concepts at once, which all can interrelate/combine in all kinds of possible and \"ad hoc\" configurations, like a kaleidoscope, and therefore \"explain\" and \"predict\" everything and nothing.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10890", "title": "Fundamental interaction", "section": "Section::::The interactions.:Beyond the Standard Model.\n", "start_paragraph_id": 68, "start_character": 0, "end_paragraph_id": 68, "end_character": 657, "text": "Grand Unified Theories (GUTs) are proposals to show that the three fundamental interactions described by the Standard Model are all different manifestations of a single interaction with symmetries that break down and create separate interactions below some extremely high level of energy. GUTs are also expected to predict some of the relationships between constants of nature that the Standard Model treats as unrelated, as well as predicting gauge coupling unification for the relative strengths of the electromagnetic, weak, and strong forces (this was, for example, verified at the Large Electron–Positron Collider in 1991 for supersymmetric theories).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "190424", "title": "Heterodoxy", "section": "Section::::Non-ecclesiastic usage.:Economics.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 600, "text": "Heterodox economics refers to schools of economic thought that are considered outside of mainstream economics, referred to as orthodox economics, often represented by expositors as contrasting with or going beyond neoclassical economics. It means considering a variety of economic schools and methodologies, which can include neoclassical or other orthodox economics as a part. Heterodox economics refers to a variety of separate unorthodox approaches or schools such as institutional, post-Keynesian, socialist, Marxian, feminist, Georgist, Austrian, ecological, and social economics, among others.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1488111", "title": "Theory of the second best", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 622, "text": "In economics, the theory of the second best concerns the situation when one or more optimality conditions cannot be satisfied. 
The economists Richard Lipsey and Kelvin Lancaster showed in 1956, that if one optimality condition in an economic model cannot be satisfied, it is possible that the next-best solution involves changing other variables away from the values that would otherwise be optimal. Politically, the theory implies that if it is infeasible to remove a particular market distortion, introducing a second (or more) market distortion may partially counteract the first, and lead to a more efficient outcome.\n", "bleu_score": null, "meta": null } ] } ]
null
12ex0g
What is thought to have stabilized earths magnetic field?
[ { "answer": "I'm a graduate student in earth science, but I'm studying glaciology, not geomagnetism. Nonetheless, I'll give this a go:\n\nThe Earth's magnetic field is caused by convection cells in the liquid iron/nickel outer core. It is not known (at least by me) what causes reversals, but my understanding is that they are a chaotic process. \n\nThe \"fixation\" you are referring to was not permanent. The \"Cretaceous Long Normal\" was a period when the magnetic field had a stable direction for several tens of millions of years. However, after that it went back to irregular oscillations, as it did before. The last reversal was the Brunhes-Matuyama reversal, about 800,000 years ago. \n\nAlso, note that Earth's magnetic poles are never truly fixed. Even in between reversals they wander on short timescales (decades to centuries). This drift has been measured and documented within the past few hundred years. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "31696675", "title": "Geomagnetic pole", "section": "Section::::Geomagnetic reversal.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 637, "text": "Over the life of the Earth, the orientation of Earth's magnetic field has reversed many times, with geomagnetic north becoming geomagnetic south and vice versa – an event known as a geomagnetic reversal. Evidence of geomagnetic reversals can be seen at mid-ocean ridges where tectonic plates move apart. As magma seeps out of the mantle and solidifies to become new ocean floor, the magnetic minerals in it are magnetized in the direction of the magnetic field. Thus, starting at the most recently formed ocean floor, one can read out the direction of the magnetic field in previous times as one moves farther away to older ocean floor.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "146983", "title": "Earth's magnetic field", "section": "Section::::Measurement and analysis.:Detection.\n", "start_paragraph_id": 76, "start_character": 0, "end_paragraph_id": 76, "end_character": 515, "text": "The Earth's magnetic field strength was measured by Carl Friedrich Gauss in 1832 and has been repeatedly measured since then, showing a relative decay of about 10% over the last 150 years. The Magsat satellite and later satellites have used 3-axis vector magnetometers to probe the 3-D structure of the Earth's magnetic field. The later Ørsted satellite allowed a comparison indicating a dynamic geodynamo in action that appears to be giving rise to an alternate pole under the Atlantic Ocean west of South Africa.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "146983", "title": "Earth's magnetic field", "section": "Section::::Significance.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 606, "text": "The study of past magnetic field of the Earth is known as paleomagnetism. The polarity of the Earth's magnetic field is recorded in igneous rocks, and reversals of the field are thus detectable as \"stripes\" centered on mid-ocean ridges where the sea floor is spreading, while the stability of the geomagnetic poles between reversals has allowed paleomagnetists to track the past motion of continents. Reversals also provide the basis for magnetostratigraphy, a way of dating rocks and sediments. 
The field also magnetizes the crust, and magnetic anomalies can be used to search for deposits of metal ores.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30871810", "title": "North Magnetic Pole", "section": "Section::::Geomagnetic reversal.\n", "start_paragraph_id": 36, "start_character": 0, "end_paragraph_id": 36, "end_character": 516, "text": "Over the life of Earth, the orientation of Earth's magnetic field has reversed many times, with magnetic north becoming magnetic south and vice versa – an event known as a geomagnetic reversal. Evidence of geomagnetic reversals can be seen at mid-ocean ridges where tectonic plates move apart and the seabed is filled in with magma. As the magma seeps out of the mantle, cools, and solidifies into igneous rock, it is imprinted with a record of the direction of the magnetic field at the time that the magma cooled.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1986817", "title": "Magnetic deviation", "section": "Section::::Sources.:Magnetic anomalies.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 532, "text": "The Earth's magnetic field is modified by local magnetic anomalies. These include variations of the magnetization in the Earth's crust caused by geomagnetic reversals as well as nearby mountains and iron ore deposits. Generally, these are indicated on maps as part of the declination. Because the Earth's field changes over time, the maps must be kept up to date for accurate navigation. Short term errors in compass readings are also caused by fields generated in the Earth's magnetosphere, particularly during geomagnetic storms.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "255217", "title": "Dynamo theory", "section": "Section::::History of theory.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 825, "text": "At the dawn of the 21st century, numerical modeling of the Earth's magnetic field has not been successfully demonstrated, but appears to be in reach. Initial models are focused on field generation by convection in the planet's fluid outer core. It was possible to show the generation of a strong, Earth-like field when the model assumed a uniform core-surface temperature and exceptionally high viscosities for the core fluid. Computations which incorporated more realistic parameter values yielded magnetic fields that were less Earth-like, but also point the way to model refinements which may ultimately lead to an accurate analytic model. Slight variations in the core-surface temperature, in the range of a few millikelvins, result in significant increases in convective flow and produce more realistic magnetic fields.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2203131", "title": "Geomagnetic reversal", "section": "Section::::History.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 669, "text": "Three decades later, when Earth's magnetic field was better understood, theories were advanced suggesting that the Earth's field might have reversed in the remote past. Most paleomagnetic research in the late 1950s included an examination of the wandering of the poles and continental drift. Although it was discovered that some rocks would reverse their magnetic field while cooling, it became apparent that most magnetized volcanic rocks preserved traces of the Earth's magnetic field at the time the rocks had cooled. 
In the absence of reliable methods for obtaining absolute ages for rocks, it was thought that reversals occurred approximately every million years.\n", "bleu_score": null, "meta": null } ] } ]
null
sccar
Is anything really impossible?
[ { "answer": "Yes, of course. QM isn't magic.\n\nI don't know what your background in math is, so I'm sorry if some of this is greek to you.\n\nBut for example: transformations of information (the kind of operations you need to do computation) occur as linear transformations (which is already a pretty strict mathematical condition) and furthermore can only occur as unitary transformations (an even bigger restriction). This is why quantum computation can't be used to solve NP-Complete problems, etc. They *can't* just do anything.", "provenance": null }, { "answer": "If you can define your initial and final states precisely in quantum mechanical terms, and if you know exactly what quantum theory you're adopting, a probability for going from initial to final state could exist. \n\nAs you say, the probabilities of all sorts of silly things happening will be extremely tiny but not zero.\n\nWhat those stupidly tiny probabilities are will depend on exactly what theory you're adopting. It's best to bear in mind that distinguishing between quantum theories at that level of detail is not feasible, so it's not really a question that can be answered scientifically.\n\nIf absolutely conserved quantities exist in your theory, then probabilities *will* be strictly zero if your initial and final states have different values for those quantities.\n\nAs an example, in a theory that absolutely conserves charge locally, the probability of even one electron disappearing or appearing is exactly zero. Almost every sane quantum theory adopts the absolute charge conservation locally, so that puts a restriction on what can or can't happen in that kind of a theory.", "provenance": null }, { "answer": "I see you've prescribed to the [Deepak Chopra](_URL_0_) school of thought on QM. \n\nAs other people have pointed out this is wrong, I think I've heard physicists make similar claims but it's usually along the lines of anything that can happen will. As in it has to be something that isn't physically forbidden, like having two particles occupy the same quantum state or transfer information faster than the speed of light. An example of this is, if you remember back to chemistry, there are two electrons in the first orbital, in reality they have almost the same state except for different spins. It is *impossible* to place a third electron in that first orbital. ", "provenance": null }, { "answer": "Your question touches on the nature of casuality, which is a matter of philosphy as much as science.\n\nFor example, it is impossible to travel faster than the speed of light. However, I can assign a non-zero possiblity that my atoms will spontaneous disassemble, and that exact quantum configuration of atoms will spontaneously appear a light year away.\n\nDid I travel faster than the speed of light? The result is the same, but since there was no casuality, most people would say no.", "provenance": null }, { "answer": "Correct me if I'm wrong, but 1^x^x^x is still 1.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "52041", "title": "Miracle", "section": "Section::::Explanations.:Law of truly large numbers.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 769, "text": "Statistically \"impossible\" events are often called miracles. For instance, when three classmates accidentally meet in a different country decades after having left school, they may consider this as \"miraculous\". 
However, a colossal number of events happen every moment on earth; thus extremely unlikely coincidences also happen every moment. Events that are considered \"impossible\" are therefore not impossible at all — they are just increasingly rare and dependent on the number of individual events. British mathematician J. E. Littlewood suggested that individuals should statistically expect one-in-a-million events (\"miracles\") to happen to them at the rate of about one per month. By Littlewood's definition, seemingly miraculous events are actually commonplace.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "54077", "title": "Perpetual motion", "section": "Section::::Basic principles.:Impossibility.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 269, "text": "\"Epistemic impossibility\" describes things which absolutely cannot occur within our \"current\" formulation of the physical laws. This interpretation of the word \"impossible\" is what is intended in discussions of the impossibility of perpetual motion in a closed system.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "39641467", "title": "Impossible.com", "section": "Section::::Impossible People.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 536, "text": "Impossible People (previously Impossible.com) is an altruism-based mobile app which invites people to give their services and skills away to help others. Created by Lily Cole, the app allows users to post something they would like to do or need so that others can grant their wish. In May 2013, Cole presented the app's beta in conjunction and with the support of Wikipedia co-founder Jimmy Wales at a special event at Cambridge University. It is the first Yunus social business in the UK. The project became open source in March 2017.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2338876", "title": "Impossible Things", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 336, "text": "Impossible Things is a collection of short stories by Connie Willis, first published in 1993, that includes tales of ecological disaster, humorous satire, tragedy, and satirical alternate realities. Its genres range from comedy to tragedy to horror. Three of the stories are Nebula Award winners, and two of these also won Hugo Awards.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12627364", "title": "List of Mr. Men", "section": "Section::::I.:Mr. Impossible.\n", "start_paragraph_id": 84, "start_character": 0, "end_paragraph_id": 84, "end_character": 369, "text": "Mr. Impossible is the 25th book in the \"Mr. Men\" series by Roger Hargreaves. Nothing is impossible to Mr. Impossible. He can do anything. He has magic powers similar to Little Miss Magic and even uses his powers to motivate people. One day he goes to school with a boy named William. He proves he can do anything in some amazing ways! He is purple with a blue top hat.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25921215", "title": "Impossible (Shontelle song)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 440, "text": "\"Impossible\" is a song by Barbadian singer and songwriter Shontelle. It is the lead single from her second studio album, \"No Gravity\" (2010). The song was written by Arnthor Birgisson and Ina Wroldsen, and produced by Birgisson. It was released digitally on 9 February 2010. 
\"Impossible\" peaked at number 13 on the \"Billboard\" Hot 100 in the United States, number 33 in Canada, number nine in the United Kingdom and number five in Denmark.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15673187", "title": "Impossible (Captain Hollywood Project song)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 589, "text": "\"Impossible\" is a song recorded by the German musician known under the pseudonym of Captain Hollywood Project. It was released in October 1993 as the fourth single from his debut album, \"Love Is Not Sex\". The song features vocals by singer Kim Sanders and was a big hit in several countries. But like \"All I Want\" it achieved moderate success in comparison with the two previous Captain Hollywood Project's singles (\"More and More\" and \"Only with You\"). \"Impossible\" peaked within the top-10 in Denmark, Finland, Portugal, Spain and Sweden. On the Eurochart Hot 100, it reached number 20.\n", "bleu_score": null, "meta": null } ] } ]
null
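As an illustrative aside to the quantum-mechanics answers above (not part of the original thread), the "unitary" restriction and the "strictly zero for conserved quantities" point can each be written out in one line of standard Dirac notation; nothing here is tied to any particular quantum theory mentioned in the answers.

```latex
% Unitarity (U^\dagger U = I) preserves total probability under |\psi'\rangle = U|\psi\rangle:
\langle \psi' | \psi' \rangle
  = \langle \psi | U^\dagger U | \psi \rangle
  = \langle \psi | \psi \rangle
  = 1 .
% If a charge Q is exactly conserved, [U, Q] = 0, then U|q\rangle remains an eigenstate of Q
% with the same eigenvalue q, so any transition that changes q has amplitude exactly zero:
Q|q\rangle = q|q\rangle ,\; [U, Q] = 0
  \;\Longrightarrow\;
  \langle q' | U | q \rangle = 0
  \quad \text{for } q' \neq q .
```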
52skvt
After the Second World War, why did Germany abandon the development of the Panzerkampfwagen VI line of tanks and instead develop the Leopard tanks?
[ { "answer": "While there were a lot of problems and drawbacks with the German heavy tanks, the real reason is that armored doctrine simply shifted away from the 'Heavy' tank as an important weapon and towards an all around Main Battle Tank concept. \n\nTo get to the Leopard, weve first got to talk a little bit about how West Germany rearmed following World War Two. This process, for reasons I wont get into here, was very long and drawn out. Negotiations for rearmament didnt really even get started until 1950, when the Korean invasion shocked the world. Even still, foot dragging in Europe prevented much movement on the issue until 1954 when Germany was finally permitted to rebuild her military. At that point, the Germans had to create a completely new organization (the *Wehrmacht* had been completely destroyed after World War Two) and equip it with weapons which didnt yet exist. In the early 1950s this was done primarily through military assistance from the United States. This meant that for much of the 1950s and early 1960s, the German army used second hand American tanks. \n\nBut German designers were looking to build a domestic tank to replace imported materials. And, as you might expect, they wanted to produce cutting edge weapons. The late '50s had seen a pretty big shift in armored theory, especially in the United States. This was the result of decades worth of thinking, but essentially the US Army had slowly pushed away from an army built up of multiple tank models towards one which used a single design. Rather than have a heavy breakthrough tank (a project which the US never seemed to really get off the ground), a medium exploitation tank, and a light reconnaissance tank, the US began to pair its forces down to one tank. The Patton series of tanks really represented this switch to the MBT role. These tanks had guns which could kill *most* of the tanks in the Soviet armory, had armor which afforded them some protection, and yet maintained enough battlefield agility to relocate quickly.\n\nNow, the Americans did fiddle with a heavy tank and a light tank (the M103 and the M551), but they generally found these tanks unsatisfactory on the battlefield, and quickly relegated them to niche specialist roles. Likewise, the British experimented with also experimented with heavier tank designs as well. But generally these were rejected in favor of lighter MBTs. The Germans realized why when they began to study the problem of the nuclear battlefield. NATO war planners had spent much of the 1960s, with and without German help, trying to figure out how to fight a hypothetical Pact invasion. And the answer they came up with involved a large number of nuclear weapons. Unlike World War Two, were large masses of men concentrated their effort into a narrow section of static enemy defense, the nuclear battlefield was fluid. Forces had to move fast and hit hard, and they especially had to avoid elaborate defensive works which were prime targets for nuclear attack. Heavy tanks simply lacked the speed and mobility to move rapidly across the battlefield and strike the enemy where he was most vulnerable. And this was a problem that wasnt just limited to the West. At nearly the same time the Soviets recognized that their heavy tank designs were simply too large and too good a target to be practical on the battlefield. \n\nSo, I would argue that the Leopard 1 was the German Army's first attempt to solve this question. 
They attempted to design a weapon which could operate on the nuclear battlefield utilizing a mixture of both off-the-shelf and domestic weapons. While their solution was uniquely German, it recognized the realities of the nuclear battlefield and it conformed to many of the standards set forth by their allies. And as to the Tiger series: unfortunately for such a beautiful tank, time had simply passed the heavy tank by. Had Germany rearmed in 1946, you might have seen a Tiger 3 tank. But by the mid-1960s, when the Leopard entered production, heavy tanks were simply on the way out. \n\n*A note on Sources*\n\nThis is compiled from a reading of several secondary sources, some of which are out of print. The best general summary of tank development at this time would be Richard Ogorkiewicz's books, especially the *Technology of Tanks*. Another good book is [The Cold War U.S. Army: Building Deterrence for Limited War](_URL_0_). Now this book obviously focuses on the US Army, not the German, but it explores many of the problems associated with planning for the nuclear battlefield. It also talks about the Army's institutional response to those challenges, from strategic and organizational changes, all the way down to weapons development and planning. It also does discuss the Germans extensively as they dealt with the problems of rearmament, and then integration into the overall European command. ", "provenance": null }, { "answer": "The original Tiger tank had a very large number of problems, the greatest of which was that Allied medium tanks had equivalent firepower in 1944, just over a year after the Tiger first saw combat, but at half the weight. The Germans realized this, and the Tiger B/Tiger II/King Tiger, the development of which began before the first Tiger left the factory, had very little in common with the original Tiger tank. However, even the new Tiger tank was vulnerable to the gun of the Soviet IS-2 and American Pershing at its inception, in addition to lighter guns (17 pdr and D-10) mounted on medium tanks and medium tank destroyers, which again caused the Germans to seek a replacement design. The heavy Maus and E-100 tanks that would be impervious (at least frontally) to these guns ended up being cancelled because they were simply impractical. The decreasing quality of German armour meant that armour had to grow thicker and thicker in order to resist Allied weapons, making these superheavy tanks incredibly impractical, even compared to the Tigers.\n\nBy the time the German army was allowed to rebuild after the war, the heavy tank concept had taken enormous steps forward. The Soviets built the IS-7, a heavy tank with the size and weight of the King Tiger and enough armour protection to be impervious to the 128 mm gun of the Jagdtiger and Maus. That wasn't all: the new generation of heavy tanks, the Object 752 and Object 777, had as much effective armour as the E-100 while weighing a third as much. 
The Tigers, already not exactly progressive designs in the 1940s, looked hilariously primitive by comparison.\n\nSources:\n\n*Interrogation of Herr Stiele Von Heydekampf: German Tank & Engine Program*\n\n*Ministry of Supply Armour Branch Report on Armour Quality & Vulnerability of Royal Tiger*\n\n*Otchet po ispytnaniyu snaryadnym obstrelom lobovykh detaley korpusa i bashni nemetskogo tyazhelogo tanka Tigr B*\n\nYuri Pasholok, *Panzerkampfwagen Maus*\n\nYuri Pasholok, *Neschastlivye Tri Semyorki*\n\nNikolai Nevsky, *IS-7 Titan Opozdavshiy na Voynu*\n", "provenance": null }, { "answer": "While the other posters have discussed the relative technical demerits of the wartime \"cats\", it is important to realize that the Bundeswehr procurement procurement process was a highly politicized one. Under the auspices of *Amt Blank*, Adenauer erected a shadow defense ministry in 1950 under the leadership of Theodor Blank that sought to prepare the way for a West German rearmament. Adenauer saw that rearmament was desirable both as a guarantor of FRG security (especially if the US withdrew into isolation), and as a symbol that the FRG had regained its sovereignty. much of the proto-defense ministry favored a relatively large, sophisticated force that would guarantee FRG security. But *Amt Blank*'s plans had to tread carefully because the issue of German rearmament was such a charged topic coming five years after the war. Much of Western European public opinion was against rearmament, as was a sizable percentage of the FRG's population, and most Western European policymakers favored a very limited form of rearmament. The British, for example, hemmed and hawed between supporting a limited FRG rearmament and a broader neutralization of both Germanys. \n\nTherefore the Adenauer government had to thread between two contentious needles of both pushing for a quick rearmament and doing so in a way that did not alienate their would-be allies. Rearmament needed to be quick not only because the outbreak of the Korean War stimulated worst-case scenario thinking, but also a *fait accompli* would prevent any of the FRG's partners from getting cold feet, especially since it was known that the USSR would vigorously protest German rearmament. One strategy to accomplish this was through the European Defense Community (EDC), an attempt to create a common European army. The EDC proved to be a dead end though due to inter-European squabbles, but also because of disagreements over German force levels. Both Paris and London pushed for a much smaller German contribution than what Bonn wanted. Although the EDC proved to be a dead end, it did allow for the FRG to test the waters for rearmament and it found that while many Western European leaders were uncomfortable with the idea, they could stomach it. The result was that the FRG instead formed its own national army, the Bundeswehr, and incorporated it into an organization that had fallen into a degree of disuse during the protracted abortion of the EDC, NATO. \n\nBundeswehr procurement followed a two-pronged strategy under the newly-formed Ministry of Defense under Blank and his successor Franz Josef Strauß. The first part of the stratagem was to gain a maximum, modern force quickly. Termed \"Broad Armament\", this meant acquiring advanced weapons from friendly nations. The initial main partner in this process was the US, and much of the groundwork of supplying the Germans had been laid by US Undersecretary of Defense Frank Nash during the EDC debates. 
The Nash List provided the Bundeswehr both surplus M47 and M48A3 tanks under the Military Assistance Program (MAP) as well as other sophisticated equipment like F-86s. The Bundeswehr also sought to fill in gaps by tendering orders to Western European states. This not only had the effect of making German rearmament appear \"European,\" but also eased the FRG's trade deficit with Western Europe, which had grown with the export-orientated *Wirtschaftswunder*. Thus the new Bundeswehr was armed with American heavy equipment, but French Alouette II helicopters, Belgian small arms, Turkish ammunition, and Swiss-designed and British-built Hispano Suiza HS-30 IFV. Israel too was a beneficiary of Broad Armament providing the new German army with mortars and Uzis (built by Belgium). \n\nBut this heterogeneous mix of weaponry was not an end onto itself, but a means for the Bundeswehr to familiarize itself with modern equipment in order to fulfill \"deep armament.\" This policy stressed that FRG defense firms would gradually take over the Bundeswehr's needs and provide its military chiefs with the equipment they wanted. The Germans, most of whom were veterans of the Second World War, were disappointed with the quality of MAP weapons and the US's poor performance in Korea did not make American methods ones to emulate. Blank had tried to get British Centurion tanks, but the British were alarmed that the Bundeswehr was not becoming an infantry-focused defensive force, but was instead developing a strong armored component, and refused. The British concerns proved apt as the Bundeswehr sought with deep armament to create the type of force structure that reflected what it saw as the lessons of the war. Both Blank and Strauss, as well as the new service heads Spiedel and Heusinger, envisioned a mix of modernized Panzer and Panzer Grenadier formations to be the bedrock for the Bundeswehr. This meant, for example, developing the IFV concept for mechanized infantry and abandoning the US's concept of a battle taxi. Even though the HS-30 was a substandard piece of equipment, it did provide practical experience that went into the design of the Marder IFV. Deep armament also called for the Bundeswehr to be nuclear-capable, with fighter-bombers and artillery, later missile batteries, capable of deploying nuclear weapons. This issue was too much for other NATO partners to completely stomach, as well as quite unpopular at home, and while the Bundeswehr did evolve a nuclear-capable force, the warheads were strictly under NATO (ie non-German) control. \n\nAs the Bundeswehr's nuclear capability demonstrates, German rearmament was still a political hot potato. This had implications for the development of the first generation of MBTs. Strauss, who hewed towards a Gaullist grand strategy in contrast to the more Atlantic-orientated members of Adenauer's cabinet, proposed a joint Franco-German development for a modern tank in 1960, the *Standardpanzer*30. This was part of Strauss's larger game of building up a closer Franco-German rapprochement, which could have included a French nuclear weapons in exchange for a FRG withdrawal from NATO. The resulting competition had one French firm producing what would be the AMX-30, while two German teams worked on the proposed German design. The two teams were headed by both the Porsche and Henschel firms respectively and included many veterans from those firms' wartime design and production teams. 
The Porsche design eventually won out over its French and Henschel rivals and became the Leopard I. \n\nAs the above indicates, West German rearmament was not just simply a case of picking up where wartime development left off. Instead it was a tricky process for the Bundeswehr as it sought to navigate various political tripwires to achieve the type of force structure the Germans actually wanted. Fielding a tank force was not just a matter of damaged German industrial infrastructure, but making such a reborn force palatable to the FRG's neighbors to the West. Wartime lessons had underscored the need to have a modern and technologically capable force and despite the Wehrmacht's reputation for scientific *Wunderwaffen*, many of their much-vaunted equipment suffered from bugs and defaults from too hasty of a development cycle. By the time the political constellations had formed allowing for an indigenous German heavy tank, the modern battlefield had changed so much, especially with the anticipated NBC environment of a war in Central Europe, that a tank designed in 1960 would be a very different animal than one designed in 1943. \n\n*Sources*\n\nBirtle, A. J. *Rearming the Phoenix: U.S. Military Assistance to the Federal Republic of Germany, 1950-1960*. New York: Garland, 1991. \n\nCorum, James S. *Rearming Germany*. Leiden: Brill, 2011.\n\nLarge, David Clay. *Germans to the Front: West German Rearmament in the Adenauer Era*. Chapel Hill: University of North Carolina Press, 1996. \n\nWenger, Andreas, Christian Nuenlist, and Anna Locher. *Transforming NATO in the Cold War: Challenges Beyond Deterrence in the 1960s*. London: Routledge, 2007. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "29073044", "title": "Tanks in the German Army", "section": "Section::::German design and development.:Cold War.\n", "start_paragraph_id": 61, "start_character": 0, "end_paragraph_id": 61, "end_character": 416, "text": "After the war, the Germans were given US equipment and the Panzerlehrbataillon armour forces established in April 1956. The Leopard tank project started in November 1956 in order to develop a modern German tank, the \"Standard-Panzer\", to replace the Bundeswehr's United States-built M47 and M48 Patton tanks, which, though just delivered to West Germany's recently reconstituted army, were rapidly growing outdated.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1002537", "title": "Sturmgeschütz", "section": "Section::::Combat use.:Broadened use.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 810, "text": "Because of the decreased costs and ease of production, the Germans began to use the StuGs to replace standard tank losses. They were used in this fashion as German losses of all types of armored vehicles now exceeded production. The StuGs proved effective in a defensive role, but were a poor substitute for conventional tanks offensively. Thus the panzer regiments continued to be equipped with Panzer IV and Panther medium tanks for offensive operations. Meanwhile, heavier armed tank destroyers were developed, such as the Jagdpanzer IV and the Jagdpanther, which combined the low silhouette of the StuG with the heavier armament of the Panther and Tiger II tanks, respectively. 
Still, the StuG III was an effective armored fighting vehicle long after the Panzer III had been retired as a main battle tank.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "29073044", "title": "Tanks in the German Army", "section": "Section::::Overview.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 723, "text": "The German Army first used light Panzer I tanks, along with the Panzer II, but the mainstays were the medium Panzer IIIs and Panzer IVs which were released in 1937. The IV became the backbone of Germany's panzer force and the power behind the blitzkrieg. During the invasion of Russia in 1941, the Germans encountered the famous and technologically advanced Soviet T-34 tanks. This led Germany to develop the Panther or Panzer V in response. Its 75mm gun could penetrate the new Soviet tanks. Germany also developed the heavy Tiger I, released in 1942. The Tiger could defeat any Allied tank and was soon joined by the Tiger II, also known as King Tiger, but too few were produced to impact the war in any discernible way.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "29073044", "title": "Tanks in the German Army", "section": "Section::::Combat history.:Cold War.\n", "start_paragraph_id": 127, "start_character": 0, "end_paragraph_id": 127, "end_character": 249, "text": "After the war, the Germans were given United States-built M47 and M48 Patton tanks and in 1956 the Germans began development of the Leopard tank project to build a modern German tank, the Standard-Panzer, to replace the Bundeswehr's outdated tanks.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "241257", "title": "Panzer IV", "section": "Section::::Combat history.:Poland, Western Front and North Africa (1939–1942).\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 868, "text": "Despite increased production of the medium Panzer IIIs and IVs prior to the German invasion of France on 10 May 1940, the majority of German tanks were still light types. According to Heinz Guderian, the Wehrmacht invaded France with 523 Panzer Is, 955 Panzer IIs, 349 Panzer IIIs, 278 Panzer IVs, 106 Panzer 35(t)s and 228 Panzer 38(t)s. Through the use of tactical radios and superior tactics, the Germans were able to outmaneuver and defeat French and British armor. However, Panzer IVs armed with the KwK 37 L/24 tank gun found it difficult to engage French tanks such as the Somua S35 and Char B1. The Somua S35 had a maximum armor thickness of , while the KwK 37 L/24 could only penetrate at a range of . The British Matilda II was also heavily armored, with at least of steel on the front and turret and a minimum of 65 mm on the sides, but were few in number.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "584956", "title": "Leopard 2", "section": "Section::::History.:Development.:Series production.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 1222, "text": "The decision to put the Leopard 2 tank in production for the German army was made after a study was undertaken, which showed that adopting the Leopard 2 mod would result in a greater combat potential of the German army than producing more Leopard 1A4 tanks or developing an improved version of the Leopard 1A4 with 105/120 mm smoothbore gun, improved armour protection, a new fire control system and a or engine. Various changes were applied to the Leopard 2 design before the series production started. 
Engine, transmission and suspension were slightly modified and improved. The ballistic protection of turret and hull was improved and weak spots were eliminated. The turret bustle containing the ready ammunition racks and the hydraulic systems was separated from the crew compartment and fitted with blow-out panels. The development of several new components introduced to the Leopard 2 during the Leopard 2AV development and after the US testing was completed. For the series version the Hughes-designed laser rangefinder made with US Common Modules was chosen over the passive EMES-13 rangefinder. The EMES-13 system was considered to be the superior solution, but the Hughes system was cheaper and fully developed.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1399141", "title": "German tanks in World War II", "section": "Section::::Development and uses.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 518, "text": "The invasion of the Soviet Union in Operation Barbarossa signalled an enormous change in German tank development. In July 1941 36 Panzer and motorized infantry divisions were assigned to the invasion fielding over 3000 AFV's. In June 1941, these tanks first encountered the Soviet T-34. The German tanks were outclassed in every aspect of battle performance. A little later the American-made M3 Lee and then M4 Sherman tanks were encountered in the Western Desert, the M4 outclassing German armor in that theater too.\n", "bleu_score": null, "meta": null } ] } ]
null
8mowta
how can ants jump with these small legs?
[ { "answer": " O. rixosus as the only ant species that can jump with either its legs or its mandibles. Trap-jaw ants are known for using their powerful jaws to launch themselves into the air, somersaulting several times their own body length to evade predators.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2594", "title": "Ant", "section": "Section::::Behaviour and ecology.:Locomotion.\n", "start_paragraph_id": 70, "start_character": 0, "end_paragraph_id": 70, "end_character": 670, "text": "The female worker ants do not have wings and reproductive females lose their wings after their mating flights in order to begin their colonies. Therefore, unlike their wasp ancestors, most ants travel by walking. Some species are capable of leaping. For example, Jerdon's jumping ant (\"Harpegnathos saltator\") is able to jump by synchronising the action of its mid and hind pairs of legs. There are several species of gliding ant including \"Cephalotes atratus\"; this may be a common trait among arboreal ants with small colonies. Ants with this ability are able to control their horizontal movement so as to catch tree trunks when they fall from atop the forest canopy.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "32460566", "title": "Cephalotes supercilii", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 236, "text": "Cephalotes supercilii is a species of arboreal ant of the genus \"Cephalotes\", characterized by an odd shaped head and the ability to \"parachute\" by steering their fall if they drop off of a tree. Giving their name also as gliding ants.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "32460597", "title": "Cephalotes biguttatus", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 249, "text": "Cephalotes biguttatus is a species of arboreal ant of the genus \"Cephalotes\", characterized by an odd shaped head and the ability to \"parachute\" by steering their fall if they drop off of the tree they're on. Giving their name also as gliding ants.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24855175", "title": "Cephalotes atratus", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 249, "text": "Cephalotes atratus is a species of arboreal ant in the genus \"Cephalotes\", a genus characterized by its odd shaped head. These ants are known as gliding ants because of their ability to \"parachute\" by steering their fall if they lose their footing.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "32460570", "title": "Cephalotes umbraculatus", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 235, "text": "Cephalotes umbraculatus is a species of arboreal ant of the genus \"Cephalotes\", characterized by an odd shaped head and the ability to \"parachute\" by steering their fall if they drop off of a tree. They are also known as gliding ants.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7954779", "title": "Odontomachus bauri", "section": "Section::::Jumping Records.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 547, "text": "On the other hand, “escape jumps” propel the ant vertically and are proved to be intentional because of the behaviors that precede the jump. Before an “escape jump,” the ant will orient its antennae and head perpendicularly to the intruder. 
Additionally it will sway its entire body and then lift one leg vertically. This is quite an elaborate routine to prepare to propel itself 7 cm off of the ground. The reason behind this maneuver is to be able to grab onto vegetation usually located around their nests in order to provide a form of escape.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "32460565", "title": "Cephalotes squamosus", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 248, "text": "Cephalotes squamosus is a species of arboreal ant of the genus \"Cephalotes\", characterized by an odd shaped head and the ability to \"parachute\" by steering their fall if they drop off of the tree they're on. Giving their name also as gliding ants.\n", "bleu_score": null, "meta": null } ] } ]
null
6bbnje
why do mobile game ads look nothing like the actual game play?
[ { "answer": "Because it works and false advertising is a vague law that is hard to enforce outside of the U.S. \n\nJust getting people to download your game makes it appear higher on lists that gets more people to download your game, they don't care if you uninstall it right after, their hope is you will download it, put in a bit of effort and enjoy it, if not, you will make them appear on higher lists which might help them in the long run. ", "provenance": null }, { "answer": "The point of advertising is to get people interested in the product. That's not the same as informing people ABOUT the product. Advertisers use all sorts of tricks to make their product look a lot better than it actually is; [watch this video](_URL_0_) or similar videos, if you want proof that it's not just the videogame industry doing it.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "4115260", "title": "Mobile marketing", "section": "Section::::In-game mobile marketing.\n", "start_paragraph_id": 82, "start_character": 0, "end_paragraph_id": 82, "end_character": 475, "text": "One form of in-game mobile advertising is what allows players to actually play. As a new and effective form of advertising, it allows consumers to try out the content before they actually install it. This type of marketing can also really attract the attention of users like casual players. These advertising blur the lines between game and advertising, and provide players with a richer experience that allows them to spend their precious time interacting with advertising.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4115260", "title": "Mobile marketing", "section": "Section::::In-game mobile marketing.\n", "start_paragraph_id": 83, "start_character": 0, "end_paragraph_id": 83, "end_character": 554, "text": "This kind of advertisement is not only interesting, but also brings some benefits to marketers. As this kind of in-gaming mobile marketing can create more effective conversion rates because they are interactive and have faster conversion speeds than general advertising. Moreover, games can also offer a stronger lifetime value. They measure the quality of the consumer in advance to provide some more in-depth experience,So this type of advertising can be more effective in improving user stickiness than advertising channels such as stories and video.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5809564", "title": "In-game advertising", "section": "Section::::Advertising industry reaction to IGA.:Reducing advertiser risk.\n", "start_paragraph_id": 25, "start_character": 0, "end_paragraph_id": 25, "end_character": 587, "text": "It is also difficult to plan in-game advertisements because game development generally takes longer than the development and implementation of an advertising campaign; typically, most static advertisements must be disclosed to the developers at least eighteen months before a game is released. 
This timing discrepancy can be solved though use of dynamic advertisements, which are available for purchase at any time in-game space is available, but this choice constrains the advertisement to the in-game predetermined spaces and sizes and does not allow for highly integrated static ads.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "424542", "title": "Advertising in video games", "section": "Section::::Categories.:In-game advertising.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 336, "text": "The principal advantage of product placement in in-games advertising is visibility and notoriety. For advertisers an ad may be displayed multiple times and a game may provide an opportunity to ally a product's brand image with the image of the game. Such examples include the use Sobe drink in Tom Clancy’s Splinter Cell: Double Agent.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9933471", "title": "Digital marketing", "section": "Section::::Latest developments and strategies.\n", "start_paragraph_id": 46, "start_character": 0, "end_paragraph_id": 46, "end_character": 307, "text": "6. Game advertising: Game ads are advertisements that exist within computer or video games. One of the most common examples of in-game advertising is billboards appearing in sports games. In-game ads also might appear as brand-name products like guns, cars, or clothing that exist as gaming status symbols.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18680008", "title": "Zylom", "section": "Section::::Games.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 491, "text": "Although Zylom.com is not a free gaming site, it still uses third-party ads for its online trial games. The advertising consist of not only regular images and flash animations, but also constant in-game ads. These ads pause the trial of the game for an amount of time while displaying themselves, and the player is forced to wait until the ad finishes before returning to the game. This form of advertising during gameplay can easily irritate some players and spoil their gaming experience.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4115260", "title": "Mobile marketing", "section": "Section::::In-game mobile marketing.\n", "start_paragraph_id": 80, "start_character": 0, "end_paragraph_id": 80, "end_character": 297, "text": "In in-game mobile marketing, advertisers pay to have their name or products featured in the mobile games. For instance, racing games can feature real cars made by Ford or Chevy. Advertisers have been both creative and aggressive in their attempts to integrate ads organically in the mobile games.\n", "bleu_score": null, "meta": null } ] } ]
null
36drqc
tessellation (video games)
[ { "answer": "[NVidia has some good sample images](_URL_0_)\n\nTessellation basically takes a shape/set of vertices and smooths them by creating more \"in between\" vertices that try to counteract sharp edges. This can be done dynamically so objects use more polygons the closer they get to the camera (so distant objects aren't as straining on the hardware). It's an automatic process so it's limited by how well it's implemented and by the original model.", "provenance": null }, { "answer": "When rendering triangles in a game, you can have the graphics card split up geometry that's very close to the camera into extra triangles, giving nearby surfaces more detail. With it, you don't have to waste processing time giving farther-away surfaces that same detail (which you wouldn't be able to see anyway).", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1922399", "title": "3D Tetris", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 605, "text": "3D Tetris is a puzzle video game developed by Technology and Entertainment Software and published by Nintendo. It was initially released for the Virtual Boy on March 22, 1996, in North America only. The game allows players to control multiple falling blocks, rotating and positioning them to clear layers in a \"Well\". The game is similar to other Tetris games, but uses a three-dimensional playing field as opposed to the traditional two-dimensional view. The game contains multiple modes and gametypes, as well as different difficulty settings and levels, which change different aspects of the gameplay.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34168", "title": "Xenogears", "section": "Section::::Gameplay.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 607, "text": "\"Xenogears\" combines traditional role-playing video game structures such as Square's signature Active Time Battle system with new features particular to the game's martial-arts combat style. It features two slightly different battle systems: in the first, the user controls human characters in turn-based combat manipulated through the sequencing of learned combos. The second, making use of \"gears\", introduces different sets of statistics and abilities for each character. \"Xenogears\" features both traditional anime and pre-rendered CGI movie clips by Production I.G to illustrate important plot points.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2348476", "title": "Heavenly Sword", "section": "Section::::Gameplay.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 297, "text": "For exploration and certain battles, the game also makes use of quick time events (QTE). During a QTE, a symbol for a certain button or for an action such as moving the analog stick to the right or left appears on screen and the player must match what is shown to successfully complete the scene.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25433795", "title": "TERA (video game)", "section": "Section::::Gameplay.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 490, "text": "\"TERA\" has typical MMORPG features such as questing, crafting, and player versus player action. The game's combat uses a real-time battle system that incorporates third-person camera view. 
The player targets an enemy with a cross-hair cursor rather than clicking or tabbing an individual opponent (which is called the \"Non-Target battle system\" by the developer). The Players need to actively dodge enemy attacks. A keyboard and mouse or a control pad can be used to control the character.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1922399", "title": "3D Tetris", "section": "Section::::Gameplay.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 1091, "text": "\"3D Tetris\" is a puzzle game that uses a three-dimensional playing field as opposed to the traditional two dimensions used in other versions of Tetris. The game contains multiple different modes; \"3-D Tetris\", \"Center-Fill\" and \"Puzzle\", each having different gametypes. The player can choose multiple different levels for each of these modes, which change the speed at which the blocks fall, as well as choose three difficulty settings; easy, medium and hard. The difficulty changes which types of block fall. Each mode contains a \"Well\", which itself contains 5 vertical layers that the player must fill with falling three dimensional blocks that can be rotated horizontally and vertically, as well as positioned in four different directions. Each block displays a shadow underneath it which indicates where it will land. The game's camera continually adjusts itself, but the player can manually readjust it. Each mode's HUD displays a \"radar\" which provides information about each of the Well's five layers, as well as the next block to fall, which is represented by a \"block character\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3142580", "title": "Kye (video game)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 323, "text": "Kye is a real-time puzzle game with a variety of interacting objects. It takes ideas from puzzle games like \"Sokoban\" and \"Boulder Dash\", but the inclusion of active objects gives it a real-time component, and it can also produce arcade-game levels like those found in \"Pac-Man\". Anyone can create new levels for the game.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7479772", "title": "The Tesseract (novel)", "section": "Section::::Overview.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 302, "text": "The term 'tesseract' is used for the three-dimensional net of the four-dimensional hypercube rather than the hypercube itself. It is a metaphor for the characters' inability to understand the causes behind the events which shape their lives: they can only visualize the superficial world they inhabit.\n", "bleu_score": null, "meta": null } ] } ]
null
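As an illustrative aside to the tessellation answers above (not part of the original thread), here is a minimal CPU-side sketch of the idea they describe: pick a subdivision level from camera distance, then split triangles at their edge midpoints so nearby surfaces get more vertices while distant ones stay coarse. Real engines do this on the GPU in tessellation shaders; the function names and the near/far thresholds below are made up for illustration.

```python
# Sketch of distance-based tessellation: closer triangles are split into more
# sub-triangles, distant ones are left coarse. Purely illustrative.

def tess_level(distance, near=1.0, far=100.0, max_level=4):
    """Map camera distance to a subdivision depth: closer -> deeper subdivision."""
    t = max(0.0, min(1.0, (far - distance) / (far - near)))
    return round(t * max_level)

def midpoint(a, b):
    return tuple((x + y) / 2.0 for x, y in zip(a, b))

def subdivide(tri, level):
    """Recursively split one triangle into four at its edge midpoints."""
    if level == 0:
        return [tri]
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    out = []
    for child in [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]:
        out.extend(subdivide(child, level - 1))
    return out

# One triangle, evaluated as if it were 5 units vs. 90 units from the camera.
tri = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
print(len(subdivide(tri, tess_level(5.0))))   # many sub-triangles (4**level)
print(len(subdivide(tri, tess_level(90.0))))  # stays a single triangle
```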
l8m1i
What is the geologist/hydrologist opinion on Fraking to extract shale gas?
[ { "answer": "My dad wrote his Ph.D thesis on hydraulic fracturing, and I've asked him about on-shore shale plays. Also, I have worked for a couple oil companies and a geophysics company that was working on an onshore shale play. \n\nEssentially, there's a renewed interest in shale plays because of hydraulic fracturing, but we're still kinda running around with our heads in our butts in terms of being really efficient at it, versus a normal associated gas collection. Plus, refinement exists, but for whatever reason the process of refinement actually is worse for the environment, or makes the shale gas worse for the environment. \n\nEdit: addition, one of the other reasons for renewed interest is because of the gulf oil ban.", "provenance": null }, { "answer": "I'd like to hear some more informed opinions, too. Particularly about the claims that fraking leads to small tremors and polluted water wells. With all the ruckus coming from varying interest groups and opponents, its really difficult to get a good perspective.", "provenance": null }, { "answer": "Geologist here. Confusion appreciated. Though you need to narrow down what the \"true scientific question\" you seek the answer to is. Off the cuff, Its another way to extract a resource. It can be done badly or very carefully, but this is a case by case basis. We understand the engineering and science and uncertainty that involved with pursuing it as a resource. \nIt opens up a huge potential resource which could be good for the economy, but is bad because we are putting more CO2 into atmosphere. It \"could\" impact aquifers if done poorly in certain places, but there are ways to mitigate those problems. \n\nI would say for the most part there are many people paying attention to doing it carefully, however, because no baseline studies were done, its difficult to defend the putative impacts. There is an institutional laziness at work that must be corrected for to proceed into the future with this technique, good oversight, etc.. I appreciate the spirit of efforts of documentaries like Gasland, though these tend to be polarizing by design. ", "provenance": null }, { "answer": "Professional hydrogeologist, here. I have ZERO actual experience with fracking, but I know some things about well construction and the failure thereof. I also know about contaminant transport in groundwater, though unconsolidated sediments are my usual playground (as opposed to the actual-rock that shale is). \n\nIt appears to me from my cursory reading on the subject that the problems occurring related to fracking are not caused by the fracturing of the rock, but due to poor construction of the wells. A poorly-constructed well becomes a conduit from down deep (where the natural gas and fracking additives are) to shallower depths (where municipal and residential wells will typically be located). The additives and natural gas run up the side of the well casing and contaminate shallower aquifers.\n\nIn my lightly-informed opinion, the fracking industry is shooting themselves in the foot by not self-imposing robust well-construction standards that properly seal off the wellbore. 
A guy who can light his tap water on fire won't care much whether that occurred due to deep fracturing 2000 feet down, or due to a failed seal in the injection well near his household well.\n\nSo, it seems to me that it is largely an engineering problem that has been allowed to become a political problem.", "provenance": null }, { "answer": "Shale gas can be very profitable because it is one of the large untapped fossil fuel sources available to us. The problems arise because the scientific evidence that comes out of the technique of hydraulic fracturing is usually biased, either in favor of the gas companies themselves or the environmental activists/municipalities. \n\nIt is hard to distinguish what is actually happening. You could view something like the documentary \"Gasland\" and be completely opposed to the entire oil/gas industry from just that movie alone, but those people don't understand that the movie was completely one-sided and was basically created as propaganda against oil/gas development.\n\nAt this point, it is hard to say what the true opinion is on the hydraulic fracturing process because it is hard to find unbiased scientific data for the process. Another thing is that for something that could be as beneficial and/or environmentally damaging as hydraulic fracturing, true scientific evidence could take years to develop and process just because of how extreme it may be.\n", "provenance": null }, { "answer": "I work for an EGS (_URL_1_) company as a geophysicist. Our entire goal is to \"frack/frak/STIMULATE\" rocks to create reservoirs for fluids. I've spent a few years modeling and simulating stimulations.\n\nIs fracking bad? Can be, sure. If you push gas/oil/chemicals into an aquifer, sure that's no good. And it's plausible that can happen. Should we outlaw all well stimulation techniques like France [_URL_0_]? I think that's maybe an over-reaction.\n\nThe problem is that studies are necessary. That seems easy to fix, but these wells can cost $10 million apiece, easily. A stimulation can cost $500,000. So if you want real, long-term, hard science studies, you're looking at a price tag of at least $50 million. On top of that, if the results showed that fracking was particularly dangerous, you'd better believe that the oil companies would spend a fortune burying it.\n\nAlso, make sure you draw a line between shale fracking and all fracking. Deep oil and EGS stimulation occur well below the water table, and run very little risk of affecting natural aquifers.", "provenance": null }, { "answer": "Geophysicist here, working oil and gas exploration. I've been drilling and fracking wells in Canada for years. Started some of the earliest shale gas plays in western Canada.\n\nFirst off, let's talk about the frac fluid. Yes, it can contain some harmful stuff in small quantities. But for the most part, it's simply water and sand. This was the \"breakthrough\" that started shale gas (believe it or not, fracking with simply water was uncommon before shale gas). \n\nIn western Canada, we have been \"fracking\" nearly every well that's been drilled for the last 15 years. They were traditional reservoirs, but of a poor quality, and required artificial stimulation to flow. \n\nA few things you should know about shale gas. Firstly, it's almost always quite deep. We're talking a mile deep or deeper. Your water wells go a hundred feet deep, at the most. The water that your well pulls up cannot possibly be connected to the shale reservoir directly. 
If this were the case, if there was a permeable pathway from 1 mile beneath the surface of the earth to the aquifer accessed by a water well, there would be no possibility for hydrocarbon accumulation. Impermeable vertical barriers are, by their very nature, required for gas and oil accumulating under the surface (oil and gas \"float\"; if you don't stop them somehow, they float up to the surface).\n\nAs a geophysicist, I've been involved in what we call \"cross borehole microseismic\". What that entails is that you drill 2 wells. When you frack one well, you lower a string of seismographs into the other well. As you apply pressure and the first well begins to fracture, you can triangulate the position and magnitude of the fractures as if they were tiny miniature earthquakes. In a time-lapse sense, when processed and mapped, you can \"see\" the network of fractures developing around the borehole. Due to the natural stresses present, these fractures do NOT propagate up (the heaviest stress is the weight of the rock above -- aka, it's the most difficult fissure to open) or down, nearly as much as they do horizontally. Put your hands together as if you are praying, and with your fingers and palms touching push your knuckles apart. THAT'S how the fractures open: you're the borehole, and that fracture propagates away from you, not up.\n\nOk, so when we map these \"fracture networks\", even the largest ones go no more than 1000 yards away, horizontally, from the boreholes and usually no more than a 10th or a 5th of that distance up and down. So there is virtually no risk of \"fracking into an aquifer\".\n\nTechnically, that is somewhat untrue -- you can accidentally frack into an aquifer, but it is by no means connected to the surface. Now media and soundbite-hungry politicians, when they say aquifer, typically mean \"drinking water\" or \"fresh water table\", but that's usually only the \"topmost aquifer\"; there are lots and lots of very salty, naturally poisonous subsurface aquifers. It may be another layer of rock, more permeable and porous, lying above or below the shale -- but it's still a VERY long way beneath the surface.\n\nAlso, one more thing. Water KILLS shale gas wells. (Ironically, it's water that's needed to fracture them.) But if you accidentally DO frack into an aquifer, that well is done: you've wasted upwards of 10 million dollars and you're really not looked fondly upon. The water enters the fracture network, binds to the walls of the meagre permeability pathways you've created by fracking (seriously, like millimetre-sized fractures) and chokes off the well.\n\nSo, fracking isn't the problem.\n\nHere's what the problem is ... a) no well is perfect, and b) it takes TOO MANY WELLS to exploit shale gas.\n\nTypically, before shale gas, you'd go out and drill a well every sq mile, or perhaps 2 in a square mile. Sometimes you had bad casing, or a bad cement job; these problems, while not common, happen a lot. A well is essentially a screwed-together steel straw that goes 10,000 ft into the ground. At the bottom, you punch a bunch of holes in the steel so that whatever is in the rock outside the steel can get into the well. The steel is a few inches in diameter less than the hole in which it is placed. Upon finishing the drilling, you \"set casing\", which replaces your drilling apparatus in the hole with the steel straw. 
Then you pump high-pressure liquid cement down the straw; the cement hits the bottom and then gushes back up outside of the straw, filling the gap between the casing and the edges of the borehole. The cement's job is to a) secure the well in place, and b) prevent any communication between all the layers of earth you just poked holes in.\n\nSo you can probably see that it would be virtually impossible to get a perfect cement job. But \"close enough\" is usually good enough.\n\nNow, combine that fact with the fact that for full exploitation of shale gas you can need 8, 16, or more wells in a square mile. THAT'S A LOT OF WELLS, a few of which are bound to be shitty.\n\nTHIS is the problem: it's not fracking that needs to be regulated, it's cement bonding and the number of wells. Because it's poor cement bonds in the borehole which allow things deep to communicate with things shallow (aka gas in your well water), not the actual fracking at ALL.\n\nIf anything, people should be encouraging more and bigger fracs, because the number of wells that are required is directly related to how much rock we can \"see\" gets fractured. Bigger fracs mean fewer wells.\n\n ", "provenance": null }, { "answer": "My father-in-law was a petrophysical engineer at Shell for over forty years and he supports fracking. I think he would do an AMA as well, which would be really interesting imo as he has been the lead engineer on a lot of new rigs in different parts of the world. ", "provenance": null },
The water enters the fracture network, binds to the walls of the meagre permeability pathways you've created by fracking (seriously, like millimetre-sized fractures) and chokes off the well.\n\nSo, fracking isn't the problem.\n\nHere's what the problem is ... a) no well is perfect, and b) it takes TOO MANY WELLS to exploit shale gas.\n\nTypically, before shale gas, you'd go out and drill a well every sq mile, or perhaps 2 in a square mile. Sometimes you had bad casing, or a bad cement job; these problems, while not common, happen a lot. A well is essentially a screwed-together steel straw that goes 10,000 ft into the ground. At the bottom, you punch a bunch of holes in the steel so that whatever is in the rock outside the steel can get into the well. The steel is a few inches in diameter less than the hole in which it is placed. Upon finishing the drilling, you \"set casing\", which replaces your drilling apparatus in the hole with the steel straw. Then you pump high pressure liquid cement down the straw; the cement hits the bottom and then gushes back up outside of the straw, filling the gap between the casing and the edges of the borehole. The cement's job is to a) secure the well in place, and b) prevent any communication between all the layers of earth you just poked holes in.\n\nSo you can probably see that it would be virtually impossible to get a perfect cement job. But \"close enough\" is usually good enough.\n\nNow, combine that fact with the fact that for full exploitation of shale gas you can need 8, 16, or more wells in a square mile. THAT'S A LOT OF WELLS, a few of which are bound to be shitty.\n\nTHIS is the problem: it's not fracking that needs to be regulated, it's cement bonding and the number of wells. Because it's poor cement bonds in the borehole which allow things deep to communicate with things shallow (aka gas in your well water), not the actual fracking at ALL.\n\nIf anything, people should be encouraging more and bigger fracs, because the number of wells that are required is directly related to how much rock we can \"see\" get fractured. Bigger fracks mean fewer wells.\n\n ", "provenance": null }, { "answer": "My father-in-law was a petrophysical engineer at Shell for over forty years and he supports fracking. I think he would do an AMA as well, which would be really interesting imo as he has been the lead engineer on a lot of new rigs in different parts of the world. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "12321716", "title": "Oil shale industry", "section": "Section::::Environmental considerations.\n", "start_paragraph_id": 31, "start_character": 0, "end_paragraph_id": 31, "end_character": 670, "text": "Mining oil shale involves a number of environmental impacts, more pronounced in surface mining than in underground mining. These include acid drainage induced by the sudden rapid exposure and subsequent oxidation of formerly buried materials, the introduction of metals including mercury into surface-water and groundwater, increased erosion, sulfur-gas emissions, and air pollution caused by the production of particulates during processing, transport, and support activities. 
In 2002, about 97% of air pollution, 86% of total waste and 23% of water pollution in Estonia came from the power industry, which uses oil shale as the main resource for its power production.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12321977", "title": "Shale oil extraction", "section": "Section::::Environmental considerations.\n", "start_paragraph_id": 65, "start_character": 0, "end_paragraph_id": 65, "end_character": 670, "text": "Mining oil shale involves a number of environmental impacts, more pronounced in surface mining than in underground mining. These include acid drainage induced by the sudden rapid exposure and subsequent oxidation of formerly buried materials, the introduction of metals including mercury into surface-water and groundwater, increased erosion, sulfur-gas emissions, and air pollution caused by the production of particulates during processing, transport, and support activities. In 2002, about 97% of air pollution, 86% of total waste and 23% of water pollution in Estonia came from the power industry, which uses oil shale as the main resource for its power production.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "45010", "title": "Oil shale", "section": "Section::::Environmental considerations.\n", "start_paragraph_id": 50, "start_character": 0, "end_paragraph_id": 50, "end_character": 670, "text": "Mining oil shale involves a number of environmental impacts, more pronounced in surface mining than in underground mining. These include acid drainage induced by the sudden rapid exposure and subsequent oxidation of formerly buried materials, the introduction of metals including mercury into surface-water and groundwater, increased erosion, sulfur-gas emissions, and air pollution caused by the production of particulates during processing, transport, and support activities. In 2002, about 97% of air pollution, 86% of total waste and 23% of water pollution in Estonia came from the power industry, which uses oil shale as the main resource for its power production.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8798949", "title": "Hydraulic fracturing in the United States", "section": "Section::::Economic impact.:Oil and gas supply.\n", "start_paragraph_id": 40, "start_character": 0, "end_paragraph_id": 40, "end_character": 1113, "text": "Some geologists say that the well productivity estimates are inflated and minimize the impact of the reduced productivity of wells after the first year or two. A June 2011 \"New York Times\" investigation of industrial emails and internal documents found that the profitability of unconventional shale gas extraction may be less than previously thought, due to companies intentionally overstating the productivity of their wells and the size of their reserves. The same article said, \"Many people within the industry remain confident.\" T. Boone Pickens said that he was not worried about shale companies and that he believed they would make good money if prices rise. Pickens also said that technological advances, such as the repeated hydraulic fracturing of wells, was making production cheaper. Some companies that specialize in shale gas have shifted to areas where the gas in natural gas liquids such as propane and butane. 
The article was criticized by, among others, \"The New York Times\" own public editor for lack of balance in omitting facts and viewpoints favorable to shale gas production and economics.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12322101", "title": "Environmental impact of the oil shale industry", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 677, "text": "Environmental impact of the oil shale industry includes the consideration of issues such as land use, waste management, and water and air pollution caused by the extraction and processing of oil shale. Surface mining of oil shale deposits causes the usual environmental impacts of open-pit mining. In addition, the combustion and thermal processing generate waste material, which must be disposed of, and harmful atmospheric emissions, including carbon dioxide, a major greenhouse gas. Experimental in-situ conversion processes and carbon capture and storage technologies may reduce some of these concerns in future, but may raise others, such as the pollution of groundwater.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1728672", "title": "Human impact on the environment", "section": "Section::::Energy industry.:Oil shale industry.\n", "start_paragraph_id": 64, "start_character": 0, "end_paragraph_id": 64, "end_character": 681, "text": "The environmental impact of the oil shale industry includes the consideration of issues such as land use, waste management, and water and air pollution caused by the extraction and processing of oil shale. Surface mining of oil shale deposits causes the usual environmental impacts of open-pit mining. In addition, the combustion and thermal processing generate waste material, which must be disposed of, and harmful atmospheric emissions, including carbon dioxide, a major greenhouse gas. Experimental in-situ conversion processes and carbon capture and storage technologies may reduce some of these concerns in future, but may raise others, such as the pollution of groundwater.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "32545907", "title": "Shale gas by country", "section": "Section::::Europe.:Netherlands.\n", "start_paragraph_id": 73, "start_character": 0, "end_paragraph_id": 73, "end_character": 366, "text": "Up until now there has not been a shale gas well for exploration purposes. The drilling of such a well has been suspended by the Dutch government due to environmental concerns. The Ministry of Economic affairs, Innovation and Agriculture is currently researching the impact of shale gas exploitation, and the results are expected to be published by the end of 2014.\n", "bleu_score": null, "meta": null } ] } ]
null
1oibf0
Was there still any undiscovered land left by the time the aviation age came about?
[ { "answer": "[There were islands discovered with satelite imagery](_URL_0_), but I don't think that's really what you meant. You're more talking about significant land masses.\n\nI think the most important would be Antarctica. The Coastlines had (generally) been mapped in piecemeal efforts, but [significant exploration of the inland area was done in modern times](_URL_1_).", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "42254458", "title": "Eagle Farm Women's Prison and Factory Site", "section": "Section::::Description.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 352, "text": "Virtually none of the old airport area exists as it did prior to European settlement. Only the foundations of the Eagle Farm Settlement survive, having been covered with fill in 1942. The Allison Engine Testing Stands and Second World War Hangar No. 7 from World War II also survived on the former airport site and are both separately heritage-listed.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22711248", "title": "Keenan Land", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 334, "text": "Starting in 1907 with the Anglo-American Polar Expedition, numerous unsuccessful attempts were made (by Vilhjalmur Stefansson and Roald Amundsen, among others) to relocate Keenan Land. Hubert Wilkins flew over the area in 1937 on his search for the missing Sigizmund Levanevsky and came to the conclusion that the land never existed.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1052696", "title": "Bardufoss Air Station", "section": "Section::::History.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 726, "text": "The first plane to land at the air station was a de Havilland Tiger Moth on 26 March 1938, making it the country's oldest air station still operational. During World War II, RAF Gloster Gladiators (No. 263 Squadron RAF) and Hawker Hurricanes (No. 46 Squadron RAF) operating from Bardufoss played a vital part in keeping the Luftwaffe at bay during the fighting on the Narvik front in the April–June 1940 Norwegian Campaign. After the allied withdrawal from Norway, the airbase was taken over by the Germans and mostly used as a base for fighters, bombers and reconnaissance planes operating against the Murmansk convoys. Fighters from Bardufoss also had the task of providing aerial support for naval operations in the area. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6052462", "title": "Ladd Army Airfield", "section": "Section::::History.:Origins.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 621, "text": "The U.S. government acquired homesteads southeast of the town of Fairbanks beginning in 1938. From this land, totalling about , was created Ladd Field. The first aircraft to land at Ladd was Douglas O-38F, \"33-324\", c/n 1177, in October 1940, which is now preserved in the National Museum of the United States Air Force. Major construction of facilities began in 1941 and 1942, after the U.S. entered World War II. The initial construction occurred several miles from Fairbanks along a bend of the Chena River, consisting of an airfield, hangars, housing and support buildings. 
Many of these buildings still stand today.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13414", "title": "Howland Island", "section": "Section::::History.:Japanese attacks during World War II.\n", "start_paragraph_id": 32, "start_character": 0, "end_paragraph_id": 32, "end_character": 727, "text": "All attempts at habitation were abandoned after 1944. Colonization projects on the other four islands, also disrupted by the war, were also abandoned. No aircraft is known to have landed on the island, though anchorages nearby were used by float planes and flying boats during World War II. For example, on July 10, 1944, a U.S. Navy Martin PBM-3-D Mariner flying boat (BuNo 48199), piloted by William Hines, had an engine fire and made a forced landing in the ocean off Howland. Hines beached the aircraft and, though it burned, the crew were unharmed, rescued by the (the same ship that later took the USCG's Construction Unit 211 and LORAN Unit 92 to Gardner Island), transferred to a sub chaser and taken to Canton Island.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34041971", "title": "RAF Kaldadarnes", "section": "Section::::Squadrons.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 355, "text": "After the cessation of hostilities of the Second World War the British Government handed the airfield over to the Icelandic Civil Aviation Authority and it was used for a short while until it was closed. It is now in ruins with the decaying runways, perimeter track, dispersals and site of some of the buildings still visible on satellite images in 2018.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "60608523", "title": "Headcorn Aerodrome", "section": "Section::::History.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 535, "text": "With the Americans having left, farming resumed in 1945 but this was not the end of the land's association with aircraft. In the late 1960s, the landowners started using part of the former wartime east-west runway site adjacent to the A274, for private flying. A grass airstrip was built aligned 10/28 with a grass parking area for light aircraft. This led to the formation of Weald Air Services Limited, a small charter company, and later a flying school was set up and the airfield became a busy centre for light flying in the area.\n", "bleu_score": null, "meta": null } ] } ]
null
qin59
Theoretically, how high must a building/structure be in order for it to be seen from all points on a given hemisphere?
[ { "answer": "45-deg away from an object on the earth (radius 6378.1 km) an object would have to be 2642 km high to be seen above the horizon.\n\nYour line of sight being tangent to the curvature, you form a right triangle between you, the top of the building, and the center of the earth. The hypotenuse of the triangle is the distance from the center of the earth to the top of the building.\n\nYou would never be able to see a building 90-deg around the earth. You line of sight would be parallel to the building.", "provenance": null }, { "answer": "The problem isn't just height. The typical human can't perceive a spatial frequency of above around 60 cycles per degree. Therefore, for one to be able to perceive a building from a great distance away, it must also be extremely wide, as well as tall.", "provenance": null }, { "answer": "We're forgetting that light refracts in our atmosphere. See [here](_URL_0_). So, is the building red? Or green? The refraction at great distances is not insignificant, though at work I don't have time for the maths... anyone else feel like working this out?\n\nEdit: Felt like expanding a little.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "19452789", "title": "List of tallest buildings in Ankara", "section": "Section::::Tall buildings of Ankara under construction.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 291, "text": "List of buildings under construction which are higher than 90 m, including spires and architectural details. Based on floorcounts and floorheights; buildings without official height (including spires and architectural details) are also included as they are estimated to be higher than 90 m.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "29561879", "title": "Building for Life", "section": "Section::::History.\n", "start_paragraph_id": 33, "start_character": 0, "end_paragraph_id": 33, "end_character": 268, "text": "BULLET::::- Scale: height \"Scale is the size of a building in relation to its surroundings,or the size of parts of a building or its details, particularly in relation to the size of a person. Height determines the impact of development on views, vistas and skylines.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "50918074", "title": "Tower of Gömeç", "section": "Section::::Description.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 645, "text": "The tower, which is almost completely intact, is 11.32 metres high. At ground level it has a floor area of 4.7 x 4.7 metres, which decreases to 4.2 x 4.2 metres at the top. It is built from stone blocks without mortar, using the pseudo-isodomum technique, in which the stones of each layer are the same size, but the size of the stones in different layers may differ. The height of the blocks varies between 80 and 30 centimetres. The door is located on the south side and measures 1.9 x 0.96 metres. Above it, on the exterior wall, there is a series of 13 beam holes, which were added some time after construction in order to support a porch. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11312931", "title": "List of tallest buildings in Sweden", "section": "Section::::Definition.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 316, "text": "BULLET::::1. Height to architectural top: This is the main criterion under which the CTBUH ranks the height of buildings. 
Heights are measured from the level of the lowest, significant, open-air, pedestrian entrance to the top of the building, inclusive of spires but excluding items such as flagpoles and antennae.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "203798", "title": "Council on Tall Buildings and Urban Habitat", "section": "Section::::Ranking tall buildings.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 316, "text": "BULLET::::1. Height to architectural top: This is the main criterion under which the CTBUH ranks the height of buildings. Heights are measured from the level of the lowest, significant, open-air, pedestrian entrance to the top of the building, inclusive of spires but excluding items such as flagpoles and antennae.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "275519", "title": "Matterhorn", "section": "Section::::Height.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 287, "text": "In 1999, the summit height was precisely determined to be at above sea level by using Global Positioning System technology as part of the TOWER Project (Top of the World Elevations Remeasurement) and to an accuracy of less than one centimetre, which allows future changes to be tracked.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26210425", "title": "Self-framing metal buildings", "section": "Section::::Dimensions.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 245, "text": "Building height: 2.5 m (8' +/-) to 7.5 m (24' +/-) is common. Height is primarily limited by the capability of the wall panel to support the wind load. Height may be limited in narrow buildings due to shear capacity limit in the gable endwalls.\n", "bleu_score": null, "meta": null } ] } ]
null
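The first answer to the question above sets up the right-triangle geometry but only quotes the 45-degree result. Below is a minimal sketch of that calculation, assuming the same Earth radius of 6378.1 km used in the answer; the function name and the sample angles are illustrative additions, not part of the original answer.

```python
import math

R_EARTH_KM = 6378.1  # Earth radius quoted in the answer above

def required_height_km(arc_angle_deg: float, radius_km: float = R_EARTH_KM) -> float:
    """Height a structure must reach to sit exactly on the horizon for an
    observer located arc_angle_deg of arc away along the Earth's surface.

    The observer's sight line is tangent to the sphere, so the observer, the
    top of the structure and the Earth's centre form a right triangle:
    cos(theta) = R / (R + h)  =>  h = R * (1 / cos(theta) - 1).
    """
    theta = math.radians(arc_angle_deg)
    return radius_km * (1.0 / math.cos(theta) - 1.0)

if __name__ == "__main__":
    for deg in (10, 45, 60, 80, 89):
        print(f"{deg:>2} deg of arc away: {required_height_km(deg):,.0f} km tall")
```

At 45 degrees this reproduces the roughly 2,642 km figure from the answer; as the angle approaches 90 degrees the required height diverges, which is the answer's point that no finite tower can be seen from every point on a hemisphere.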
ro2i8
How does electromagnetic interaction work?
[ { "answer": "Despite not having mass, photons do carry momentum, so the simplistic view of the photon \"bouncing\" off of the charge and transferring some momentum to it is not necessarily a bad way to picture it conceptually.\n\nEdit: Just so we're clear, I'm not saying this is the correct picture, but if you're trying to grasp how a photon can be a force carrier conceptually, this is the way I would think about it initially (until you get introduced to quantum field theory!).\n\nIf you're wondering where the energy comes from, it turns out there is energy stored in electromagnetic fields. The way this is usually introduced is to think about the work (read: energy) you need to put into a system in order to bring two like charges together.\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "9532", "title": "Electromagnetism", "section": "Section::::Fundamental forces.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 743, "text": "The electromagnetic force is responsible for practically all phenomena one encounters in daily life above the nuclear scale, with the exception of gravity. Roughly speaking, all the forces involved in interactions between atoms can be explained by the electromagnetic force acting between the electrically charged atomic nuclei and electrons of the atoms. Electromagnetic forces also explain how these particles carry momentum by their movement. This includes the forces we experience in \"pushing\" or \"pulling\" ordinary material objects, which result from the intermolecular forces that act between the individual molecules in our bodies and those in the objects. The electromagnetic force is also involved in all forms of chemical phenomena.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9532", "title": "Electromagnetism", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 700, "text": "Electromagnetic phenomena are defined in terms of the electromagnetic force, sometimes called the Lorentz force, which includes both electricity and magnetism as different manifestations of the same phenomenon. The electromagnetic force plays a major role in determining the internal properties of most objects encountered in daily life. The electromagnetic attraction between atomic nuclei and their orbital electrons holds atoms together. Electromagnetic forces are responsible for the chemical bonds between atoms which create molecules, and intermolecular forces. The electromagnetic force governs all chemical processes, which arise from interactions between the electrons of neighboring atoms.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9735", "title": "Electromagnetic field", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 436, "text": "An electromagnetic field (also EMF or EM field) is a physical field produced by moving electrically charged objects. It affects the behavior of non-comoving charged objects at any distance of the field. The electromagnetic field extends indefinitely throughout space and describes the electromagnetic interaction. 
It is one of the four fundamental forces of nature (the others are gravitation, weak interaction and strong interaction).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8323351", "title": "Fresnel rhomb", "section": "Section::::Operation.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 584, "text": "Incident electromagnetic waves (such as light) consist of transverse vibrations in the electric and magnetic fields; these are proportional to and at right angles to each other and may therefore be represented by (say) the electric field alone. When striking an interface, the electric field oscillations can be resolved into two perpendicular components, known as the \"s\" and \"p\" components, which are parallel to the \"surface\" and the \"plane\" of incidence, respectively; in other words, the \"s\" and \"p\" components are respectively \"square\" and \"parallel\" to the plane of incidence.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24167445", "title": "Quantum-mechanical explanation of intermolecular interactions", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 379, "text": "In the natural sciences, an intermolecular force is an attraction between two molecules or atoms. They occur from either momentary interactions between molecules (the London dispersion force) or permanent electrostatic attractions between dipoles. They can be explained using a simple phenomenological approach (see intermolecular force), or using a quantum mechanical approach.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11149", "title": "Fresnel equations", "section": "Section::::Power (intensity) reflection and transmission coefficients.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 561, "text": "The behavior of light striking the interface is solved by considering the electric and magnetic fields that constitute an electromagnetic wave, and the laws of electromagnetism, as shown below. The ratio of waves' electric field (or magnetic field) amplitudes are obtained, but in practice one is more often interested in formulae which determine \"power\" coefficients, since power (or irradiance) is what can be directly measured at optical frequencies. The power of a wave is generally proportional to the square of the electric (or magnetic) field amplitude.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7122953", "title": "Plasma contactor", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 387, "text": "An electrical contactor is an electrically controlled switch which closes a power or high voltage electrical circuit. A plasma contactor changes the electrically insulating vacuum into a conductor by providing movable electrons and positive gas ions. This conductive path closes a phantom loop circuit to discharge or neutralize the static electricity that can build up on a spacecraft.\n", "bleu_score": null, "meta": null } ] } ]
null
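The first answer to the electromagnetic-interaction question above leans on two standard results without writing them down: a massless photon still carries momentum, and energy is stored in the field (introduced via the work needed to bring two like charges together). As a supplement, the usual textbook forms of both statements are sketched below; these are general physics results, not something stated explicitly in the answer.

```latex
% Photon momentum (massless, yet p is nonzero):
p = \frac{E}{c} = \frac{h}{\lambda}

% Work done against the Coulomb force to bring two like charges
% from infinite separation to a distance r:
W = -\int_{\infty}^{r} \frac{1}{4\pi\varepsilon_{0}}\,\frac{q_{1} q_{2}}{r'^{2}}\,\mathrm{d}r'
  = \frac{1}{4\pi\varepsilon_{0}}\,\frac{q_{1} q_{2}}{r}
```

For like charges W is positive, which is the sense in which "there is energy stored in electromagnetic fields" in the answer above.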
2qfxv4
why do classical musicians and singers use sheet music / words, whereas other types of musicians / singers learn the music / words?
[ { "answer": "Refer to this:\n\n_URL_0_\n\nYou will also note that in comparison with \"other types\" of music, classical compositions tend to be more complex and long to be un-intuitive or difficult to memorize and retain reliably.", "provenance": null }, { "answer": "\"TNT\" by AC_DC is a couple minutes long and only has like the chords in it. I memorized it in 45 seconds. \n\"Folk Dances\" by Dimitri Shostakovich is like 15 minutes long with almost no repetition. ", "provenance": null }, { "answer": "I think it has to do with the repertoires being much different: A rock musician has his line-up of aprox. 12 to 50 songs, its the same genre, the techniques he/she knows and likes and the musician usually has learned and performed the same songs for months/years ergo he knows them by heart.\n\nWhereas the rehearsal periods of professional orchestras are much shorter (usually about two or three rehearsals before a concert and then they're off to a new piece), which doesn't really allow them to learn the pieces by heart. Also the repertoire of a classical musician has to be huge. He has to know or at least be able to primavista-play pieces from renaissance to classical to romantism to 12tone music and many more, there are thousands of composers and so much more works by them. One couldn't possibly expect to know all of these by heart and even if one would know some of them one wouldn't want to get lost in a score.\nOnly Soloists usually are \"required\" to know their pieces by heart and some conductors prefer to conduct without a score but these are exceptions...\n\nI can tell you from my own experience that eventhough I know a piece by heart. I want to have the score ready just in case. Maybe I need to write something down which isn't in the score itself or i just don't want to get lost.", "provenance": null }, { "answer": "A lot of singers and popular musicians can't read sheet music, that's why.", "provenance": null }, { "answer": "That and classical performers want to play the music as it was written. While performers that don't do sheet music can have more variance with reproducing the sound. \n\nEven with the sheet music it still may not be perfect. Which is why there's a conductor to tie them all together and regulate them from playing it too fast/loud/etc. ", "provenance": null }, { "answer": "Your average rock song may have 4 chords. Axis of Awesome makes a joke about it, but it's actually true. They just change the key of the songs so they all fit the exact same chords. I'll link it at the end of my explanation.\n\nAlso, a lot of rock performers WROTE their songs. It's a lot easier to memorise something you created.\n\nFinally, a rock song tends to have 2 verses, a chorus, and maybe a bridge. Depending on the genre of rock, possibly an instrumental solo as well. Those verses will have the same melody, but different lyrics. And lyrical memorisation is easy. Even people who aren't musicians can memorise lyrics. And, as mentioned before, 4 chords. That leaves the bridge. Not present in all songs, but usually a key and melody change. But that's only one short section. Overall, you're probably only learning about a minute worth of music, unless you're the soloist or singer.\n\nClassical Music on the other hand usually has very little repetition. I'm working towards my Diploma of Music currently, and many of my classical pieces have no repetition at all. 
I remember doing Fur Elise which has the recurring section that you always hear in movies and TV, but it's a 5 page long piece, with about 5-6 different sections, for a soloist.\n\nThat being said, I do know some of my pieces off by heart, but why would I risk it? Shit happens. I get a bit of stage fright, I get distracted and lose my place, I mess up a hand movement, hell, I just have a bit of a brain fart, and my entire performance falls to pieces.\n\nAnd, maybe just me, but I have a foyer show gig coming up. I have a repertoire of 50ish pieces from various genres. I can't memorise 50 pieces. I have a rock playlist I sing for, and that's about 8 songs. Super easy, considering I have to memorise 2 melodies and a set of lyrics (as mentioned before, easy).\n\nSource: Classical pianist, folk mandolinist, miscellaneous singer.\n\n[Axis of Awesome](_URL_0_) on 4 chords. Funny, and surprisingly relevant.\n\nSorry it's a bit ranty, but let me know if I can clarify it for you.", "provenance": null }, { "answer": "Pop/rock music is simple. Classical music is hard.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "95261", "title": "Sheet music", "section": "Section::::Purpose and use.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 697, "text": "Classical musicians playing orchestral works, chamber music, sonatas and singing choral works ordinarily have the sheet music in front of them on a music stand when performing (or held in front of them in a music folder, in the case of a choir), with the exception of solo instrumental performances of solo pieces, concertos, or solo vocal pieces (art song, opera arias, etc.), where memorization is expected. In jazz, which is mostly improvised, sheet music (called a \"lead sheet\" in this context) is used to give basic indications of melodies, chord changes, and arrangements. Even when a jazz band has a lead sheet, chord chart or arranged music, many elements of a performance are improvised.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "95261", "title": "Sheet music", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 408, "text": "Sheet music is the basic form in which Western classical music is notated so that it can be learned and performed by solo singers or instrumentalists or musical ensembles. Many forms of traditional and popular Western music are commonly learned by singers and musicians \"by ear\", rather than by using sheet music (although in many cases, traditional and pop music may also be available in sheet music form).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "95261", "title": "Sheet music", "section": "Section::::Purpose and use.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 560, "text": "Sheet music can be used as a record of, a guide to, or a means to perform, a song or piece of music. Sheet music enables instrumental performers who are able to read music notation (a pianist, orchestral instrument players, a jazz band, etc.) or singers to perform a song or piece. 
In classical music, authoritative musical information about a piece can be gained by studying the written sketches and early versions of compositions that the composer might have retained, as well as the final autograph score and personal markings on proofs and printed scores.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "95261", "title": "Sheet music", "section": "Section::::Purpose and use.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 891, "text": "Although sheet music is often thought of as being a platform for new music and an aid to composition (i.e., the composer \"writes\" the music down), it can also serve as a visual record of music that already exists. Scholars and others have made transcriptions to render Western and non-Western music in readable form for study, analysis and re-creative performance. This has been done not only with folk or traditional music (e.g., Bartók's volumes of Magyar and Romanian folk music), but also with sound recordings of improvisations by musicians (e.g., jazz piano) and performances that may only partially be based on notation. An exhaustive example of the latter in recent times is the collection \"The Beatles: Complete Scores\" (London: Wise Publications, 1993), which seeks to transcribe into staves and tablature all the songs as recorded by the Beatles in instrumental and vocal detail.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24782516", "title": "Computational musicology", "section": "Section::::Methods.:Sheet Music Data.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 622, "text": "Sheet music is meant to be read by the musician or performer. Generally, the term refers to the standardized nomenclature used by a culture to document their musical notation. In addition to music literacy, musical notation also demands choices from the performer. For example, the notation of Hindustani ragas will begin with an alap that does not demand a strict adherence to a beat or pulse, but is left up to the discretion of the performer. The sheet music notation captures the sequence of gestures the performer is encouraged to make within a musical culture, but is by no means fixed to those performance choices.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "95261", "title": "Sheet music", "section": "Section::::Types.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 758, "text": "Modern sheet music may come in different formats. If a piece is composed for just one instrument or voice (such as a piece for a solo instrument or for \"a cappella\" solo voice), the whole work may be written or printed as one piece of sheet music. If an instrumental piece is intended to be performed by more than one person, each performer will usually have a separate piece of sheet music, called a \"part\", to play from. This is especially the case in the publication of works requiring more than four or so performers, though invariably a \"full score\" is published as well. 
The sung parts in a vocal work are not usually issued separately today, although this was historically the case, especially before music printing made sheet music widely available.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "95261", "title": "Sheet music", "section": "Section::::Purpose and use.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 707, "text": "Handwritten or printed music is less important in other traditions of musical practice, however, such as traditional music and folk music, in which singers and instrumentalists typically learn songs \"by ear\" or from having a song or tune taught to them by another person. Although much popular music is published in notation of some sort, it is quite common for people to learn a song by ear. This is also the case in most forms of western folk music, where songs and dances are passed down by oral – and aural – tradition. Music of other cultures, both folk and classical, is often transmitted orally, though some non-Western cultures developed their own forms of musical notation and sheet music as well.\n", "bleu_score": null, "meta": null } ] } ]
null
3eq3zm
why do some subreddits have 20+ mods, even though some of them don't do anything?
[ { "answer": "Some may mod quietly, and never really comment, while others are more known because they constantly comment\n\nSome may have stopped using Reddit, but no one's removed them from the mod list yet\n\nSome may have been made a mod, just because of who the are/who they're friends with\n\nSome may only deal with a certain aspect of the subreddit, like designing the layout, and don't actively participate in the moderating of it\n\nThere's dozens of reason why someone could be a mod of a sub without looking like they're doing anything.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "48581597", "title": "Star Wars: Galaxy of Heroes", "section": "Section::::Gameplay.:Mods.\n", "start_paragraph_id": 50, "start_character": 0, "end_paragraph_id": 50, "end_character": 464, "text": "Mods (short for modifications) are an optional upgrade for characters within the game. Once the player's account reaches level 50, Mods become available to any of their characters that are level 50 or above. There are different categories of mods, each of which yields a different primary effect on the stats of the character that has equipped it. This effect allows players to increase statistical areas of their characters to yield better performance in battle.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "407326", "title": "Mod (video gaming)", "section": "Section::::Development.:Motivations of modders.\n", "start_paragraph_id": 33, "start_character": 0, "end_paragraph_id": 33, "end_character": 634, "text": "Mods can be both useful to players and a means of self-expression. Three motivations have been identified by Olli for fans to create mods: to patch the game, to express themselves, and to get a foot in the door of the video game industry. However, it is very rare for even popular modders to make this leap to the professional video game industry. Poor suggests becoming a professional is not a major motivation of modders, noting that they tend to have a strong sense of community, and that older modders, who may already have established careers, are less motivated by the possibility of becoming professional than younger modders.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "407326", "title": "Mod (video gaming)", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 208, "text": "Mods have arguably become an increasingly important factor in the commercial success of some games, as they add a depth to the original work, and can be both useful to players and a means of self-expression.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "49390570", "title": "Minecraft mods", "section": "Section::::Mod content.:Modpacks.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 592, "text": "Single-player mods are sometimes grouped together in so-called \"modpacks\", which can be easily downloaded and played by the end user without requiring the player to have extensive knowledge on how to set up the game. Content creators use that to their advantage in order to allow mods to interact so that a particular experience can be delivered, sometimes aided by throwing configuration files and custom textures into the mix. 
The most popular modpacks can be downloaded and installed through launchers, like the \"Twitch Desktop App\", \"Feed the Beast\", \"Technic Launcher\" and \"ATLauncher\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "247971", "title": "PhpBB", "section": "Section::::MODs.\n", "start_paragraph_id": 43, "start_character": 0, "end_paragraph_id": 43, "end_character": 1220, "text": "MODs are code modifications created by the phpBB community, often used to extend the functionality of or change the display of phpBB. The term is capitalised to distinguish code modifications from forum moderators, the latter of which is often abbreviated as \"mods\". Modifications referred to in this manner are not authored by the phpBB developers, and do not enjoy the same level of support as unmodified official code. The phpBB Extensions Team (formerly known as the phpBB MOD Team), headed by David Colón (known as DavidIQ in the community), accepts modifications from community sources for validation, and modifications which meet the Extensions Team's standards are made available for download from the phpBB \"Customisations Database\". Other sites also provide phpBB2 and phpBB3 modifications for download. Some of the sites have their own standards which they validate to, and other sites do not do any validation, however the phpBB teams do not offer support for boards using MODs downloaded from sites other than phpBB.com. Documentation for phpBB3 MODding is provided by the Extensions Team. MODs are not accepted for the 3.1.x line of phpBB since Extensions have taken their place from that version forward.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15357750", "title": "Notrium", "section": "Section::::Gameplay.:Mods.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 219, "text": "Mods can have a variety of different effects, such as adding new items and objectives, altering or creating new environments, or even adding a completely new character for play in the case of the popular \"Werivar\" mod.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "407326", "title": "Mod (video gaming)", "section": "Section::::Development.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 409, "text": "Many mods are not publicly released to the gaming community by their creators. Some are very limited and just include some gameplay changes or even a different loading screen, while others are total conversions and can modify content and gameplay extensively. A few mods become very popular and convert themselves into distinct games, with the rights getting bought and turning into an official modification.\n", "bleu_score": null, "meta": null } ] } ]
null
711thb
How is online gaming possible if there must be some delay?
[ { "answer": "There is latency, (or \"ping,\" or \"lag\"), usually measured in milliseconds. If the latency gets too high, some fast-paced games can become unplayable. Latency below ~100 isn't really noticeable, and this is easily achievable if players are within a few hundred miles of each other. There are also [techniques that attempt to compensate for latency](_URL_0_), and they have varying degrees of success at this.", "provenance": null }, { "answer": "Modern games tend to employ two different techniques simultaneously in order to compensate for lag, but players with lower latency will still have a small advantage.\n\n1. **Server state rewind**: You're playing an FPS at home and you pull the trigger. At that moment, the stuff you see on your screen is already out of date by 20-50ms. And the command to fire your gun doesn't reach the server for *another* 20-50ms. But the server knows *when* you fired the gun, so it just rewinds the game to that exact moment to find out what you hit.\n\n2. **Client-side prediction**: Certain actions that you perform in the game (firing, moving, jumping, whatever) don't actually execute until the server receives the command. But the game would feel terribly sluggish if your client were to wait for official confirmation. So your client *simulates* the command locally under the assumption that the server will allow the action. You pull the trigger, and your client immediately plays a gunshot sound and draws tracer rounds on the screen. You *feel* like it executed immediately... but other players in the game actually don't see your shot until 100ms later.\n\nThese techniques allow games to feel responsive and accurate, but they can still cause conflicts. Sometimes you pull the trigger, but you're already dead on the server (somebody shot you and you don't know it yet) so the action isn't actually performed. You feel like the game robbed you of a kill because that's what it looked like on your screen.", "provenance": null }, { "answer": "Well, you can play chess via snail mail, right? So it's possible to play some games even when latency is an issue, the question is just how you handle it.\n\nThere are a couple different core techniques, each of which has its pros and cons, mostly you are figuring out ways of dealing with the potential for inconsistency in perceived world state between different clients. A typical way to avoid that is to have one particular authoritative source for \"what happened\" in the game, such as a server. You can go a step further and make it so that the client merely renders the information the server provides. Plenty of games that don't require twitchy levels of responsiveness work exactly this way. To do anything a player has makes some control input which is then translated into a player action and that action is relayed as a transaction to the server, the server applies that transaction, adjusts the state of the world, and sends back data on the new state of the world to the client. On the plus side every player sees the same state all the time. On the down side the latency for anything to happen depends on the round-trip communication time to the server, plus processing time. This can be unsatisfying for games that are expected to be highly responsive.\n\nAn alternative is to allow the client to interpolate the state of the world given the information it has and then the server will reconcile that into an \"official\" state of the world and the client will adjust if there's a divergence. 
This can work out fine most of the time, but in some cases can lead to weird and unsatisfying problems where something you thought you did or something you thought happened turned out to be invalidated by the server's official history and ended up with a different result.\n\nSome very informative talks from GDC on the subject:\n\n* [Fighting Latency on Call of Duty Black Ops III](_URL_0_)\n* [I Shot You First: Networking the Gameplay of HALO: REACH](_URL_1_)\n\nOne interesting story from the 2nd video (Halo: Reach) on how grenades work (starting at 27 mins). Halo uses a mixture of client-side prediction, reconciliation, and server-only updates on state. When you throw a grenade you begin your animation for the throw client-side, while informing the server of the event, when you reach the end of the throw your client tells the server \"create a thrown grenade from x,y with trajectory z\" the client then deletes the grenade from the client and waits on the data from the server to show the server-authoritative grenade movement. The throw is client-side, the movement of the flying grenade is entirely server-side, and the lag for the hand-off right at the end of the grenade throw animation is usually not noticed by players. Because the precise positioning of a grenade can make a big difference to gameplay (e.g. whether other players live or die) having all of the movements of the live grenade be run by the server means you don't end up with situations of players appearing to die then being resurrected as lag gets reconciled. This is a good example of how latency and how it's handled is dependent on game details, psychology, etc.", "provenance": null }, { "answer": "It is also important to note that generally, human reaction time is on the order of about 200ms. Most games within the same continental region will be within 100ms of latency round trip (i.e. RTT or round trip time), so it won't usually appear juttery to you. Combine that with client side prediction, and you get smooth gameplay.\n\nYou will usually see professional gamers and organizations attempt to settle down as close to the regional server as possible though, so as to drop average RTT down into the 40ms or below range. Championship games may be played on pure LAN setups (i.e. no internet, purely local), and thus have even less latency. This can add to some interesting scenarios, since what is humanly possible on LAN may be different than on the web (i.e. suppose there is a wind up time to an ability, 50ms of difference can be an eternity in that sense).\n\nOn the note of client side prediction, the feel of a game can differ depending on how often your client is actually exchanging data with your server. This varies from game to game, with casual games exchanging maybe 20 times per second, while the standard for ESEA grade counter strike is 128 times per second. Your ping and RTT can stay the same, but if you have a higher exchange rate, this usually leads to an overall smoother experience, and creates less scenarios where you think you've done something, but the server disagrees. e.g., for a 20 tickrate, there is a delay of 1/20s (50ms) between packets, whereas for 128 tickrate, there is a delay of 1/128s (~8ms) between packets. 
This is a latency set by the server/client netcode, and is added on to the actual network lag.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "20646089", "title": "Lag", "section": "Section::::Solutions and lag compensation.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 563, "text": "There are various methods for reducing or disguising delays, though many of these have their drawbacks and may not be applicable in all cases. If synchronization is not possible by the game itself, the clients may be able to choose to play on servers in geographical proximity to themselves in order to reduce latencies, or the servers may simply opt to drop clients with high latencies in order to avoid having to deal with the resulting problems. However, these are hardly optimal solutions. Instead, games will often be designed with lag compensation in mind.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12070411", "title": "Lockstep protocol", "section": "Section::::Drawbacks.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 299, "text": "As all players must wait for all commitments to arrive before sending their actions, the game progresses as slowly as the player with the highest latency. Although this may not be noticeable in a turn-based game, real-time online games, such as first person shooters, require much faster reactions.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23654", "title": "Play-by-mail game", "section": "Section::::Play-by-web.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 211, "text": "Some sites have extended this gaming style by allowing the players to see each other's actions as they are made. This allows for real time playing while everyone is online and active, or slower progress if not.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17933", "title": "Latency (engineering)", "section": "Section::::Communication latency.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 412, "text": "Online games are sensitive to latency (or \"lag\"), since fast response times to new events occurring during a game session are rewarded while slow response times may carry penalties. Due to a delay in transmission of game events, a player with a high latency internet connection may show slow responses in spite of appropriate reaction time. This gives players with low latency connections a technical advantage.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6082373", "title": "Castle Crashers", "section": "Section::::Reception.:Technical issues.\n", "start_paragraph_id": 25, "start_character": 0, "end_paragraph_id": 25, "end_character": 1266, "text": "Some users experienced problems finding available online games, as well as their Xbox 360 sometimes freezing when attempting to join an Xbox Live game, or while already in a game. \"There are certain network settings,\" said Paladin, \"where, if you're in a very specific network environment, it won't work with another person's connection and that's what's happening. But that's something we're already addressing by working with Microsoft to get a patch out as fast as possible.\" In addition to multiplayer problems, the game could also occasionally suffer from corrupted save files, causing players to lose character progress. 
In an interview with Joystiq, Tom Fulp and Dan Paladin of the Behemoth stated that they were working with Microsoft to get a patch released as soon as possible in order to fix the issues. A patch for the game was released on December 24, 2008 fixing glitches and exploits as well as resolving networking issues that were experienced at the game's launch. Similar networking problems have also been reported for the PlayStation 3 version of the game. The PlayStation 3 version of the game only allows one profile to be signed in per console, with additional players being unable to use their own progress rather than of the profile in use.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "563616", "title": "Ubisoft", "section": "Section::::Controversies.:2010s.\n", "start_paragraph_id": 42, "start_character": 0, "end_paragraph_id": 42, "end_character": 740, "text": "In January 2010, Ubisoft announced the online services platform Uplay, which requires customers to authenticate on the first game launch and to remain online continually while playing, with the game pausing if network connection is lost. This system prevent to play games offline, to resell them and in the case should Ubisoft's servers go down, games would be unplayable. In 2010, review versions of \"Assassin's Creed II\" and \"Settlers 7\" for the PC contained this new DRM scheme and instead of pausing the game, it would discard all progress since the last checkpoint or save game. However, subsequent patches for \"Assassin's Creed II\" allowed players to continue playing once their connection has been restored without loss of progress.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "35965610", "title": "PlayStation 2 online functionality", "section": "Section::::Games.:LAN tunneling.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 232, "text": "Over time, most game servers have been shut down. However, computer programs such as XBSlink, SVDL and XLink Kai allow users to achieve online play for some PS2 games by using a network configuration that simulates a worldwide LAN.\n", "bleu_score": null, "meta": null } ] } ]
null
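The answers to the online-gaming question above describe "server state rewind" and "client-side prediction" in prose. The toy sketch below shows only the client-side prediction and reconciliation half of that idea, using a 1-D movement model; the class names, tick length and speed are illustrative assumptions and are not taken from any particular engine's netcode.

```python
from dataclasses import dataclass, field

TICK = 1.0 / 60.0   # illustrative client tick length in seconds (assumption)
SPEED = 5.0         # illustrative movement speed in units per second (assumption)

@dataclass
class PendingInput:
    seq: int
    move: float  # -1.0, 0.0 or +1.0

@dataclass
class PredictedClient:
    position: float = 0.0
    next_seq: int = 0
    pending: list = field(default_factory=list)

    def apply(self, move: float) -> None:
        self.position += move * SPEED * TICK

    def local_input(self, move: float) -> PendingInput:
        """Predict the result immediately and queue the input for the server."""
        cmd = PendingInput(self.next_seq, move)
        self.next_seq += 1
        self.apply(move)           # prediction: don't wait for the round trip
        self.pending.append(cmd)   # remembered until the server acknowledges it
        return cmd                 # a real client would send this over the network

    def server_update(self, acked_seq: int, server_position: float) -> None:
        """Snap to the authoritative state, then replay unacknowledged inputs."""
        self.position = server_position
        self.pending = [p for p in self.pending if p.seq > acked_seq]
        for p in self.pending:
            self.apply(p.move)

client = PredictedClient()
for _ in range(6):
    client.local_input(+1.0)      # six locally predicted steps forward
# A slightly stale authoritative update arrives: the server has processed inputs 0-2.
client.server_update(acked_seq=2, server_position=3 * SPEED * TICK)
print(round(client.position, 3))  # 0.5 -> authoritative 3 steps plus 3 replayed steps
```

Keeping the pending list is exactly the reconciliation step described in the answers: when the authoritative but stale server state arrives, the client snaps to it and replays the inputs the server has not yet seen, so play feels immediate even though confirmation arrives tens of milliseconds later.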
zpv5a
how do i begin to invest my money in penny stocks?
[ { "answer": "Penny stocks are anything but a sure bet, especially if you're trying to day trade and therefore eating brokerage fees against your small amount of principal. If you need a small amount of steady income, you're much better off working your ass off to find a job, living off whatever money you planned to put into penny stocks.", "provenance": null }, { "answer": "\"Penny stocks\" is a term used to describe any sort of company that trades below a certain amount (usually < $5). People will try to buy them thinking how easy it could be for the stock to go up (\"If the price goes form $0.20 to $0.40 I double my money!\"), however investing in them is *highly* risky because in order to sell them you need to find someone else willing to buy them (and there is not always someone willing to do so). Trading in penny stocks is sometimes considered on par with gambling. Further if you are looking to make a steady income with them it may be hard, because most places will charge you a fee ($5-$10) every time you want to make a trade, so trading often and in small amounts may hurt you in the long run.", "provenance": null }, { "answer": "$75 a week is a lot. Penny stocks don't really offer a higher return on investment because of the high risk involved, as well as transaction fees. Say you average 10% returns a year, which is pretty damn good. 52 weeks of $75 income is $3900. So you would need starting capital of ~$39,000 to be in a situation where you could get $75 a week.\n\nTo put this in perspective, Warren Buffett is one of the greatest living investors. During the period of 2000-2010 he produced 76% returns on his stock investments. If you made 10% a year every year on your investments, you would have 1.1^10 = 2.59 or 159% returns. This is assuming you have 0 transactional costs.\n\nMoral of the story: it's not as easy to make money as you think. You can't invest 100 bucks in penny stocks and expect to make 75 a week. If you could do that everyone would do it and nobody would work. Penny stocks are highly risky; that's why their price is so low. For every stock you double your money on there are 99 where you lost everything. You're actually better taking your $100 and investing it in regular stocks, which have lower top returns but are much less likely to bomb out on you.\n\nI hope you have luck in your job search. It's been tough for everyone out there lately but I'm sure we'll all pull through.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "19372783", "title": "Stock", "section": "Section::::Trading.:Buying.\n", "start_paragraph_id": 52, "start_character": 0, "end_paragraph_id": 52, "end_character": 771, "text": "When it comes to financing a purchase of stocks there are two ways: purchasing stock with money that is currently in the buyer's ownership, or by buying stock on margin. Buying stock on margin means buying stock with money borrowed against the value of stocks in the same account. These stocks, or collateral, guarantee that the buyer can repay the loan; otherwise, the stockbroker has the right to sell the stock (collateral) to repay the borrowed money. He can sell if the share price drops below the margin requirement, at least 50% of the value of the stocks in the account. Buying on margin works the same way as borrowing money to buy a car or a house, using a car or house as collateral. 
Moreover, borrowing is not free; the broker usually charges 8–10% interest.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19372783", "title": "Stock", "section": "Section::::Trading.:Buying.\n", "start_paragraph_id": 49, "start_character": 0, "end_paragraph_id": 49, "end_character": 304, "text": "There are various methods of buying and financing stocks, the most common being through a stockbroker. Brokerage firms, whether they are a full-service or discount broker, arrange the transfer of stock from a seller to a buyer. Most trades are actually done through brokers listed with a stock exchange.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "44513732", "title": "Stockspot", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 323, "text": "Stockspot is an online investment adviser and fund manager based in Sydney, Australia. It is the first fully paperless digital investment advice platform in Australia and provides consumers with access to professional investment services for less than the typical cost of a traditional financial adviser or wealth manager.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13374208", "title": "Performance fee", "section": "Section::::Worked example.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 422, "text": "An investor subscribes for shares worth $1,000,000 in a hedge fund. Over the next year the NAV of the fund increases by 10%, making the investor's shares worth $1,100,000. Of the $100,000 increase, 20% (i.e. $20,000) will be paid to the investment manager, thereby reducing the NAV of the fund by that amount and leaving the investor with shares worth $1,080,000, giving a return of 8% before deduction of any other fees.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "41915", "title": "Primary market", "section": "Section::::Concept.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 502, "text": "In a primary market, companies, governments or public sector institutions can raise funds through bond issues and corporations can raise capital through the sale of new stock through an initial public offering (IPO). This is often done through an investment bank or finance syndicate of securities dealers. The process of selling new shares to investors is called underwriting. Dealers earn a commission that is built into the price of the security offering, though it can be found in the prospectus. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1321361", "title": "Emerging market", "section": "Section::::Terminology.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 508, "text": "Individual investors can invest in emerging markets by buying into emerging markets or global funds. If they want to pick single stocks or make their own bets they can do it either through ADRs (American depositor Receipts - stocks of foreign companies that trade on US stock exchanges) or through exchange traded funds (exchange traded funds or ETFs hold basket of stocks). 
The exchange traded funds can be focused on a particular country (e.g., China, India) or region (e.g., Asia-Pacific, Latin America).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "27713887", "title": "Commodity index fund", "section": "Section::::Funds that track indexes.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 379, "text": "You cannot invest in an index, but you can invest in a fund. A Commodity Index Fund is a fund which either buys and sells futures to replicate the performance of the index, or sometimes enters into swaps with investment banks who themselves then trade the futures. The biggest and best known such fund is the Pimco Real Return Strategy Fund. There are many other funds, such as:\n", "bleu_score": null, "meta": null } ] } ]
null
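The third answer in the record above works through some quick compounding arithmetic ($75 a week of desired income, an assumed 10% annual return, ten years of growth). As a minimal sketch of that arithmetic only — the 10% return figure is the answer's own assumption, and this is an illustration, not investment guidance:

```python
# Sketch of the compounding arithmetic quoted in the answer above.
weekly_target = 75.0        # desired weekly income (from the answer)
annual_return = 0.10        # assumed average annual return (from the answer)

annual_income_needed = weekly_target * 52               # $3,900 per year
principal_needed = annual_income_needed / annual_return

print(f"Principal needed at {annual_return:.0%}: ${principal_needed:,.0f}")  # ~$39,000

# Ten years of compounding at 10%, ignoring trading fees and taxes.
growth_factor = (1 + annual_return) ** 10
print(f"10 years at 10%: x{growth_factor:.2f} (~{growth_factor - 1:.0%} total return)")  # ~x2.59, ~159%
```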
2qq28l
why do men's jean sizes have inseam and waist dimensions while women's jeans just have numbers (i.e. 4, 5, 6 vs. 32x34)?
[ { "answer": "Some people are going to hate me for this answer, but at some level many women don't like objective measures of reality and store vendors don't want it. A man for the most part regards the circumference of his waist as a statement of fact even if he isn't particularly pleased with the number. For whatever reason women take this stuff a lot more personally, so instead of a real measurement some phony number is used.\n\nThe problem is that nobody really knows what those numbers mean, so you get a size 4 that is quite different from one vendor to another. It doesn't take long to see that some clothing vendor will make a larger version of a size 4 to flatter someone and make the sale.\n\nI'm baffled by it, but that is the closest explanation I can come up with for it. I know some women who find the whole matter insulting, as they should. An entire industry thinks that women can't choose clothing based on objective measurements.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "21490868", "title": "Female body shape", "section": "Section::::Measurements.\n", "start_paragraph_id": 25, "start_character": 0, "end_paragraph_id": 25, "end_character": 282, "text": "The waist is typically smaller than the bust and hips, unless there is a high proportion of body fat distributed around it. How much the bust or hips inflect inward, towards the waist, determines a woman's structural shape. The hourglass shape is present in only about 8% of women.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11831909", "title": "Body shape", "section": "Section::::Terminology.\n", "start_paragraph_id": 42, "start_character": 0, "end_paragraph_id": 42, "end_character": 289, "text": "BULLET::::- Hourglass shape: The female body is significantly narrower in the waist both in front view and profile view. The waist is narrower than the chest region due to the breasts, and narrower than the hip region due to the width of the buttocks, which results in an hourglass shape.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4058023", "title": "Vanity sizing", "section": "Section::::Men's clothing.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 948, "text": "Although more common in women's apparel, vanity sizing occurs in men's clothing as well. For example, men's pants are traditionally marked with two numbers, \"waist\" (waist circumference) and \"inseam\" (distance from the crotch to the hem of the pant). While the nominal inseam is fairly accurate, the nominal waist may be quite a bit smaller than the actual waist, in US sizes. In 2010, Abram Sauer of \"Esquire\" measured several pairs of dress pants with a nominal waist size of 36 at different US retailers and found that actual measurements ranged from 37 to 41 inches. The phenomenon has also been noticed in the United Kingdom, where a 2011 study found misleading labels on more than half of checked items of clothing. In that study, worst offenders understated waist circumferences by 1.5 to 2 inches. London-based market analyst Mintel say that the number of men reporting varying waistlines from store to store doubled between 2005 and 2011.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5258267", "title": "U.S. standard clothing size", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 294, "text": "U.S. standard clothing sizes for women were originally developed from statistical data in the 1940s and 1950s. 
At that time, they were similar in concept to the EN 13402 European clothing size standard, although individual manufacturers have always deviated from them, sometimes significantly.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2054287", "title": "Dress shirt", "section": "Section::::Fit.\n", "start_paragraph_id": 53, "start_character": 0, "end_paragraph_id": 53, "end_character": 1298, "text": "In the US, ready-to-wear sizes of dress shirts traditionally consist of two numbers such as \"15½ 34\", meaning that the shirt has a neck in girth (measured from centre of top button to centre of corresponding buttonhole) and a sleeve long (measured from midpoint of the back and shoulders to the wrist). However, to reduce the number of sizes needed to be manufactured and stocked, an average sleeve length is sometimes given in the form \"15½ 34/35\" (indicating a neck in girth and a sleeve). Since the cuff frequently features two buttons, the cuff diameter can be reduced so that the cuff does not come down over the hand, allowing the shirt to fit the shorter length. Since the sleeve and neck size do not take into account waist size, some shirts are cut wide to accommodate large belly sizes. Shirts cut for flat stomachs are usually labeled, \"fitted\", \"tailored fit\" \"athletic fit\" or \"trim fit\". The terms for fuller cut shirts are more varied (\"Traditional\", \"Regular\" \"etc.\") and are sometimes explained on a shirt maker's website. Additionally, \"Portly\" or \"Big\" are often used for neck sizes of or more. Very casual button-front shirts are often sized as small, medium, large, and so on. The meaning of these \"ad-hoc\" sizes is similarly not standardized and varies between manufacturers.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1597736", "title": "EN 13402", "section": "Section::::Background.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 233, "text": "BULLET::::- For many types of garments, size cannot be adequately described by a single number because a good fit requires a match between two (or sometimes three) independent body dimensions. This is a common issue in sizing jeans.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "145560", "title": "Corset", "section": "Section::::Waist reduction.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 1264, "text": "By wearing a tightly-laced corset for extended periods, known as tightlacing or waist training, men and women can learn to tolerate extreme waist constriction and eventually reduce their natural waist size. Although petite women are often able to get down to a smaller waist in absolute numbers, women with more fat are typically able to reduce their waists by a larger percentage. Although many different sizes were used, the smallest sizes that were popularly used were 16, 17 and 18 inches. Some women were so tightly laced that they could breathe only with the top part of their lungs. This caused the bottom part of their lungs to fill with mucus . Symptoms of this include a slight but persistent cough, as well as heavy breathing, causing a heaving appearance of the bosom. Until 1998, the Guinness Book of World Records listed Ethel Granger as having the smallest waist on record at . After 1998, the category changed to \"smallest waist on a living person\". Cathie Jung took the title with a waist measuring . Other women, such as Polaire, also have achieved such reductions ( in her case). 
However, these are extreme cases. Corsets were and are still usually designed for support, with freedom of body movement an important consideration in their design.\n", "bleu_score": null, "meta": null } ] } ]
null
fn9x93
If quarks are supposed to come in quark-antiquark pairs, how is it there are only 3 quarks, and no antiquarks, in protons and neutrons?
[ { "answer": "Quarks don't have to exist in pairs of quarks and antiquarks. Any QCD bound state must have zero net color charge. There are many ways that you can combine quarks and antiquarks such that their color charges sum to zero.\n\nThe main possibilities are a quark and an antiquark (mesons), three quarks (baryons), or three antiquarks (antibaryons). Then there are other, more exotic exotic possibilities, like four quarks and one antiquark (pentaquarks), etc.", "provenance": null }, { "answer": "\"Come in pairs\" refers to the production of them: If you make a new quark then you always make a new antiquark as well. But nothing stops quarks from existing without antiquarks around. We can also produce new protons and antiprotons, for example: Three quarks go that way, three antiquarks go that way.\n\nIt is an open question why we have more quarks than antiquarks - or generally more matter than antimatter - in the universe.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "257243", "title": "Pentaquark", "section": "Section::::Structure.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 656, "text": "The quarks are bound together by the strong force, which acts in such a way as to cancel the colour charges within the particle. In a meson, this means a quark is partnered with an antiquark with an opposite colour charge – blue and antiblue, for example – while in a baryon, the three quarks have between them all three colour charges – red, blue, and green. In a pentaquark, the colours also need to cancel out, and the only feasible combination is to have one quark with one colour (e.g. red), one quark with a second colour (e.g. green), two quarks with the third colour (e.g. blue), and one antiquark to counteract the surplus colour (e.g. antiblue).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1928465", "title": "Flavour (particle physics)", "section": "Section::::Flavour quantum numbers.:Quarks.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 313, "text": "All quarks carry a baryon number . They also all carry weak isospin, . The positive- quarks (up, charm, and top quarks) are called \"up-type quarks\" and negative- quarks (down, strange, and bottom quarks) are called \"down-type quarks\". Each doublet of up and down type quarks constitutes one generation of quarks.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25179", "title": "Quark", "section": "Section::::Classification.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 618, "text": "Quarks are spin- particles, implying that they are fermions according to the spin–statistics theorem. They are subject to the Pauli exclusion principle, which states that no two identical fermions can simultaneously occupy the same quantum state. This is in contrast to bosons (particles with integer spin), of which any number can be in the same state. Unlike leptons, quarks possess color charge, which causes them to engage in the strong interaction. 
The resulting attraction between different quarks causes the formation of composite particles known as \"hadrons\" (see \"Strong interaction and color charge\" below).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4584", "title": "Baryon", "section": "Section::::Properties.:Spin, orbital angular momentum, and total angular momentum.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 786, "text": "Quarks are fermionic particles of spin (\"S\" = ). Because spin projections vary in increments of 1 (that is 1 ħ), a single quark has a spin vector of length , and has two spin projections (\"S\" = + and \"S\" = −). Two quarks can have their spins aligned, in which case the two spin vectors add to make a vector of length \"S\" = 1 and three spin projections (\"S\" = +1, \"S\" = 0, and \"S\" = −1). If two quarks have unaligned spins, the spin vectors add up to make a vector of length \"S\" = 0 and has only one spin projection (\"S\" = 0), etc. Since baryons are made of three quarks, their spin vectors can add to make a vector of length \"S\" = , which has four spin projections (\"S\" = +, \"S\" = +, \"S\" = −, and \"S\" = −), or a vector of length \"S\" =  with two spin projections (\"S\" = +, and \"S\" = −).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "67227", "title": "A Brief History of Time", "section": "Section::::Summary.:Chapter 5: Elementary Particles and Forces of Nature.\n", "start_paragraph_id": 36, "start_character": 0, "end_paragraph_id": 36, "end_character": 515, "text": "Quarks are very small things that make up everything we see (matter). There are six different \"flavors\" of quarks: up, down, strange, charm, bottom, and top. Quarks also have three \"colors\": red, green, and blue. There are also antiquarks, which are the opposite of the regular quarks. In total, there are 18 different types of regular quarks, and 18 different types of antiquarks. Quarks are known as the \"building blocks of matter\" because they are the smallest thing that make up all the matter in the Universe.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "385334", "title": "List of particles", "section": "Section::::Elementary particles.:Fermions.:Quarks.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 711, "text": "Quarks are the fundamental constituents of hadrons and interact via the strong interaction. Quarks are the only known carriers of fractional charge, but because they combine in groups of three (baryons) or in pairs of one quark and one antiquark (mesons), only integer charge is observed in nature. Their respective antiparticles are the antiquarks, which are identical except that they carry the opposite electric charge (for example the up quark carries charge +, while the up antiquark carries charge −), color charge, and baryon number. There are six flavors of quarks; the three positively charged quarks are called \"up-type quarks\" while the three negatively charged quarks are called \"down-type quarks\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19870", "title": "Meson", "section": "Section::::Overview.:Spin, orbital angular momentum, and total angular momentum.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 716, "text": "Quarks are fermions—specifically in this case, particles having spin (\"S\" = ). 
Because spin projections vary in increments of 1 (that is 1 \"ħ\"), a single quark has a spin vector of length , and has two spin projections (\"S\" = + and \"S\" = −). Two quarks can have their spins aligned, in which case the two spin vectors add to make a vector of length \"S\" = 1 and three spin projections (\"S\" = +1, \"S\" = 0, and \"S\" = −1), called the spin-1 triplet. If two quarks have unaligned spins, the spin vectors add up to make a vector of length S = 0 and only one spin projection (\"S\" = 0), called the spin-0 singlet. Because mesons are made of one quark and one antiquark, they can be found in triplet and singlet spin states.\n", "bleu_score": null, "meta": null } ] } ]
null
3q84bc
How does light reflect in every direction if it is a single particle?
[ { "answer": "For large systems like light bouncing off the moon, it is important to keep in mind that there are an unimaginably large number of light particles (called photons) bouncing off the moon. \n\nEach one is going in only one direction, but because there are so many, there are always going to be a lot that are going right towards your eye, allowing you to see them. \n\n\nThe answer to your other question is a bit more subtle. If only one photon bounces off an object, then it can only be detected at one location, to there is no multiplication going on, but before it is detected, it does indeed spread out, and can interfere with itself or other photons over a large volume. \n\nFor more on that, try looking up Young's double slit experiment. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1474467", "title": "Compton wavelength", "section": "Section::::Limitation on measurement.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 481, "text": "To see how, note that we can measure the position of a particle by bouncing light off it – but measuring the position accurately requires light of short wavelength. Light with a short wavelength consists of photons of high energy. If the energy of these photons exceeds , when one hits the particle whose position is being measured the collision may yield enough energy to create a new particle of the same type. This renders moot the question of the original particle's location.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18406", "title": "Luminiferous aether", "section": "Section::::The history of light and aether.:Particles vs. waves.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 722, "text": "Isaac Newton contended that light is made up of numerous small particles. This can explain such features as light's ability to travel in straight lines and reflect off surfaces. Newton imagined that light particles as non-spherical \"corpuscles\", with different \"sides\" that give rise to birefringence. But the particle theory of light can not satisfactorily explain refraction and diffraction. To explain refraction, Newton's \"Opticks\" (1704) postulated an \"Aethereal Medium\" transmitting vibrations faster than light, by which light, when overtaken, is put into \"Fits of easy Reflexion and easy Transmission\", which caused refraction and diffraction. Newton believed that these vibrations were related to heat radiation:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1641247", "title": "Anti-reflective coating", "section": "Section::::Theory.:Reflection.\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 618, "text": "Whenever a ray of light moves from one medium to another (for example, when light enters a sheet of glass after travelling through air), some portion of the light is reflected from the surface (known as the \"interface\") between the two media. This can be observed when looking through a window, for instance, where a (weak) reflection from the front and back surfaces of the window glass can be seen. The strength of the reflection depends on the ratio of the refractive indices of the two media, as well as the angle of the surface to the beam of light. 
The exact value can be calculated using the Fresnel equations.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "521267", "title": "Reflection (physics)", "section": "Section::::Reflection of light.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 727, "text": "In fact, reflection of light may occur whenever light travels from a medium of a given refractive index into a medium with a different refractive index. In the most general case, a certain fraction of the light is reflected from the interface, and the remainder is refracted. Solving Maxwell's equations for a light ray striking a boundary allows the derivation of the Fresnel equations, which can be used to predict how much of the light is reflected, and how much is refracted in a given situation. This is analogous to the way impedance mismatch in an electric circuit causes reflection of signals. Total internal reflection of light from a denser medium occurs if the angle of incidence is greater than the critical angle.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6170575", "title": "Dynamic light scattering", "section": "Section::::Description.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 819, "text": "When light hits small particles, the light scatters in all directions (Rayleigh scattering) as long as the particles are small compared to the wavelength (below 250 nm). Even if the light source is a laser, and thus is monochromatic and coherent, the scattering intensity fluctuates over time. This fluctuation is due to small molecules in solutions undergoing Brownian motion, and so the distance between the scatterers in the solution is constantly changing with time. This scattered light then undergoes either constructive or destructive interference by the surrounding particles, and within this intensity fluctuation, information is contained about the time scale of movement of the scatterers. Sample preparation either by filtration or centrifugation is critical to remove dust and artifacts from the solution.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "521267", "title": "Reflection (physics)", "section": "Section::::Reflection of light.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 403, "text": "Reflection of light is either \"specular\" (mirror-like) or \"diffuse\" (retaining the energy, but losing the image) depending on the nature of the interface. In specular reflection the phase of the reflected waves depends on the choice of the origin of coordinates, but the relative phase between s and p (TE and TM) polarizations is fixed by the properties of the media and of the interface between them.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "175607", "title": "Photon mapping", "section": "Section::::Effects.:Caustics.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 392, "text": "Light refracted or reflected causes patterns called caustics, usually visible as concentrated patches of light on nearby surfaces. For example, as light rays pass through a wine glass sitting on a table, they are refracted and patterns of light are visible on the table. Photon mapping can trace the paths of individual photons to model where these concentrated patches of light will appear.\n", "bleu_score": null, "meta": null } ] } ]
null
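The provenance excerpts in the record above note that the fraction of light reflected at a boundary between two media can be calculated with the Fresnel equations. As a rough illustration of the simplest (normal-incidence) case only — the refractive index of 1.5 for window glass is an assumed typical value, not something stated in the answers:

```python
# Normal-incidence Fresnel reflectance: R = ((n1 - n2) / (n1 + n2)) ** 2
# Illustrates the "weak reflection from a window" described in the provenance text.

def reflectance_normal_incidence(n1: float, n2: float) -> float:
    """Fraction of incident light power reflected at the interface of two media."""
    return ((n1 - n2) / (n1 + n2)) ** 2

n_air, n_glass = 1.0, 1.5   # 1.5 is an assumed typical value for window glass
R = reflectance_normal_incidence(n_air, n_glass)
print(f"Reflected at each air-glass surface: {R:.1%}")   # ~4% per surface
```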
8l0y6d
what would happen if a blob of water is introduced into vacuum without gravity?
[ { "answer": "It would boil, whether there is gravity or not, in a vacuum. You could get a spherical shape if you had it in a pressurized environment e.g. on the ISS. This is also not \"without gravity\" but often what people think of when they say a zero grav environment, since it is in free fall (you and the water could float).", "provenance": null }, { "answer": "[This picture](_URL_0_) is of liquid nitrogen in vacuum, but I believe water would behave similarly:\n\n* Because of the vacuum, water would immediately start boiling.\n\n* Boiling sucks heat energy out of the water in order to use it for the phase change from liquid to steam. Temperature of the remaining liquid-water would drop rapidly.\n\n* At the same time, the steam that forms inside the liquid as it boils would push outward, because there is no pressure to keep it contained. So even if the liquid water would have assumed a spherical shape due to its surface tension, the boiling basically blows the sphere apart.\n\nSo in the end, it's possible that what you get is that weird formation of ice shown in the picture, with lots of holes in it. Or you could just get an exploding and expanding ball of small ice chunks and very cold steam. I haven't done the calculations, but whether any ice remains stuck together is a matter of how fast the temperature drops vs. how fast the water boils to steam. So basically it depends on HOW the water is \"introduced into vacuum\" - sudden vacuum, or \"gradually pumped-out air\" vacuum.\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "32347", "title": "Vacuole", "section": "Section::::Plants.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 981, "text": "Aside from storage, the main role of the central vacuole is to maintain turgor pressure against the cell wall. Proteins found in the tonoplast (aquaporins) control the flow of water into and out of the vacuole through active transport, pumping potassium (K) ions into and out of the vacuolar interior. Due to osmosis, water will diffuse into the vacuole, placing pressure on the cell wall. If water loss leads to a significant decline in turgor pressure, the cell will plasmolyze. Turgor pressure exerted by vacuoles is also required for cellular elongation: as the cell wall is partially degraded by the action of expansins, the less rigid wall is expanded by the pressure coming from within the vacuole. Turgor pressure exerted by the vacuole is also essential in supporting plants in an upright position. Another function of a central vacuole is that it pushes all contents of the cell's cytoplasm against the cellular membrane, and thus keeps the chloroplasts closer to light.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19444126", "title": "Pseudofeces", "section": "Section::::Process.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 382, "text": "Pseudofeces accumulate with, and look much like, the actual feces in the bottom of the mantle cavity. 
The unwanted material is periodically ejected (usually through the inhalant siphon or aperture) by contractions of the adductor muscles, which \"clap\" the shells together, pushing most of the water out of the mantle cavity and forcibly ejecting both the feces and the pseudofeces.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "43144", "title": "Echiura", "section": "Section::::Behaviour.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 864, "text": "\"Urechis\", another tube-dweller, has a different method of feeding on detritus. It has a short proboscis and a ring of mucus glands at the front of its body. It expands its muscular body wall to deposit a ring of mucus on the burrow wall then retreats backwards, exuding mucus as it goes and spinning a mucus net. It then draws water through the burrow by peristaltic contractions and food particles stick to the net. When this is sufficiently clogged up, the spoon worm moves forward along its burrow devouring the net and the trapped particles. This process is then repeated and in a detritus-rich area may take only a few minutes to complete. Large particles are squeezed out of the net and eaten by other invertebrates living commensally in the burrow. These typically include a small crab, a scale worm and often a fish lurking just inside the back entrance.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "871210", "title": "Utricularia", "section": "Section::::Carnivory.:Lloyd's experiments.\n", "start_paragraph_id": 38, "start_character": 0, "end_paragraph_id": 38, "end_character": 346, "text": "He tested the role of the velum by showing that the trap will never set if small cuts are made to it; and showed that the excretion of water can be continued under all conditions likely to be found in the natural environment, but can be prevented by driving the osmotic pressure in the trap beyond normal limits by the introduction of glycerine.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "29260379", "title": "Jig concentrators", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 1041, "text": "The particles would usually be of a similar size, often crushed and screened prior to being fed over the jig bed. There are many variations in design; however the basic principles are constant: The particles are introduced to the jig bed (usually a screen) where they are thrust upward by a pulsing water column or body, resulting in the particles being suspended within the water. As the pulse dissipates, the water level returns to its lower starting position and the particles once again settle on the jig bed. As the particles are exposed to gravitational energy whilst in suspension within the water, those with a higher specific gravity (density) settle faster than those with a lower count, resulting in a concentration of material with higher density at the bottom, on the jig bed. The particles are now concentrated according to density and can be extracted from the jig bed separately. In the mining of most heavy minerals, the denser material would be the desired mineral and the rest would be discarded as floats (or tailings). 
\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10121045", "title": "Pore space in soil", "section": "Section::::Pore types.:Macropore.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 297, "text": "The pores that are too large to have any significant capillary force. Unless impeded, water will drain from these pores, and they are generally air-filled at field capacity. Macropores can be caused by cracking, division of peds and aggregates, as well as plant roots, and zoological exploration.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3371178", "title": "Heron's fountain", "section": "Section::::Motion.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 267, "text": "The gravitational potential energy of the water which falls a long way from the basin into the lower container is transferred by pneumatic pressure tube (only air is moved upwards at this stage) to push the water from the upper container a short way above the basin.\n", "bleu_score": null, "meta": null } ] } ]
null
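The answers in the record above turn on the fact that liquid water boils once the surrounding pressure falls below its equilibrium vapor pressure. As an illustrative sketch only, assuming the standard Antoine-equation coefficients for water (roughly valid between 1 and 100 °C; these constants are not part of the original answers):

```python
# Antoine equation: log10(P_mmHg) = A - B / (C + T_celsius)
# Assumed standard coefficients for water, roughly valid for 1-100 degC.
A, B, C = 8.07131, 1730.63, 233.426

def vapor_pressure_torr(t_celsius: float) -> float:
    """Approximate equilibrium vapor pressure of water in torr (mmHg)."""
    return 10 ** (A - B / (C + t_celsius))

for t in (0, 20, 37, 100):
    print(f"{t:>3} degC: ~{vapor_pressure_torr(t):7.1f} torr")

# Around room temperature the vapor pressure is ~18-24 torr, so against a vacuum
# (essentially 0 torr) the liquid is far above its boiling point: it flashes to
# vapor, and the evaporation chills what remains, as the second answer describes.
```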
3mdwjr
if they have cameras/sensors that clearly show if a baseball is a strike/foul/ball or even if a runner is safe/out on base, why does baseball still use umpires?
[ { "answer": "Some neutral party needs to make the judgment call, so an umpire is necessary to some extent. Video can assist with this, however, games would take too long if every call was made with reference to video. It's simply easier and faster to have umpires make most of the calls.\n\nMany sports do make use of video for refereeing, but have compromised between total accuracy and speed. For example, in field hockey, each team is allotted a maximum number of times that they may appeal an umpire's decision and request a review of the video. This limits frivolous use and keeps the game going.", "provenance": null }, { "answer": "The answer is that baseball is still a sport that is played for entertainment and people would rather there be stuff like umpires instead of some super robotic system of lasers or something. \n\nThere is a ton of ways you could make sports more efficient but that wasn't the point. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "5888500", "title": "Warning track", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 432, "text": "Despite the warning track's presence, it is common to see outfielders crash into the wall to make a catch, due to a desire to field the play regardless of the outcome and/or because they fail to register the warning in time (as the track is on the ground, an outfielder pursuing a fly ball in the air will be looking in the opposite direction and thus the warning track would be out of the outfielder's line of sight in any event).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3850", "title": "Baseball", "section": "Section::::Personnel.:Other.\n", "start_paragraph_id": 33, "start_character": 0, "end_paragraph_id": 33, "end_character": 523, "text": "Any baseball game involves one or more umpires, who make rulings on the outcome of each play. At a minimum, one umpire will stand behind the catcher, to have a good view of the strike zone, and call balls and strikes. Additional umpires may be stationed near the other bases, thus making it easier to judge plays such as attempted force outs and tag outs. In MLB, four umpires are used for each game, one near each base. In the playoffs, six umpires are used: one at each base and two in the outfield along the foul lines.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6075168", "title": "Safe (baseball)", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 528, "text": "By the rules, a runner is safe when he is entitled to the base he is trying for. Umpires will signal that a runner is safe by extending their elbows to their sides and then extending their arms fully to the side. For emphasis, an umpire may fully cross and extend his arms several times to indicate safe. Verbally, the umpire will usually simply say \"safe\". If a close play occurs that may have appeared to be a putout, the umpire will also call a reason for the safe call, such as \"he dropped the ball\" or \"he missed the tag\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "569822", "title": "Infield fly rule", "section": "Section::::Additional details.:Umpires' signals.\n", "start_paragraph_id": 32, "start_character": 0, "end_paragraph_id": 32, "end_character": 252, "text": "As the infield fly rule is a special case, umpires signal one another at the start of an at-bat to remind one another that the game situation puts the rule into effect. 
A typical signal is to touch the brim of the cap so as to show the number of outs.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "968099", "title": "Baseball field", "section": "Section::::Warning track.\n", "start_paragraph_id": 48, "start_character": 0, "end_paragraph_id": 48, "end_character": 377, "text": "The warning track is the strip of dirt at the edges of the baseball field (especially in front of the home run fence and along the left and right sides of a field). Because the warning track's color and feel differ from the grass field, a fielder can remain focused on a fly ball near the fence and measure his proximity to the fence while attempting to catch the ball safely.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "784324", "title": "Count (baseball)", "section": "Section::::About.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 659, "text": "Counts with two strikes (except 3-2) are considered pitchers' counts. An 0-2 count is very favorable to a pitcher. In such a count, the pitcher has the freedom to throw one (or sometimes two) pitches out of the strike zone intentionally, in an attempt to get the batter to chase the pitch (swing at it), and strike out. Arguing as to whether a pitch was a ball or a strike (which is a judgment call by the umpire) is strictly prohibited by Major League Baseball rules. Such an infringement, known as \"arguing balls and strikes,\" will quickly lead to a warning from the umpire, and the player or manager may be ejected from the game if they continue to argue.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2176922", "title": "Ejection (sports)", "section": "Section::::Conditions.:Baseball.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 1050, "text": "In baseball, each umpire has a considerable amount of discretion, and may eject any player, coach, or manager solely on his own judgment of unsportsmanlike conduct. The ejectable offense may be an excessively heated or offensive argument with an umpire, offensive interference (contact with the catcher on a play at the plate), malicious game play (especially pitchers attempting to intentionally strike batters with the ball or a manager or coach ordering a pitcher to do so), illegally applying a foreign substance to a bat or otherwise tampering with a ball (most famously, George Brett's Pine Tar Game), using a corked bat, charging the mound, or otherwise fighting. Between players and umpires, there is a common understanding that a certain level of argument is permitted, but players who too vigorously question an umpire's judgment of balls and strikes, argue a balk and/or use foul language may risk an ejection. A player is also automatically ejected when a bat, glove, cap or helmet is taken off and thrown down in anger and/or confusion.\n", "bleu_score": null, "meta": null } ] } ]
null
1c4d83
Why is the tongue the fastest healing organ in the body?
[ { "answer": "is the tongue the fastest healing organ? I always thought it was the cornea...", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "437751", "title": "Tongue piercing", "section": "Section::::Procedure.:Piercing.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 435, "text": "Because of the tongue's exceptional healing ability, piercings can close very fast. Even completely healed holes can close up in a matter of hours, and larger-stretched holes can close in just a few days. The length of time for the hole to heal varies greatly from person to person – some people with larger-stretched holes (greater than 4 g (5 mm)) can still fit jewelry (albeit smaller) in their piercing after months or even years.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "767067", "title": "Beckwith–Wiedemann syndrome", "section": "Section::::Management.\n", "start_paragraph_id": 51, "start_character": 0, "end_paragraph_id": 51, "end_character": 294, "text": "The best time to perform surgery for a large tongue is not known. Some surgeons recommend performing the surgery between 3 and 6 months of age. Surgery for macroglossia involves removing a small part of the tongue so that it fits within the mouth to allow for proper jaw and tooth development.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18722789", "title": "Wound licking", "section": "Section::::Mechanism.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 1322, "text": "Oral mucosa heals faster than skin, suggesting that saliva may have properties that aid wound healing. Saliva contains cell-derived tissue factor, and many compounds that are antibacterial or promote healing. Salivary tissue factor, associated with microvesicles shed from cells in the mouth, promotes wound healing through the extrinsic blood coagulation cascade. The enzymes lysozyme and peroxidase, defensins, cystatins and an antibody, IgA, are all antibacterial. Thrombospondin and some other components are antiviral. A protease inhibitor, secretory leukocyte protease inhibitor, is present in saliva and is both antibacterial and antiviral, and a promoter of wound healing. Nitrates that are naturally found in saliva break down into nitric oxide on contact with skin, which will inhibit bacterial growth. Saliva contains growth factors such as epidermal growth factor, VEGF, TGF-β1, leptin, IGF-I, lysophosphatidic acid, hyaluronan and NGF, which all promote healing, although levels of EGF and NGF in humans are much lower than those in rats. In humans, histatins may play a larger role. As well as being growth factors, IGF-I and TGF-α induce antimicrobial peptides. Saliva also contains an analgesic, opiorphin. Licking will also tend to debride the wound and remove gross contamination from the affected area.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "251007", "title": "Tongue splitting", "section": "Section::::Process.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 233, "text": "The tongue generally heals in 1–2 weeks, during which time the person may have difficulty with speech or their normal dietary habits. 
Splitting is reversible but the reversal is even more painful than the tongue splitting procedure.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "411512", "title": "Cymatics", "section": "Section::::In healing.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 258, "text": "It has been speculated by some researchers that application of ultrasound cause wounds to heal faster. Other than select articles on the subject of low-amplitude high-frequency sound in bone fracture healing, there is no medical evidence of this phenomenon.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3487364", "title": "Tongue frenulum piercing", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 729, "text": "Aftercare for tongue frenulum piercings can be more complicated than most other piercings. The healing piercing will come into contact with anything that enters the mouth, including food and smoke, which can cause irritation. Frenulum piercings generally heal faster than other body piercings, though; a healing time of two to eight weeks can be expected. Many certified piercers suggest after care guidelines such as refraining from oral sex and smoke, and regular rinsing with saline or de-iodized salt water. Many professionals recommend rinsing with a 50/50 mixture of mouthwash and distilled water or a pH balanced, non-alcoholic mouthwash such as Dentyl pH after eating, drinking, or smoking, or simply rinsing every hour.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "245973", "title": "Mouth ulcer", "section": "Section::::Diagnostic approach.\n", "start_paragraph_id": 54, "start_character": 0, "end_paragraph_id": 54, "end_character": 676, "text": "Diagnosis of mouth ulcers usually consists of a medical history followed by an oral examination as well as examination of any other involved area. The following details may be pertinent: The duration that the lesion has been present, the location, the number of ulcers, the size, the color and whether it is hard to touch, bleeds or has a rolled edge. As a general rule, a mouth ulcer that does not heal within 2 or 3 weeks should be examined by a health care professional who is able to rule out oral cancer (e.g. a dentist, oral physician, oral surgeon, or maxillofacial surgeon). If there have been previous ulcers which have healed, then this again makes cancer unlikely.\n", "bleu_score": null, "meta": null } ] } ]
null
54yn05
sellers listing items for dramatically under retail on amazon
[ { "answer": "Either they shut down their accounts and keep the money, or they're harvesting personal info", "provenance": null }, { "answer": "Not everyone bothers to ask for their $3.50 back.\n\n... it was about that time I realized that the Amazon seller was about 8 stories tall and a crustacean from the protozoic era.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "56394373", "title": "Pricesearcher", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 593, "text": "Pricesearcher is an independent e-commerce search engine launched in the UK in 2016 which helps shoppers find the best prices for products online. It does not use the traditional Price Comparison Website (PCW) model adopted by comparison sites such as Moneysupermarket.com and search engines such as Google Shopping where retailers pay to list their products for sale. Since its inception, it has listed products from online retailers, without charging a listing fee or commission for sales. Product search results are consequently unaffected by a retailer's marketing budget or lack thereof.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15945818", "title": "Rakuten.com", "section": "Section::::History.:Buy.com (1997–2010).\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 620, "text": "In 2002, Buy.com went beyond selling solely electronics, movies and music, adding more soft goods to their catalog, such as sports equipment, apparel, shoes, health and beauty products. It was at this time that Blum placed a full-page ad in \"The Wall Street Journal\" promising Amazon.com customers that Buy.com would prove to be the better buying option. This statement came shortly after Buy.com announced a 10% below Amazon.com cost on all books sold on the site and free shipping site-wide, with no minimum purchase required. At the time, Amazon had 25 million customers, approximately five times as many as Buy.com.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34526863", "title": "Showrooming", "section": "Section::::Efforts to combat showrooming by retailers.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 252, "text": "Best Buy has guaranteed to match the online price of goods listed on Amazon.com, and in April 2013 announced it would begin to lease out space to manufacturers such as Samsung, so customers can view working products and then purchase them at the MSRP.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34414719", "title": "DealDash", "section": "Section::::Criticism.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 360, "text": "According to \"Consumer Reports\", the \"buy it now\" prices can be significantly higher than the same products on Amazon.com. Unsuccessful bidders not using the option lose the value of the bids placed. 
A company spokesperson says DealDash generates significant business from bidders who choose to buy items after losing, with hundreds of orders processed daily.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "90451", "title": "Amazon (company)", "section": "Section::::Finances.\n", "start_paragraph_id": 120, "start_character": 0, "end_paragraph_id": 120, "end_character": 380, "text": "Amazon.com is primarily a retail site with a sales revenue model; Amazon takes a small percentage of the sale price of each item that is sold through its website while also allowing companies to advertise their products by paying to be listed as featured products. , Amazon.com is ranked 8th on the Fortune 500 rankings of the largest United States corporations by total revenue.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "16364575", "title": "Rare Book Hub", "section": "Section::::Books For Sale.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 769, "text": "In 2006, the company added an online book listing service known as “Books For Sale.” Booksellers enter their books in the Books For Sale database, which can be searched on the site or through major search engines, including Google Product Search. Similar used book listings are available through websites such as Amazon.com, AbeBooks, Biblio.com, and Alibris. However, differing from those sites, sales are not conducted on the Americana Exchange site. Buyers are sent directly to the listing bookseller via an email form. Listing booksellers pay a fee of $425 annually to list their books and gain access to other services such as the book database. Since the sales are completed by the bookseller rather than on the Rare Book Hub website, no commissions are charged.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "130495", "title": "EBay", "section": "Section::::Use for data analysis.:Bidding.:Auction-style listings.\n", "start_paragraph_id": 103, "start_character": 0, "end_paragraph_id": 103, "end_character": 291, "text": "BULLET::::- eBay also allows sellers to offer a \"Buy it Now\" price that will end the auction immediately. The Buy It Now price is available until someone bids on the item, or until the reserve price is met. When the Buy It Now option disappears, the auction-style listing proceeds normally.\n", "bleu_score": null, "meta": null } ] } ]
null
6oovsv
Is it pure coincidence that the rotation rates of Mars and Earth are both 24 hours (-4 & +39 min)?
[ { "answer": "It's just a coincidence, with a large enough set of known planets you could say if it was an uncommon speed. But as you point out, their days are only roughly 24 hours, it's your choice of unit and rounding that creates the illusion if a pattern. ", "provenance": null }, { "answer": "I'd just like to point out that adding 40 min to every day is a bigger deal (for humans) than it might seem. \n\nNASA employees who were on \"Mars time\" to work with the rovers had a multitude of issues. Basically like worsening jet lag. \n\nPerhaps living on Mars and syncing up the day/night cycle might help, but it appears that our brains are wired for 24hour days. \n\nI do know that submariners who work 6hrs on and 12 hours off in an 18hr \"day\" don't suffer similar effects, even when they lose track of the days and go months w/o seeing the sun. Apparently, just like jet lag, it's easier on the body to subtract hours than add them. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "57226400", "title": "Sol (day on Mars)", "section": "Section::::Length.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 256, "text": "The sidereal rotational period of Mars—its rotation compared to the fixed stars—is only 24 hours, 37 minutes and 22.66 seconds. The solar day lasts slightly longer because of its orbit around the sun which requires it to turn slightly further on its axis.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "29486469", "title": "Phase curve (astronomy)", "section": "Section::::Mars.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 510, "text": "Only about 50° of the martian phase curve can be observed from Earth because it orbits farther from the Sun than our planet. There is an opposition surge but it is less pronounced than that of Mercury. The rotation of bright and dark surface markings across its disk and variability of its atmospheric state (including its dust storms) superimpose variations on the phase curve. R. Schmude obtained many of the Mars brightness measurements used in a comprehensive phase curve analysis performed by A. Mallama.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1784060", "title": "Astronomy on Mars", "section": "Section::::Astronomical phenomena.:Long-term variations.\n", "start_paragraph_id": 52, "start_character": 0, "end_paragraph_id": 52, "end_character": 448, "text": "As on Earth, the period of rotation of Mars (the length of its day) is slowing down. However, this effect is three orders of magnitude smaller than on Earth because the gravitational effect of Phobos is negligible and the effect is mainly due to the Sun. On Earth, the gravitational influence of the Moon has a much greater effect. Eventually, in the far future, the length of a day on Earth will equal and then exceed the length of a day on Mars.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1744360", "title": "Colonization of Mars", "section": "Section::::Relative similarity to Earth.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 221, "text": "BULLET::::- Mars has an axial tilt of 25.19°, similar to Earth's 23.44°. 
As a result, Mars has seasons much like Earth, though on average they last nearly twice as long because the Martian year is about 1.88 Earth years.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14640471", "title": "Mars", "section": "Section::::Orbit and rotation.\n", "start_paragraph_id": 70, "start_character": 0, "end_paragraph_id": 70, "end_character": 347, "text": "The axial tilt of Mars is 25.19 degrees relative to its orbital plane, which is similar to the axial tilt of Earth. As a result, Mars has seasons like Earth, though on Mars they are nearly twice as long because its orbital period is that much longer. In the present day epoch, the orientation of the north pole of Mars is close to the star Deneb.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1784060", "title": "Astronomy on Mars", "section": "Section::::Astronomical phenomena.:Long-term variations.\n", "start_paragraph_id": 50, "start_character": 0, "end_paragraph_id": 50, "end_character": 254, "text": "As on Earth, there is a second form of precession: the point of perihelion in Mars's orbit changes slowly, causing the anomalistic year to differ from the sidereal year. However, on Mars, this cycle is 83,600 years rather than 112,000 years as on Earth.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "735594", "title": "67 Asia", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 246, "text": "Photometry from the Oakley Observatory during 2006 produced a lightcurve that indicated a sidereal rotation period of with an amplitude of in magnitude. It has a 2:1 commensurability with Mars, having an orbital period double that of the planet.\n", "bleu_score": null, "meta": null } ] } ]
null
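Taking the numbers at face value — Earth's sidereal day of roughly 23 h 56 m 4 s (a standard value, not quoted in this record) and Mars's sidereal rotation period of 24 h 37 m 22.66 s from the "Sol (day on Mars)" excerpt above — a quick sketch makes the first answer's point concrete: the two periods differ by a few percent and only look identical because both round to about 24 hours.

```python
# Compare the two rotation periods mentioned in the question and provenance.

def to_seconds(hours: int, minutes: int, seconds: float) -> float:
    return hours * 3600 + minutes * 60 + seconds

earth_sidereal = to_seconds(23, 56, 4.1)    # standard value, not from this record
mars_sidereal = to_seconds(24, 37, 22.66)   # from the "Sol (day on Mars)" excerpt

longer_by = (mars_sidereal - earth_sidereal) / earth_sidereal
print(f"Mars's rotation period is about {longer_by:.1%} longer than Earth's")  # ~2.9%
```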
909n96
What factors led to cultures in Mesopotamia to transition from nomadic to sedentary living?
[ { "answer": "Before the advent of agriculture most human movements can be roughly explained in terms of climate and resources. The story of our transition from nomadic to sedentary life is no exception, and in fact begins around the end of the last ice age. \n\nThe cold and dry climate of the Pleistocene Ice Age made resources relatively scarce. With more of the world’s water locked away in glaciers, the river systems that later supported agriculture were not yet the abundant, fertile places we know them to be. Without the pre existing knowledge of agriculture, there simply wasn’t a place bountiful enough in animal and plant life for nomads to settle long-term. They could basically stay in a place until it’s resources were exhausted or their food moved. \n\nBy 11,000 the climate was far more conducive to the proliferation of flora and fauna, especially near the equator. The Natufians of the Levant began to experiment with more permanent settlements. I say experiment because they were still a hunter-gatherer society, but the greater availability of resources allowed them to forage and hunt from central, stationary communities. However, for about 1,000 years beginning in 10,800 Earth temporarily endured a cold swing with cold climates reminiscent of the Pleistocene Ice Age. Global glaciation brought on droughts and the Nartufians, who may have been using wild cereals to make bread as early as 12,000, could no longer depend on an abundance of wild crops and animals to sustain them. To defray the loss of natural resources, they began collecting seeds and clearing out scrub land to plant them in.\n\nThis question is hotly debated within the scientific community and the above is answer is somewhat controversial depending on how you define sedentary living. It’s not entirely accurate to say that the Nartufians were the first domestic farmers, as simply planting and harvesting does not qualify as “agriculture” in the sense that we think of it in Mesopotamia or the Nile Valley. However, they took an important step towards developing a sustainable model for sedentary living.\n\nTo put it more concisely, sedentary communities largely resulted from human response to local climate change. Sedentary life became sustainable when these communities began to experiment with plant domestication, and ultimately became the norm when animal domestication and technological advances gave way to large scale agriculture", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "11094201", "title": "Housetrucker", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 433, "text": "The notion of living a nomadic lifestyle in mobile collectives and following the seasons is older than civilization itself. Such examples of early tribes like Native Americans wandered across the nation, periodically moving location to maximise the advantages to climate and the environment. 
Throughout old Europe, the Middle East and Asia are found traditional Gypsies whose lifestyle is similar to that of the modern housetrucker.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "470212", "title": "Arameans", "section": "Section::::History.:Origins.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 820, "text": "Nomadic pastoralists have long played a prominent role in the history and economy of the Middle East, but their numbers seem to vary according to climatic conditions and the force of neighbouring states inducing permanent settlement. The period of the Late Bronze Age seems to have coincided with increasing aridity, which weakened neighbouring states and induced transhumance pastoralists to spend longer and longer periods with their flocks. Urban settlements (hitherto largely Amorite, Canaanite, Hittite, Ugarite inhabited) in The Levant diminished in size, until eventually fully nomadic pastoralist lifestyles came to dominate much of the region. These highly mobile, competitive tribesmen with their sudden raids continually threatened long-distance trade and interfered with the collection of taxes and tribute.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "53275137", "title": "Steppe Route", "section": "Section::::History.:Upper paleolithic.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 1441, "text": "The dominant position occupied by nomadic communities in the ecological niche is a result of their nomadic military and technical superiority and is thought to have originated in the North Caucasian steppes as early as the 8th century BCE. The post glacial period was marked by a gradual rise of temperatures which peaked in the 5th and 4th millennia. These more hospitable conditions provided humans with grasslands and more stable food supplies and resulted in a sharp increase in their numbers. The regular collection of wild cereals led to the empirical breeding and selection of cereals (wheat, barley) that could be cultivated. It also led to the domestication of animals (donkeys, asses, horses, sheep, goats predecessors) to stockbreeding. Although the quality and quantity of artifacts varied from site to site, the general impression is that the development of craftsmanship contributed to more stable settlements and a more precise definition of routes connecting certain communities with each other. The Aurignacian culture spread through Siberia, and as a testimony of its presence, an Aurignacian venus was found near Irkutsk, on the Upper Angara river. Traces of the Magdalenian culture were also identified in Manchouria, Siberia and Hopei. Pastoralism introduced a qualitative leap in social development and prepared the necessary base for the creation of ancient semi nomadic civilizations along the Eurasian Steppe Route.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1655855", "title": "Complex society", "section": "Section::::Factors.:Agricultural Development.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 812, "text": "The transition from agrarian, nomadic individuals to industrial and sedentary habits emerged out of the improvements made in agricultural and central food planning. Early sedentary societies have been argued to emerge as early as 1600 BCE along southern Mexico, as there is a correlation between domesticated plant production, sedentism and pottery artifacts. 
The establishment of a nomadic society entails an emergence of social relations, affecting the patterns and roles each person is tasked with as means for survival. Farmers often found ways to expand agricultural posts by planting on hills and slopes, finding ways to work around environmental and land challenges. Similarly, developments in agriculture enabled societies to focus on central organization, planning and the development of urban centers.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "673123", "title": "Archaeology of Israel", "section": "Section::::Archaeological time periods.:Iron Age/Israelite period.:Origins of the Ancient Israelites – the Tel Aviv School.\n", "start_paragraph_id": 51, "start_character": 0, "end_paragraph_id": 51, "end_character": 906, "text": "According to Israel Finkelstein, this tendency of nomads to settle down, or of sedentary populations to become nomadic, when circumstances make it worth their while, is typical of many Mid-Eastern populations which retain the knowledge of both ways of life and can switch between them fairly easily. This happens on a small scale, but can also happen on a large scale, when regional political and economical circumstances change dramatically. According to Finkelstein, this process of settlement on a large scale in the mountain-ranges of Canaan had already happened twice before, in the Bronze Age, during periods when the urban civilization was in decline. The numbers of settlers were smaller in those previous two instances, and the settlement-systems they created ended up dissipating instead of coalescing into more mature political entities, as was the case with the settlers of the early Iron Age.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2185", "title": "Arabs", "section": "Section::::Culture.:Spirituality.\n", "start_paragraph_id": 137, "start_character": 0, "end_paragraph_id": 137, "end_character": 683, "text": "The religious beliefs and practices of the nomadic bedouin were distinct from those of the settled tribes of towns such as Mecca. Nomadic religious belief systems and practices are believed to have included fetishism, totemism and veneration of the dead but were connected principally with immediate concerns and problems and did not consider larger philosophical questions such as the afterlife. Settled urban Arabs, on the other hand, are thought to have believed in a more complex pantheon of deities. While the Meccans and the other settled inhabitants of the Hejaz worshipped their gods at permanent shrines in towns and oases, the bedouin practised their religion on the move.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15388", "title": "Religion in pre-Islamic Arabia", "section": "Section::::Worship.:Deities.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 682, "text": "The religious beliefs and practices of the nomadic Bedouin were distinct from those of the settled tribes of towns such as Mecca. Nomadic religious belief systems and practices are believed to have included fetishism, totemism and veneration of the dead but were connected principally with immediate concerns and problems and did not consider larger philosophical questions such as the afterlife. Settled urban Arabs, on the other hand, are thought to have believed in a more complex pantheon of deities. 
While the Meccans and the other settled inhabitants of the Hejaz worshiped their gods at permanent shrines in towns and oases, the Bedouin practiced their religion on the move.\n", "bleu_score": null, "meta": null } ] } ]
null
2auct1
when i choose the "skip" option on certain files in a torrent, why do some of them sometimes still download, often even 100% of the file?
[ { "answer": "The content of a torrent is split into blocks of a fixed size. That's the smallest unit your client can reliably download. Each block can contain one or more files (or parts of them).\n\nWhen you choose to skip a file, but the block containing this file also contains another file that you didn't choose to skip, the whole block has to be downloaded regardless.\n\nMany groups intentionally use this effect to make it impossible to skip their annoying \"downloaded_from_foo!.nfo\" files by making that the first file in the torrent.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "239098", "title": "BitTorrent", "section": "Section::::Description.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 685, "text": "Due to the nature of this approach, the download of any file can be halted at any time and be resumed at a later date, without the loss of previously downloaded information, which in turn makes BitTorrent particularly useful in the transfer of larger files. This also enables the client to seek out readily available pieces and download them immediately, rather than halting the download and waiting for the next (and possibly unavailable) piece in line, which typically reduces the overall time of the download. This eventual transition from peers to seeders determines the overall \"health\" of the file (as determined by the number of times a file is available in its complete form).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "305567", "title": "Royalty payment", "section": "Section::::Music.:Performance.:In digital distribution.:UK legislation.\n", "start_paragraph_id": 158, "start_character": 0, "end_paragraph_id": 158, "end_character": 289, "text": "A Limited Download is similar to a permanent download but differs from it in that the consumer's use of the copy is in some way restricted by associated technology; for instance, becomes unusable when the subscription ends (say, through an encoding, such as DRM, of the downloaded music).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "271525", "title": "Soulseek", "section": "Section::::Key features.:Album downloads.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 546, "text": "While Soulseek, like other P2P clients, allows a user to download individual files from another by selecting each one from a list of search results, a Download Containing Folder option simplifies the downloading of entire albums. For example, a user who wishes to facilitate the distribution of an entire album may place all tracks relating to the album together in a folder on the host PC, and the entire contents of that folder (i.e. all the album's track files) can then be downloaded automatically one after the other using this one command.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24767575", "title": "Torrent file", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 547, "text": "In a nutshell, a torrent file is like an index, which facilitates the efficient lookup of information (but doesn't contain the information itself) and the address of available worldwide computers which upload the content. Torrent files themselves and the method of using torrent files have been created to ease the load on servers. With help of torrents, one can download files from other computers which have the file or even a fraction of the file. 
These \"peers\" allow downloading of the file in addition to, or in place of, the primary server.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1871104", "title": "Sticky bit", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 478, "text": "When a directory's sticky bit is set, the filesystem treats the files in such directories in a special way so only the file's owner, the directory's owner, or root user can rename or delete the file. Without the sticky bit set, any user with write and execute permissions for the directory can rename or delete contained files, regardless of the file's owner. Typically this is set on the codice_1 directory to prevent ordinary users from deleting or moving other users' files.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15205", "title": "Insertion sort", "section": "Section::::Variants.\n", "start_paragraph_id": 40, "start_character": 0, "end_paragraph_id": 40, "end_character": 226, "text": "If a skip list is used, the insertion time is brought down to O(log \"n\"), and swaps are not needed because the skip list is implemented on a linked list structure. The final running time for insertion would be O(\"n\" log \"n\").\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17932591", "title": "Torrentz", "section": "Section::::Usage.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 356, "text": "Selecting a torrent from the search results list would take the user to another page listing the websites currently hosting the specified torrent (with which users would download files). As Torrentz used meta-search engines, users would be redirected to other torrent sites to download content (commonly KickassTorrents, which was considered safe to use).\n", "bleu_score": null, "meta": null } ] } ]
null
1zebps
Did the American entry into the First World War have a significant impact on the eventual outcome?
[ { "answer": "Yes, absolutely - though not in the way you might think!\n\nThe US came into the war just as Russia was knocked out of it and the Germany high command realised that with the massive manpower and manufacturing base of the US, it was only a matter of time before the war was lost. It was therefore imperative to try and win the war before the Americans arrived en masse - there would be over 300,000 US soldiers in France by May 1918 and a further million by August.\n\nForces were redeployed from Russia as fast as they could and a massive offensive - Operation Michael - was launched in March 1918. The German spring offensive broke through the British lines on the Somme and succeeded (finally!) in breaking the trench deadlock and getting through into open country, thanks to tactics learned on the eastern front - a short \"hurricane bombardment\" instead of days of shelling, and infiltration and stormtrooper tactics. \n\nWith the British forces reeling back and the French fighting on the flanks, it looked for a while as if the Germans might actually manage to take Paris, and Haig issued a general order that all British forces were to stand firm, backs to the wall.\n\nUnfortunately for the Germans, the offensive petered out short of Paris thanks to a combination of dogged British defence and French attacks along the flanks of the German line; the difficulties the Germans found in advancing over the shell-ravaged Somme battlefields from two years earlier and the tendency of German troops to drop everything and start looting whenever they took a British supply dump.\n\nOnce the German offensive ground to a halt in August, the Allies counter-attacked, pushing the Germans back to the old trench line and then beyond it, in the \"Hundred Days Offensive\" which continued until the Armistice in 1918. These last 9 months of war were a far cry from the trench warfare that is the usual image of WWI and actually it wasn't until the Hundred Days that US forces were engaged in any number, so the American experiences of WWI were in general somewhat different from those of the British and French, with Pershing's American Expeditionary Force deployed along the Ardennes region in the French sector and attacking along the Verdun-Sedan axis.\n\nSo in WWI the American troops played a significant part in winning the war without actually doing a huge amount of the fighting. If the war had gone on another six months or so in the trench warfare phase, we'd have seen huge American armies manning the trenches and probably having their own Sommes and Verduns while they learned the lessons of Trench Warfare, but as it was it was the mere threat of the American arrival that forced the Germans to risk it all on an all-or-nothing attack.\n\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "28265", "title": "Spanish–American War", "section": "Section::::Aftermath.\n", "start_paragraph_id": 98, "start_character": 0, "end_paragraph_id": 98, "end_character": 366, "text": "The war marked American entry into world affairs. Since then, the U.S. has had a significant hand in various conflicts around the world, and entered many treaties and agreements. The Panic of 1893 was over by this point, and the U.S. 
entered a long and prosperous period of economic and population growth, and technological innovation that lasted through the 1920s.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "53687987", "title": "Diplomatic history of World War I", "section": "Section::::American entry in 1917.:Decision for war.\n", "start_paragraph_id": 105, "start_character": 0, "end_paragraph_id": 105, "end_character": 1358, "text": "The story of American entry into the war is a study in how public opinion changed radically in three years' time. In 1914 Americans thought the war was a dreadful mistake and were determined to stay out. By 1917 the same public felt just as strongly that going to war was both necessary and morally right. The generals had little to say during this debate, and purely military considerations were seldom raised. The decisive questions dealt with morality and visions of the future. The prevailing attitude was that America possessed a superior moral position as the only great nation devoted to the principles of freedom and democracy. By staying aloof from the squabbles of reactionary empires, it could preserve those ideals—sooner or later the rest of the world would come to appreciate and adopt them. In 1917 this very long-run program faced the severe danger that in the short run powerful forces adverse to democracy and freedom would triumph. Strong support for moralism came from religious leaders, women (led by Jane Addams), and from public figures like long-time Democratic leader William Jennings Bryan, the Secretary of State from 1913 to 1916. The most important moralist of all was President Woodrow Wilson—the man who so dominated the decision for war that the policy has been called Wilsonianism and event has been labelled \"Wilson's War.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "384176", "title": "Military history of Canada", "section": "Section::::19th century.:War of 1812.\n", "start_paragraph_id": 51, "start_character": 0, "end_paragraph_id": 51, "end_character": 832, "text": "After the cessation of hostilities at the end of the American Revolution, animosity and suspicion continued between the United States and the United Kingdom, erupting in 1812 when the Americans declared war on the British. Among the reasons for the war was British harassment of US ships (including impressment of American seamen into the Royal Navy), a byproduct of British involvement in the ongoing Napoleonic Wars. The Americans did not possess a navy capable of challenging the Royal Navy, and so an invasion of Canada was proposed as the only feasible means of attacking the British Empire. Americans on the western frontier also hoped an invasion would not only bring an end to British support of aboriginal resistance to the westward expansion of the United States, but also finalize their claim to the western territories.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6191450", "title": "United States campaigns in World War I", "section": "Section::::Cambrai, 20 November – 7 December 1917.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 851, "text": "The year the United States entered World War I was marked by near disaster for the Allies on all the European fronts. A French offensive in April, with which the British cooperated, was a failure, and was followed by widespread mutinies in the French armies. 
The British maintained strong pressure on their front throughout the year; but British attacks at Messines Ridge (7 June), at Passchendaele (31 July), and at Cambrai (20 November) failed in their main objective–the capture of German submarine bases–and took a severe toll of British fighting strength. Three American engineer regiments–the 11th, 12th, and 14th–were engaged in construction activity behind the British lines at Cambrai in November, when they were unexpectedly called upon to go into the front lines during an emergency. They thus became the first AEF units to meet the enemy.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36170003", "title": "Social Democratic League of America", "section": "Section::::Organizational history.:Background.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 406, "text": "American entry into World War I came in the immediate aftermath of President Woodrow Wilson's successful November 1916 re-election campaign, which made prominent use of the slogan \"He Kept Us Out of War.\" Just months after Wilson's resounding victory, the resumption of unlimited submarine warfare by Germany early in 1917 pushed Wilson and the United States towards intervention in the European conflict.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25604889", "title": "United States in World War I", "section": "Section::::Entry.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 796, "text": "The American entry into World War I came on April 6, 1917, after a year long effort by President Woodrow Wilson to get the United States into of the war. Apart from an Anglophile element urging early support for the British, American public opinion sentiment for neutrality was particularly strong among Irish Americans, German Americans and Scandinavian Americans, as well as among church leaders and among women in general. On the other hand, even before World War I had broken out, American opinion had been more negative toward Germany than towards any other country in Europe. Over time, especially after reports of atrocities in Belgium in 1914 and following the sinking of the passenger liner \"RMS Lusitania\" in 1915, the American people increasingly came to see Germany as the aggressor.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "760776", "title": "History of the British Isles", "section": "Section::::19th century.:1801 to 1837.:Napoleonic Wars.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 744, "text": "A stepped-up war effort that year brought about some successes such as the burning of Washington, D.C., but the Duke of Wellington argued that an outright victory over the U.S. was impossible because the Americans controlled the western Great Lakes and had destroyed the power of Britain's Indian allies. A full-scale British invasion was defeated in upstate New York. Peace was agreed to at the end of 1814, but unaware of this, Andrew Jackson won a great victory over the British at the Battle of New Orleans in January 1815 (news took several weeks to cross the Atlantic before the advent of steam ships). The Treaty of Ghent subsequently ended the war with no territorial changes. It was the last war between Britain and the United States.\n", "bleu_score": null, "meta": null } ] } ]
null
7onxzo
Are Slavs native to the Balkans, and if not, who dominated the Balkans before they went there?
[ { "answer": "The Slavs as a linguistic group didn't arrive in the Balkans until the migration period of the 6th century. Around the same time, the region was also invaded by the Turkic Bulgars, who were gradually Slavicised until all that was left was the name. As to who dominated the Balkans before, the answer is Rome, who ruled the region from around 20 CE until the arrival of the Slavs and Bulgars. The Adriatic coast had been a Roman province for much longer than that, and Emperor Diocletian's palace forms the center of the Croatian city of Split. Before that, the eastern portions, which are now Macedonia and Bulgaria, where ruled by the Persians for a time and subsequently the Greeks.\n\nHowever, the answer to who the dominant population was in the Balkans is very different. In the east, the main groups where the Thracians and the Dacians, who had frequent contact with the Greeks to the south, but spoke a very different language and tended to no organize politically on a large scale. The western Balkans were inhabited by the Illyrians, of who not much is actually known. We know that they were never united as one \"Illyrian\" cultural group, and that the place known as Illyria by the Greeks and Romans was inhabited by numerous tribes who didn't necessarily have a lot in common. Some of them where influenced by Celtic cultures from northern Italy and modern day Austria. Occasionally, one group would rise to become particularly powerful, but no one dominated the whole region politically. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "3463", "title": "Bosnia and Herzegovina", "section": "Section::::History.:Middle Ages.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 342, "text": "The Early Slavs raided the Western Balkans, including Bosnia, in the 6th and early 7th century (amid the Migration Period), and were composed of small tribal units drawn from a single Slavic confederation known to the Byzantines as the \"Sclaveni\" (whilst the related \"Antes\", roughly speaking, colonized the eastern portions of the Balkans).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24221571", "title": "Timeline of Kosovo history", "section": "Section::::Prehistory, Roman era – 13th century AD.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 365, "text": "BULLET::::- Slavs are mentioned in the Balkans during Justinian I rule (527–565), when eventually up to 100,000 Slavs raided Thessalonica. The Balkans was settled with \"Sclaveni\", in relation to the Antes which settled in Eastern Europe. Large scale Slavic settlement in the Balkans begins in the early 580s. The Slavs lived in the \"Sklavinia\" (lit. \"Slav lands\").\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "151876", "title": "Bulgarians", "section": "Section::::Ethnogenesis.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 1043, "text": "The Early Slavs emerged from their original homeland in the early 6th century, and spread to most of the eastern Central Europe, Eastern Europe and the Balkans, thus forming three main branches: the West Slavs in eastern Central Europe, the East Slavs in Eastern Europe, and the South Slavs in Southeastern Europe (Balkans). The latter gradually inflicted total linguistic replacement of Thracian, if the Thracians had not already been Romanized or Hellenized. 
Most scholars accept that they began large-scale settling of the Balkans in the 580s based on the statement of the 6th century historian Menander speaking of 100,000 Slavs in Thrace and consecutive attacks of Greece in 582. They continued coming to the Balkans in many waves, but also leaving, most notably Justinian II (685-695) settled as many as 30,000 Slavs from Thrace in Asia Minor. The Byzantines grouped the numerous Slavic tribes into two groups: the Sklavenoi and Antes. Some Bulgarian scholars suggest that the Antes became one of the ancestors of the modern Bulgarians.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1711523", "title": "Travunija", "section": "Section::::History.:Early Middle Ages.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 469, "text": "The Slavs invaded the Balkans during the reign of Justinian I (r. 527–565), when eventually up to 100,000 Slavs raided Thessalonica. The Western Balkans was settled with \"Sclaveni\" (Sklavenoi), the east with Antes. The Sklavenoi plunder Thrace in 545, and again the next year. In 551, the Slavs crossed Niš initially headed for Thessalonica, but ended up in Dalmatia. In 577 some 100,000 Slavs poured into Thrace and Illyricum, pillaging cities and then settling down.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20262371", "title": "Albania under Serbia in the Middle Ages", "section": "Section::::Background.:Migration wave through the Balkans.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 1415, "text": "From the sixth century, big numbers of Slavs, Avars and Bulgars invaded the Balkan provinces of the East Roman Empire. Prior to Roman times, the Balkans had already consisted of a large culturally and ethnically mixed population. The 'ancient' inhabitants, generically referred to as Ancient Greeks, Illyrians, Thracians and Dacians, were split into many smaller tribes who had different customs and even languages. The picture was mixed further in Roman times, when Roman colonists were settled the Balkan cities, as well as Germanic, Celtic and Sarmatian \"federates\" in the countryside. This led to a process of Romanization of the natives who dwelt in cities in Illyria and Pannonia, whilst Greek was the formal language in Thrace, Epirus, and Macedonia (Roman province). In the countryside, many of the natives would join the foreign elements in raiding imperial territory. Later, there was an extensive Slavonization of the Balkans. Nevertheless, small pockets of people preserved an archaic language. The geographic origin of these proto-Albanians is disputed, and cannot be proven for lack of big archaeological or historical data pertaining to Albanians prior to the twelfth century. However, scholars explain that Albanian language comes from either Illyrian or Thracian or both, with a considerate influence of Latin. Later on Slavic and Turkish loanwords influenced it, although to a much lesser extent.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4829", "title": "Balkans", "section": "Section::::History and geopolitical significance.:Antiquity.\n", "start_paragraph_id": 67, "start_character": 0, "end_paragraph_id": 67, "end_character": 930, "text": "The Balkan region was the first area in Europe to experience the arrival of farming cultures in the Neolithic era. 
The Balkans have been inhabited since the Paleolithic and are the route by which farming from the Middle East spread to Europe during the Neolithic (7th millennium BC). The practices of growing grain and raising livestock arrived in the Balkans from the Fertile Crescent by way of Anatolia and spread west and north into Central Europe, particularly through Pannonia. Two early culture-complexes have developed in the region, Starčevo culture and Vinča culture. The Balkans are also the location of the first advanced civilizations. Vinča culture developed a form of proto-writing before the Sumerians and Minoans, known as the Old European script, while the bulk of the symbols had been created in the period between 4500 and 4000 BC, with the ones on the Tărtăria clay tablets even dating back to around 5300 BC.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "461676", "title": "Zachlumia", "section": "Section::::Slavic settlement.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 664, "text": "Slavs invaded Balkans during Justinian I (r. 527–565), when eventually up to 100,000 Slavs raided Thessalonica. The Western Balkans was settled with \"Sclaveni\" (Sklavenoi), the east with Antes. The Sklavenoi plundered Thrace in 545, and again the next year. In 551, the Slavs crossed Niš initially headed for Thessalonica, but ended up in Dalmatia. In 577 some 100,000 Slavs poured into Thrace and Illyricum, pillaging cities and settling down. Hum had also a large number of Vlachs who were descendent from a pre-Slavic population. Related to Romanians and originally speaking a language related to Romanian, the Vlachs of what was Hum are today Slavic speaking.\n", "bleu_score": null, "meta": null } ] } ]
null
15t82f
How viable is Replacement cloning?
[ { "answer": "There isn't a lot of promise at all, in fact there is very little research into this at all, our best technology can't even reconnect nerves currently, otherwise all those paraplegics caused by spinal injuries would be able to be \"fixed\".\n\nI don't know what you have been reading, but it isn't current scientific research.\n\n", "provenance": null }, { "answer": "Not at all. We cant replace heads or brains. we don't have the ability to reattach nerves, regrow spinal columns that fit/work, etc.\n\n > I know we've had some luck with head transplants\n\nNo we haven't. At best we can slap a head on a different body and feed it blood and watch it die sooner than later. The oft-linked on reddit USSR era research of changing heads on dogs is an obvious and proven fake.\n\n > and can use stem cells to repair the spinal cord.\n\nNot really. This wouldn't be a repair as much as a severing from literally a different body. I think these things are very different.\n\nThe day we can trivially heal para and quadrapelics is that day you should be worry about this. We are far from that day.", "provenance": null }, { "answer": "At this point it would be easier to contemplate organ transplant in specific organ failure, or standard of care for disseminated disease (ie best current chemotherapy for widely metastatic cancer). Head transplants are [in fact possible](_URL_0_) if done carefully to ensure the severed donor head is nourished, oxygenated, has proper fluid volume, and post-op does not suffer rejection or graft versus host disease.\n\nThis surgery theoretically does not damage cranial nerves, such that special senses (sight, auditory sense, vestibular sense, taste, and touch to the head) as well as branchial motor control (movement of muscles in the head and face), are preserved. However because we cannot yet attach the spinal cord of the donor head to the host body in any meaningful way, the body would be quadriplegic. This is not ideal for most circumstances.\n\nEthics aside, another reason this would be difficult is that when these kinds of surgeries were carried out in the '60s the surgeons caught [all kinds of hell](_URL_1_). ", "provenance": null }, { "answer": "A supposed [head transplant](_URL_0_) was performed on a monkey once. It was only attached to the blood supply of another monkey body and it survived for a few minutes.", "provenance": null }, { "answer": "The closest thing that science has been researching is Tissue Engineering (or Regenerative Medicine, depending on which buzzword you adopt).\n\nThis isn't outright cloning or duplicating of an organ or tissue. What it is, is utilizing progenitor (stem) or primary cells (osteocytes, chondrocytes, myocytes, fibroblasts) alongside cell culture and expansion methodologies combined with biomimetic/bioactive/resorbable scaffolding and mechanical/chemical stimuli in order to produce a replacement tissue or organ. \n\nWe can grow tissues like skin, bone and cartilage and we've had success in growing ears, bladders, vasculature and heart valves. These things are relatively simple constructs compared to complex organs like an eye, liver or brain. We're still working on getting the simple engineered tissues comparable to native tissues. Tissue engineered articular cartilage, for instance, has a host of problems compared to native. Vastly inferior strength and durability and odd biochemical composition. 
Science is working on these, though, and we're always optimistic that we'll reach the goal eventually.\n\nWhy we do this is to be able to replace or repair tissues (and eventually organs) damaged through injury or disease, so rather than replace someone's knee due to osteoarthritis, we can resurface their knee with new cartilage. In the future the hope is that, for example, we could treat a patient suffering from kidney failure by engineering them a new kidney rather than performing a transplant or having them undergo regular dialysis.\n\n\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "39379960", "title": "De-extinction", "section": "Section::::Methods.:Cloning.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 425, "text": "Cloning is the method discussed as an option for bringing back extinct species, by extracting the nucleus from a preserved cell from the extinct species and swapping it into an egg of the nearest living relative. This egg can then be inserted into a relative host. It is important to note that this method can only be used when a preserved cell is available. This means that it is most feasible for recently extinct species.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30992262", "title": "Partial cloning", "section": "Section::::Application.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 299, "text": "Classical cloning can rejuvenate old cells but the process demands that the old cells must artificially pass through an embryonic cell stage. Partial cloning affords the advantage that the old cells to be rejuvenated do not have to pass through the embryonic cell stage and are simply made younger.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9556567", "title": "Ethics of cloning", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 385, "text": "Opponents of cloning have concerns that technology is not yet developed enough to be safe, and that it could be prone to abuse, either in the form of clones raised as slaves, or leading to the generation of humans from whom organs and tissues would be harvested. Opponents have also raised concerns about how cloned individuals could integrate with families and with society at large.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4024033", "title": "Julian Savulescu", "section": "Section::::Procreative beneficence.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 760, "text": "\"The most publicly justifiable application of human cloning, if there is one at all, is to provide self-compatible cells or tissues for medical use, especially transplantation. Some have argued that this raises no new ethical issues above those raised by any form of embryo experimentation. I argue that this research is less morally problematic than other embryo research. 
Indeed, it is not merely morally permissible but morally required that we employ cloning to produce embryos or fetuses for the sake of providing cells, tissues or even organs for therapy, followed by abortion of the embryo or fetus.\" He argues that if it is permissible to destroy fetuses, for social reasons, or no reasons at all, it must be justifiable to destroy them to save lives.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "168927", "title": "Somatic cell nuclear transfer", "section": "Section::::Applications.:Reproductive cloning.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 872, "text": "This technique is currently the basis for cloning animals (such as the famous Dolly the sheep), and has been theoretically proposed as a possible way to clone humans. Using SCNT in reproductive cloning has proven difficult with limited success. High fetal and neonatal death make the process very inefficient. Resulting cloned offspring are also plagued with development and imprinting disorders in non-human species. For these reasons, along with moral and ethical objections, reproductive cloning in humans is proscribed in more than 30 countries. Most researchers believe that in the foreseeable future it will not be possible to use the current cloning technique to produce a human clone that will develop to term. It remains a possibility, though critical adjustments will be required to overcome current limitations during early embryonic development in human SCNT.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6910", "title": "Cloning", "section": "Section::::Organism cloning.:Artificial cloning of organisms.:Ethical issues of cloning.\n", "start_paragraph_id": 74, "start_character": 0, "end_paragraph_id": 74, "end_character": 319, "text": "Opponents of cloning have concerns that technology is not yet developed enough to be safe and that it could be prone to abuse (leading to the generation of humans from whom organs and tissues would be harvested), as well as concerns about how cloned individuals could integrate with families and with society at large.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30992262", "title": "Partial cloning", "section": "Section::::Application.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 482, "text": "The key notion that exemplifies “partial” cloning from “classical” cloning is the separation of the mechanism(s) that “wipe clean” the specialization of a cell from those that “wipe-clean” the age of the cell. In short, partial cloning aims to retain the specialized functions of a cell and simply make it younger, e.g., a skin cell is rejuvenated without having to pass through the embryonic stage that is a must for rejuvenation via the classical cloning technique (see diagram).\n", "bleu_score": null, "meta": null } ] } ]
null
142yu2
rogaine, and why i can't use it to grow a beard.
[ { "answer": "Im actually wondering this myself...", "provenance": null }, { "answer": "I believe you can.\n\nOriginally Rogaine was developed for high blood pressure, and it's my understanding that in clinical trials, it didn't help with the blood pressure, and it caused people to grow hair... everywhere.\n\nSo they started instructing people to put it only on their heads, and marketed it as a baldness treatment,", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "43434325", "title": "Bort (name)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 340, "text": "Bort is an English name meaning \"fortified.\" It is also an eastern Ashkenazic surname that refers to a man with a remarkable beard. It originates from the Yiddish word \"bord\" and the German \"Bart,\" which both mean \"beard.\" It may also originate from the Polish word \"borta,\" a loanword from the German \"borte\" meaning \"braid\" or \"galloon.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "547121", "title": "Bearded Collie", "section": "Section::::Working life.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 509, "text": "The Bearded Collie may have earned its nickname \"bouncing Beardie\" because the dogs would work in thick underbrush on hillsides; they would bounce to catch sight of the sheep. Beardies also have a characteristic way of facing a stubborn ewe, barking and bouncing on the forelegs. Whatever the reason, a typical Bearded Collie is an enthusiastic herding dog which requires structure and care; it moves stock with body, bark and bounce as required. Very few Beardies show \"eye\" when working; most are upright. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "162110", "title": "Beard", "section": "Section::::In religion.:Rastafari Movement.\n", "start_paragraph_id": 91, "start_character": 0, "end_paragraph_id": 91, "end_character": 345, "text": "Male Rastafarians wear beards in conformity with injunctions given in the Bible, such as Leviticus 21:5, which reads \"They shall not make any baldness on their heads, nor shave off the edges of their beards, nor make any cuts in their flesh.\" The beard is a symbol of the covenant between God (Jah or Jehovah in Rastafari usage) and his people.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1986432", "title": "Beard (companion)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 555, "text": "Beard is a slang term describing a person who is used, knowingly or unknowingly, as a date, romantic partner (boyfriend or girlfriend), or spouse either to conceal infidelity or to conceal one's sexual orientation. The American slang term originally referred to anyone who acted on behalf of another, in any transaction, to conceal a person's true identity. The term can be used in heterosexual and homosexual contexts, but is especially used within LGBT culture. References to beards are seen in mainstream television and films, and other entertainment.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "29014788", "title": "Facial hair in the military", "section": "Section::::Europe.:Czech Republic.\n", "start_paragraph_id": 36, "start_character": 0, "end_paragraph_id": 36, "end_character": 321, "text": "The Army of the Czech Republic permits moustaches, sideburns or neat full beard of a natural colour. 
A moustache has to be trimmed so it would not exceed the lower margin of the upper lip. Sideburns may not reach under the middle of each auricle. Hairs of sideburns and goatee may not exceed 2 cm (0,787 inch) in length.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1137218", "title": "List of types of tinea", "section": "Section::::\"Tinea barbae\" (beard).\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 590, "text": "\"Tinea barbæ\" (also known as \"Barber's itch,\" \"Ringworm of the beard,\" and \"Tinea sycosis\") is a fungal infection of the hair. Tinea barbae is due to a dermatophytic infection around the bearded area of men. Generally, the infection occurs as a follicular inflammation, or as a cutaneous granulomatous lesion, i.e. a chronic inflammatory reaction. It is one of the causes of folliculitis. It is most common among agricultural workers, as the transmission is more common from animal-to-human than human-to-human. The most common causes are \"Trichophyton mentagrophytes\" and \"T. verrucosum\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "33265836", "title": "The Beards (Australian band)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 284, "text": "The Beards were an Australian comedy folk rock band which formed in 2005 in Adelaide and disbanded in October 2016. The group played music themed around the virtues of having a beard. They had developed from a four-piece rock band, the Dairy Brothers, which were established in 2003.\n", "bleu_score": null, "meta": null } ] } ]
null
9ycfu4
A rather short question: How did Australia become a thing?
[ { "answer": "Well, that's a big question, but the various answers about Australia [in our FAQ on Oceania](_URL_3_) are definitely a good place for you to start, and some answers within that that are directly relevant to your questions include:\n\n* my answer to [Why was Australia colonized? What motivated people to travel so far only to settle in such a dangerous place?](_URL_1_)\n\n* an answer by /u/Algernon_Asimov on [\nWhat would life have been like back in the colonization days for a prisoner shipped from England to Australia once he/she stepped off the boat?](_URL_2_), and\n\n* /u/AbandoningAll's answer to [Were indigenous Australians ever enslaved?](_URL_0_)", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "5804962", "title": "History of Australia (1901–45)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 457, "text": "The history of Australia from 1901–1945 begins with the federation of the six colonies to create the Commonwealth of Australia. The young nation joined Britain in the First World War, suffered through the Great Depression in Australia as part of the global Great Depression and again joined Britain in the Second World War against Nazi Germany in 1939. Imperial Japan launched air raids and submarine raids against Australian cities during the Pacific War.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25544562", "title": "Dominion", "section": "Section::::Dominions.:Australia.\n", "start_paragraph_id": 43, "start_character": 0, "end_paragraph_id": 43, "end_character": 1488, "text": "Four colonies of Australia had enjoyed responsible government since 1856: New South Wales, Victoria, Tasmania and South Australia. Queensland had responsible government soon after its founding in 1859. Because of ongoing financial dependence on Britain, Western Australia became the last Australian colony to attain self-government in 1890. During the 1890s, the colonies voted to unite and in 1901 they were federated under the British Crown as the Commonwealth of Australia by the \"Commonwealth of Australia Constitution Act\". The Constitution of Australia had been drafted in Australia and approved by popular consent. Thus Australia is one of the few countries established by a popular vote. Under the Balfour Declaration of 1926, the federal government was regarded as coequal with (and not subordinate to) the British and other Dominion governments, and this was given formal legal recognition in 1942 (when the \"Statute of Westminster\" was retroactively adopted to the commencement of the Second World War 1939). In 1930, the Australian prime minister, James Scullin, reinforced the right of the overseas Dominions to appoint native-born governors-general, when he advised King George V to appoint Sir Isaac Isaacs as his representative in Australia, against the wishes of the opposition and officials in London. 
The governments of the States (called colonies before 1901) remained under the Commonwealth but retained links to the UK until the passage of the \"Australia Act 1986\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17616310", "title": "Taiwanese Australians", "section": "Section::::Australia Overview.:History and Culture of Australia.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 654, "text": "Australia's modern history is usually traced back to 1901, when the six states of Australia united in a Federation and drafted up the first Australia constitution. Previously, the six states of Australia were not independent countries, but were instead dominions of the British Crown, whose monarch resides in England, United Kingdom. After being federated, the United Kingdom recognised Australia as an independent country, but still maintained a degree of dominance and administrative control over the country. For example, an independent Australian citizenship was only created in 1949; previously, Australians were considered to be British subjects.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5804970", "title": "History of Australia since 1945", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 311, "text": "The history of Australia since 1945 has seen long periods of economic prosperity and the introduction of an expanded and multi-ethnic immigration program, which has coincided with moves away from Britain in political, social and cultural terms and towards increasing engagement with the United States and Asia.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "39582", "title": "History of Australia", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 793, "text": "Gold rushes and agricultural industries brought prosperity. Autonomous parliamentary democracies began to be established throughout the six British colonies from the mid-19th century. The colonies voted by referendum to unite in a federation in 1901, and modern Australia came into being. Australia fought on the side of Britain in the two world wars and became a long-standing ally of the United States when threatened by Imperial Japan during World War II. Trade with Asia increased and a post-war immigration programme received more than 6.5 million migrants from every continent. Supported by immigration of people from more than 200 countries since the end of World War II, the population increased to more than 23 million by 2014, and sustains the world's 12th largest national economy.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19823859", "title": "Historiography of the British Empire", "section": "Section::::Regions.:Australia.:Debates on the founding.\n", "start_paragraph_id": 108, "start_character": 0, "end_paragraph_id": 108, "end_character": 1026, "text": "Historians have used the founding of Australia to mark the beginning of the Second British Empire. It was planned by the government in London and designed as a replacement for the lost American colonies. The American Loyalist James Matra in 1783 wrote \"A Proposal for Establishing a Settlement in New South Wales\" proposing the establishment of a colony composed of American Loyalists, Chinese and South Sea Islanders (but not convicts). 
Matra reasoned that the land country was suitable for plantations of sugar, cotton and tobacco; New Zealand timber and hemp or flax could prove valuable commodities; it could form a base for Pacific trade; and it could be a suitable compensation for displaced American Loyalists. At the suggestion of Secretary of State Lord Sydney, Matra amended his proposal to include convicts as settlers, considering that this would benefit both \"Economy to the Publick, & Humanity to the Individual\". The government adopted the basics of Matra’s plan in 1784, and funded the settlement of convicts.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2258135", "title": "History of Australia (1788–1850)", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 314, "text": "Australia's colonial history, from the british arrival in 1788 has shaped both the history of the country from that time and the current development of the nation, helping to allowed the economy to be seen as strongly in the global 'North' and having strong economic trade with all other nations around the world.\n", "bleu_score": null, "meta": null } ] } ]
null
3hg2so
why do parts of a product cost more to repair than the whole thing?
[ { "answer": "Say it's a 20c resistor, but the company has a call-out fee, an hourly charge, overcharge you for parts, etc. It might come out higher due to the labour costs where you are compared to costs of shipping + a whole new TV - especially if it's a cheap TV. Another option is that it's older and parts are harder to find (which seems unlikely for a TV, but will often happen with cars)", "provenance": null }, { "answer": "Because they're not made to be easily repairable, which adds labor cost, and the parts are expensive because of low supply. Most companies only make extra parts for internal repair services, not contractors. Also because they sell more TVs. It's easier to sell a TV because the distribution platform exists, similar platforms do not exist for parts", "provenance": null }, { "answer": "Parts and labor. Surely one piece of a TV won't cost more than the entire TV but if you have to pay for somebody to disassemble and reassemble the entire TV it could cost more. ", "provenance": null }, { "answer": "Producing, storing and selling spare parts is an expensive and inefficient business.\n\nIt is not normally that expensive to produce the part in the first place. But then you have to pay for a warehouse to store it in. You have to track it's presence for years. \n\nAnd then, in a few years time, when no one has ordered this particular part, you will have to pay to dispose of it.\n\nAll of this has to be paid for in the sale price of the few spare parts that you do actually sell. The price of the spare parts has to cover the cost of running the spare parts division of the company.\n\nAnd if you happen to run out of a part before the product is obsolete, you then have to pay to produce a small run of just that part. Small runs of parts are very expensive.\n\nAnd if a part becomes really popular, then you still don't get to recoup your costs, because someone else will step in, make a compatible part cheap and fill that market at a rock bottom price.\n\nAnother part of this is the really low price that mass consumer electronics can be produced for. If you look at the price of many TVs, and compare them with wage and shop prices for an engineer - it is hard to just open the device up and put it back together again for less than the cost of a new one - let alone diagnosing, repairing or replacing anything. This means that repairs don't happen - which means spare parts get sold, so fewer sales to pay all those costs.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1470065", "title": "Repairable component", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 510, "text": "Repairable components tend to be more expensive than non-repairable components (consumables). This is because for items that are inexpensive to procure, it is often more cost-effective not to maintain (repair) them. 
Repair costs can be expensive, including costs for the labor for the removal the broken or worn out part (described as unserviceable), cost of replacement with a working (serviceable) from inventory, and also the cost of the actual repair, including possible shipping costs to a repair vendor.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "28385525", "title": "Spare part", "section": "Section::::Classification.:Repairable.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 507, "text": "Repairable parts are parts that are deemed worthy of repair, usually by virtue of economic consideration of their repair cost. Rather than bear the cost of completely replacing a finished product, repairables typically are designed to enable more affordable maintenance by being more modular. This allows components to be more easily removed, repaired, and replaced, enabling cheaper replacement. Spare parts that are needed to support condemnation of repairable parts are known as \"replenishment\" spares .\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "40584946", "title": "Car costs", "section": "Section::::Running costs.:Repairs and Improvements.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 338, "text": "Repairs costs are completely unpredictable because they depend on the number and severity of car collisions, like dents repairing for example. These costs also refer to spare parts substitution due to malfunctioning. On this cost item it might be included also the parts bought to improve the performance or the aesthetic of the vehicle.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "491658", "title": "Heavy equipment", "section": "Section::::Equipment cost.:Operating cost.\n", "start_paragraph_id": 170, "start_character": 0, "end_paragraph_id": 170, "end_character": 963, "text": "The biggest distinction from a cost standpoint is if a repair is classified as a \"major repair\" or a \"minor repair\". A major repair can change the depreciable equipment value due to an extension in service life, while a minor repair is normal maintenance. How a firm chooses to cost major and minor repairs vary from firm to firm depending on the costing strategies being used. Some firms will charge only major repairs to the equipment while minor repairs are costed to a project. Another common costing strategy is to cost all repairs to the equipment and only frequently replaced wear items are excluded from the equipment cost. Many firms keep their costing structure closely guarded as it can impact the bidding strategies of their competition. In a company with multiple semi-independent divisions, the equipment department often wants to classify all repairs as \"minor\" and charge the work to a job - therefore improving their 'profit' from the equipment.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "38008659", "title": "Eight dimensions of quality", "section": "Section::::Durability.\n", "start_paragraph_id": 31, "start_character": 0, "end_paragraph_id": 31, "end_character": 348, "text": "In other cases, consumers must weigh the expected cost, in both dollars and personal inconvenience, of future repairs against the investment and operating expenses of a newer, more reliable model. 
Durability, then, may be defined as the amount of use one gets from a product before it breaks down and replacement is preferable to continued repair.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11140963", "title": "National Radio Institute", "section": "Section::::NRI Schools' closing (1999–2002).:Market Force Determining Factors.:Effect of advanced technologies on the electronics service sector in general.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 569, "text": "1) Weak cost justification for repairs: It was becoming hard for consumers to justify the repairing of malfunctioning electronic items when the purchasing of newer models was so affordable, as a result of the advances in semiconductor and electronic materials technology. With the exception of display technologies, the newer television and radio receivers generally had internal components that were fewer in number and smaller in size and, thus, less costly to produce. Weak cost justification for repairs remains a factor today with most consumer electronics items.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7535882", "title": "Precycling", "section": "Section::::Integration of waste management.:Repairing.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 333, "text": "Repair is a type of precycling that corrects specified faults in a product, however the quality of a repaired product is inferior to reconditioned or remanufactured items. One survey found that 68% of the respondents believed repairing was not cost efficient and sought alternative methods such as reconditioning or remanufacturing.\n", "bleu_score": null, "meta": null } ] } ]
null
6sxakm
Why do cows have four stomachs and what does each stomach do?
[ { "answer": "4 digestive departments of a cow's stomach region:\n\n1. The Rumen – this is the largest part and holds upto 50 gallons of partially digested food. This is where the ‘cud’ comes from. Good bacteria in the Rumen helps soften and digest the cows food and provides protein for the cow.\n\n2. The Recticulum – this part of the stomach is called the ‘hardware’ stomach. This is because if the cow eats something it should not have like a peice of fencing, it lodges here in the Recticulum. However, the contractions of the reticulum can force the object into the peritoneal cavity where it initiates inflammation. Nails and screws can even peroferate the heart. The grass that has been eaten is also softened further in this stomach section and is formed into small wads of cud. Each cud returns to the cows mouth and is chewed 40 – 60 times and then swallowed properly.\n\n3. The Omasum – this part of the stomach is a ‘filter’. It filters through all the food the cow eats. The cud is also pressed and broken down further.\n\n4. The Abomasum – this part of the stomach is like a humans stomach and is connected to the intestines. Here, the food is finally digested by the cows stomach juices and essential nutrients that the cow needs are passed through the bloodstream. The rest is passed through to the intestines and produces a ‘cow pat’.\n\n_URL_0_\n", "provenance": null }, { "answer": "Adding on to distrayyss's answer, cows have one stomach with four chambers (think of a heart - you only have one heart, but it has four chambers). The purpose of the four chambers is that the food they eat requires a LOT of time and work to break down enough to digest.\n\nThe cow chews up grass, which is churned in the rumen, regurgitated to be chewed again, swallowed and churned again, etc. to break down the long plant fibers into small bits that can be managed by the bacteria that does the actual work of breaking down the food.\n\nThe reticulum is connected to the rumen, and the two chambers contract back and forth to sort out the broken-down fibers from the big ones. The properly broken-down bits of food get washed into here and caught so it can be further digested - chunks of food that are still too big to digest get pushed back into the rumen for further processing. \n\nThe omasum pulls out some of the nutrients the animal needs, and filters the bits further.\n\nThe abomasum works like a human stomach - secretes things that help digest the food, as well as having the unique ability to digest protein (from the gut bacteria that have died and washed downstream).\n\nThe broken-down food then goes to the intestines for the rest of the work. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "26051975", "title": "Cattle", "section": "Section::::Characteristics.:Anatomy.:Digestive system.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 267, "text": "Cattle are ruminants, meaning their digestive system is highly specialized to allow the use of poorly digestible plants as food. 
Cattle have one stomach with four compartments, the rumen, reticulum, omasum, and abomasum, with the rumen being the largest compartment.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "39747", "title": "Stomach", "section": "Section::::Other animals.\n", "start_paragraph_id": 73, "start_character": 0, "end_paragraph_id": 73, "end_character": 572, "text": "Although the precise shape and size of the stomach varies widely among different vertebrates, the relative positions of the oesophageal and duodenal openings remain relatively constant. As a result, the organ always curves somewhat to the left before curving back to meet the pyloric sphincter. However, lampreys, hagfishes, chimaeras, lungfishes, and some teleost fish have no stomach at all, with the oesophagus opening directly into the anus. These animals all consume diets that either require little storage of food, or no pre-digestion with gastric juices, or both.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37930465", "title": "Fish physiology", "section": "Section::::Digestion.\n", "start_paragraph_id": 35, "start_character": 0, "end_paragraph_id": 35, "end_character": 577, "text": "Although the precise shape and size of the stomach varies widely among different vertebrates, the relative positions of the oesophageal and duodenal openings remain relatively constant. As a result, the organ always curves somewhat to the left before curving back to meet the pyloric sphincter. However, lampreys, hagfishes, chimaeras, lungfishes, and some teleost fish have no stomach at all, with the oesophagus opening directly into the intestine. These animals all consume diets that either require little storage of food, or no pre-digestion with gastric juices, or both.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7626", "title": "Cetacea", "section": "Section::::Physiology.:Organs.\n", "start_paragraph_id": 40, "start_character": 0, "end_paragraph_id": 40, "end_character": 401, "text": "The stomach consists of three chambers. The first region is formed by a loose gland and a muscular forestomach (missing in beaked whales), which is then followed by the main stomach and the pylorus. Both are equipped with glands to help digestion. A bowel adjoins the stomachs, whose individual sections can only be distinguished histologically. The liver is large and separate from the gall bladder.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "331951", "title": "Fish anatomy", "section": "Section::::Internal organs.:Stomach.\n", "start_paragraph_id": 87, "start_character": 0, "end_paragraph_id": 87, "end_character": 518, "text": "As with other vertebrates, the relative positions of the esophageal and duodenal openings to the stomach remain relatively constant. As a result, the stomach always curves somewhat to the left before curving back to meet the pyloric sphincter. However, lampreys, hagfishes, chimaeras, lungfishes, and some teleost fish have no stomach at all, with the esophagus opening directly into the intestine. 
These fish consume diets that either require little storage of food, or no pre-digestion with gastric juices, or both.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "57114197", "title": "Displaced abomasum in cattle", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 1384, "text": "Displaced abomasum in cattle occurs when the abomasum, also known as the true stomach, which typically resides on the floor of the abdomen, fills with gas and rises to the top of the abdomen, where it is said to be ‘displaced’. When the abomasum moves from its normal position it prevents the natural passage of gas and feed through the digestive system, creating a restriction. As cattle are ruminants, which have a 4 chambered stomach composed of a rumen, reticulum, omasum and abomasum. Ruminants require this specialized digestive system in order to properly process and break down their high fiber and cellulose rich diets. As this type of digestive system is quite complex it is at a greater risk for incidence. Due to the natural anatomy of cattle it is more common to have the abomasum displace to the left, known as a left-displaced abomasum, than to the right, right-displaced abomasum. When the abomasum becomes displaced there also becomes a chance of an abomasal volvulus, twist, developing. An abomasal volvulus occurs when the abomasum, which is already out of place, will rotate and cut off blood and nutrient supply to the abomasum. Cattle which develop an abomasal twist require immediate vet attention to regain blood supply and food passage through the digestive system or the abomasum will begin to shut down due to lack of blood supply and toxicity development.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5596320", "title": "Equine anatomy", "section": "Section::::Digestive system.:Stomach.\n", "start_paragraph_id": 49, "start_character": 0, "end_paragraph_id": 49, "end_character": 674, "text": "Horses have a relatively small stomach for their size, and this limits the amount of feed a horse can take in at one time. The average sized horse () has a stomach with a capacity of around , and works best when it contains about . Because the stomach empties when full, whether stomach enzymes have completed their processing of the food or not, and doing so prevents full digestion and proper utilization of feed, continuous foraging or several small feedings per day are preferable to one or two large ones. The horse stomach consists of a non-glandular proximal region (saccus cecus), divided by a distinct border, the margo plicatus, from the glandular distal stomach.\n", "bleu_score": null, "meta": null } ] } ]
null
1cftvu
relevant to the guy getting his artery pinched, how does the body cope with the constant building pressure through that artery with nothing giving back?
[ { "answer": "No, that's just one of many arteries in the body. Pinched shut, the highest the pressure can get in that artery is the pressure the heart can produce. ", "provenance": null }, { "answer": "Related note: I've heard contention that it was just the tail end of a tourniquet in the photo.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "37601885", "title": "Continuous noninvasive arterial pressure", "section": "Section::::Current noninvasive blood pressure technologies.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 482, "text": "Detecting pressure changes inside an artery from the outside is difficult, whereas volume and flow changes of the artery can well be determined by using e.g. light, echography, impedance, etc. But unfortunately these volume changes are not linearly correlated with the arterial pressure– especially when measured in the periphery, where the access to the arteries is easy. Thus, noninvasive devices have to find a way to transform the peripheral volume signal to arterial pressure.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "46521228", "title": "Vertebra", "section": "Section::::Clinical significance.\n", "start_paragraph_id": 48, "start_character": 0, "end_paragraph_id": 48, "end_character": 275, "text": "A pinched nerve caused by pressure from a disc, vertebra or scar tissue might be remedied by a foraminotomy to broaden the intervertebral foramina and relieve pressure. It can also be caused by a foramina stenosis, a narrowing of the nerve opening, as a result of arthritis.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "56558", "title": "Blood pressure", "section": "Section::::Disorders of blood pressure.:High blood pressure.\n", "start_paragraph_id": 40, "start_character": 0, "end_paragraph_id": 40, "end_character": 384, "text": "Levels of arterial pressure put mechanical stress on the arterial walls. Higher pressures increase heart workload and progression of unhealthy tissue growth (atheroma) that develops within the walls of arteries. The higher the pressure, the more stress that is present and the more atheroma tend to progress and the heart muscle tends to thicken, enlarge and become weaker over time.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "479444", "title": "Arteriole", "section": "Section::::Physiology.:Blood pressure.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 439, "text": "Blood pressure in the arteries supplying the body is a result of the work needed to pump the cardiac output (the flow of blood pumped by the heart) through the \"vascular resistance\", usually termed total peripheral resistance by physicians and researchers. An increase in the media to lumenal diameter ratio has been observed in hypertensive arterioles (arteriolosclerosis) as the vascular wall thickens and/or lumenal diameter decreases.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "509921", "title": "Pulmonary artery", "section": "Section::::Function.:Pressure.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 432, "text": "The pulmonary artery pressure (PA pressure) is a measure of the blood pressure found in the main pulmonary artery. This is measured by inserting a catheter into the main pulmonary artery. The mean pressure is typically 9 - 18 mmHg, and the wedge pressure measured in the left atrium may be 6-12mmHg. 
The wedge pressure may be elevated in left heart failure, mitral valve stenosis, and other conditions, such as sickle cell disease.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4041297", "title": "Aortic arches", "section": "Section::::Structure.:Arch 6.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 536, "text": "The ductus arteriosus connects at a junction point that has a low pressure zone (commonly called Bernoulli's principle) created by the inferior curvature (inner radius) of the artery. This low pressure region allows the artery to receive (siphon) the blood flow from the pulmonary artery which is under a higher pressure. However, it is extremely likely that the major force driving flow in this artery is the markedly different arterial pressures in the pulmonary and systemic circulations due to the different arteriolar resistances.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24825331", "title": "Nerve compression syndrome", "section": "Section::::Pathophysiology.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 348, "text": "External pressure reduces flow in the vessels supplying the nerve with blood (the vasa nervorum). This causes local ischaemia, which has an immediate effect on the ability of the nerve axons to transmit action potentials. As the compression becomes more severe over time, focal demyelination occurs, followed by axonal damage and finally scarring.\n", "bleu_score": null, "meta": null } ] } ]
null
1omw7d
How do computer components keep track of timings between all of the systems of components?
[ { "answer": "Generally speaking they don't. There are drifts between components. What is important is that when they access a common bus they latch the clock of the common source [e.g. your PCIe clock]. \n\nThere are collisions all the time on most buses, that's why they build in mechanisms to detect them and work around.\n\n ", "provenance": null }, { "answer": "The system as a whole is not synchronized. But components have different ways of agreeing on how to communicate. \n\nIn parallel communications, individual bits of data ( 1's and 0's) are transferred simultaneously across multiple conductors called a bus. In this case there are signaling lines that go by names like clocks, chip selects, and address latches, that indicate which component on the bus should talk, and when it should read an address or write data. This is how RAM and PCI works. A problem emerging more and more with a parallel bus architectures at high speeds, is that the differences in lengths of tracings (wires on the circuit board), resistance, inductance, etc, causes the signals to propagate at different speeds, that is to say, because of the relatively low speed of light compared to clock speed, the signal arrives at the other end of each wire in the bus at different times. This is called skew, and it limits communication speed as the receiver has to wait for the signal to stabilize on all wires, and it gets significantly worse as the distance between components increases.\n\nIn a serial architecture, data is sent sequentially on one wire, one bit after another. In this case, edge transitions, and timing is usually used to send and decipher the data. Usually both ends have to agree on the timing in advance. This is how RS232, USB, Ethernet work. It is also possible to combine multiple serial lines together in a serial bus, such as PCI Express. Serial architectures don't suffer from the skew issues, but are still limited by the speed of light. Serial usually works better over long distances, with the definition of long getting shorter as data speeds increase.\n\n\n ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "197673", "title": "Delay line memory", "section": "Section::::Acoustic delay lines.:Mercury delay lines.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 555, "text": "For a computer application the timing was still critical, but for a different reason. Conventional computers have a natural \"cycle time\" needed to complete an operation, the start and end of which typically consist of reading or writing memory. Thus the delay lines had to be timed such that the pulses would arrive at the receiver just as the computer was ready to read it. Typically many pulses would be \"in flight\" through the delay, and the computer would count the pulses by comparing to a master clock to find the particular bit it was looking for.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "35707540", "title": "Timed automaton", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 251, "text": "Timed automata can be used to model and analyse the timing behavior of computer systems, e.g., real-time systems or networks. 
Methods for checking both safety and liveness properties have been developed and intensively studied over the last 20 years.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7139215", "title": "Digital timing diagram", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 416, "text": "A digital timing diagram is a representation of a set of signals in the time domain. A timing diagram can contain many rows, usually one of them being the clock. It is a tool that is commonly used in digital electronics, hardware debugging, and digital communications. Besides providing an overall description of the timing relationships, the digital timing diagram can help find and diagnose digital logic hazards.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26023", "title": "RS-232", "section": "Section::::Seldom-used features.:Timing signals.\n", "start_paragraph_id": 84, "start_character": 0, "end_paragraph_id": 84, "end_character": 559, "text": "Some synchronous devices provide a clock signal to synchronize data transmission, especially at higher data rates. Two timing signals are provided by the DCE on pins 15 and 17. Pin 15 is the transmitter clock, or send timing (ST); the DTE puts the next bit on the data line (pin 2) when this clock transitions from OFF to ON (so it is stable during the ON to OFF transition when the DCE registers the bit). Pin 17 is the receiver clock, or receive timing (RT); the DTE reads the next bit from the data line (pin 3) when this clock transitions from ON to OFF.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6449", "title": "Clock", "section": "Section::::Purposes.\n", "start_paragraph_id": 120, "start_character": 0, "end_paragraph_id": 120, "end_character": 441, "text": "Most digital computers depend on an internal signal at constant frequency to synchronize processing; this is referred to as a clock signal. (A few research projects are developing CPUs based on asynchronous circuits.) Some equipment, including computers, also maintains time and date for use as required; this is referred to as time-of-day clock, and is distinct from the system clock signal, although possibly based on counting its cycles.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1234177", "title": "TDMoIP", "section": "Section::::Delay.:Timing recovery.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 514, "text": "In certain cases timing may be derived from the TDM equipment at both ends of the PW. Since each of these clocks is highly accurate, they necessarily agree to high order. The problem arises when at most one side of the TDMoIP tunnel has a highly accurate time standard. For ATM networks, which define a physical layer that carries timing, the synchronous residual time stamp (SRTS) method may be used; IP/MPLS networks, however, do not define the physical layer and thus cannot specify the accuracy of its clock. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1823104", "title": "Clock skew", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 963, "text": "The operation of most digital circuits is synchronized by a periodic signal known as a \"clock\" that dictates the sequence and pacing of the devices on the circuit. This clock is distributed from a single source to all the memory elements of the circuit, which for example could be registers or flip-flops. 
In a circuit using edge-triggered registers, when the clock edge or tick arrives at a register, the register transfers the register input to the register output, and these new output values flow through combinational logic to provide the values at register inputs for the next clock tick. Ideally, the input to each memory element reaches its final value in time for the next clock tick so that the behavior of the whole circuit can be predicted exactly. The maximum speed at which a system can run must account for the variance that occurs between the various elements of a circuit due to differences in physical composition, temperature, and path length.\n", "bleu_score": null, "meta": null } ] } ]
null
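The second answer in the entry above turns on a timing point: an asynchronous serial link (RS232-style) works without a shared clock because both ends agree on the bit period in advance and the receiver re-synchronises on every start bit. Below is a minimal Python sketch of that timing budget; the function name, the 8N1 frame assumption, and the half-bit sampling margin are illustrative assumptions, not taken from the answers.

```python
# Rough timing-budget sketch for an asynchronous serial (UART-style) link.
# The receiver re-synchronises on the start-bit edge, so the accumulated
# clock error only has to stay under half a bit period by the time the last
# bit of the frame is sampled.

def max_clock_mismatch_ppm(bits_per_frame: int = 10, margin_bits: float = 0.5) -> float:
    """Idealised worst-case combined TX+RX clock mismatch, in parts per million.

    bits_per_frame: start bit + 8 data bits + stop bit for a common 8N1 frame.
    margin_bits: sampling happens mid-bit, so up to half a bit of drift is tolerable.
    """
    return (margin_bits / bits_per_frame) * 1e6


if __name__ == "__main__":
    ppm = max_clock_mismatch_ppm()
    print(f"combined clock error must stay below ~{ppm:,.0f} ppm (~{ppm / 1e4:.0f}%)")
    # ~50,000 ppm (5%) in this idealised model; real UARTs budget less, but the
    # point stands: short frames let two free-running clocks interoperate,
    # whereas long transfers or parallel buses need an explicit shared clock.
```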
kv683
Is sugar unhealthier when refined?
[ { "answer": "In fact, almost all brown sugars are made by adding molasses to refined white sugar, so as to more carefully control the resulting product. It will contain the same residual chemicals as white sugar. Unrefined sugar such as [muscovado](_URL_0_) is considerably harder to come by (YMMV. Try organic food stores).\n\nRefining agents for granulated sugar, which is the most common, are typically phosphoric acid and calcium hydroxide. These absorb and entrap impurities then float to the top, where they are skimmed off. The sugar liquid goes through active carbon filtering afterward. While phosphoric acid has been linked to lower bone density in some studies, the evidence is somewhat sketchy. Moreover, it's presence in granulated sugar is very small. Granulated sugar is more than 99% pure sucrose. Many foods and soft drinks contain phosphoric acid as well. If you are worried, make sure you get enough calcium (milk is a good source) and you should be more than fine.\n\nSulfur dioxide is used to create what is called Mill white sugar. It doesn't remove impurities but \"bleaches\" the sugar instead. You won't usually find this unless you live in an area where sugar cane is grown, since this type of sugar doesn't store or ship very well. Sulfur Dioxide is also used in wine making and as a preservative, and as far as I know has no significant ill effects in the quantities present in sugar.", "provenance": null }, { "answer": "In fact, almost all brown sugars are made by adding molasses to refined white sugar, so as to more carefully control the resulting product. It will contain the same residual chemicals as white sugar. Unrefined sugar such as [muscovado](_URL_0_) is considerably harder to come by (YMMV. Try organic food stores).\n\nRefining agents for granulated sugar, which is the most common, are typically phosphoric acid and calcium hydroxide. These absorb and entrap impurities then float to the top, where they are skimmed off. The sugar liquid goes through active carbon filtering afterward. While phosphoric acid has been linked to lower bone density in some studies, the evidence is somewhat sketchy. Moreover, it's presence in granulated sugar is very small. Granulated sugar is more than 99% pure sucrose. Many foods and soft drinks contain phosphoric acid as well. If you are worried, make sure you get enough calcium (milk is a good source) and you should be more than fine.\n\nSulfur dioxide is used to create what is called Mill white sugar. It doesn't remove impurities but \"bleaches\" the sugar instead. You won't usually find this unless you live in an area where sugar cane is grown, since this type of sugar doesn't store or ship very well. Sulfur Dioxide is also used in wine making and as a preservative, and as far as I know has no significant ill effects in the quantities present in sugar.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "60492138", "title": "Soul food health trends", "section": "Section::::Modifying soul food to fit within health trends.:Soul food with low sugar.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 583, "text": "Desserts with high sugar are commonly consumed for hedonistic rewards, especially among women. However, high sugar intake tends to increase risk of obesity, type 2 diabetes, hypertension, cardio-metabolic diseases and compromised oral health. 
Instead, research showed that honey is beneficial to health with its \"gastroprotective, hepatoprotective, reproductive, hypoglycemic, antioxidant, antihypertensive, antibacterial, anti-fungal and anti-inflammatory. Under that circumstance, honey can be replaced to add sweet flavor, such as dressing on smoothies, spreading on bread, etc. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1663823", "title": "Panela", "section": "Section::::Health claims.\n", "start_paragraph_id": 31, "start_character": 0, "end_paragraph_id": 31, "end_character": 384, "text": "Panela manufacturers and advocates claim the substance to be healthier than refined sugar, suggesting it has immunological benefits, a lower glycemic index, and higher micronutrient content. As the authors of \"The Ultimate Guide to Sugars and Sweeteners\" point out, \"it's still sugar\", with only a trace amount more vitamins and minerals, and little research to support other claims.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "50563", "title": "Sucrose", "section": "Section::::Consumption.\n", "start_paragraph_id": 79, "start_character": 0, "end_paragraph_id": 79, "end_character": 404, "text": "Refined sugar was a luxury before the 18th century. It became widely popular in the 18th century, then graduated to becoming a necessary food in the 19th century. This evolution of taste and demand for sugar as an essential food ingredient unleashed major economic and social changes. Eventually, table sugar became sufficiently cheap and common enough to influence standard cuisine and flavored drinks.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "27712", "title": "Sugar", "section": "Section::::Health effects.:Hyperactivity.\n", "start_paragraph_id": 89, "start_character": 0, "end_paragraph_id": 89, "end_character": 324, "text": "Some studies report evidence of causality between high consumption of refined sugar and hyperactivity. One review of low-quality studies of children consuming high amounts of energy drinks showed association with higher rates of unhealthy behaviors, including smoking and alcohol abuse, and with hyperactivity and insomnia.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17448605", "title": "Truvia", "section": "Section::::Safety and health effects.:Gastrointestinal side effects.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 812, "text": "Most of Truvia's side effects are related to erythritol which is a sugar alcohol. Sugar alcohols are valuable as sweeteners since they cause little to no rise in blood glucose levels as sugar does. However, the downside to most sugar alcohols is their propensity to cause gastrointestinal side effects. Erythritol is unique in that among these compounds it has one of the most favorable nutritional profiles. Erythritol is almost as sweet as sucrose, is virtually non-caloric, and cannot be fermented by gut bacteria present in the small intestine. According to Truvia's website, up to 90% of erythritol is absorbed by the small intestine and excreted unchanged in the urine. Only a small amount of it will reach the large intestine where GI symptoms, like bloating, flatulence, and cramping usually originate. 
\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2039262", "title": "Muscovado", "section": "Section::::History.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 504, "text": "Raw sugar was brought to port in a variety of purities that could be sold either as raw sugar direct to market for making alcohol, or as muscovado exported sugar refineries such as those in Glasgow or London. In the British Empire, raw sugars that had been refined enough to lose most of the molasses content were termed raw and deemed higher quality, while poor quality sugars with a high molasses content were referred to as muscovado, though the term \"brown sugar\" was sometimes used interchangeably.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "27712", "title": "Sugar", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 1066, "text": "Sucrose is used in prepared foods (e.g. cookies and cakes), is sometimes added to commercially available beverages, and may be used by people as a sweetener for foods (e.g. toast and cereal) and beverages (e.g. coffee and tea). The average person consumes about of sugar each year, or in developed countries, equivalent to over 260 food calories per day. As sugar consumption grew in the latter part of the 20th century, researchers began to examine whether a diet high in sugar, especially refined sugar, was damaging to human health. Excessive consumption of sugar has been implicated in the onset of obesity, diabetes, cardiovascular disease, dementia, and tooth decay. Numerous studies have tried to clarify those implications, but with varying results, mainly because of the difficulty of finding populations for use as controls that consume little or no sugar. In 2015, the World Health Organization recommended that adults and children reduce their intake of free sugars to less than 10%, and encouraged a reduction to below 5%, of their total energy intake.\n", "bleu_score": null, "meta": null } ] } ]
null
21v9mp
the scene in trading places with dan ackroyd and eddie murphy where they drive the stock price down and make millions of dollars while at the same time bankrupting the duke brothers. how does this work?
[ { "answer": "Having foreknowledge of the state of the Orange crop they we're able to purchase contracts and a price X. Once the report came out and the price started to drop, they started buying. This price Y, was less than the price of the contracts they already had at price X, therefore they pocket the difference. There is actually an Eddie Murphy rule that was passed by US congress against the insider trading aspect of them making such a deal with the information they had.", "provenance": null }, { "answer": "Basically they bluffed. \n\nThe Dukes thought that the orange harvest was poor due to the falsified weather report they received. They began buying orange stock before the crop report was released, this lead other investors to assume the Dukes had inside information, so they bought too.\nOrange stock price goes up.\n\nWinthorpe and Valentine then start to sell orange stock that they don't yet have at the inflated rate (this is allowed because future contracts represent the obligation to provide the stock and you can enter an agreement to source the stock at a later date). So the pair now have made a colossal amount of money but they don't have any stock to actually give.\n\nThen the crop report comes out, the orange harvest was good, the stock is worthless. So now Winthorpe and Valentine just have to buy what they need at the new low rate.\n\nNow some maths: They start to sell when the stock is $1.42 per unit, they buy when it is roughly $0.29 per unit. So every unit they sell has over 489% profit. The movie doesn't state how many units they bought but they pooled a large amount of money together and they could easily have made millions, considering the same thing happens to the Dukes in reverse and bankrupts them. \n\nEDIT: a word", "provenance": null }, { "answer": "They had stolen the report on the orange crop, and gave a fake one with the opposite information to the Duke brothers. In reality, the crop was fine, so oranges would end up being cheap, but the fake report said that the winter had destroyed much of the crop, which would have made oranges more expensive.\n\nSo the Dukes, thinking that oranges were going to get really expensive, instructed their man to buy as much FCOJ shares as possible, as they would gain value after the report came out. As they buy, the price goes up. Then when the real report came out, the price plummeted and the Dukes lost all their money.\n\nMeanwhile, Akroyd/Murphy did the opposite--they sold FCOJ at the high price caused by the Dukes' purchasing, then bought it back when the price plummeted. This is called *short selling*, a stock market trick where you \"sell high, buy low\", as opposed to the traditional \"buy low, sell high\".", "provenance": null }, { "answer": "Recent thread on this: _URL_1_\n\nOlder thread on this: _URL_0_\n\nMy Answer from older thread:\n\n/u/Pobody is right, but to clarify a bit, here's the stages (numbers are made up, just to illustrate):\n \n1.) **That morning** - The Dukes have a fake crop report, saying that there is an orange shortage. They take out a short term loan so that they can buy orange futures at the current price. Say the price is $10, because people are unsure of whether there will be a shortage.\n \n2.) **That afternoon** - Murphy and Akroyd show up. The price now is at $15, because the Dukes, and many others following the Dukes, have been buying all day, driving up the price. Murphy and Akroyd start to SELL. 
They don't own any shares (or own just a few), but that's alright, since they don't have to \"settle up,\" meaning actually give the shares they \"have\" to the person buying, until the end of the day.\n\n3.) **Just before the bell** - the crop report comes out. There is no orange shortage. Now, the price drops from $15 to $5, since everyone knows there are too many oranges, and so everyone starts selling. Once the price gets low enough, Murphy and Akroyd start to buy, which is easy, since everyone else is trying to sell. They buy enough to cover the sales they made earlier in the day, and pocket the difference (15-5 = $10 per share). The Dukes, on the other hand, are stuck. They sell what they can, but every sale is a loss.\n \n4.) **after the bell** - the Dukes haven't made enough to cover their short term loans. They are bankrupt.\n \n5.) **way after the bell** - Murphy's unrelated, identical twin, a wealthy African Prince, happens upon the Dukes living on the street, gifts them tens of thousands of dollars. they are back.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2484768", "title": "Algorithmic trading", "section": "Section::::Issues and developments.:Concerns.\n", "start_paragraph_id": 93, "start_character": 0, "end_paragraph_id": 93, "end_character": 412, "text": "Algorithmic and high-frequency trading were shown to have contributed to volatility during the May 6, 2010 Flash Crash, when the Dow Jones Industrial Average plunged about 600 points only to recover those losses within minutes. At the time, it was the second largest point swing, 1,010.14 points, and the biggest one-day point decline, 998.5 points, on an intraday basis in Dow Jones Industrial Average history.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "351519", "title": "Merton Miller", "section": "Section::::Biography.:Career.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 548, "text": "He served as a public director on the Chicago Board of Trade 1983–85 and the Chicago Mercantile Exchange from 1990 until his death in Chicago on June 3, 2000. In 1993, Miller waded into the controversy surrounding $2 billion in trading losses by what was characterized as a rogue futures trader at a subsidiary of Metallgesellschaft, arguing in the \"Wall Street Journal\" that management of the subsidiary was to blame for panicking and liquidating the position too early. In 1995, Miller was engaged by Nasdaq to rebut allegations of price fixing.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21402010", "title": "Global financial crisis in October 2008", "section": "Section::::Beginning of October.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 957, "text": "On October 10, within the first five minutes of the trading session on Wall Street, the Dow Jones Industrial Average plunged 697 points, falling below 7900 to its lowest level since March 17, 2003. Later in the afternoon, the Dow made violent swings back and forth across the breakeven line, toppling as much as 600 points and rising 322 points. The Dow ended the day losing only 128 points, or 1.49%. Trading on New York Stock Exchange closed for the week with the Dow at 8,451, down 1,874 points, or 18% for the week, and after 8 days of losses, 40% down from its record high October 9, 2007. 
Trading on Friday was marked by extreme volatility with a steep loss in the first few minutes followed by a rise into positive territory, closing down at the end of the day. In S&P100 some financial corporate showing signals upwards also. President George W. Bush reassured investors that the government will solve the financial crisis gripping world economies.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "63074", "title": "Stock market crash", "section": "Section::::Major crashes in the United States.:Wall Street Crash of 1929.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 451, "text": "On Black Monday, the Dow Jones Industrial Average fell 38.33 points to 260, a drop of 12.8%. The deluge of selling overwhelmed the ticker tape system that normally gave investors the current prices of their shares. Telephone lines and telegraphs were clogged and were unable to cope. This information vacuum only led to more fear and panic. The technology of the New Era, previously much celebrated by investors, now served to deepen their suffering.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3698771", "title": "K-tel", "section": "Section::::Dot-com bubble's effects on K-tel.\n", "start_paragraph_id": 25, "start_character": 0, "end_paragraph_id": 25, "end_character": 570, "text": "In mid-April 1998, during the dot-com bubble, news that the company was expanding its business to the Internet sent the thinly traded stock shooting from about $3 to over $7 in one day (3:1 split adjusted). The short interest of the stock swelled. The price of the stock peaked at about $34 in early May, and began to decline, reaching $12 in November and eventually pennies. The sudden upswing was fueled mainly by a large short squeeze. Traders with short positions either \"bought in\" or were forced to cover positions at very high prices because of the great losses.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "46638", "title": "Dow Jones & Company", "section": "Section::::Ownership.:Buyout offer.\n", "start_paragraph_id": 55, "start_character": 0, "end_paragraph_id": 55, "end_character": 337, "text": "On July 17, 2007, The \"Wall Street Journal\", a unit of Dow Jones, reported that the company and News Corporation had agreed in principle on a US$5 billion takeover, that the offer would be put to the full Dow Jones board on the same evening in New York, and that the offer valued the company at 70% more than the company's market value.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1599923", "title": "Sheldon Adelson", "section": "Section::::Personal life.:Wealth.\n", "start_paragraph_id": 115, "start_character": 0, "end_paragraph_id": 115, "end_character": 780, "text": "In 2008, the share prices of the Las Vegas Sands Corp. plunged. In November 2008, Las Vegas Sands Corp. announced it might default on bonds that it had outstanding, signaling the potential bankruptcy of the concern. Adelson lost $4 billion in 2008, more than any other American billionaire. In 2009, his net worth had declined from approximately $30 billion to $2 billion, a drop of 93%. He told ABC News \"So I lost $25 billion. I started out with zero...(there is) no such thing as fear, not to an entrepreneur. Concern, yes. 
Fear, no.\" In the \"Forbes\" 2009 world billionaires list, Adelson's ranking dropped to #178 with a net worth of $3.4 billion, but by 2011, after his business had recovered, he was ranked as the world's 16th-richest man with a net worth of $23.3 billion.\n", "bleu_score": null, "meta": null } ] } ]
null
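The "Now some maths" part of the entry above is just short-sale arithmetic: sell at the inflated price, then buy back (cover) after the real crop report collapses the price. A minimal Python sketch of that calculation follows; the $1.42 and $0.29 figures are the ones quoted in the answer, while the function name and the one-million-unit position size are made up for illustration.

```python
# Sketch of the short sale described above: sell futures you don't own at the
# inflated price, then buy them back ("cover") once the real crop report has
# collapsed the price. Profit is (sell price - cover price) per unit.

def short_sale(sell_price: float, cover_price: float, units: int) -> None:
    profit_per_unit = sell_price - cover_price
    print(f"profit per unit:        ${profit_per_unit:.2f}")
    print(f"total profit:           ${profit_per_unit * units:,.2f}")
    print(f"return on cover cost:   {profit_per_unit / cover_price:.0%}")
    print(f"sell/cover price ratio: {sell_price / cover_price:.1f}x")


if __name__ == "__main__":
    # $1.42 and $0.29 are the per-unit prices quoted in the answer; the
    # one-million-unit position is an arbitrary, hypothetical size.
    short_sale(sell_price=1.42, cover_price=0.29, units=1_000_000)
    # Roughly $1.13 per unit and ~390% return on the buy-back cost (the sale
    # price is about 4.9x the cover price). The Dukes face the mirror image:
    # they bought high and are forced to sell low.
```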
14axqf
How does hilbert spaces describe states of systems?
[ { "answer": "In quantum mechanics, the state of a system is specified as a vector in a Hilbert space. Hermitian operators on the Hilbert space are associated with observables, in that their eigenvalues are interpreted as the possible outcomes of measuring the observable.\n\nThe other key interpretation bit is as follows. Suppose that we are given an orthonormal basis of the space consisting of eigenvectors of some Hermitian operator A. Then consider a general state and write it in components in this basis. Then (ignoring degeneracy) the mod squared of the component for some basis vector is interpreted as the probability that measuring the observable for A will yield the eigenvalue corresponding to that basis vector.\n\nI should also mention that states in quantum mechanics evolve in time via a unitary operator, that is, there is some set of operators {U(t)} for t \\in R (with lots of constraints, including some form of continuity/smoothness) such that U(t) acting on the state of the system at time 0 gives the state of the system at time t. This is critical because unitary operators preserve norms. States in quantum mechanics are required to have norm 1 (for the most part) because of the probability interpretations, i.e. the probability of getting *something* in a measurement must always always 1, so having non-unitary time evolution would result in something roughly like disappearing particles.\n\n(Aside: there's also a picture where the states are static and the operators evolve in time, but that's completely equivalent.)\n\nAlso, at least in the prevailing interpretation of quantum mechanics, measuring an observable results in the system immediately changing its state to an eigenstate of that observable.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "546101", "title": "State space", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 450, "text": "In the theory of discrete dynamical systems, a state space is the set of all possible configurations of a system. For example, a system in queueing theory defining the number of customers in a line would have state space {0, 1, 2, 3, ...}. State spaces can be either infinite or finite. An example of a finite state space is that of the toy problem Vacuum World, in which there are a limited set of configurations that the vacuum and dirt can be in.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30876419", "title": "Quantum state", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 388, "text": "Hilbert space is a generalization of the ordinary Euclidean space and it contains all possible pure quantum states of the given system. If this Hilbert space, by choice of representation (essentially a choice of basis corresponding to a complete set of observables), is exhibited as a function space (a Hilbert space in its own right), then the representatives are called wave functions.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "429425", "title": "Probability amplitude", "section": "Section::::Overview.:Mathematical.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 373, "text": "In a formal setup, any system in quantum mechanics is described by a state, which is a vector , residing in an abstract complex vector space, called a Hilbert space. It may be either infinite- or finite-dimensional. 
A usual presentation of that Hilbert space is a special function space, called , on certain set , that is either some configuration space or a discrete set.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4542", "title": "Bra–ket notation", "section": "Section::::Composite bras and kets.\n", "start_paragraph_id": 120, "start_character": 0, "end_paragraph_id": 120, "end_character": 431, "text": "Two Hilbert spaces and may form a third space by a tensor product. In quantum mechanics, this is used for describing composite systems. If a system is composed of two subsystems described in and respectively, then the Hilbert space of the entire system is the tensor product of the two spaces. (The exception to this is if the subsystems are actually identical particles. In that case, the situation is a little more complicated.)\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9718351", "title": "State space (physics)", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 439, "text": "Specifically, in quantum mechanics a state space is a complex Hilbert space in which the possible instantaneous states of the system may be described by unit vectors. These state vectors, using Dirac's bra–ket notation, can often be treated like coordinate vectors and operated on using the rules of linear algebra. This Dirac formalism of quantum mechanics can replace calculation of complicated integrals with simpler vector operations.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "429425", "title": "Probability amplitude", "section": "Section::::Overview.:Mathematical.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 592, "text": "Mathematically, many presentations of the system's Hilbert space can exist. We shall consider not an arbitrary one, but a one for the observable in question. A convenient configuration space is such that each point produces some unique value of . For discrete it means that all elements of the standard basis are eigenvectors of . In other words, shall be diagonal in that basis. Then formula_6 is the \"probability amplitude\" for the eigenstate . If it corresponds to a non-degenerate eigenvalue of , then formula_7 gives the probability of the corresponding value of for the initial state .\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20598932", "title": "Hilbert space", "section": "Section::::Applications.:Quantum mechanics.\n", "start_paragraph_id": 123, "start_character": 0, "end_paragraph_id": 123, "end_character": 1113, "text": "In the mathematically rigorous formulation of quantum mechanics, developed by John von Neumann, the possible states (more precisely, the pure states) of a quantum mechanical system are represented by unit vectors (called \"state vectors\") residing in a complex separable Hilbert space, known as the state space, well defined up to a complex number of norm 1 (the phase factor). In other words, the possible states are points in the projectivization of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system; for example, the position and momentum states for a single non-relativistic spin zero particle is the space of all square-integrable functions, while the states for the spin of a single proton are unit elements of the two-dimensional complex Hilbert space of spinors. Each observable is represented by a self-adjoint linear operator acting on the state space. 
Each eigenstate of an observable corresponds to an eigenvector of the operator, and the associated eigenvalue corresponds to the value of the observable in that eigenstate.\n", "bleu_score": null, "meta": null } ] } ]
null
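The answer in the entry above can be made concrete with a few lines of numerical linear algebra. The sketch below is illustrative only (a two-dimensional Hilbert space, i.e. a qubit, with arbitrary numbers, none of it taken from the original thread): the Hermitian matrix plays the role of the operator A, the squared components of the state in its eigenbasis give the measurement probabilities, and a hand-built unitary stands in for U(t).

```python
# Minimal numerical illustration of the answer above, in a 2-dimensional
# Hilbert space: states are unit vectors, a Hermitian operator's eigenvalues
# are the possible measurement outcomes, squared components in its eigenbasis
# are probabilities, and unitary evolution preserves the norm.
import numpy as np

# Hermitian observable (Pauli-Z); its eigenvalues +1 and -1 are the outcomes.
A = np.array([[1, 0],
              [0, -1]], dtype=complex)
eigenvalues, eigenvectors = np.linalg.eigh(A)   # columns form an orthonormal basis

# An arbitrary state, normalised to unit length.
psi = np.array([3, 4j], dtype=complex)
psi /= np.linalg.norm(psi)

# Born rule: |<eigenvector, psi>|^2 for each eigenvector.
probabilities = np.abs(eigenvectors.conj().T @ psi) ** 2
for value, p in zip(eigenvalues, probabilities):
    print(f"outcome {value.real:+.0f}: probability {p:.2f}")
print("total probability:", probabilities.sum())          # 1.0

# A unitary U = exp(-i * theta * X) built by hand; it preserves the norm,
# so the probabilities still sum to 1 after "time evolution".
theta = 0.7
U = np.array([[np.cos(theta), -1j * np.sin(theta)],
              [-1j * np.sin(theta), np.cos(theta)]])
print("norm after evolution:", np.linalg.norm(U @ psi))   # 1.0 (up to rounding)
```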
zg9rc
Is it possible to reflect or focus gravity with some kind of lens or mirror?
[ { "answer": "Yes, but we don't really have the technology to do that. Here's an example of gravitational lensing, [this picture](_URL_0_) of a distance quasar known as the Einstein Cross shows what seems like 4 copies of it due to the light being bent on it's way to Earth and then continuing onward. [Here's an artist's rendition of Gravitational Waves recently discovered around binary system J0651](_URL_1_). ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "266861", "title": "Reflecting telescope", "section": "Section::::Technical considerations.:Optical errors.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 725, "text": "The use of mirrors avoids chromatic aberration but they produce other types of aberrations. A simple spherical mirror cannot bring light from a distant object to a common focus since the reflection of light rays striking the mirror near its edge do not converge with those that reflect from nearer the center of the mirror, a defect called spherical aberration. To avoid this problem most reflecting telescopes use parabolic shaped mirrors, a shape that can focus all the light to a common focus. Parabolic mirrors work well with objects near the center of the image they produce, (light traveling parallel to the mirror's optical axis), but towards the edge of that same field of view they suffer from off axis aberrations:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1032610", "title": "Focus (optics)", "section": "", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 927, "text": "Diverging (negative) lenses and convex mirrors do not focus a collimated beam to a point. Instead, the focus is the point from which the light appears to be emanating, after it travels through the lens or reflects from the mirror. A convex parabolic mirror will reflect a beam of collimated light to make it appear as if it were radiating from the focal point, or conversely, reflect rays directed toward the focus as a collimated beam. A convex elliptical mirror will reflect light directed towards one focus as if it were radiating from the other focus, both of which are behind the mirror. A convex hyperbolic mirror will reflect rays emanating from the focal point in front of the mirror as if they were emanating from the focal point behind the mirror. Conversely, it can focus rays directed at the focal point that is behind the mirror towards the focal point that is in front of the mirror as in a Cassegrain telescope.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3299797", "title": "Liquid mirror telescope", "section": "Section::::Advantages and disadvantages.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 912, "text": "The greatest disadvantage is that the mirror can only be pointed straight up. Research is underway to develop telescopes that can be tilted, but currently if a liquid mirror were to tilt out of the zenith, it would lose its shape. Therefore, the mirror's view changes as the Earth rotates and objects cannot be physically tracked. An object can be briefly electronically tracked while in the field of view by shifting electrons across the CCD at the same speed as the image moves; this tactic is called time delay and integration or drift scanning. Some types of astronomical research are unaffected by these limitations, such as long-term sky surveys and supernova searches. 
Since the universe is believed to be isotropic and homogeneous (this is called the Cosmological Principle), the investigation of its structure by cosmologists can also use telescopes which are highly reduced in their direction of view.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3299797", "title": "Liquid mirror telescope", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 813, "text": "Another difficulty is that a liquid metal mirror can only be used in zenith telescopes, i.e., that look straight up, so it is not suitable for investigations where the telescope must remain pointing at the same location of inertial space (a possible exception to this rule may exist for a mercury mirror space telescope, where the effect of Earth's gravity is replaced by artificial gravity, perhaps by rotating the telescope on a very long tether, or propelling it gently forward with rockets). Only a telescope located at the North Pole or South Pole would offer a relatively static view of the sky, although the freezing point of mercury and the remoteness of the location would need to be considered. A very large telescope already exists at the South Pole, but the North Pole is located in the Arctic Ocean.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5732433", "title": "Curved mirror", "section": "Section::::Convex mirrors.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 585, "text": "A convex mirror or diverging mirror is a curved mirror in which the reflective surface bulges towards the light source. Convex mirrors reflect light outwards, therefore they are not used to focus light. Such mirrors always form a virtual image, since the focal point (\"F\") and the centre of curvature (\"2F\") are both imaginary points \"inside\" the mirror, that cannot be reached. As a result, images formed by these mirrors cannot be projected on a screen, since the image is inside the mirror. The image is smaller than the object, but gets larger as the object approaches the mirror.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "266611", "title": "Optical telescope", "section": "Section::::Astronomical research telescopes.:Large reflectors.\n", "start_paragraph_id": 118, "start_character": 0, "end_paragraph_id": 118, "end_character": 305, "text": "BULLET::::- There are technical difficulties involved in manufacturing and manipulating large-diameter lenses. One of them is that all real materials sag in gravity. A lens can only be held by its perimeter. A mirror, on the other hand, can be supported by the whole side opposite to its reflecting face.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2482259", "title": "Phoropter", "section": "", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 499, "text": "The lenses within a phoropter refract light in order to focus images on the patient's retina. The optical power of these lenses is measured in 0.25 diopter increments. By changing these lenses, the examiner is able to determine the spherical power, cylindrical power, and cylindrical axis necessary to correct a person's refractive error. The presence of cylindrical power indicates the presence of astigmatism, which has an axis measured from 0 to 180 degrees away from being aligned horizontally.\n", "bleu_score": null, "meta": null } ] } ]
null
23ezg5
When did testing new hires for drugs become standard practice, and why?
[ { "answer": "Alrighty guys, because I've already had to remove ten comments similar to this, I'm going to leave a top-level mod post here. Please remember when posting here that this is not /r/Politics. We are not interested in contemporary politics, your opinions on current policies of countries, two word answers, one line answers, etc. [For more info on what makes a good answer, please see here.](_URL_0_) If you're interested in our [rules,](_URL_1_) there's a link in the sidebar for your perusal. \n\nHave a great day :)", "provenance": null }, { "answer": "Something to keep in mind is practical drug tests didn't exist until the first part of the 20th century. The first drug testing in modern times was probably the Olympics in 1966. And while anti-doping is a fascinating topic, what you are probably asking about is the war on drugs. \n\n1971 - Operation Golden flow, Essentially nixon begins drug testing the military, This is mainly aimed at heroin use and is considered successful.\n\nIt starts in the main stream with the Reagan's war on drugs. he start pushing for workplace testing around 1986, his September speech on the crack epidemic mentions the dangers of taxi drivers and other transportation professionals doing drugs. In 1988 he signs the Drug Free Workplace Act. This really laid the ground work for workplace testing, as it put the imperative on the employer to certify that they maintained a drug free workplace if they wanted to keep federal contracts. \n\nThis was expanded in 1991 to get OSHA involved with the omnibus transportation act. Which basically put anyone in a transportation job that required operating heavy machines like trains, trucks, planes to be tested as well. \n\nThe last little bit of legislative history is the states began passing laws that allowed companies that had drug free workplace policies to discounts on the workers compensation insurance they were required to have.\n\nI'd have to break the 20 year rule to really discuss state legislation in depth.\n\nOne thing that is interesting is the requirements of a \"Drug free workplace policy\" differ, The federal acts are more stringent, whereas the state ones often only require something the along the lines of offering treatment programs and testing when there is an accident. Many HR departments will lead towards a more safe than sorry attitude. \n\nTL;DR: sometime between 1988-1996 depending on what state you are in. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "34069", "title": "Winter Olympic Games", "section": "Section::::Controversy.:Doping.\n", "start_paragraph_id": 62, "start_character": 0, "end_paragraph_id": 62, "end_character": 854, "text": "In 1967 the IOC began enacting drug testing protocols. They started by randomly testing athletes at the 1968 Winter Olympics. The first Winter Games athlete to test positive for a banned substance was Alois Schloder, a West German hockey player, but his team was still allowed to compete. During the 1970s testing outside of competition was escalated because it was found to deter athletes from using performance-enhancing drugs. The problem with testing during this time was a lack of standardisation of the test procedures, which undermined the credibility of the tests. It was not until the late 1980s that international sporting federations began to coordinate efforts to standardise the drug-testing protocols. 
The IOC took the lead in the fight against steroids when it established the independent World Anti-Doping Agency (WADA) in November 1999.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "175596", "title": "Animal testing", "section": "Section::::Research classification.:Drug testing.\n", "start_paragraph_id": 98, "start_character": 0, "end_paragraph_id": 98, "end_character": 209, "text": "Before the early 20th century, laws regulating drugs were lax. Currently, all new pharmaceuticals undergo rigorous animal testing before being licensed for human use. Tests on pharmaceutical products involve:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13666216", "title": "Drug-Free Workplace Act of 1988", "section": "Section::::History.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 450, "text": "The Drug Free Workplace Act of 1988 didn't come into effect until the late 1980s, when more employers began attempting to eliminate drugs in the workplace. Before the Drug Free Workplace Act, there really was not a federal regulation that employers could use to enforce regulations on employees using drugs. Even though drug testing really didn't come into effect until late 70s early 80s it can still be traced back to about the early 20th century.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "44287957", "title": "NCAA drug testing", "section": "Section::::History.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 1307, "text": "The National Collegiate Athletic Association did not start drug testing athletes until 1986, and even then it was only athletes or teams that made it to championship or bowl games. Although athletes were not tested until 1986 in the year 1970 the NCAA council founded a drug education committee. “The Drug Education Committee conducts a survey of 1,000 male student-athletes in the Big Ten Conference; 40 percent of respondents said that drug use was a slight or growing problem among varsity athletes”. In 1986 NCAA drug-testing program was adopted at the NCAA Convention. The drug testing started that following fall with only championships and bowl games. The following year a Stanford diver filed a lawsuit claiming that this drug testing policy violated his privacy rights. California Supreme Court rules in favor of the NCAA in the privacy-rights lawsuit, saying the Association was \"well within its legal rights\" in adopting a drug-testing program. In 2006 the Year-round testing program is expanded into the summer months. That same year Division III Presidents Council approves a two-year drug-education and testing pilot program. “Today, 90 percent of Division I, 65 percent of Division II and 21 percent of Division III schools conduct their own drug-testing programs in addition to the NCAA’s”.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "44287957", "title": "NCAA drug testing", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 795, "text": "The NCAA adopted a drug testing program in 1986, the year after the executive committee formed the Special NCAA Committee on Drug Testing. The drug test ranges from testing player-enhancement drugs to marijuana, and if a student fails a drug test then he or she loses one year of eligibility and is not allowed to compete in events for the first offense. 
However, not all students are tested because they are selected at random, but students are subject to be tested at any point in the year after the year-round testing program was adopted in 1990. Of the 400,000 athletes competing in the NCAA, around 11,000 drug tests were administered in 2008-09 when the last statistics were available. That number is expected to increase as drugs become more prevalent and easily accessible year by year.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36578799", "title": "The Abbey School, Faversham", "section": "Section::::History.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 805, "text": "In September 2004, with the support of parents, the school became the first to introduce random drug tests which commenced in January 2005. The tests could be performed only when parents gave permission for their child to be tested. Students who refused to be available for testing or tested positive would undergo a counselling program. Critics of the program stated an infringement of privacy would lead to legal action. At the school, 20 students were tested weekly and were given a mouth swab for cannabis, cocaine and heroin. Supporters of the program including former Prime Minister Tony Blair who endorsed Walker's efforts and called for the program to be expanded. In 2005, the school reported that the scheme helped to boost examination results to 40% compared with 32% in 2004, and 28% in 2003.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34382035", "title": "Phases of clinical research", "section": "Section::::Pre-clinical studies.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 466, "text": "Before pharmaceutical companies start clinical trials on a drug, they conduct extensive pre-clinical studies. These involve in vitro (test tube or cell culture) and in vivo (animal) experiments using wide-ranging doses of the study drug to obtain preliminary efficacy, toxicity and pharmacokinetic information. Such tests assist pharmaceutical companies to decide whether a drug candidate has scientific merit for further development as an investigational new drug.\n", "bleu_score": null, "meta": null } ] } ]
null
5ciq5f
what were the united states democratic party's general views on gun control in the early 90s?
[ { "answer": "I'm not a historian, but they were very pro gun control legislation. \n\nThe Brady Bill finally came into existence in 1993 after a long fight with gun rights opponents over it's provisions. Interesting enough it was the Pro-gun side that wanted the background checks we currently have included.\n\nThe \"assault weapon ban\" was passed in 1994. Some political pundits blame that legislation for the Democrats losing control of congress. \n\nPresident Clinton also signed an executive order that made all gun dealers \"have a storefront\", thus adding a huge expense to gun dealers and especially gunsmiths who had a machine shop in their basement. FFL's fell Approx. 80% in the decade after this order.\n\nBasically, the views then echoed the recent views held. Although there was a mellowing of the views or at least less of a desire to push them after backlash was felt from the AWB, and that could be a possibility again after the results from the recent election.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "13298666", "title": "Libertarian Democrat", "section": "Section::::History.:Modern era.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 392, "text": "After election losses in 2004, the Democratic Party reexamined its position on gun control which became a matter of discussion, brought up by Howard Dean, Bill Richardson, Brian Schweitzer and other Democrats who had won in states where Second Amendment rights are important to many voters. The resulting stance on gun control brought in libertarian minded voters, influencing other beliefs.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5043544", "title": "Democratic Party (United States)", "section": "Section::::Political positions.:Legal issues.:Gun control.\n", "start_paragraph_id": 148, "start_character": 0, "end_paragraph_id": 148, "end_character": 591, "text": "With a stated goal of reducing crime and homicide, the Democratic Party has introduced various gun control measures, most notably the Gun Control Act of 1968, the Brady Bill of 1993 and Crime Control Act of 1994. However, some Democrats, especially rural, Southern, and Western Democrats, favor fewer restrictions on firearm possession and warned the party was defeated in the 2000 presidential election in rural areas because of the issue. In the national platform for 2008, the only statement explicitly favoring gun control was a plan calling for renewal of the 1994 Assault Weapons Ban.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "46778080", "title": "Political positions of the Democratic Party", "section": "Section::::Legal issues.:Gun control.\n", "start_paragraph_id": 41, "start_character": 0, "end_paragraph_id": 41, "end_character": 597, "text": "With a stated goal of reducing crime and homicide, the Democratic Party has introduced various gun control measures, most notably the Gun Control Act of 1968, the Brady Bill of 1993, and Crime Control Act of 1994. However, some Democrats, especially rural, Southern, and Western Democrats, favor fewer restrictions on firearm possession and warned that the party was defeated in the 2000 presidential election in rural areas because of the issue. 
In the national platform for 2008, the only statement explicitly favoring gun control was a plan calling for renewal of the 1994 Assault Weapons Ban.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30371311", "title": "2011 Tucson shooting", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 814, "text": "Following the shooting, American and international politicians expressed grief and condemnations. Gun control advocates pushed for increased restrictions on the sale of firearms and ammunition, specifically high-capacity magazines. Some commentators criticized the use of harsh political rhetoric in the United States, with a number blaming the political right wing for the shooting; in particular, Sarah Palin was criticized for a poster by her political action committee that featured stylized crosshairs on an electoral map. Palin rejected claims that she bore responsibility for the shooting, and others defended her by noting that Loughner hated all politicians regardless of their affiliation. President Barack Obama led a nationally televised memorial service on January 12, and other memorials took place.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "51337334", "title": "Public opinion on gun control in the United States", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 968, "text": "Public opinion on gun control in the United States has been tracked by numerous public opinion organizations and newspapers for more than 20 years. There have also been major gun policies that affected American opinion in the 1990s. Throughout these polling years there are different gun control proposals that show promise for bipartisan action. Over the years listed there have been major tragedies that have affected public opinion. Most of the tragedies are school shootings. There have also been a growth in states around the United States taking more drastic measures on gun control. As of late February and early March 2018, a majority of Americans support stricter gun laws, including wide support for universal background check and mandatory waiting periods for gun purchases and including support for banning assault weapons, adding felons and mental illness patients to background check systems, and prohibiting sales of guns to persons under 21 years old.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "59628", "title": "School shooting", "section": "Section::::Political impact.\n", "start_paragraph_id": 99, "start_character": 0, "end_paragraph_id": 99, "end_character": 1078, "text": "Due to the political impact, this has spurred some to press for more stringent gun control laws. In the United States, the National Rifle Association is opposed to such laws, and some groups have called for fewer gun control laws, citing cases of armed students ending shootings and halting further loss of life, and claiming that the prohibitions against carrying a gun in schools do not deter the gunmen. One such example is the Mercaz HaRav Massacre, where the attacker was stopped by a student, Yitzhak Dadon, who shot him with his personal firearm which he lawfully carried concealed. At a Virginia law school, there is a disputed claim that three students retrieved pistols from their cars and stopped the attacker without firing a shot. 
Also, at a Mississippi high school, the vice principal retrieved a firearm from his vehicle and then eventually stopped the attacker as he was driving away from the school. In other cases, such as shootings at Columbine and Red Lake High Schools, the presence of an armed police officer did little to nothing to prevent the killings.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "56652666", "title": "Political Victory Fund", "section": "Section::::History.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 459, "text": "With passage of the Gun Control Act of 1968, an increasing number of NRA members, who has previously downplayed gun control issues, became more involved in gun politics and gun rights. Along with the creation of its lobbying arm, the Institute for Legislative Action (NRA-ILA), with activist Harlon Carter as director, in 1976 the NRA established its non-partisan political action committee (PAC), the Political Victory Fund, in time for the 1976 elections. \n", "bleu_score": null, "meta": null } ] } ]
null
bekodu
Did wooden sailing ships get struck by lightning and catch on fire all the time? Furthermore, later in the age of sail, did magazines explode from this?
[ { "answer": "It is by chance I have a paper (which is unfortunately for the rest of you in Croatian language) that tried to round up notes in official chronicles and ships logs of any mention of lighting strikes in the Adriatic sea for the period of 1300-1800. Most of the notes are just mentions of storms in passage, but some are recounts of tragic events like lightning hitting tall buildings like church towers, often with tragic consequences for bell ringers and clergy inside. Forts would also be hit, and occasionally stored powder inside would explode with horrifying consequences for the surrounding area. \n\nA lot of notes are about ships, but most just noting surviving storms unharmed: But luckily (or unluckily for people involved) some do mention ships being directly hit by lightning strikes. Let's recount the mentioned:\n\nIn 1497 a very brief note in a ship's log says lightning struck the top of the mast. The crew despaired and thought the ship would be lost, yet it appears it was successfully managed, even though we have no extra description of what transpired. \nIn 1501 a note from local chronicles from the island of Hvar said a galley passing by got struck by lightning in the mast and had to get a new mast from the island. \nIn 1530, location unclear, there is a note that a mast of a ship was destroyed by lightning, and the ship asked to get a replacement mast. \nIn 1545, events log that a ship off the coast of southern Italy was struck twice by lightning, but this time in the stern, making it burst into pieces and hit the coastal reefs. \nAn event near Kotor in 1570 describes an event very close to your hypothetical scenario. An anchored commanding galley was struck by lightning, causing the \"forty-year-old, dry wood\" to burst into flames, spreading to rigging and sails, until it reached powder and ammo and blew the ship up into pieces.\n\nThe crews are recorded as being very frightened of lightning striking, even giving birth to several superstitions, like the recorded belief that striking axes in the mast would protect it from strikes, which was mentioned in these 16th-century notes.\n\n---\nTo move forward in time a bit, there is a paper from 1762, a time of tall ships, discussing ways to prevent lightning strikes. In it, there are few descriptions of lightning strikes. The most detailed is the short description of ship Harriet which apparently got struck by lightning that \"split into pieces\" her main mast, main topmast and topgallant mast, made some damage to the bulk-heads, beams, hull, and other parts, as well as set fire to rigging and created much smoke, making crew believe there was a fire. If there was (it appears not) the crew extinguished it anyway. There is also a short note on a ship Bellona, whose main mast got split into pieces.\n\n\nFor even latter period we have a document by Harris from all the way back 1838. In the paper, the appendix lists damages by lightning to ships of Royal Navy in the years 1793-1830s. It records 174 cases of damage by lighting and concludes:\n\n > From about one hundred cases in the above list; the particulars of which have been ascertained, it appears that about one half the ships struck by lightning, are struck on the main-mast; one quarter on the fore-mast; one-twentieth on the mizen-mast, aud not above one in a hundred on the bowsprit. \n\n > About one ship in six is set on fire in some part of the masts, sails, or rigging. 
\nIn one half the cases, some of the crew are either killed or wounded, or both; the numbers are 62 killed, 111 wounded; this is exclusive of one case in which nearly all the crew perished; of twelve cases in which the numbers killed or wounded have been set down as several or many. In these 100 cases there were damaged or destroyed 93 lower masts, principally line of battle ships and frigates, 83 top-masts, 60 top-gallant masts. \nIn one-tenth of these cases, the services of the ships were urgently demanded. \n\nLooking at the list, only one frigate of 44 guns was totally destroyed by catching fire after getting struck. \n\n---\n\nThe above events are hardly enough to paint a complete picture, but we can make some conclusions. While ships more often passed through storms untouched by lightning, it was quite possible to be struck. This would be very dangerous, but it seems most damage to the ship was confined to the splitting (or bursting into pieces) of the mast. A serious setback, but one that could be handled by getting a replacement mast. More threatening for the entire ship was the possible start of a fire, as fire was always very dangerous on a ship, especially wooden ones. Yet in this small dataset, we luckily see very few such cases where fire has spread uncontrollably to destroy the ships. In most cases the fire was contained, or limited to less dangerous parts of the ships. For members of the crew, this was thin comfort, as even without fire the damage could be severe enough to wound or kill a considerable number of the men onboard.\n\n---\n\nSources:\n\nKužić, K. \"GRMLJAVINSKE OLUJE, UDARI GROMA I VATRA SV. NIKOLE U HRVATSKIM PRIMORSKIM KRAJEVIMA (14.-18. st.).\" Hrvatski meteorološki časopis, vol. 47, br. 47, 2012, str. 69-97. _URL_1_.\n\nWatson, William. “Some Suggestions Concerning the Preventing the Mischiefs, Which Happen to Ships and Their Masts by Lightning; Being the Substance of a Letter to the Late Right Honourable George Lord Anson, First Lord of the Admiralty, and F. R. S. by William Watson, M. D. F. R. S.” Philosophical Transactions (1683-1775), vol. 52, 1761, pp. 629–635. JSTOR, _URL_0_.\n\nW. Snow (William Snow) Harris. State of the Question Relating to the Protection of the British Navy from Lightning, by the Method of Fixed Conductors of Electricity. 1838. JSTOR, _URL_2_.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "8185932", "title": "Trench art", "section": "Section::::Categorisation.:Commercial items.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 351, "text": "Ship breaking, particularly if the ship had been involved in significant events such as the Battle of Jutland, resulted in much of the wood from the ship being turned into miniature barrels, letter racks, and boxes, with small brass plaques attached announcing, for example, \"Made of teak from HMS \"Shipsname\", which fought at the Battle of Jutland\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11990902", "title": "USS William C. Miller", "section": "Section::::Japanese submarine sighted and sunk.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 326, "text": "That pattern appears to have proved devastating to . At 0804, \"William C. Miller\" noted pieces of wood popping to the surface about ahead, one point on the starboard bow. 
One minute later, a \"heavy and prolonged underwater explosion\" — estimated to be about three times the shock of a depth charge explosion — shook the ship.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5080150", "title": "Pelican (1793 ship)", "section": "Section::::Pitching and rolling.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 651, "text": "Suddenly and without warning, at about two in the afternoon, with the ship at the height of her pitch, several cannon, which had been improperly tied down, broke free. These became iron missiles which rolled across the deck and punched huge holes in the ship's opposite side, causing water to flood into the \"Pelican\", which rapidly filled and sank. The location of the wreck was so shallow that her mast tops remained above the water, visible after the storm had died down. Unfortunately, because all unnecessary personnel had been ushered below and because the hatches were battened down during the storm, no one was able to escape the lower decks.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "61165743", "title": "SS Swiftstar", "section": "Section::::Operational history.:Disappearance.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 240, "text": "and therefore preventing any fire or smoke developing that could have been observed by passing ships. Finally, the charred body probably belonged to a man who was killed by the lightning outright with his body being thrown into the icebox.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4726004", "title": "HMS Boyne (1790)", "section": "Section::::Post-script.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 254, "text": "The wreck presented something of a hazard to a navigation and as a result it was blown up on 30 August 1838 in a clearance attempt. Today the Boyne buoy marks the site of the explosion. A few metal artifacts from the ship remain atop a mound of shingle.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "43499039", "title": "Liberty (sternwheeler)", "section": "Section::::Sinking and other hazards.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 563, "text": "Fire was a serious danger to wooden ships like \"Liberty\", particularly steamers due to their use of fireboxes and boilers. On August 15, 1906, a fire at Parkersburgh destroyed the sawmill and almost all of the buildings in the town. The schooner \"Oregon\", tied at the dock and loading cargo, was damaged by the fire. \"Oregon\" would have been completely destroyed, but \"Liberty\" came along and towed the schooner away from the fire. On July 29, 1907, \"Liberty\" under the command of Captain Moomaw, held a fire drill which was praised in the press for its realism.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36926286", "title": "Palatine Light", "section": "Section::::Folklore accounts.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 544, "text": "On the Saturday between Christmas and New Year's Eve, there are still sporadic reports from the locals of seeing a burning ship sail past. Tradition states that a German ship carrying immigrants to Philadelphia ran aground during a snow storm on December 26, 1738 and was stranded near Block Island. Depositions from the remaining crew members reported a loss of half the crew. 
However, folklorist Michael Bell noted when investigating the legend that two versions of the night's events began to be circulated almost a year after the incident.\n", "bleu_score": null, "meta": null } ] } ]
null
cintgw
Why were armies so much larger in the Punic War than they were during the Thirty Years War?
[ { "answer": "Is there a particular reason you picked those specific conflicts, as opposed to classical vs medieval? For one thing, we could have chosen much bigger Septimus vs Clodius Albinus (estimated 150,000-300,000 on EACH side)", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "26198495", "title": "Roman army of the mid-Republic", "section": "", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 1325, "text": "For the vast majority of the period of its existence, the Polybian levy was at war. This led to great strains on Roman and Italian manpower, but forged a superb fighting machine. During the Second Punic War, fully two-thirds of Roman \"iuniores\" were under arms continuously. In the period after the defeat of Carthage in 201 BC, the army was campaigning exclusively outside Italy, resulting in its men being away from their home plots of land for many years at a stretch. They were assuaged by the large amounts of booty that they shared after victories in the rich eastern theatre. But in Italy, the ever-increasing concentration of public lands in the hands of big landowners, and the consequent displacement of the soldiers' families, led to great unrest and demands for land redistribution. This was successfully achieved, but resulted in the disaffection of Rome's Italian allies, who as non-citizens were excluded from the redistribution. This led to the mass revolt of the \"socii\" and the Social War (91-88 BC). The result was the grant of Roman citizenship to all Italians and the end of the Polybian army's dual structure: the \"alae\" were abolished and the \"socii\" recruited into the legions. The Roman army of the late Republic (88–30 BC) resulted, a transitional phase to the Imperial Roman army (30 BC – AD 284).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "753281", "title": "Roman army", "section": "Section::::Historical overview.:Roman army of the mid-Republic (c. 300–88 BC).\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 1224, "text": "After the 2nd Punic War (218–201 BC), the Romans acquired an overseas empire, which necessitated standing forces to fight lengthy wars of conquest and to garrison the newly gained provinces. Thus the army's character mutated from a temporary force based entirely on short-term conscription to a standing army in which the conscripts were supplemented by a large number of volunteers willing to serve for much longer than the legal six-year limit. These volunteers were mainly from the poorest social class, who did not have plots to tend at home and were attracted by the modest military pay and the prospect of a share of war booty. The minimum property requirement for service in the legions, which had been suspended during the 2nd Punic War, was effectively ignored from 201 BC onward in order to recruit sufficient volunteers. Between 150-100 BC, the manipular structure was gradually phased out, and the much larger cohort became the main tactical unit. In addition, from the 2nd Punic War onward, Roman armies were always accompanied by units of non-Italian mercenaries, such as Numidian light cavalry, Cretan archers, and Balearic slingers, who provided specialist functions that Roman armies had previously lacked.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25816", "title": "Roman Republic", "section": "Section::::Military.:Manipular legion (c. 
315–107 BC).\n", "start_paragraph_id": 137, "start_character": 0, "end_paragraph_id": 137, "end_character": 714, "text": "The extraordinary demands of the Punic Wars, in addition to a shortage of manpower, exposed the tactical weaknesses of the manipular legion, at least in the short term. In 217, near the beginning of the Second Punic War, Rome was forced to effectively ignore its long-standing principle that its soldiers must be both citizens and property owners. During the 2nd century, Roman territory saw an overall decline in population, partially due to the huge losses incurred during various wars. This was accompanied by severe social stresses and the greater collapse of the middle classes. As a result, the Roman state was forced to arm its soldiers at the expense of the state, which it did not have to do in the past.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13959", "title": "Hannibal", "section": "Section::::Conclusion of the Second Punic War (203–201 BC).:Battle of Zama (202 BC).\n", "start_paragraph_id": 60, "start_character": 0, "end_paragraph_id": 60, "end_character": 607, "text": "Unlike most battles of the Second Punic War, at Zama, the Romans were superior in cavalry and the Carthaginians had the edge in infantry. This Roman cavalry superiority was due to the betrayal of Masinissa, who had earlier assisted Carthage in Iberia, but changed sides in 206 BC with the promise of land and due to his personal conflicts with Syphax, a Carthaginian ally. Although the aging Hannibal was suffering from mental exhaustion and deteriorating health after years of campaigning in Italy, the Carthaginians still had the advantage in numbers and were boosted by the presence of 80 war elephants.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14531824", "title": "Late Roman army", "section": "Section::::Army size.:Smaller Late Army.\n", "start_paragraph_id": 76, "start_character": 0, "end_paragraph_id": 76, "end_character": 574, "text": "At the same time, more recent work has suggested that the regular army of the 2nd century was considerably larger than the c. 300,000 traditionally assumed. This is because the 2nd-century auxilia were not just equal in numbers to the legions as in the early 1st century, but some 50% larger. The army of the Principate probably reached a peak of nearly 450,000 (excluding fleets and \"foederati\") at the end of the 2nd century. Furthermore, the evidence is that the actual strength of 2nd-century units was typically much closer to official (c. 85%) than 4th century units.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "753281", "title": "Roman army", "section": "Section::::Roman army of the mid-Republic (c. 300 – 107 BC).\n", "start_paragraph_id": 46, "start_character": 0, "end_paragraph_id": 46, "end_character": 899, "text": "The central feature of the Roman army of the mid-Republic, or the Polybian army, was the manipular organization of its battle-line. Instead of a single, large mass (the phalanx) as in the Early Roman army, the Romans now drew up in three lines consisting of small units (maniples) of 120 men, arrayed in chessboard fashion, giving much greater tactical strength and flexibility. This structure was probably introduced in c. 300 BC during the Samnite Wars. Also probably dating from this period was the regular accompaniment of each legion by a non-citizen formation of roughly equal size, the \"ala\", recruited from Rome's Italian allies, or \"socii\". The latter were c. 
150 autonomous states which were bound by a treaty of perpetual military alliance with Rome. Their sole obligation was to supply to the Roman army, on demand, a number of fully equipped troops up to a specified maximum each year.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24417", "title": "Punic Wars", "section": "Section::::Background.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 302, "text": "In 200 BC, the Roman Republic had gained control of the Italian peninsula south of the Po River. Unlike Carthage, Rome had a large and disciplined army, but lacked a navy at the start of the First Punic War. This left the Romans at a disadvantage until the construction of large fleets during the war.\n", "bleu_score": null, "meta": null } ] } ]
null
wp8du
If Photons have no mass, then how does sunlight exposure give me Vitamin D?
[ { "answer": "The photons don't carry Vitamin D. Rather, the ultraviolet rays of light penetrate the first few layers of the epidermis to reach the stratum basale and stratum spinosum. In these layers of the skin, the cells contain 7-dehydrocholesterol which, when hit by ultraviolet rays of light, turn into a form of Vit D.\n\nI don't understand the physics behind how a photon actually converts a compound into something else, however. They don't teach us that in medical school. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "25669714", "title": "Health effects of sunlight exposure", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 717, "text": "The ultraviolet radiation in sunlight has both positive and negative health effects, as it is both a principal source of vitamin D and a mutagen. A dietary supplement can supply vitamin D without this mutagenic effect. Vitamin D has been suggested as having a wide range of positive health effects, which include strengthening bones and possibly inhibiting the growth of some cancers. UV exposure also has positive effects for endorphin levels, and possibly for protection against multiple sclerosis. Visible sunlight to the eyes gives health benefits through its association with the timing of melatonin synthesis, maintenance of normal and robust circadian rhythms, and reduced risk of seasonal affective disorder.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31990", "title": "Ultraviolet", "section": "Section::::Human health-related effects.:Beneficial effects.\n", "start_paragraph_id": 56, "start_character": 0, "end_paragraph_id": 56, "end_character": 279, "text": "UV light causes the body to produce vitamin D (specifically, UVB), which is essential for life. The human body needs some UV radiation in order for one to maintain adequate vitamin D levels; however, excess exposure produces harmful effects that typically outweigh the benefits.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "32591", "title": "Vegetarianism", "section": "Section::::Health effects.:Nutrition.:Vitamin D.\n", "start_paragraph_id": 57, "start_character": 0, "end_paragraph_id": 57, "end_character": 351, "text": "Vitamin D needs can be met via the human body's own generation upon sufficient and sensible exposure to ultraviolet (UV) light in sunlight. Products including milk, soy milk and cereal grains may be fortified to provide a source of Vitamin D. For those who do not get adequate sun exposure or food sources, Vitamin D supplementation may be necessary.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25987", "title": "Rickets", "section": "Section::::Cause.:Sunlight.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 601, "text": "Sunlight, especially ultraviolet light, lets human skin cells convert vitamin D from an inactive to active state. In the absence of vitamin D, dietary calcium is not properly absorbed, resulting in hypocalcaemia, leading to skeletal and dental deformities and neuromuscular symptoms, e.g. hyperexcitability. Foods that contain vitamin D include butter, eggs, fish liver oils, margarine, fortified milk and juice, portabella and shiitake mushrooms, and oily fishes such as tuna, herring, and salmon. 
A rare X-linked dominant form exists called vitamin D-resistant rickets or X-linked hypophosphatemia.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25987", "title": "Rickets", "section": "Section::::Treatment.:Supplementation.\n", "start_paragraph_id": 75, "start_character": 0, "end_paragraph_id": 75, "end_character": 494, "text": "Sufficient vitamin D levels can also be achieved through dietary supplementation and/or exposure to sunlight. Vitamin D (cholecalciferol) is the preferred form since it is more readily absorbed than vitamin D. Most dermatologists recommend vitamin D supplementation as an alternative to unprotected ultraviolet exposure due to the increased risk of skin cancer associated with sun exposure. Endogenous production with full body exposure to sunlight is approximately 250 µg (10,000 IU) per day.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20840716", "title": "Vitamin D deficiency", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 339, "text": "Ultraviolet B rays from sunlight is a large source of vitamin D. Fatty fish such as salmon, herring, and mackerel are also sources of vitamin D. Milk is often fortified with vitamin D and sometimes bread, juices, and other dairy products are fortified with vitamin D as well. Many multivitamins now contain vitamin D in different amounts.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "470087", "title": "Ergocalciferol", "section": "Section::::Sources.:Biosynthesis.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 307, "text": "The vitamin D content in mushrooms and \"C. arbuscula\" increase with exposure to ultraviolet light. Ergosterol (provitamin D) found in these fungi is converted to previtamin D on UV exposure, which then turns into vitamin D. If there is little exposure to UV light (or sunlight), little vitamin D will form.\n", "bleu_score": null, "meta": null } ] } ]
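One point the thread's question turns on, worth making explicit: a photon carries energy even though it has no mass, and that energy is what drives the skin's photochemistry. A rough back-of-the-envelope estimate (assuming a UVB wavelength of about 300 nm) gives:

```latex
% Energy of a single UVB photon, assuming \lambda \approx 300\,\mathrm{nm}
E = \frac{hc}{\lambda}
  = \frac{(6.63\times10^{-34}\,\mathrm{J\,s})\,(3.0\times10^{8}\,\mathrm{m/s})}{300\times10^{-9}\,\mathrm{m}}
  \approx 6.6\times10^{-19}\,\mathrm{J} \approx 4\,\mathrm{eV}
```

A few electron-volts per photon is the same order of magnitude as chemical bond energies, which is roughly why an absorbed UVB photon can rearrange 7-dehydrocholesterol into previtamin D despite having no rest mass.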
null
36jmik
Can a turtle feel something touch its shell?
[ { "answer": "A turtles shell is part of its bone structure and is used for many metabolic processes (like metabolic depression during anoxic conditions under water), so yes they can. It is quite sensitive because of the nerves that are required for those processes. ", "provenance": null }, { "answer": "Some sea turtles seem to enjoy getting \"scratched\" on the shell by divers. Which I have tried and seen myself. At first they don't know what to make of it but then they come back for more.\n\nBrowsing the literature, if I understand correctly, it turns out that \"spinal turtles\" have a weird scratch reflex which is based on sensory feedback from, amongst other body parts, the shell. [See e.g. here](_URL_0_).", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "642800", "title": "Blanding's turtle", "section": "Section::::Behavior and life span.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 256, "text": "Blanding's turtle is a timid turtle and may plunge into water and remain on the bottom for hours when alarmed. If away from water, the turtle will withdraw into its shell. It is very gentle and rarely attempts to bite. It is very agile and a good swimmer.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "66733", "title": "Snorkeling", "section": "Section::::Practice of snorkeling.:Safety precautions.\n", "start_paragraph_id": 107, "start_character": 0, "end_paragraph_id": 107, "end_character": 451, "text": "Another safety concern is interaction and contact with the marine life during encounters. While seals and sea turtles can seem harmless and docile, they can become alarmed if approached or feel threatened. Some creatures, like moray eels, can hide in coral crevices and holes and will bite fingers when there is too much prodding going on. For these reasons, snorkeling websites often recommend an \"observe but don't touch\" etiquette when snorkeling.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2184696", "title": "Flatback sea turtle", "section": "Section::::Description.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 390, "text": "Features of this sea turtle which help contribute to its recognition are the single pair of prefrontal scales on the head, and the four pairs of coastal scutes on the carapace. Another unique feature of this species of sea turtle is the fact that its carapace is found to be much thinner than other sea turtle carapaces. This feature causes the shell to crack under the smallest pressures.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37751", "title": "Turtle", "section": "Section::::Anatomy and morphology.:Shell.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 680, "text": "The shape of the shell gives helpful clues about how a turtle lives. Most tortoises have a large, dome-shaped shell that makes it difficult for predators to crush the shell between their jaws. One of the few exceptions is the African pancake tortoise, which has a flat, flexible shell that allows it to hide in rock crevices. Most aquatic turtles have flat, streamlined shells, which aid in swimming and diving. American snapping turtles and musk turtles have small, cross-shaped plastrons that give them more efficient leg movement for walking along the bottom of ponds and streams. 
Another exception is the Belawan Turtle (Cirebon, West Java), which has sunken-back soft-shell.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17843917", "title": "Turtle shell", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 549, "text": "The turtle shell is a highly complicated shield for the ventral and dorsal parts of turtles, tortoises and terrapins (all classified as \"turtles\" by zoologists), completely enclosing all the vital organs of the turtle and in some cases even the head. It is constructed of modified bony elements such as the ribs, parts of the pelvis and other bones found in most reptiles. The bone of the shell consists of both skeletal and dermal bone, showing that the complete enclosure of the shell probably evolved by including dermal armor into the rib cage.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7533766", "title": "Bafia people", "section": "Section::::Religion.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 374, "text": "The turtle is respected as a traditional totem animal. There is an age-old belief that turtle shells are sacred and can be used to resolve disputes within the community. All those involved are required to lay their hands on the animal's shell as a way of eliciting the truth. The hands of the guilty party will then supposedly contract leprosy as punishment for evil deeds.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2186906", "title": "Tanjung Gemuk", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 454, "text": "Turtles have been sighted laying eggs on its beach recently. Locals have seen baby turtles reaching out from the sand heading towards the water. This is a new phenomenon in Tanjung Gemuk. After the 2004 Indian Ocean earthquake and tsunami in Indonesia, sea cucumbers were washed up the beach in Tanjung Gemuk; thousands were seen but most of them were dead. Surprised locals combed the beach for the sea cucumbers because they are considered a delicacy.\n", "bleu_score": null, "meta": null } ] } ]
null
5fi9ik
why are salmon and tuna more 'meaty' than white fish like cod/haddock/sea bass?
[ { "answer": "They both have higher fat content than the milder flavored white fleshed fish. Tuna more so than salmon.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "35566729", "title": "Salmon as food", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 378, "text": "Salmon is a common food classified as an oily fish with a rich content of protein and omega-3 fatty acids. In Norway – a major producer of farmed and wild salmon – farmed and wild salmon differ only slightly in terms of food quality and safety, with farmed salmon having lower content of environmental contaminants, and wild salmon having higher content of omega-3 fatty acids.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "35824213", "title": "Canned fish", "section": "Section::::Tuna.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 436, "text": "Tuna is canned in edible vegetable oils, in brine, in water, or in various sauces. In the United States, canned tuna is sometimes called \"tuna fish\" and only albacore can legally be sold in canned form as \"white meat tuna\"; in other countries, yellowfin is also acceptable. While in the early 1980s, canned tuna in Australia was most likely southern bluefin; it was usually yellowfin, skipjack, or tongol (labelled \"northern bluefin\").\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18788550", "title": "Auxis", "section": "Section::::As food.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 277, "text": "Although fresh fish might be eaten as sashimi or grilled, it has a lot of dark-red meat (), so it is valued much less than the similar \"\" (skipjack tuna). And it degrades quickly so shipment out to market is limited. The frigate tuna () is considered superior between the two.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2543508", "title": "List of fish in Sweden", "section": "Section::::List.:Salmoniformes (salmon-like fish).\n", "start_paragraph_id": 79, "start_character": 0, "end_paragraph_id": 79, "end_character": 251, "text": "The Salmoniformes, salmon fish, are of important both as food fish but also as for sport fishers. For sport fishers, the salmon has the foremost position due to its strength and size. In popularity, it is followed by the Brown trout (\"Salmo trutta\").\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1139042", "title": "Arripis", "section": "Section::::Importance to humans.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 1441, "text": "Pungently flavoured, coarse, and slightly oily flesh makes Australian salmon less desirable as a food fish; it is often sold canned or is smoked to improve its flavour, and bleeding the fish out is also said to help. What is not sold for human consumption is used as bait for rock lobster (Palinuridae) traps and other commercial and recreational fishing. The Australian salmon fetch no more than a few dollars (AU) per kilogram; nonetheless, large numbers are taken via purse seine nets (and to a lesser extent trawling, hauling, gill, and trap nets) annually; the reported 2002–2003 commercial New Zealand catch of \"kahawai\" was 2,900 tonnes. Such reported catches do not include the untold tonnes taken as bycatch from operations targeting more highly valued species. 
Low-flying planes are used to locate and target sizeable Australian salmon schools, and critics have cited this practice as a means by the industry to artificially inflate catch records (which would give a false impression of abundance). Australian salmon numbers have declined noticeably however, with large specimens becoming ever rarer; the fish have all but disappeared from some areas. On October 1, 2004, the New Zealand Ministry of Fisheries included \"kahawai\" under its Quota Management System, setting a catch limit of 3,035 tonnes for the season. This was a 5% increase over the previous two years, despite the government's intention of lowering catch limits.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "680270", "title": "Atlantic bluefin tuna", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 393, "text": "Atlantic bluefin tuna may exceed in weight, and rival the black marlin, blue marlin, and swordfish as the largest Perciformes. Throughout recorded history, the Atlantic bluefin tuna has been highly prized as a food fish. Besides their commercial value as food, the great size, speed, and power they display as apex predators has attracted the admiration of fishermen, writers, and scientists.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36984", "title": "Salmon", "section": "Section::::As food.\n", "start_paragraph_id": 77, "start_character": 0, "end_paragraph_id": 77, "end_character": 870, "text": "Salmon is a popular food. Classified as an oily fish, salmon is considered to be healthy due to the fish's high protein, high omega-3 fatty acids, and high vitamin D content. Salmon is also a source of cholesterol, with a range of depending on the species. According to reports in the journal \"Science\", farmed salmon may contain high levels of dioxins. PCB (polychlorinated biphenyl) levels may be up to eight times higher in farmed salmon than in wild salmon, but still well below levels considered dangerous. Nonetheless, according to a 2006 study published in the Journal of the American Medical Association, the benefits of eating even farmed salmon still outweigh any risks imposed by contaminants. Farmed salmon has a high omega 3 fatty acid content comparable to wild salmon. The type of omega-3 present may not be a factor for other important health functions.\n", "bleu_score": null, "meta": null } ] } ]
null
342z40
why is it when i'm cold my jaw can shiver ridiculously fast, but when i'm warm i can't physically force myself to shiver my jaw that fast?
[ { "answer": "You can.\n\nHowever, what you're doing is most likely different, when you attempt shiver manually, it sounds like you're trying to contract your muscles rapidly. Shivering isn't exactly that, it's closer to simply vibrating your muscles, it's not a full contraction.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "53641837", "title": "Post micturition convulsion syndrome", "section": "Section::::Explanation.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 218, "text": "There has yet to be any peer-reviewed research on the topic. The most plausible theory, is that the shiver is a result of the autonomic nervous system (ANS) getting its signals mixed up between its two main divisions:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "166035", "title": "Shame", "section": "Section::::Shame Code.\n", "start_paragraph_id": 47, "start_character": 0, "end_paragraph_id": 47, "end_character": 803, "text": "Individuals who scored higher on this factor typically displayed a lack of any movement, facial tension such as lip biting and furrowing their brows, and a lack of any spoken words. Freezing is ultimately a withdrawal from a situation that one cannot escape physically, hence providing no action (in this case a speech) may reflect an effort to eliminate the possibility of negative evaluation. These behaviors that are included in the freeze factor \"reflected participants\" actual internalized shame, consistent with previous research. Freezing is a behavioral response to threat in mammals and it may be that those who scored higher on this factor were experiencing more intense shame during the speech. They convey a sense of helplessness that may initially elicit sympathetic or comforting actions.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1189582", "title": "Shivering", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 754, "text": "Shivering (also called shaking) is a bodily function in response to cold in warm-blooded animals. When the core body temperature drops, the shivering reflex is triggered to maintain homeostasis. Skeletal muscles begin to shake in small movements, creating warmth by expending energy. Shivering can also be a response to a fever, as a person may feel cold. During fever the hypothalamic set point for temperature is raised. The increased set point causes the body temperature to rise (pyrexia), but also makes the patient feel cold until the new set point is reached. Severe chills with violent shivering are called rigors. Rigors occur because the patient's body is shivering in a physiological attempt to increase body temperature to the new set point.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "600102", "title": "Stage fright", "section": "Section::::Effects.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 438, "text": "In trying to resist this position, the body will begin to shake in places such as the legs and hands. Several other things happen besides this. Muscles in the body contract, causing them to be tense and ready to attack. Second, \"blood vessels in the extremities constrict\". This can leave a person with the feeling of cold fingers, toes, nose, and ears. 
Constricted blood vessels also gives the body extra blood flow to the vital organs.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36168126", "title": "Iranian traditional medicine", "section": "Section::::How to recognize one's temperament.:Activity.\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 282, "text": "BULLET::::- Warmness brings about increase in exuberance and energy level which boost speech and body movements speed while coldness causes quite the reverse symptoms as people with cold Mizaj don't have much energy and are generally slow, they take their time speaking and acting.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1251647", "title": "Yukon Quest", "section": "Section::::Weather.\n", "start_paragraph_id": 58, "start_character": 0, "end_paragraph_id": 58, "end_character": 661, "text": "The extreme temperatures pose a serious health hazard. Frostbite is common, as is hypothermia. In the 1988 Yukon Quest, Jeff King suffered an entirely frozen hand because of nerve damage from an earlier injury which left him unable to feel the cold. King said his hand became \"like something from a frozen corpse\". In 1989, King and his team drove through a break in the Yukon River in temperatures. Frozen by the extreme cold, King managed to reach a cabin and thaw out. Other racers have suffered permanent damage from the cold: Lance Mackey suffered frostbitten feet during the 2008 Yukon Quest, and Hugh Neff lost the tips of several toes in the 2004 race.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "758763", "title": "Cold-stimulus headache", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 699, "text": "A cold-stimulus headache, colloquially known as an ice-cream headache or brain freeze, is a form of brief pain or headache commonly associated with consumption (particularly quick consumption) of cold beverages or foods such as ice cream and ice pops. It is caused by having something cold touch the roof of the mouth, and is believed to result from a nerve response causing rapid constriction and swelling of blood vessels or a \"referring\" of pain from the roof of the mouth to the head. The rate of intake for cold foods has been studied as a contributing factor. A cold-stimulus headache is distinct from dentin hypersensitivity, a type of dental pain that can occur under similar circumstances.\n", "bleu_score": null, "meta": null } ] } ]
null
233dq5
what does it mean to compile code?
[ { "answer": " > Does it just mean to make the program executable?\n\nLike you're 5, yeah pretty much. It is the process of converting the source code, which is text written/readable by humans, into an executable, which is something a computer can run.\n\nJavascript is an \"interpreted\" language, so the browser reads the source code and runs it directly, but compiled languages have this stage where it's converted into a machine code executable first.", "provenance": null }, { "answer": "Pretty much, yes.\nMany languages, like JavaScript, are interpreted, others are compiled. Compiling turns the human-readable language into a specific set of instructions understandable by the computer. Usually this ends up in an executable file, but you can also compile between languages (like coffeescript into javascript) or compile code into libraries that are used as re-usable components of another executable.\n\nEdit: And in terms of newly acquired software: it depends, but sometimes when you get some open source bit of software, you download the source code and build (compile) the executable program yourself. ", "provenance": null }, { "answer": "A program is, originally, just a bunch of text files that give English-like instructions to the computer. However, the computer doesn't understand these instructions, so the text files are converted to machine language which the computer can use. The conversion is done by a compiler which, using instructions found in some file either written manually or generated by another program, gathers all the necessary files, organizes them, stitches them all together, and translates them into machine language.", "provenance": null }, { "answer": "Put it this way. Code is human readable, but it cannot be \"read\" by computers. Compiling \"converts\" those human readable code into bytecode (this is for Java), so that the computer can understand it, and eventually execute it.\n\nJavascript is not compiled anymore, it's just executed directly. It is a type of interpreted language.", "provenance": null }, { "answer": "As you may know, everything in a computer is represented by a series of 1's and 0's (which themselves represent high and low voltages on transistors, but that's a topic for another time). When the computer runs a program, the program itself is made of a bunch of 1's and 0's.\n\nHowever, since we still need humans to write our programs, putting everything in 1's and 0's (called machine language) would be very difficult. So we made higher level languages like Java and C# to write code in. These languages look a lot more like English, so they're a lot easier to write and maintain.\n\nWhen you compile code, the compilor (usually another program) takes the program the human wrote, and converts it into the program the computer can understand (i.e. converts from Java to machine language). The very short version could be, yes, compile means to make the code executable.\n\nSomething you may run into is people saying code does or does not compile. This means the compilor they used checks to make sure their program is written correctly according to the rules of the programming language. For example, most programming languages make you put a semicolon (;) at the end of every line. A very common mistake is to forget that semicolon, so when you try and compile the compilor gives you an error.\n\nIt's also important to note that just because the code compiles doesn't mean it works. 
It's sort of like how 3 + 4 < 5 is an equation that has the right form, but it is incorrect.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "225563", "title": "GoboLinux", "section": "Section::::\"Compile\" program.\n", "start_paragraph_id": 32, "start_character": 0, "end_paragraph_id": 32, "end_character": 220, "text": "Compile is a program that downloads, unpacks, compiles source code tarballs, and installs the resulting executable code, all with a single command (such as codice_25) using simple compilation scripts known as \"recipes\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37764426", "title": "Outline of natural language processing", "section": "Section::::Natural language processing.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 403, "text": "BULLET::::- A subfield of computer programming – process of designing, writing, testing, debugging, and maintaining the source code of computer programs. This source code is written in one or more programming languages (such as Java, C++, C#, Python, etc.). The purpose of programming is to create a set of instructions that computers use to perform specific operations or to exhibit desired behaviors.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "904443", "title": "Outline of human–computer interaction", "section": "Section::::What \"type\" of thing is human–computer interaction?\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 407, "text": "BULLET::::- A subfield of computer programming – process of designing, writing, testing, debugging, and maintaining the source code of computer programs. This source code is written in one or more programming languages (such as Java, C++, C#, Python, Php etc.). The purpose of programming is to create a set of instructions that computers use to perform specific operations or to exhibit desired behaviors.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "40095816", "title": "OS/360 Object File Format", "section": "Section::::Use.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 494, "text": "This format provides for the description of a compiled application's object code, which can be fed to a linkage editor to be made into an executable program, or run directly through an object module loader. It is created by the Assembler or by a programming language compiler. For the rest of this article, unless a reason for being explicit in the difference between a language compiler and an assembler is required, the term \"compile\" includes \"assemble\" and \"compiler\" includes \"assembler.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "189015", "title": "Code generation (compiler)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 215, "text": "In computing, code generation is the process by which a compiler's code generator converts some intermediate representation of source code into a form (e.g., machine code) that can be readily executed by a machine.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5783", "title": "Computer program", "section": "Section::::Computer programming.:Compilation and interpretation.\n", "start_paragraph_id": 37, "start_character": 0, "end_paragraph_id": 37, "end_character": 243, "text": "A computer program in the form of a human-readable, computer programming language is called source code. 
Source code may be converted into an executable image by a compiler or assembler, or executed immediately with the aid of an interpreter.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6301802", "title": "Outline of computer programming", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 624, "text": "Computer programming – process that leads from an original formulation of a computing problem to executable computer programs. Programming involves activities such as analysis, developing understanding, generating algorithms, verification of requirements of algorithms including their correctness and resources consumption, and implementation (commonly referred to as coding) of algorithms in a target programming language. Source code is written in one or more programming languages. The purpose of programming is to find a sequence of instructions that will automate performing a specific task or solving a given problem.\n", "bleu_score": null, "meta": null } ] } ]
null
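The compile-versus-interpret distinction in the answers above is easy to make concrete. Below is a minimal sketch, assuming a C toolchain with a cc or gcc command is installed; the file name and commands are illustrative, not something taken from the original answers.

    /* hello.c: "source code", i.e. human-readable text.
     * Compiling translates it, once, into a machine-code executable:
     *     cc hello.c -o hello    (source text -> executable file "hello")
     *     ./hello                (the CPU then runs the executable directly)
     * An interpreted language (classic JavaScript, for example) instead has
     * another program, the interpreter, read and execute the source each run. */
    #include <stdio.h>

    int main(void) {
        printf("Hello from a compiled program\n");
        return 0;  /* exit status 0 means success */
    }

If cc hello.c -o hello reports an error (a missing semicolon, say), that is the "does not compile" situation the last answer describes: the translation step refuses to produce an executable until the source follows the language's rules.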
14ep3y
How profound was the influence of Jazz and Blues on American Culture in the twenties?
[ { "answer": "I'm not sure how we could give you a gauge of just how profound the impact was, but for reference the death of David Brubeck made the front pages of the both the New York Times and Washington Post among other today. He first splashed onto the music scene in the 1950s. Some would argue that jazz music is a uniquely American phenomenon and one of the only thoroughly American forms there is.\n\nThe first thing to consider is that even defining jazz is highly controversial. Some believe in a broadly inclusive definition of jazz that can even include elements of hip hop. Others believe that bebop is quintessential jazz music and that music that get labeled jazz that came after is just watered down pop music. The Ken Burns Jazz series more closely reflected the latter view, which is why I always warn folks that that particular documentary series is too one-sided to consume without also looking into other viewpoints.\n\nIt would be tough to underestimate Jazz's cultural impact because the music often went hand-in-hand with other social and cultural issues. Jazz music grew up surrounded with controversy and there was a good deal of reactionary push-back against it. Jazz clubs were among some of the earlier integrated public gatherings. African American communities used jazz as a form of high art; some wanted to develop a distinct black culture while others thought that jazz as an art form could help break down racial barriers. Jazz and prohibition were part of a unique subculture during prohibition. Jazz was highly contentious within the music community because it was a pretty radical departure from the western tradition. The Nazis thought it was influential enough to label it \"degenerate art\" and prohibit it. The U.S. government sent jazz musicians overseas as cultural emissaries and it was used as part of the U.S.'s strategy to undermine soviet regimes. Early in his career Louis Armstrong was labeled by some as an Uncle Tom, but by the 1950s he was a forceful civil rights voice.\n\nThe contemporary literature can give you great insight into some of these issues and Robert Walser's collection of articles in *Keeping Time: Readings in Jazz History* is a great starting point to gain a cultural understanding.\n\nedit: This post doesn't really answer the question, but since nobody else answered I was hoping that some of the themes I mentioned could spur some discussion by some professional historians.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "369155", "title": "Roaring Twenties", "section": "Section::::Society.:Jazz Age.\n", "start_paragraph_id": 60, "start_character": 0, "end_paragraph_id": 60, "end_character": 789, "text": "The 1920s brought new styles of music into the mainstream of culture in avant-garde cities. Jazz became the most popular form of music for youth. Historian Kathy J. Ogren wrote that, by the 1920s, jazz had become the \"dominant influence on America's popular music generally\" Scott DeVeaux argues that a standard history of jazz has emerged such that: \"After an obligatory nod to African origins and ragtime antecedents, the music is shown to move through a succession of styles or periods: New Orleans jazz up through the 1920s, swing in the 1930s, bebop in the 1940s, cool jazz and hard bop in the 1950s, free jazz and fusion in the 1960s... 
There is substantial agreement on the defining features of each style, the pantheon of great innovators, and the canon of recorded masterpieces.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "199915", "title": "Jazz Age", "section": "Section::::Elements and influences.:Youth.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 648, "text": "Young people in the 1920s used the influence of jazz to rebel against the traditional culture of previous generations. This youth rebellion of the 1920s went hand-in-hand with fads like bold fashion statements (flappers), women that smoked cigarettes, a willingness to talk about sex freely, and new radio concerts. Dances like the Charleston, developed by African Americans, suddenly became popular among the youth. Traditionalists were aghast at what they considered the breakdown of morality. Some urban middle-class African Americans perceived jazz as \"devil's music\", and believed the improvised rhythms and sounds were promoting promiscuity.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "199915", "title": "Jazz Age", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 1062, "text": "The Jazz Age was a period in the 1920s and 1930s in which jazz music and dance styles rapidly gained nationwide popularity in the United States. The Jazz Age's cultural repercussions were primarily felt in the United States, the birthplace of jazz. Originating in New Orleans as a fusion of African and European music, jazz played a significant part in wider cultural changes in this period, and its influence on pop culture continued long afterward. The Jazz Age is often referred to in conjunction with the Roaring Twenties, and in the United States it overlapped in significant cross-cultural ways with the Prohibition Era. The movement was largely affected by the introduction of radios nationwide. During this time, the Jazz Age was intertwined with the developing cultures of young people, women, and African Americans. The movement also helped start the beginning of the European Jazz movement. American author F. Scott Fitzgerald is widely credited with coining the term, first using it in his 1922 short story collection titled \"Tales of the Jazz Age\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "29790369", "title": "1920s in jazz", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 842, "text": "The period from the end of the First World War until the start of the Depression in 1929 is known as the \"Jazz Age\". Jazz had become popular music in America, although older generations considered the music immoral and threatening to cultural values. Dances such as the Charleston and the Black Bottom were very popular during the period, and jazz bands typically consisted of seven to twelve musicians. Important orchestras in New York were led by Fletcher Henderson, Paul Whiteman and Duke Ellington. Many New Orleans jazzmen had moved to Chicago during the late 1910s in search of employment; among others, the New Orleans Rhythm Kings, King Oliver's Creole Jazz Band and Jelly Roll Morton recorded in the city. However, Chicago's importance as a center of jazz music started to diminish toward the end of the 1920s in favor of New York. 
\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "250056", "title": "Jazz standard", "section": "Section::::1920s.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 808, "text": "A period known as the \"Jazz Age\" started in the United States in the 1920s. Jazz had become popular music in the country, although older generations considered the music immoral and threatening to old cultural values. Dances such as the Charleston and the Black Bottom were very popular during the period, and jazz bands typically consisted of seven to twelve musicians. Important orchestras in New York were led by Fletcher Henderson, Paul Whiteman and Duke Ellington. Many New Orleans jazzmen had moved to Chicago during the late 1910s in search of employment; among others, the New Orleans Rhythm Kings, King Oliver's Creole Jazz Band and Jelly Roll Morton recorded in the city. However, Chicago's importance as a center of jazz music started to diminish toward the end of the 1920s in favor of New York.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22851209", "title": "List of 1920s jazz standards", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 808, "text": "A period known as the \"Jazz Age\" started in the United States in the 1920s. Jazz had become popular music in the country, although older generations considered the music immoral and threatening to old cultural values. Dances such as the Charleston and the Black Bottom were very popular during the period, and jazz bands typically consisted of seven to twelve musicians. Important orchestras in New York were led by Fletcher Henderson, Paul Whiteman and Duke Ellington. Many New Orleans jazzmen had moved to Chicago during the late 1910s in search of employment; among others, the New Orleans Rhythm Kings, King Oliver's Creole Jazz Band and Jelly Roll Morton recorded in the city. However, Chicago's importance as a center of jazz music started to diminish toward the end of the 1920s in favor of New York.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21145141", "title": "Jazz education", "section": "Section::::Early jazz education.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 483, "text": "In the late 1910s and early 1920s, jazz begins to move north to Chicago and New York City. These two urban areas were particularly popular because they provided a larger audience base for performers and closer proximity to recording studios. During the early part of the 1920s, New Orleans Jazz was prevalent in the many nightclubs sprung up in Chicago. In New York a new style of jazz became immensely popular. This style, known as Big Band, ushered in a new era of jazz education.\n", "bleu_score": null, "meta": null } ] } ]
null
2glbgt
why do antivirus programs identify videogame cracks as dangerous viruses or trojans?
[ { "answer": "Because sometimes (read mostly) they are. In that form the attacker finds easy targets. \n\nThere is no free lunch. Someone is getting screwed when you install a cracked videogame.", "provenance": null }, { "answer": "They will often have information from game companies about the file signature the game should have. If the file doesn't match the signature, then it marks the file as being compromised -- because, well, it has, even if you're the one who intentionally compromised it with the crack.", "provenance": null }, { "answer": "It depends. With some code, especially in some games, it can be misidentified as a virus due to the fact it is similar to a virus' code, marking it as a virus or trojan. On top of this, some malicious uploaders put viruses or trojans into your download, so that, when you run it, it activates the malicious code. Other times they are false-positives, where the anti-virus program flags it as a virus when there is nothing wrong with it. Moral of the story, don't get PC game cracks if you want to be safe, and legal, but if you want to, I will not stop you.", "provenance": null }, { "answer": "Because most of the time crack are viruses and/or trojans. That is how they break into the game. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "218447", "title": "Polymorphic code", "section": "Section::::Malicious code.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 446, "text": "Most anti-virus software and intrusion detection systems (IDS) attempt to locate malicious code by searching through computer files and data packets sent over a computer network. If the security software finds patterns that correspond to known computer viruses or worms, it takes appropriate steps to neutralize the threat. Polymorphic algorithms make it difficult for such software to recognize the offending code because it constantly mutates.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "47785", "title": "Dr. Mario", "section": "Section::::Reception.:Legacy.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 261, "text": "The viruses appear as enemies in \"\" and \"\". In that game, they change colors every time they are attacked, and they are all defeated when they are all the same color, in a similar fashion to how they are defeated by the same color of the capsules in Dr. Mario.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31469655", "title": "Cybercrime countermeasures", "section": "Section::::Types of threats.:Technical.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 430, "text": "Antivirus can be used to prevent propagation of malicious code. Most computer viruses have similar characteristics which allow for signature based detection. Heuristics such as file analysis and file emulation are also used to identify and remove malicious programs. 
Virus definitions should be regularly updated in addition to applying operating system hotfixes, service packs, and patches to keep computers on a network secure.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20901", "title": "Malware", "section": "Section::::Anti-malware strategies.:Anti-virus and anti-malware software.\n", "start_paragraph_id": 64, "start_character": 0, "end_paragraph_id": 64, "end_character": 648, "text": "Real-time protection from malware works identically to real-time antivirus protection: the software scans disk files at download time, and blocks the activity of components known to represent malware. In some cases, it may also intercept attempts to install start-up items or to modify browser settings. Because many malware components are installed as a result of browser exploits or user error, using security software (some of which are anti-malware, though many are not) to \"sandbox\" browsers (essentially isolate the browser from the computer and hence any malware induced change) can also be effective in helping to restrict any damage done.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "53770851", "title": "Cyber self-defense", "section": "Section::::Measures.:Preventative Software Measures.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 230, "text": "BULLET::::- Use, but do not rely solely on antivirus software, as evading it is trivial for threat actors due to its reliance on an easily altered digital signature, a form of applied hash, of the previously known malicious code.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "268622", "title": "Antivirus software", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 762, "text": "Antivirus software was originally developed to detect and remove computer viruses, hence the name. However, with the proliferation of other kinds of malware, antivirus software started to provide protection from other computer threats. In particular, modern antivirus software can protect users from: malicious browser helper objects (BHOs), browser hijackers, ransomware, keyloggers, backdoors, rootkits, trojan horses, worms, malicious LSPs, dialers, fraudtools, adware and spyware. Some products also include protection from other computer threats, such as infected and malicious URLs, spam, scam and phishing attacks, online identity (privacy), online banking attacks, social engineering techniques, advanced persistent threat (APT) and botnet DDoS attacks.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18994196", "title": "Computer virus", "section": "Section::::Vulnerabilities and infection vectors.:Software bugs.\n", "start_paragraph_id": 62, "start_character": 0, "end_paragraph_id": 62, "end_character": 428, "text": "As software is often designed with security features to prevent unauthorized use of system resources, many viruses must exploit and manipulate security bugs, which are security defects in a system or application software, to spread themselves and infect other computers. Software development strategies that produce large numbers of \"bugs\" will generally also produce potential exploitable \"holes\" or \"entrances\" for the virus.\n", "bleu_score": null, "meta": null } ] } ]
null
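Both the "file signature" answer and the Wikipedia passages above come down to pattern matching, so a toy sketch may help. This is not how any real antivirus engine is implemented; the byte pattern, single-signature scan, and file handling below are all illustrative assumptions, written in C to stay self-contained.

    /* sigscan.c: toy illustration of signature-based detection.
     * A real engine has millions of signatures plus heuristics and unpackers;
     * here one made-up 4-byte pattern stands in for a "virus definition".
     * A cracked executable gets flagged either because the crack contains a
     * known malicious pattern, or (in the game-company case described above)
     * because the patched file no longer matches the publisher's expected
     * contents. */
    #include <stdio.h>
    #include <string.h>

    static const unsigned char SIGNATURE[] = { 0xDE, 0xAD, 0xBE, 0xEF };  /* made up */

    int main(int argc, char **argv) {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <file-to-scan>\n", argv[0]);
            return 2;
        }
        FILE *f = fopen(argv[1], "rb");
        if (!f) {
            perror("fopen");
            return 2;
        }
        unsigned char buf[4096];
        size_t n;
        int flagged = 0;
        /* simple scan; for brevity it ignores matches straddling buffer edges */
        while (!flagged && (n = fread(buf, 1, sizeof buf, f)) > 0) {
            for (size_t i = 0; i + sizeof SIGNATURE <= n; i++) {
                if (memcmp(buf + i, SIGNATURE, sizeof SIGNATURE) == 0) {
                    flagged = 1;
                    break;
                }
            }
        }
        fclose(f);
        puts(flagged ? "flagged: matches a known signature"
                     : "no match for this signature");
        return flagged;
    }

Run against a file that happens to contain the bytes DE AD BE EF, this prints the "flagged" line, which is exactly the false-positive scenario one of the answers mentions: the scanner sees only the pattern match, not the intent behind the file.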
2gynsa
What's the fewest-celled multicellular life form? Are there any 2-celled organisms?
[ { "answer": "There certainly is something in between: a vast array of small creatures. The simplest of which and the smallest integrated multicellular organism is the [Tetrabaena Socialis](_URL_1_). This volvocid forms a colony of four ovoid cells, each with two equal flagella, two contractile vacuoles, and a pyrenoid and a red eyespot within a single green chloroplast. \nColonial cells are attached to each other by the protuberances of their cellular sheaths and are also held together by a gelatinous capsule surrounding the entire colony. \n[Here](_URL_0_) is a video of one swimming with it's eight tails, really cute.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "164897", "title": "Three-domain system", "section": "Section::::Niches.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 206, "text": "\"Parakaryon myojinensis\" (\"incertae sedis\") is a single-celled organism known by a unique example. \"This organism appears to be a life form distinct from prokaryotes and eukaryotes\", with features of both.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12305127", "title": "Evolutionary history of life", "section": "Section::::Sexual reproduction and multicellular organisms.:Multicellularity.\n", "start_paragraph_id": 55, "start_character": 0, "end_paragraph_id": 55, "end_character": 747, "text": "The simplest definitions of \"multicellular,\" for example \"having multiple cells,\" could include colonial cyanobacteria like \"Nostoc\". Even a technical definition such as \"having the same genome but different types of cell\" would still include some genera of the green algae Volvox, which have cells that specialize in reproduction. Multicellularity evolved independently in organisms as diverse as sponges and other animals, fungi, plants, brown algae, cyanobacteria, slime molds and myxobacteria. For the sake of brevity, this article focuses on the organisms that show the greatest specialization of cells and variety of cell types, although this approach to the evolution of biological complexity could be regarded as \"rather anthropocentric.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23084", "title": "Paleontology", "section": "Section::::History of life.\n", "start_paragraph_id": 41, "start_character": 0, "end_paragraph_id": 41, "end_character": 456, "text": "Multicellular life is composed only of eukaryotic cells, and the earliest evidence for it is the Francevillian Group Fossils from , although specialisation of cells for different functions first appears between (a possible fungus) and (a probable red alga). Sexual reproduction may be a prerequisite for specialisation of cells, as an asexual multicellular organism might be at risk of being taken over by rogue cells that retain the ability to reproduce.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6994700", "title": "Picoplankton", "section": "Section::::Classification.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 350, "text": "Furthermore, some species can also be mixotrophic. The smallest of cells (200 nm) are on the order of nanometers, not picometers. 
The SI prefix pico- is used quite loosely here, as nanoplankton and microplankton are only 10 and 100 times larger, respectively, although it is somewhat more accurate when considering the volume rather than the length.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "285948", "title": "Unicellular organism", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 570, "text": "A unicellular organism, also known as a single-celled organism, is an organism that consists of only one cell, unlike a multicellular organism that consists of more than one cell. Unicellular organisms fall into two general categories: prokaryotic organisms and eukaryotic organisms. Prokaryotes include bacteria and archaea. Many eukaryotes are multicellular, but the group includes the protozoa, unicellular algae, and unicellular fungi. Unicellular organisms are thought to be the oldest form of life, with early protocells possibly emerging 3.8–4 billion years ago.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4230", "title": "Cell (biology)", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 267, "text": "Cells consist of cytoplasm enclosed within a membrane, which contains many biomolecules such as proteins and nucleic acids. Organisms can be classified as unicellular (consisting of a single cell; including bacteria) or multicellular (including plants and animals). \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13141637", "title": "Multinucleate", "section": "Section::::Physiological examples.:Coenocytes.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 496, "text": "Furthermore, multinucleate cells are produced from specialized cell cycles in which nuclear division occurs without cytokinesis, thus leading to large coenocytes or plasmodia. In filamentous fungi, multinucleate cells may extend over hundreds of meters so that different regions of a single cell experience dramatically different microenvironments. Other examples include, the plasmodia of plasmodial slime molds (myxogastrids) and the schizont of the \"Plasmodium\" parasite which causes malaria.\n", "bleu_score": null, "meta": null } ] } ]
null
ngont
Why is it that in airliners, the cockpit windows are polygonal whereas the passenger windows have to be round to avoid stress concentrations?
[ { "answer": "Cockpit windows are designed to withstand bird strikes and are much stronger than passenger windows. ", "provenance": null }, { "answer": "The cockpit windows are in the rounded front of the fuselage, which is not a load-bearing part of the structure. An airplane fuselage is essentially just a tube, a structure that will maintain its shape and strength whether capped or open-ended. The rounded front end of an airplane is just there for aerodynamic purposes (and to keep people from falling out), so it can structurally afford large-paned windows.\n\nThe lack of a structural role for the nose of an airplane can be seen in many bombers from WWII (especially [German ones](_URL_0_)), where the entire nose was just glazed panels of glass. Rest assured that glazed noses like this were not capable of carrying significant stress loads.\n\n", "provenance": null }, { "answer": "The stress you are talking about comes from pressurization. For safety, the windows have to withstand more than the operational 5-9 PSI of pressure pressing out at all times. It is expensive (cost and weight) to install windows that can withstand thousands of pounds of force. The cockpit windows are special because they need to be big enough for pilot view. You'll notice that there are several partitioned windows in the front of a jumbo jet. This is to minimize the pressure each window has to withstand.", "provenance": null }, { "answer": "Cockpit windows are designed to withstand bird strikes and are much stronger than passenger windows. ", "provenance": null }, { "answer": "The cockpit windows are in the rounded front of the fuselage, which is not a load-bearing part of the structure. An airplane fuselage is essentially just a tube, a structure that will maintain its shape and strength whether capped or open-ended. The rounded front end of an airplane is just there for aerodynamic purposes (and to keep people from falling out), so it can structurally afford large-paned windows.\n\nThe lack of a structural role for the nose of an airplane can be seen in many bombers from WWII (especially [German ones](_URL_0_)), where the entire nose was just glazed panels of glass. Rest assured that glazed noses like this were not capable of carrying significant stress loads.\n\n", "provenance": null }, { "answer": "The stress you are talking about comes from pressurization. For safety, the windows have to withstand more than the operational 5-9 PSI of pressure pressing out at all times. It is expensive (cost and weight) to install windows that can withstand thousands of pounds of force. The cockpit windows are special because they need to be big enough for pilot view. You'll notice that there are several partitioned windows in the front of a jumbo jet. This is to minimize the pressure each window has to withstand.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "11931379", "title": "East Lancs 1984-style double-deck body", "section": "Section::::Description.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 304, "text": "This distinctive style of bodywork has a downward-sloping front window bay on the upper deck, with both top and bottom edges angled downwards. The side windows are square-cornered. 
A large double-curvature upper deck windscreen (either single-piece or two-piece) is one of the most distinctive features.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8657638", "title": "Airline seat", "section": "Section::::Seating layout.:Arrangement.\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 711, "text": "Window seats are located at the sides of the aircraft, and usually next to a window, although some aircraft have seat rows where there is a window missing. Window seats are preferred by passengers who want to have a view, or a wall which they can lean against. Passengers in seats adjacent to the aisle have the advantage of being able to leave the seat without having to clamber over the other passengers, and having an aisle they can stretch their legs into. If a seat block has three or more seats, there will also be middle seats which are unpopular because the passenger is sandwiched between two other passengers without advantages of either window or aisle seats. Middle seats are typically booked last.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "307133", "title": "Boeing 787 Dreamliner", "section": "Section::::Design.:Interior.\n", "start_paragraph_id": 82, "start_character": 0, "end_paragraph_id": 82, "end_character": 684, "text": "The 787's cabin windows are larger than any other civil air transport in-service or in development, with dimensions of , and a higher eye level so passengers can maintain a view of the horizon. The composite fuselage permits larger windows without the need for structural reinforcement. Instead of plastic window shades, the windows use electrochromism-based smart glass (supplied by PPG Industries) allowing flight attendants and passengers to adjust five levels of sunlight and visibility to their liking, reducing cabin glare while maintaining a view to the outside world, but the most opaque setting still has some transparency. The lavatory, however, has a traditional sunshade.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "160878", "title": "Cockpit", "section": "Section::::Ergonomics.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 323, "text": "Cockpit windows may be equipped with a sun shield. Most cockpits have windows that can be opened when the aircraft is on the ground. Nearly all glass windows in large aircraft have an anti-reflective coating, and an internal heating element to melt ice. Smaller aircraft may be equipped with a transparent aircraft canopy.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3686275", "title": "BOAC Flight 781", "section": "Section::::Effects of the disaster and findings.\n", "start_paragraph_id": 34, "start_character": 0, "end_paragraph_id": 34, "end_character": 427, "text": "In addition, it was discovered that the stresses around pressure cabin apertures were considerably higher than had been anticipated, particularly around sharp-cornered cut-outs, such as square windows. As a result, future jet airliners would feature windows with rounded corners, the purpose of the curve being to eliminate a stress concentration. This was a noticeable distinguishing feature of all later models of the Comet.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2496256", "title": "Amfleet", "section": "Section::::Design.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 600, "text": "The interior design recalled contemporary jet airliners. 
In common with airliners the cars featured narrow windows, which inhibited sight-seeing. The windows on the Amfleet I cars were ; this was increased to in the Amfleet II. Another factor in choosing small windows was the high incidence of rocks thrown at train windows in the 1970s. Reinforcing the impression of traveling in an airliner, the passenger seats themselves were built by the Amirail division of Aircraft Mechanics Inc. Cesar Vergara, head of car design at Amtrak in the 1990s, criticized the choice to copy the airliner aesthetic:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "348898", "title": "Fatigue (material)", "section": "Section::::Notable fatigue failures.:de Havilland Comet.\n", "start_paragraph_id": 121, "start_character": 0, "end_paragraph_id": 121, "end_character": 652, "text": "In addition, it was discovered that the stresses around pressure cabin apertures were considerably higher than had been anticipated, especially around sharp-cornered cut-outs, such as windows. As a result, all future jet airliners would feature windows with rounded corners, greatly reducing the stress concentration. This was a noticeable distinguishing feature of all later models of the Comet. Investigators from the RAE told a public inquiry that the sharp corners near the Comets' window openings acted as initiation sites for cracks. The skin of the aircraft was also too thin, and cracks from manufacturing stresses were present at the corners.\n", "bleu_score": null, "meta": null } ] } ]
null
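The "thousands of pounds of force" and "stress concentration" points in the answers above can be checked with two standard back-of-the-envelope formulas. The window sizes below are illustrative guesses, not manufacturer figures, and the 8 psi pressure differential is simply taken from the 5-9 psi range quoted in one of the answers.

    Pressurisation load on a pane:  F = \Delta p \cdot A
        cabin window, assumed 9 in x 12 in:    F \approx 8 \text{ psi} \times 108 \text{ in}^2 \approx 860 \text{ lbf}
        cockpit panel, assumed 20 in x 20 in:  F \approx 8 \text{ psi} \times 400 \text{ in}^2 \approx 3200 \text{ lbf}

    Stress concentration at an elliptical cut-out (Inglis):
        \sigma_{max} = \sigma_{nom} \left(1 + 2\,\frac{a}{b}\right)
        a = b  (circular or rounded corner)  \Rightarrow  K_t = 3
        b \to 0  (sharp corner)              \Rightarrow  K_t \to \infty

The second formula is why the BOAC Flight 781 and Comet passages above single out square windows: the pressure load itself is modest, but a sharp corner multiplies the local skin stress far more than a rounded one does, while the answers note that cockpit glazing is both much thicker (bird-strike rated) and split into smaller panes so each carries less of the total load.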
7ueul0
In 1939, the eastern half of Poland was occupied by the Soviet Union. What was life like under the Soviet occupation and what happened to the previously Soviet occupied territories after their liberation from the Germans?
[ { "answer": "During the Soviet occupation, one of the first things they did was to eliminate any chance of resistance to their occupation. The NKVD already had a large number of Polish Army officers and enlisted in prisoner camps, and to these numbers added university professors, lawyers, police officers, priests, politicians, essentially anyone who could become the focal point of resistance or who could inspire others to resist. Some were just “capitalists” such as factory owners, but the majority was part of the social and civil fabric and could become potential sources of resistance. The NKVD at one point had approximately 500,000 Poles in prison camps in Poland. The NKVD proceeded to liquidate the most troublesome @22,000m of which 14,000 were captured military or police officers. Enlisted soldiers in many instances were shipped to Siberia to provide labor gangs. The executed were then collected and buried in several mass graves in the Katyn Forrest. These graves would remain hidden until the Germans overran the area during their invasion eastward. There are another estimated 130,000 executed by the Soviets during the occupation, on top of the Katyn incident. The Katyn incident is the most notorious. The Soviets justified in that Poland was never a country, just merely a rebellious extension of Belarus and Ukraine, so captured Polish army members were not afforded the rights of POWs and were simply criminals. \n\nInitially the occupation was resented by Poles, but welcomed by ethnic Ukrainians and other minorities within the Polish boundaries. This ended quickly as it became clear the Soviets were not going to allow these groups any measure of self-determination, and would be suppressed like the Poles. The Soviets begain a process of de-Polandizing Poland. Disbanded the government, replaced the currency, the Soviets also engaged in widespread looting of Polish industry, shipping the machinery east, as well as plundering Polish national treasures, as well as petty looting of the populace. No public organizations were allowed to exist, the university and the school system had any elements of Polish culture stripped from them and were reorganized to be Soviet institutions. Sexual violence by Soviet troops appears to also have reached epidemic levels among Polish women, but historians have had only scattered and fragmented accounts and cannot accurately place a number, or even really give an accurate guess on the number of victims. (The Red Army would repeat this during their return to Poland as they drove West into Germany years later.) The Soviets began mass deportations in several cities, as well as deporting those arrested and convicted of anti-revolutionary activity or crimes against the Soviet Union. Roughly a million or so Poles were deported to Siberia and Kazakhstan. The land was reorganized according to Soviet guidelines and collectivized, as were all remaining industries. 
Basically, the Soviets were doing everything possible to annex and integrate the captured territory into the Soviet Union.\n\n*Between Nazis and Soviets* - Jan Chodakiewicz \n\n*Katyn* – Paul Allen \n\n*Poland's Holocaust* - Tadeusz Piotrowski\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "464698", "title": "Massacres of Poles in Volhynia and Eastern Galicia", "section": "Section::::Background.:Second World War.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 1010, "text": "In September 1939, at the outbreak of World War II and in accordance with the secret protocol of the Molotov–Ribbentrop Pact, Poland was invaded from the west by Nazi Germany and from the east by the Soviet Union. Volhynia was split by the Soviets into two oblasts, Rovno and Volyn of the Ukrainian SSR. Upon the annexation, the Soviet NKVD started to eliminate the predominantly Polish middle and upper classes, including social activists and military leaders. Between 1939–1941, 200,000 Poles were deported to Siberia by the Soviet authorities. Many Polish prisoners of war were deported to the East Ukraine where most of them were executed in basements of the Kharkiv NKVD offices. Estimates of the number of Polish citizens transferred to the Eastern European part of the USSR, the Urals, and Siberia range from 1.2 to 1.7 million. Tens of thousands of Poles fled from the Soviet-occupied zone to areas controlled by the Germans. The deportations and murders deprived the Poles of their community leaders.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "358259", "title": "Poles in the Soviet Union", "section": "Section::::History of Poles in the Soviet Union.:1939–1947.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 438, "text": "During World War II, after the Soviet invasion of Poland the Soviet Union occupied vast areas of eastern Poland (referred to in Poland as \"Kresy wschodnie\" or \"eastern Borderlands\"), and another 5.2–6.5 million ethnic Poles (from the total population of about 13.5 million residents of these territories) were added, followed by further large-scale forcible deportations to Siberia, Kazakhstan and other remote areas of the Soviet Union.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "194937", "title": "Byelorussian Soviet Socialist Republic", "section": "Section::::History.:Stalinist years.\n", "start_paragraph_id": 58, "start_character": 0, "end_paragraph_id": 58, "end_character": 429, "text": "In September 1939, the Soviet Union, following the Molotov–Ribbentrop Pact with Nazi Germany, occupied eastern Poland after the 1939 invasion of Poland. The former Polish territories referred to as West Belarus were incorporated into the Belarusian SSR, with an exception of the city of Vilnius and its surroundings that were transferred to Lithuania. 
The annexation was internationally recognized after the end of World War II.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "629572", "title": "Population transfer in the Soviet Union", "section": "Section::::Ethnic operations.:Western annexations and deportations, 1939–1941.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 832, "text": "After the Soviet invasion of Poland following the corresponding German invasion that marked the start of World War II in 1939, the Soviet Union annexed eastern parts (known as \"Kresy\" to the Polish or as West Belarus and West Ukraine in the USSR and among Belarusians and Ukrainians) of the Second Polish Republic, which since then became western parts of the Belarusian SSR and the Ukrainian SSR. During 1939–1941, 1.45 million people inhabiting the region were deported by the Soviet regime. According to Polish historians, 63.1% of these people were Poles and 7.4% were Jews. Previously it was believed that about 1.0 million Polish citizens died at the hands of the Soviets, but recently Polish historians, based mostly on queries in Soviet archives, estimate the number of deaths at about 350,000 people deported in 1939–1945.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11839455", "title": "Janowa Dolina massacre", "section": "Section::::World War II.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 569, "text": "In September 1939, Soviet troops, following the Molotov-Ribbentrop Pact, attacked the eastern part of Poland, which was not guarded by the Polish Army, as at the same time the Poles were fighting the Germans in the West. Eastern Poland (Kresy) was quickly occupied, together with Janowa Dolina, which, like the entire Volhynian Voivodeship, became part of the Ukrainian Soviet Socialist Republic. Together with Soviet rule came mass deportations to Siberia and other areas of the empire; between September 1939 and June 1941 Janowa Dolina lost hundreds of inhabitants.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3457", "title": "Belarus", "section": "Section::::History.:Byelorussian Soviet Socialist Republic.\n", "start_paragraph_id": 41, "start_character": 0, "end_paragraph_id": 41, "end_character": 1222, "text": "In 1939, Nazi Germany and the Soviet Union invaded and occupied Poland, marking the beginning of World War II. The Soviets invaded and annexed much of eastern Poland, which had been part of the country since the Peace of Riga two decades earlier. Much of the northern section of this area was added to the Byelorussian SSR, and now constitutes West Belarus. The Soviet-controlled Byelorussian People's Council officially took control of the territories, whose populations consisted of a mixture of Poles, Ukrainians, Belarusians and Jews, on 28 October 1939 in Białystok. Nazi Germany invaded the Soviet Union in 1941. The Brest Fortress, which had been annexed in 1939, at this time was subjected to one of the most destructive onslaughts that happened during the war. Statistically, the Byelorussian SSR was the hardest-hit Soviet republic in World War II; it remained in Nazi hands until 1944. During that time, Germany destroyed 209 out of 290 cities in the republic, 85% of the republic's industry, and more than one million buildings. 
The Nazi \"Generalplan Ost\" called for the extermination, expulsion or enslavement of most or all Belarusians for the purpose of providing more living space in the East for Germans.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11829761", "title": "Military occupations by the Soviet Union", "section": "Section::::Poland (1939–1956).\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 410, "text": "After the end of World War II, the Soviet Union kept most of the territories it occupied in 1939, while territories with an area of 21,275 square kilometers with 1.5 million inhabitants were returned to communist-controlled Poland, notably the areas near Białystok and Przemyśl. In 1944–1947, over a million Poles were resettled from the annexed territories into Poland (mostly into the Regained Territories).\n", "bleu_score": null, "meta": null } ] } ]
null
46jmwv
i have a microwave oven that has a spinning carousel that turns clockwise sometimes and counterclockwise at others with no discernible pattern. why and how does it do this?
[ { "answer": "It took me an embarrassingly long amount of time to figure this out but I noticed a similar thing with mine and it drove me nuts. The spin changed direction every time the door was opened, assuming you are checking the temp and closing it for the next round BUT only if you opened it all the way. You have to open it way wider than you actually need most times. So sometimes I was opening it enough to switch the direction, sometimes I wasn't just by happenstance.", "provenance": null }, { "answer": "Cheap motor. It will continue spinning in either direction, and starting direction depends on the exact time it's started (from the alternating power supply)", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2447871", "title": "Nordström's theory of gravitation", "section": "Section::::Features of Nordström's theory.\n", "start_paragraph_id": 35, "start_character": 0, "end_paragraph_id": 35, "end_character": 480, "text": "Thus, in Nordström's theory, if the nearly elliptical orbit is transversed counterclockwise, the long axis slowly rotates \"clockwise\", whereas in general relativity, it rotates \"counterclockwise\" six times faster. In the first case we may speak of a periastrion \"lag\" and in the second case, a periastrion \"advance\". In either theory, with more work, we can derive more general expressions, but we shall be satisfied here with treating the special case of nearly circular orbits.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "43764346", "title": "AI Mk. VIII radar", "section": "Section::::Development.:Scanning.\n", "start_paragraph_id": 53, "start_character": 0, "end_paragraph_id": 53, "end_character": 1090, "text": "The team first considered spinning the radar dish around a vertical axis and then angling the dish up and down a few degrees with each complete circuit. The vertical motion could be smoothed out by moving continually rather than in steps, producing a helix pattern. However, this helical-scan solution had two disadvantages; one was that the dish spent half of its time pointed backwards, limiting the amount of energy broadcast forward, and the other was that it required the microwave energy to somehow be sent to the antenna through a rotating feed. At a 25 October all-hands meeting attended by Dee, Hodgkin and members of the GEC group at GEC's labs, the decision was made to proceed with the helical-scan solution in spite of these issues. GEC solved the problem of having the signal turned off half the time by using two dishes mounted back-to-back and switching the output of the magnetron to the one facing forward at that instant. They initially suggested that the system would be available by December 1940, but as work progressed it became clear that it would take much longer.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5879463", "title": "Electric clock", "section": "Section::::Synchronous electric clock.:Spin-start clocks.\n", "start_paragraph_id": 34, "start_character": 0, "end_paragraph_id": 34, "end_character": 637, "text": "The earliest synchronous clocks from the 1930s were not self-starting, and had to be started by spinning a starter knob on the back. An interesting flaw in these \"spin-start\" clocks was that the motor could be started in either direction, so if the starter knob was spun in the wrong direction the clock would run backwards, the hands turning counterclockwise. Later manual-start clocks had ratchets or other linkages which prevented backwards starting. 
The invention of the shaded-pole motor allowed self-starting clocks to be made, but since the clock would restart after a power interruption, the loss of time would not be indicated.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "43764346", "title": "AI Mk. VIII radar", "section": "Section::::Description.:Displays and interpretation.\n", "start_paragraph_id": 181, "start_character": 0, "end_paragraph_id": 181, "end_character": 879, "text": "The other effect occurred when the dish was pointed towards the ground, causing a strong return that produced a sharp return on the display. Due to the circular scanning pattern, the dish would be pointed to the sides when the beam first struck the ground, continuing to strike the ground while the scanner continued rotating until it is pointed down, and then back up until the beam no longer intersects the ground again. Since the beam strikes the ground at a point closer to the aircraft when it is pointed straight down, the returns during this period are closest to the zero ring. When the reflector rotated further to the sides the beam would strike the ground further away and produce blips further from the zero line. Conveniently, the geometry of the situation causes the returns to form a series of straight lines, producing an effect similar to an artificial horizon.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "661104", "title": "Henry T. Hazard", "section": "Section::::Wheel clock.\n", "start_paragraph_id": 54, "start_character": 0, "end_paragraph_id": 54, "end_character": 233, "text": "A neighbor who had participated in setting up the wheel said it was \"so delicately balanced that it could be started rotating by the touch of a matchstick and continue to turn in the opposite direction to the rotation of the earth.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22591478", "title": "Ring spinning", "section": "Section::::How it works.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 679, "text": "The traveller, and the spindle share the same axis but rotate at different speeds. The spindle is driven and the traveller drags behind thus distributing the rotation between winding up on the spindle and twist into the yarn. The bobbin is fixed on the spindle. In a ring frames, the different speed was achieved by drag caused by air resistance and friction (lubrication of the contact surface between the traveller and the ring was a necessity). Spindles could rotate at speeds up to 25,000 rpm, this spins the yarn. The up and down ring rail motion guides the thread onto the bobbin into the shape required: i.e. a cop. The lifting must be adjusted for different yarn counts.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "251196", "title": "Twistor memory", "section": "Section::::Twistor.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 654, "text": "Twistor was similar in concept to core memory, but replaced the circular magnets with magnetic tape to store the patterns. The tape was wrapped around one set of the wires, the equivalent of the X line, in such a way that it formed a 45-degree helix. The Y wires were replaced by solenoids wrapping a number of twistor wires. Selection of a particular bit was the same as in core, with one X and Y line being powered, generating a field at 45 degrees. 
The magnetic tape was specifically selected to only allow magnetization along the length of the tape, so only a single point of the twistor would have the right direction of field to become magnetized.\n", "bleu_score": null, "meta": null } ] } ]
null
51co2s
psychologists, psychiatrists, and other experts of reddit, what makes one an addict?
[ { "answer": "Questions similar to this come up fairly often, and I always point out the following:\n\n**Addiction** is *not* the same thing as **Physical Dependency**. \n\nIf you are physically dependent on a substance (such as nicotine or alcohol) then it is 100% the substance you are physically dependent on. That's *not* \"in your mind\", as it were. Your body gets used to the substance, depends on it, and if you stop you have symptoms that can range from bing irritable to life-threatening conditions. \n\nNow, in *addition* to that, you also form a mental addiction. A habit. We are creatures of routine; classical conditioning works extremely well with Humans. We get used to stuff like smoking after a meal or drinking while watching TV, to the point where removing one makes the other feel uncomfortable. \n\nPlease note that mental addiction need not be to a substance. It can really be to *anything*. Playing video games for hours upon end after work/school, for example. \n\nIt can also *easily* apply to things people will claim are not addictive (I'm looking at you, marijuana). The truth is it might not cause physical dependency, but it absolutely can cause addiction. \n\nAs to what makes one more susceptible to addiction - we're not absolutely sure. Again, Humans in general like routine and habits, so that's kind of built-in. We also have a reward center in our brain that likes to release dopamine when something good happens - this helps addiction form (think gambling addiction: your brain releases dopamine when you win, which feels good, so you keep gambling, chasing the dopamine high!)\n\nThere are some genetic traits we're pretty sure affect this. For example, people who's brain is a bit liberal with dopamine are in a higher risk group.\n\nBut like any other genetic disposition, there are likely numerous other factors towards or against it. For example, you may be predisposed to high blood pressure, but you also stay in good shape and eat healthy, so you *don't* have high blood pressure. \n\nAs for psychological/sociological markers, I'm afraid there's no real answer. Addiction strikes across cultures, across economic classes, ages, genders, races, you name it. You can use statistics to say who's more likely to become addicted, but that doesn't work on individuals. \n\nAs for real-world applications, one thing seems to be true - addicts need to *want* to kick an addiction. \n\nAfter that, again, it's a very individual process. Finding out the reasons that lead one to their addiction is key, and then figuring out ways to avoid them. Whether that means \"Stay away from bars\" if you're an alcoholic, \"Find new friends\" if your friends are your catalyst, etc. There's not one magic treatment that works for everyone - that's why there are many different psychological models that therapists can use, and many use a combination of several. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "26955647", "title": "Addictive personality", "section": "Section::::Signs and symptoms.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 376, "text": "An addict is more prone to depression, anxiety, and anger. Both the addict's environment, genetics and biological tendency contribute to their addiction. People with very severe personality disorders are more likely to become addicts. 
Addictive substances usually stop primary and secondary neuroses, meaning people with personality disorders like the relief from their pain.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31398", "title": "Twelve-step program", "section": "Section::::Overview.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 398, "text": "Demographic preferences related to the addicts' drug of choice has led to the creation of Cocaine Anonymous, Crystal Meth Anonymous and Marijuana Anonymous. Behavioral issues such as compulsion for, and/or addiction to, gambling, crime, food, sex, hoarding, debting and work are addressed in fellowships such as Gamblers Anonymous, Overeaters Anonymous, Sexaholics Anonymous and Debtors Anonymous.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "47392548", "title": "Addictive Behaviors", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 471, "text": "Addictive Behaviors is a monthly peer-reviewed scientific journal published by Elsevier. It was established in 1975 by Peter M. Miller (Medical University of South Carolina), who remained at the helm of the journal until December 2017. The current editor-in-chief is Marcantonio M. Spada (London South Bank University), who took over from Miller in January 2018. The journal covers behavioral and psychosocial research concerning addictive behaviors in its widest sense.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6975626", "title": "Psychology of Addictive Behaviors", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 315, "text": "Psychology of Addictive Behaviors is a peer-reviewed academic journal of the American Psychological Association that publishes original articles related to the psychological aspects of addictive behaviors 8 times a year. The current editor-in-chief is Nancy M. Petry (University of Connecticut School of Medicine).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "41961484", "title": "Peg O'Connor", "section": "Section::::Publications.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 543, "text": "O'Connor's third book will explore issues of addiction and recovery through the lens of philosophy. In an interview on her book, she states: “Addicts are frequently very philosophical; we tend to be armchair thinkers. Addicts struggle with issues of self-identity, self-knowledge and self-deception, the nature of God, existential dilemmas, marking the line between appearance and reality, free will and voluntariness, and moral responsibility. These are prompted by acute instances of self-examination and reflection about how to live well.”\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26955647", "title": "Addictive personality", "section": "Section::::Treatment.\n", "start_paragraph_id": 53, "start_character": 0, "end_paragraph_id": 53, "end_character": 636, "text": "Common forms of treatment for addictive personalities include cognitive behavioral therapy, as well as other behavioral approaches. These treatments help patients by providing healthy coping skills training, relapse prevention, behavior interventions, family and group therapy, facilitated self-change approaches, and aversion therapy. Behavioral approaches include using positive reinforcement and behavioral modeling. 
Along with these, other options that help with treating those who suffer with addictive personality include social support, help with goal direction, rewards, enhancing self-efficacy and help teaching coping skills.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26955647", "title": "Addictive personality", "section": "Section::::Signs and symptoms.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 415, "text": "People who suffer from an addictive personality spend excessive time on a behavior or with an item, not as a hobby but because they feel they have to. Addiction can be defined when the engagement in the activity or experience affects the person’s quality of life in some way. In this way, many people who maintain an addictive personality isolate themselves from social situations in order to mask their addiction.\n", "bleu_score": null, "meta": null } ] } ]
null
1gsdmu
Why do planets have axial tilts which deviate greatly from the normal of their orbital plane?
[ { "answer": "I am no astronomer, but in the off chance this doesn't get answered, I'll give it a go.\n\nTheir orbit around the sun, as you said, is planar. However, the gravitational force acting on each planetary body is relatively uniform. That is to say, the gravity pulls no more on the poles than anywhere else on the planet. The tilts, therefore, are not influenced by the sun's gravitational well. The planets' individual rotations and original tilts are due to formative collisions and other influencing forces.", "provenance": null }, { "answer": "Astronomers believe that planets with large tilts (such as Uranus) and planets with retrograde orbits came to be due to collisions in the protoplanetary disk. Collisions between different planetesimals and other debris was very common, and if these collisions were large enough, a planet's tilt and orbit could be affected. \n\nIn additions, if two stars form relatively close together, their protoplanetary disks can collide. This would cause debris to collide, and the materials in the disk would be affected greatly by another star's gravitational pull.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "18352021", "title": "Kepler orbit", "section": "Section::::Development of the laws.:Simplified two body problem.\n", "start_paragraph_id": 52, "start_character": 0, "end_paragraph_id": 52, "end_character": 509, "text": "Planets rotate at varying rates and thus may take a slightly oblate shape because of the centrifugal force. With such an oblate shape, the gravitational attraction will deviate somewhat from that of a homogeneous sphere. This phenomenon is quite noticeable for artificial Earth satellites, especially those in low orbits. At larger distances the effect of this oblateness becomes negligible. Planetary motions in the Solar System can be computed with sufficient precision if they are treated as point masses.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "145381", "title": "Ellipsoid", "section": "Section::::Applications.:Dynamical properties.\n", "start_paragraph_id": 82, "start_character": 0, "end_paragraph_id": 82, "end_character": 296, "text": "One practical effect of this is that scalene astronomical bodies such as generally rotate along their minor axes (as does Earth, which is merely oblate); in addition, because of tidal locking, moons in synchronous orbit such as Mimas orbit with their major axis aligned radially to their planet.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6997062", "title": "HAT-P-1b", "section": "Section::::Characteristics.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 306, "text": "An alternative possibility is that the planet has a high axial tilt, like Uranus in the Solar System. 
The problem with this explanation is that it is thought to be quite difficult to get a planet into this configuration, so having two such planets among the set of known transiting planets is problematic.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "56440", "title": "Orbital inclination", "section": "Section::::Other meaning.\n", "start_paragraph_id": 32, "start_character": 0, "end_paragraph_id": 32, "end_character": 275, "text": "For planets and other rotating celestial bodies, the angle of the equatorial plane relative to the orbital plane — such as the tilt of the Earth's poles toward or away from the Sun — is sometimes also called inclination, but less ambiguous terms are axial tilt or obliquity.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "39789", "title": "Rotation", "section": "Section::::Astronomy.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 397, "text": "Another consequence of the rotation of a planet is the phenomenon of precession. Like a gyroscope, the overall effect is a slight \"wobble\" in the movement of the axis of a planet. Currently the tilt of the Earth's axis to its orbital plane (obliquity of the ecliptic) is 23.44 degrees, but this angle changes slowly (over thousands of years). (See also Precession of the equinoxes and Pole star.)\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15032003", "title": "Exoplanetology", "section": "Section::::Rotation and axial tilt.:Tidal effects.\n", "start_paragraph_id": 62, "start_character": 0, "end_paragraph_id": 62, "end_character": 377, "text": "For most planets, the rotation period and axial tilt (also called obliquity) are not known, but a large number of planets have been detected with very short orbits (where tidal effects are greater) that will probably have reached an equilibrium rotation that can be predicted (\"i.e.\" tidal lock, spin–orbit resonances, and non-resonant equilibria such as retrograde rotation).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15032003", "title": "Exoplanetology", "section": "Section::::Rotation and axial tilt.:Tidal effects.\n", "start_paragraph_id": 63, "start_character": 0, "end_paragraph_id": 63, "end_character": 371, "text": "Gravitational tides tend to reduce the axial tilt to zero but over a longer timescale than the rotation rate reaches equilibrium. However, the presence of multiple planets in a system can cause axial tilt to be captured in a resonance called a Cassini state. There are small oscillations around this state and in the case of Mars these axial tilt variations are chaotic.\n", "bleu_score": null, "meta": null } ] } ]
null
1r8p24
When DNA is copied where do the new nucleotides that create more DNA come from?
[ { "answer": "Nucleotides can be synthesized \"de novo\" from precursor molecules (obtained from the breakdown of food, for example). The major organ involved in this process is the liver. However, nucleotides can also be [recycled](_URL_0_) through a process that synthesizes nucleotides from the components of degraded nucleotides.", "provenance": null }, { "answer": "The new nucleotides are synthesized from a large number of other precursors, such as folic acid, glutamine, glycine, etc. The method of synthesis differs between purines (A and G) and pyrimidines (T and C). \n\nThe purine synthesis pathway is [quite long](_URL_0_), but can be summed up as resulting in the end product inosine monophosphate (IMP). This can be interconverted to GMP or AMP. Two more phosphate groups are added on to give the triphosphate tail of a nucleotide. These ribonucleotides (NTPs) are then converted to deoxyribonucleotides (dNTPs) using [Ribonucleotide Reductase](_URL_2_) and a dNTP is born.\n\nThe pyrimidine synthesis pathway is of [similar length](_URL_1_) and gives uridine monophosphate (UMP), which is then converted to UTP (used in RNA synthesis). UTP can be interconverted with CTP and TTP. Ribonucleotide Reductase once again converts the NTPs into dNTPs.", "provenance": null }, { "answer": "The same mechanism that creates energy from your food has a built in arm that takes the energy in food and instead of creating energy for later use, it uses the energy from food to create the bases needed for DNA replication. But it only does this when the cell decides to replicate so most of the time, it just stores energy from food. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "8406655", "title": "Introduction to genetics", "section": "Section::::How genes work.:Genes are copied.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 729, "text": "When DNA is copied, the two strands of the old DNA are pulled apart by enzymes; then they pair up with new nucleotides and then close. This produces two new pieces of DNA, each containing one strand from the old DNA and one newly made strand. This process is not predictably perfect as proteins attach to a nucleotide while they are building and cause a change in the sequence of that gene. These changes in DNA sequence are called mutations. Mutations produce new alleles of genes. Sometimes these changes stop the functioning of that gene or make it serve another advantageous function, such as the melanin genes discussed above. These mutations and their effects on the traits of organisms are one of the causes of evolution.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "235926", "title": "DNA polymerase", "section": "Section::::Function.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 769, "text": "When synthesizing new DNA, DNA polymerase can add free nucleotides only to the 3' end of the newly forming strand. This results in elongation of the newly forming strand in a 5'–3' direction. No known DNA polymerase is able to begin a new chain (\"de novo\"); it can only add a nucleotide onto a pre-existing 3'-OH group, and therefore needs a primer at which it can add the first nucleotide. Primers consist of RNA or DNA bases (or both). In DNA replication, the first two bases are always RNA, and are synthesized by another enzyme called primase. 
Helicase and topoisomerase II are required to unwind DNA from a double-strand structure to a single-strand structure to facilitate replication of each strand consistent with the semiconservative model of DNA replication.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8406655", "title": "Introduction to genetics", "section": "Section::::How genes work.:Genes are copied.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 261, "text": "Genes are copied each time a cell divides into two new cells. The process that copies DNA is called DNA replication. It is through a similar process that a child inherits genes from its parents, when a copy from the mother is mixed with a copy from the father.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8406655", "title": "Introduction to genetics", "section": "Section::::How genes work.:Genes are copied.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 564, "text": "DNA can be copied very easily and accurately because each piece of DNA can direct the creation of a new copy of its information. This is because DNA is made of two strands that pair together like the two sides of a zipper. The nucleotides are in the center, like the teeth in the zipper, and pair up to hold the two strands together. Importantly, the four different sorts of nucleotides are different shapes, so for the strands to close up properly, an A nucleotide must go opposite a T nucleotide, and a G opposite a C. This exact pairing is called base pairing.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13537626", "title": "Quantum biology", "section": "Section::::Applications.:DNA mutation.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 535, "text": "Whenever a cell reproduces, it must copy these strands of DNA. However, sometimes throughout the process of copying the strand of DNA a mutation, or an error in the DNA code, can occur. A theory for the reasoning behind DNA mutation is explained in the Lowdin DNA mutation model. In this model, a nucleotide may change its form through a process of quantum tunneling. Because of this, the changed nucleotide will lose its ability to pair with its original base pair and consequently changing the structure and order of the DNA strand.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "47275926", "title": "Janet E. Mertz", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 333, "text": "In the interim, in collaboration with Ronald W. Davis, Mertz discovered that DNA ends generated by cutting with the EcoRI restriction enzyme are “sticky”, permitting any two such DNAs to be readily “recombined”. Using this discovery, in June 1972 she easily created the first recombinant DNA that could have been cloned in bacteria \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34656547", "title": "Staggered extension process", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 292, "text": "In these cycles the elongation of DNA is very quick (only a few hundred base pairs) and synthesized fragments anneal with complementary fragments of other strands. In this way, mutations of the initial genes are shuffled and in the end genes with new combinations of mutations are amplified.\n", "bleu_score": null, "meta": null } ] } ]
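A quick illustration of the copying step described in the provenance passages above (each old strand templates a complementary new strand, A pairing with T and G with C). This is a toy sketch in Python added for illustration only, not part of any answer above; the names are made up, and real replication of course involves helicase, primase, polymerase, and a supply of dNTPs rather than string operations.

```python
# Toy model of semiconservative DNA replication at the sequence level.
# Each parent strand serves as a template for a complementary daughter strand.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    """Return the strand that base-pairs with the given template (A<->T, G<->C)."""
    return "".join(PAIR[base] for base in strand)

def replicate(duplex):
    """Split a (top, bottom) duplex and pair each old strand with a newly made
    complement, giving two daughter duplexes -- the semiconservative model."""
    top, bottom = duplex
    return [(top, complement(top)), (complement(bottom), bottom)]

if __name__ == "__main__":
    parent = ("ATGCGT", complement("ATGCGT"))
    print(replicate(parent))  # two identical daughter duplexes
```

Each daughter duplex keeps one old strand and gains one newly built strand, which is why the dNTP supply discussed in the answers has to be replenished before every round of cell division.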
null
qpb7h
what are the main features of the unity engine? its advantages and disadvantages against other game engines?
[ { "answer": "The Unity Engine is a 3d based engine used primary for games. \n\nThe Unitiy's Engines advantages are \n\n1.) Run on almost every platfrom (Unreal 3 for example only runs on PC, Window, PS3 with a Lite version running on Iphone)\n\n2.) Comes with a easy to use user interface that is visual and lower the amount of programming in contrast to say the Quake Engine. (I believe Unity is a LUA friendly)\n\n3.) Well supported in contrast to say the horrible engines you haven't heard about.\n\n4.) Cheap, in comparision to the Unreal Engine which is significantly more expensive to license for games that you are selling.\n\nUnity's disadvanatages are.\n\n1.) Because it runs on so many platfrom it's difficult for the system to be truly optimized for one platform. If your look to do super high performance things you will have more difficult in contrast to say your own engine that you've developed. \n\n2.) Doesn't have the peneration of say Flash. If you want people to download and play your game in a browser more people will have Flash then Unity.\n\n3.) As with all engines your still abstracted from what your doing. If you make your own Engine then you'll have more control and will fight with the engine less. For example using OpenCL in the Unity would be difficult.\n\n4.) Not as cheap as Open Source or writing you own in something like DirectX or OpenGl .\n\n\n\n:P Innovating gameplay mechanics sound cool. But in reality innovative game mechanic actually means \"I really enjoy Algebra, and Trig.\" ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "5462396", "title": "Unity (game engine)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 569, "text": "Unity is a cross-platform game engine developed by Unity Technologies, first announced and released in June 2005 at Apple Inc.'s Worldwide Developers Conference as a Mac OS X-exclusive game engine. As of 2018, the engine had been extended to support more than 25 platforms. The engine can be used to create three-dimensional, two-dimensional, virtual reality, and augmented reality games, as well as simulations and other experiences. The engine has been adopted by industries outside video gaming, such as film, automotive, architecture, engineering and construction.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5462396", "title": "Unity (game engine)", "section": "Section::::History.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 353, "text": "The Unity game engine launched in 2005, aiming to \"democratize\" game development by making it accessible to more developers. The next year, Unity was named runner-up in the Best Use of Mac OS X Graphics category in Apple Inc.'s Apple Design Awards. Unity was initially released for Mac OS X, later adding support for Microsoft Windows and Web browsers.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5462396", "title": "Unity (game engine)", "section": "Section::::Overview.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 532, "text": "Unity gives users the ability to create games and experiences in both 2D and 3D, and the engine offers a primary scripting API in C#, for both the Unity editor in the form of plugins, and games themselves, as well as drag and drop functionality. 
Prior to C# being the primary programming language used for the engine, it previously supported Boo, which was removed with the release of Unity 5, and a version of JavaScript called \"UnityScript\", which was deprecated in August 2017, after the release of Unity 2017.1, in favor of C#.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1133784", "title": "Mobile game", "section": "Section::::Different platforms.\n", "start_paragraph_id": 33, "start_character": 0, "end_paragraph_id": 33, "end_character": 331, "text": "Due to its ease of porting between mobile operating systems and extensive developer community, Unity is one of the most widely used engines used by modern mobile games. Apple provide a number of proprietary technologies (such as Metal) intended to allow developers to make more effective use of their hardware in iOS-native games.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5462396", "title": "Unity (game engine)", "section": "Section::::History.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 1661, "text": "\"The Verge\" said of 2015's Unity 5 release: \"Unity started with the goal of making game development universally accessible. [...] Unity 5 is a long-awaited step towards that future.\" With Unity 5, the engine improved its lighting and audio. Through WebGL, Unity developers could add their games to compatible Web browsers with no plug-ins required for players. Unity 5.0 offered real-time global illumination, light mapping previews, Unity Cloud, a new audio system, and the Nvidia PhysX3.3 physics engine. The fifth generation of the Unity engine also introduced Cinematic Image Effects to help make Unity games look less generic. Unity 5.6 added new lighting and particle effects, updated the engine's overall performance, and added native support for Nintendo Switch, Facebook Gameroom, Google Daydream VR, and the Vulkan graphics API. It introduced a 4K video player capable of running 360-degree videos for virtual reality. However, some gamers criticized Unity's accessibility due to the high volume of quickly produced games published on the Steam distribution platform by inexperienced developers. CEO John Riccitiello said in an interview that he believes this to be a side-effect of Unity's success in democratizing game development: \"If I had my way, I'd like to see 50 million people using Unity – although I don't think we're going to get there any time soon. I'd like to see high school and college kids using it, people outside the core industry. I think it's sad that most people are consumers of technology and not creators. The world's a better place when people know how to create, not just consume, and that's what we're trying to promote.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5462396", "title": "Unity (game engine)", "section": "Section::::History.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 746, "text": "In 2012 \"VentureBeat\" wrote, \"Few companies have contributed as much to the flowing of independently produced games as Unity Technologies. [...] More than 1.3 million developers are using its tools to create gee-whiz graphics in their iOS, Android, console, PC, and web-based games. [...] Unity wants to be the engine for multi-platform games, period.\" A May 2012 survey by \"Game Developer\" magazine indicated Unity as its top game engine for mobile platforms. 
In July 2014, Unity won the \"Best Engine\" award at the UK's annual Develop Industry Excellence Awards. In November 2012, Unity Technologies delivered Unity 4.0. This version added DirectX 11 and Adobe Flash support, new animation tools called Mecanim, and access to the Linux preview.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "38993494", "title": "Might & Magic X: Legacy", "section": "Section::::Development.\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 238, "text": "Unity 3D game engine was chosen for the game and game was intended to be modifiable by community. This however required a function only available in Unity 3D Pro version, creating controversy between players, Ubisoft and Limbic dev team.\n", "bleu_score": null, "meta": null } ] } ]
null
qw5wl
What is it that actually transfers light in a perfect vacuum?
[ { "answer": "You do not need particles to move for energy to exist. Light consists of an oscillating electric field that produces an oscillating magnetic field that produces an oscillating electric field and so on. These fields happily exist separately from particles.", "provenance": null }, { "answer": "You've actually captured the of the idea behind [Aether theories](_URL_1_). People thought that electromagnetic waves needed a medium to pass through, and they called it the Aether. (In an interesting historical side note, in the 1800's many physics buildings were built without the use of nails or screws, because one of the last great challenges of physics was to measure the aether, and it was believed ferrous metal would interfere with the experiments) \n\nWell, [the Michelson-Morley experiments](_URL_0_) are the ones that finally put a rest to the idea of an aether which electromagnetic waves travel through- adding more evidence to the particle theory of light. We now know that light propagates via photons, massless particles that are able to pass through a vacuum no problem. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "26000", "title": "Ray tracing (graphics)", "section": "Section::::Detailed description of ray tracing computer algorithm and its genesis.:What happens in (simplified) nature.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 1640, "text": "In nature, a light source emits a ray of light which travels, eventually, to a surface that interrupts its progress. One can think of this \"ray\" as a stream of photons traveling along the same path. In a perfect vacuum this ray will be a straight line (ignoring relativistic effects). Any combination of four things might happen with this light ray: absorption, reflection, refraction and fluorescence. A surface may absorb part of the light ray, resulting in a loss of intensity of the reflected and/or refracted light. It might also reflect all or part of the light ray, in one or more directions. If the surface has any transparent or translucent properties, it refracts a portion of the light beam into itself in a different direction while absorbing some (or all) of the spectrum (and possibly altering the color). Less commonly, a surface may absorb some portion of the light and fluorescently re-emit the light at a longer wavelength color in a random direction, though this is rare enough that it can be discounted from most rendering applications. Between absorption, reflection, refraction and fluorescence, all of the incoming light must be accounted for, and no more. A surface cannot, for instance, reflect 66% of an incoming light ray, and refract 50%, since the two would add up to be 116%. From here, the reflected and/or refracted rays may strike other surfaces, where their absorptive, refractive, reflective and fluorescent properties again affect the progress of the incoming rays. Some of these rays travel in such a way that they hit our eye, causing us to see the scene and so contribute to the final rendered image.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4068", "title": "Blaise Pascal", "section": "Section::::Contributions to the physical sciences.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 261, "text": "BULLET::::- Therefore, since there had to be an invisible \"something\" to move the light through the glass tube, there was no vacuum in the tube. Not in the glass tube or anywhere else. 
Vacuums – the absence of any and everything – were simply an impossibility.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11439", "title": "Faster-than-light", "section": "Section::::Justifications.:Casimir vacuum and quantum tunnelling.\n", "start_paragraph_id": 63, "start_character": 0, "end_paragraph_id": 63, "end_character": 1835, "text": "The experimental determination has been made in vacuum. However, the vacuum we know is not the only possible vacuum which can exist. The vacuum has energy associated with it, called simply the vacuum energy, which could perhaps be altered in certain cases. When vacuum energy is lowered, light itself has been predicted to go faster than the standard value \"c\". This is known as the Scharnhorst effect. Such a vacuum can be produced by bringing two perfectly smooth metal plates together at near atomic diameter spacing. It is called a Casimir vacuum. Calculations imply that light will go faster in such a vacuum by a minuscule amount: a photon traveling between two plates that are 1 micrometer apart would increase the photon's speed by only about one part in 10. Accordingly, there has as yet been no experimental verification of the prediction. A recent analysis argued that the Scharnhorst effect cannot be used to send information backwards in time with a single set of plates since the plates' rest frame would define a \"preferred frame\" for FTL signalling. However, with multiple pairs of plates in motion relative to one another the authors noted that they had no arguments that could \"guarantee the total absence of causality violations\", and invoked Hawking's speculative chronology protection conjecture which suggests that feedback loops of virtual particles would create \"uncontrollable singularities in the renormalized quantum stress-energy\" on the boundary of any potential time machine, and thus would require a theory of quantum gravity to fully analyze. Other authors argue that Scharnhorst's original analysis, which seemed to show the possibility of faster-than-\"c\" signals, involved approximations which may be incorrect, so that it is not clear whether this effect could actually increase signal speed at all.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "32496", "title": "Vacuum tube", "section": "Section::::Reliability.\n", "start_paragraph_id": 134, "start_character": 0, "end_paragraph_id": 134, "end_character": 317, "text": "When a vacuum tube is overloaded or operated past its design dissipation, its anode (plate) may glow red. In consumer equipment, a glowing plate is universally a sign of an overloaded tube. However, some large transmitting tubes are designed to operate with their anodes at red, orange, or in rare cases, white heat.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "32502", "title": "Vacuum", "section": "Section::::Uses.\n", "start_paragraph_id": 67, "start_character": 0, "end_paragraph_id": 67, "end_character": 1487, "text": "Vacuum is useful in a variety of processes and devices. Its first widespread use was in the incandescent light bulb to protect the filament from chemical degradation. The chemical inertness produced by a vacuum is also useful for electron beam welding, cold welding, vacuum packing and vacuum frying. Ultra-high vacuum is used in the study of atomically clean substrates, as only a very good vacuum preserves atomic-scale clean surfaces for a reasonably long time (on the order of minutes to days). 
High to ultra-high vacuum removes the obstruction of air, allowing particle beams to deposit or remove materials without contamination. This is the principle behind chemical vapor deposition, physical vapor deposition, and dry etching which are essential to the fabrication of semiconductors and optical coatings, and to surface science. The reduction of convection provides the thermal insulation of thermos bottles. Deep vacuum lowers the boiling point of liquids and promotes low temperature outgassing which is used in freeze drying, adhesive preparation, distillation, metallurgy, and process purging. The electrical properties of vacuum make electron microscopes and vacuum tubes possible, including cathode ray tubes. Vacuum interrupters are used in electrical switchgear. Vacuum arc processes are industrially important for production of certain grades of steel or high purity materials. The elimination of air friction is useful for flywheel energy storage and ultracentrifuges.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "41564", "title": "Polarization (waves)", "section": "Section::::Implications for reflection and propagation.:Polarization in wave propagation.\n", "start_paragraph_id": 63, "start_character": 0, "end_paragraph_id": 63, "end_character": 261, "text": "In a vacuum, the components of the electric field propagate at the speed of light, so that the phase of the wave varies in space and time while the polarization state does not. That is, the electric field vector e of a plane wave in the +\"z\" direction follows:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "32502", "title": "Vacuum", "section": "Section::::Classical field theories.:Electromagnetism.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 476, "text": "In classical electromagnetism, the vacuum of free space, or sometimes just \"free space\" or \"perfect vacuum\", is a standard reference medium for electromagnetic effects. Some authors refer to this reference medium as \"classical vacuum\", a terminology intended to separate this concept from QED vacuum or QCD vacuum, where vacuum fluctuations can produce transient virtual particle densities and a relative permittivity and relative permeability that are not identically unity.\n", "bleu_score": null, "meta": null } ] } ]
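The first answer's picture of light as mutually regenerating electric and magnetic fields can be made precise with standard electromagnetism; the relations below are textbook material added here for illustration, not something stated in the thread. In empty space, Maxwell's equations combine into wave equations that contain no medium at all, only the vacuum constants, and the propagation speed falls out of those constants.

```latex
% Source-free Maxwell equations in vacuum yield wave equations for E and B;
% no material medium appears, only the vacuum permeability and permittivity.
\[
  \nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2},
  \qquad
  \nabla^2 \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{B}}{\partial t^2},
  \qquad
  c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3 \times 10^{8}\ \mathrm{m/s}.
\]
```

Nothing has to "carry" the wave: the fields themselves satisfy a self-contained propagation equation, which is also why the Michelson-Morley result mentioned in the second answer could dispense with an aether.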
null
2px3a7
i've been hearing a lot on reddit lately about how the baby boomer generation "screwed over" the generations to follow. what specifically have they done that was wrong? it seems like they were just dealt a better hand by circumstance and weren't able to control the bad things to follow.
[ { "answer": "Because of the size of their generation, the Boomers caused a lot of disruptions in society as they passed through, especially in the education system in the 1950s and 1960s, and the healthcare system today. There simply wasn't the capacity for them at the time and those systems were strained. \n\nMost of America's current political leaders are also Boomers, so the whole generation gets sort of conflated with \"those grey haired bastards that aren't doing anything about tuition hikes or youth unemployment\". ", "provenance": null }, { "answer": "I think the general idea is that coming out of the depression America created a country where there was a lot of manufacturing, benefits, solid pay, and a very reasonable cost of living. As the boomers came into power and moved into higher positions in companies they controlled things like eliminating pensions, slashing benefits, raising executive salaries while freezing or extremely slowly raising low level wages, and eliminating many decently paid blue collar positions by moving manufacturing and customer service overseas. In addition, they still hold a lot of higher positions and aren't retiring.\n\nWhile some of that is likely true like many things it's probably a lot more complex than that. Outsourcing of manufacturing is inevitable as the populace demands lower prices for things and other countries will work for pennies on the dollar; similarly cuts need to be made in companies and that results in slashing benefits and pensions. Issues like obnoxiously high executive pay aren't limited to boomers and they certainly aren't the only ones that are benefitting or enabling it. \n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "47127", "title": "Baby boomers", "section": "Section::::Aging and end-of-life issues.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 550, "text": ", it was reported that, as a generation, boomers had tended to avoid discussions and long-term planning for their demise. However, since 1998 or earlier, there has been a growing dialogue on how to manage aging and end-of-life issues as the generation ages. In particular, a number of commentators have argued that Baby Boomers are in a state of denial regarding their own aging and death and are leaving an undue economic burden on their children for their retirement and care. According to the 2011 Associated Press and LifeGoesStrong.com surveys:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "51240626", "title": "Cusper", "section": "Section::::Notable cusper groups.:Baby Boomers/Generation X.:Characteristics.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 766, "text": "This population is sometimes referred to as Generation Jones, and less commonly as Tweeners. These cuspers were not as financially successful as older Baby Boomers. They experienced a recession like many Generation Xers but had a much more difficult time finding jobs than Generation X did. While they learned to be IT-savvy, they didn't have computers until after high school but were some of the first to purchase them for their homes. They were among some of the first to take an interest in video games. They get along well with Baby Boomers, but share different values. While they are comfortable in office environments, they are more relaxed at home. 
They're less interested in advancing their careers than Baby Boomers and more interested in quality of life.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22380657", "title": "Francis Beckett", "section": "Section::::Career.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 303, "text": "In 2010 \"What Did the Baby Boomers Ever Do For Us?\" was published by Biteback. The book claims that the baby boomer generation inherited the good years, and pulled the ladder up after them. \"Blair Inc: The Man Behind The Mask\", co-written with David Hencke and Nick Kochan, was published in March 2015.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "47127", "title": "Baby boomers", "section": "Section::::Characteristics.:Size and economic impact.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 583, "text": "In addition to the size of the group, Steve Gillon has suggested that one thing that sets the baby boomers apart from other generational groups is the fact that \"almost from the time they were conceived, Boomers were dissected, analyzed, and pitched to by modern marketers, who reinforced a sense of generational distinctiveness.\" This is supported by the articles of the late 1940s identifying the increasing number of babies as an economic boom, such as a 1948 \"Newsweek\" article whose title proclaimed \"Babies Mean Business\", or a 1948 \"Time\" magazine article called \"Baby Boom.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "47127", "title": "Baby boomers", "section": "Section::::Impact on history and culture.\n", "start_paragraph_id": 47, "start_character": 0, "end_paragraph_id": 47, "end_character": 266, "text": "People often take it for granted that each succeeding generation will be \"better off\" than the one before it. When Generation X came along just after the boomers, they would be the first generation to enjoy a lesser quality of life than the generation preceding it.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "47127", "title": "Baby boomers", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 671, "text": "The boomers have tended to think of themselves as a special generation, very different from preceding and subsequent generations. In the 1960s and 1970s, as a relatively large number of young people entered their late teens—the oldest turned 18 in 1964—they, and those around them, created a very specific rhetoric around their cohort and the changes brought about by their size in numbers. This rhetoric had an important impact in the self-perceptions of the boomers, as well as their tendency to define the world in terms of generations, which was a relatively new phenomenon. 
The baby boom has been described variously as a \"shockwave\" and as \"the pig in the python\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "47127", "title": "Baby boomers", "section": "Section::::Impact on history and culture.\n", "start_paragraph_id": 45, "start_character": 0, "end_paragraph_id": 45, "end_character": 373, "text": "An indication of the importance put on the impact of the boomer was the selection by \"TIME\" magazine of the Baby Boom Generation as its 1966 \"Man of the Year.\" As Claire Raines points out in \"Beyond Generation X\", \"never before in history had youth been so idealized as they were at this moment.\" When Generation X came along it had much to live up to according to Raines.\n", "bleu_score": null, "meta": null } ] } ]
null
1k6pms
Does applying water or ice to your pulse points actually cool you off?
[ { "answer": "The most effective places to place ice are neck (carotid/jugular), armpits (axillary) and groin (femoral). It's simple physics that these areas have the largest volume of blood circulating closest to the surface of the skin and thus dissipate heat more effectively.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "726241", "title": "Anti-inflammatory", "section": "Section::::Ice treatment.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 333, "text": "Applying ice, or even cool water, to a tissue injury has an anti-inflammatory effect and is often suggested as an injury treatment and pain management technique for athletes. One common approach is rest, ice, compression and elevation. Cool temperatures inhibit local blood circulation, which reduces swelling in the injured tissue.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "862281", "title": "Hyperhidrosis", "section": "Section::::Treatment.:Medications.\n", "start_paragraph_id": 78, "start_character": 0, "end_paragraph_id": 78, "end_character": 314, "text": "For peripheral hyperhidrosis, some chronic sufferers have found relief by simply ingesting crushed ice water. Ice water helps to cool excessive body heat during its transport through the blood vessels to the extremities, effectively lowering overall body temperature to normal levels within ten to thirty minutes.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "390757", "title": "Sprain", "section": "Section::::Treatment.\n", "start_paragraph_id": 31, "start_character": 0, "end_paragraph_id": 31, "end_character": 437, "text": "BULLET::::- Ice: Ice should be applied immediately to the sprain to reduce swelling and pain. It can be applied for 10–15 minutes at a time, 3-4 times a day. Ice can be combined with a wrapping to minimize swelling and provide support. Ice to numb the pain is effective, but only for a short period of time (no more than twenty minutes.) Longer than 20 minutes can reduce the blood flow to the injured area and slow the healing process.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25087021", "title": "Pumpable ice technology", "section": "Section::::Applications.:Medicine.\n", "start_paragraph_id": 105, "start_character": 0, "end_paragraph_id": 105, "end_character": 648, "text": "A protective cooling process based on the implementation of a developed special ice slurry has been developed for medical applications. In this case pumpable ice can be injected intra-arterially, intravenously, along the external surfaces of organs using laparoscopy, or even via the endotracheal tube. It is being confirmed that pumpable ice can selectively cool organs to prevent or limit ischemic damage after a stroke or heart attack. Completed medical tests on animals simulated conditions requiring in-hospital kidney laparoscopic procedures. Results of French and US research are yet to be approved by the U.S. Food and Drug Administration.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7188009", "title": "RICE (medicine)", "section": "Section::::Primary four terms.:Ice.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 473, "text": "Ice is excellent at reducing the inflammatory response and pain associated with heat generated by increased blood flow and/or blood loss. A good method is apply ice for 20 minutes of each hour. 
Other recommendations are an alternation of ice and no-ice for 15–20 minutes each, for a 48-hour period. To prevent localised ischemia or frostbite to the skin, it is recommended that the ice be placed within a towel or other insulating material before wrapping around the area.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "45517504", "title": "Treatment of equine lameness", "section": "Section::::Cryotherapy, thermotherapy, and compression.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 832, "text": "Cold application to the skin (cryotherapy) is used to decrease pain and inflammation of acute soft tissue injuries. At a cellular level, cold application decreases the formation of exudate and diapedesis of inflammatory cells, thereby reducing edema. Cryotherapy has also been shown to reduce metabolism and thus oxygen demand of tissues, helping to prevent hypoxic tissue damage. Cold is often applied to the site of injury by hosing cold water onto the area (hydrotherapy), icing, or medical devices such as the Game Ready system that provides both cold therapy and compression. Cold salt-water spas are also available, and are used to bathe a patient’s injury in aerated, hypertonic, cold water. This combines the benefits of cryotherapy with the osmotic effect of salt, producing better analgesia and reduction of inflammation.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "153520", "title": "Fish farming", "section": "Section::::Slaughter methods.:Inhumane methods.\n", "start_paragraph_id": 82, "start_character": 0, "end_paragraph_id": 82, "end_character": 396, "text": "BULLET::::- Ice baths or chilling of farmed fish on ice or submerged in near-freezing water is used to dampen muscle movements by the fish and to delay the onset of post-death decay. However, it does not necessarily reduce sensibility to pain; indeed, the chilling process has been shown to elevate cortisol. In addition, reduced body temperature extends the time before fish lose consciousness.\n", "bleu_score": null, "meta": null } ] } ]
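The answer above says cooling works best where a large volume of blood runs close to the skin. Two standard heat-transfer relations (general physics, not taken from the thread) make that claim concrete: heat crosses the tissue faster when the conducting layer is thin, and the total heat carried away scales with how much blood flows past the cooled spot.

```latex
% Conduction through a tissue layer of thickness d (Fourier's law, slab form):
\[
  \dot{Q}_{\mathrm{cond}} = \frac{k A \,\Delta T}{d}
\]
% Heat given up by blood flowing past the cooled region, at mass flow rate \dot{m}:
\[
  \dot{Q}_{\mathrm{blood}} = \dot{m}\, c_p \,\bigl(T_{\mathrm{in}} - T_{\mathrm{out}}\bigr)
\]
```

At the neck, armpits, and groin the vessel-to-skin distance d is small and the blood flow rate is large, so an ice pack there removes heat from the circulating blood far more effectively than the same pack on, say, the forearm.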
null
6osc3p
what are spectral lines and how do we use them to determine what light has passed through?
[ { "answer": "If you direct light through a prism, you can spread it out and look at all the different frequencies it's composed of.\n\nIf you look very closely, you'll notice that certain frequencies are [\"missing\"](_URL_0_) (or at least severely reduced in intensity).\n\nThat means that somewhere between when this light was emitted, and when you looked at it, that particular frequency was absorbed by something.\n\nEvery material has a unique absorption spectrum, so you can tell what kind of materials the light has passed through.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "177320", "title": "Spectral line", "section": "Section::::Types of line spectra.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 606, "text": "Spectral lines are the result of interaction between a quantum system (usually atoms, but sometimes molecules or atomic nuclei) and a single photon. When a photon has about the right amount of energy to allow a change in the energy state of the system (in the case of an atom this is usually an electron changing orbitals), the photon is absorbed. Then it will be spontaneously re-emitted, either in the same frequency as the original or in a cascade, where the sum of the energies of the photons emitted will be equal to the energy of the one absorbed (assuming the system returns to its original state).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "177320", "title": "Spectral line", "section": "Section::::Types of line spectra.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 490, "text": "Spectral lines are highly atom-specific, and can be used to identify the chemical composition of any medium capable of letting light pass through it. Several elements were discovered by spectroscopic means, including helium, thallium, and caesium. Spectral lines also depend on the physical conditions of the gas, so they are widely used to determine the chemical composition of stars and other celestial bodies that cannot be analyzed by other means, as well as their physical conditions.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "177320", "title": "Spectral line", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 496, "text": "A spectral line is a dark or bright line in an otherwise uniform and continuous spectrum, resulting from emission or absorption of light in a narrow frequency range, compared with the nearby frequencies. Spectral lines are often used to identify atoms and molecules. These \"fingerprints\" can be compared to the previously collected \"fingerprints\" of atoms and molecules, and are thus used to identify the atomic and molecular components of stars and planets, which would otherwise be impossible.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "177320", "title": "Spectral line", "section": "Section::::Spectral lines of chemical elements.:Other wavelengths.\n", "start_paragraph_id": 42, "start_character": 0, "end_paragraph_id": 42, "end_character": 454, "text": "Without qualification, \"spectral lines\" generally implies that one is talking about lines with wavelengths which fall into the range of the visible spectrum. However, there are also many spectral lines which show up at wavelengths outside this range. At the much shorter wavelengths of x-rays, these are known as characteristic X-rays. 
Other frequencies have atomic spectral lines as well, such as the Lyman series, which falls in the ultraviolet range.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "177320", "title": "Spectral line", "section": "Section::::Types of line spectra.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 699, "text": "A spectral line may be observed either as an emission line or an absorption line. Which type of line is observed depends on the type of material and its temperature relative to another emission source. An absorption line is produced when photons from a hot, broad spectrum source pass through a cold material. The intensity of light, over a narrow frequency range, is reduced due to absorption by the material and re-emission in random directions. By contrast, a bright emission line is produced when photons from a hot material are detected in the presence of a broad spectrum from a cold source. The intensity of light, over a narrow frequency range, is increased due to emission by the material.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "29329", "title": "Spectrum", "section": "Section::::Physical science.:Electromagnetic spectrum.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 671, "text": "In radiometry and colorimetry (or color science more generally), the spectral power distribution (SPD) of a light source is a measure of the power contributed by each frequency or color in a light source. The light spectrum is usually measured at points (often 31) along the visible spectrum, in wavelength space instead of frequency space, which makes it not strictly a spectral density. Some spectrophotometers can measure increments as fine as one to two nanometers. the values are used to calculate other specifications and then plotted to show the spectral attributes of the source. This can be helpful in analyzing the color characteristics of a particular source.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "75450", "title": "Metamerism (color)", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 820, "text": "A spectral power distribution describes the proportion of total light given off (emitted, transmitted, or reflected) by a color sample at each visible wavelength; it defines the complete information about the light coming from the sample. However, the human eye contains only three color receptors (three types of cone cells), which means that all colors are reduced to three sensory quantities, called the tristimulus values. Metamerism occurs because each type of cone responds to the cumulative energy from a broad range of wavelengths, so that different combinations of light across all wavelengths can produce an equivalent receptor response and the same tristimulus values or color sensation. In color science, the set of sensory spectral sensitivity curves is numerically represented by color matching functions.\n", "bleu_score": null, "meta": null } ] } ]
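The identification step described above (missing frequencies acting as a fingerprint) can be sketched in a few lines: compare the wavelengths of observed absorption dips against reference lists of known lines. This is a deliberately simplified toy added for illustration, not a real spectroscopy pipeline; the reference values are well-known visible lines (hydrogen Balmer series, sodium D doublet), the "observed" dips are invented for the example, and real analysis also has to handle line strengths, broadening, blends, and Doppler shifts.

```python
# Toy identification of absorption lines: match observed dip wavelengths (nm)
# against reference line lists for a few elements.
REFERENCE_LINES_NM = {
    "hydrogen": [656.3, 486.1, 434.0, 410.2],  # Balmer series (visible)
    "sodium":   [589.0, 589.6],                # Na D doublet
}

def match_elements(observed_nm, tolerance_nm=0.5):
    """Count, per element, how many observed dips sit within tolerance of a known line."""
    return {
        element: sum(
            any(abs(obs - line) <= tolerance_nm for line in lines)
            for obs in observed_nm
        )
        for element, lines in REFERENCE_LINES_NM.items()
    }

if __name__ == "__main__":
    dips = [656.4, 486.0, 589.1]   # invented "missing frequencies"
    print(match_elements(dips))    # {'hydrogen': 2, 'sodium': 1}
```

A strong set of matches to one element's line list is how astronomers conclude that the light was emitted by, or passed through, a gas containing that element.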
null
uy51d
Earlier today I saw two clouds going in opposite directions; how is this possible?
[ { "answer": "Air at different elevations is nearly always traveling in different directions and speeds, your clouds were in two different layers. The motion of air is anything but uniform at all elevations. \n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "529760", "title": "Chinook wind", "section": "Section::::Cause of occurrence.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 248, "text": "Two common cloud patterns seen during this time are a chinook arch overhead, and a bank of clouds (also referred to as a cloud wall) obscuring the mountains to the west. It appears to be an approaching storm, but does not advance any further east.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "47515", "title": "Cloud", "section": "Section::::Classification: How clouds are identified in the troposphere.:Accessory clouds, supplementary features, and other derivative types.:Vortex streets.\n", "start_paragraph_id": 114, "start_character": 0, "end_paragraph_id": 114, "end_character": 420, "text": "These patterns are formed from a phenomenon known as a Kármán vortex which is named after the engineer and fluid dynamicist Theodore von Kármán. Wind driven clouds can form into parallel rows that follow the wind direction. When the wind and clouds encounter high elevation land features such as a vertically prominent islands, they can form eddies around the high land masses that give the clouds a twisted appearance.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "623960", "title": "Ode to the West Wind", "section": "Section::::Interpretation of the poem.:Second Canto.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 615, "text": "Shelley in this canto \"expands his vision from the earthly scene with the leaves before him to take in the vaster commotion of the skies\". This means that the wind is now no longer at the horizon and therefore far away, but he is exactly above us. The clouds now reflect the image of the swirling leaves; this is a parallelism that gives evidence that we lifted \"our attention from the finite world into the macrocosm\". The \"clouds\" can also be compared with the leaves; but the clouds are more unstable and bigger than the leaves and they can be seen as messengers of rain and lightning as it was mentioned above.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1621854", "title": "Outflow boundary", "section": "Section::::Appearance.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 1078, "text": "At ground level, shelf clouds and roll clouds can be seen at the leading edge of outflow boundaries. Through satellite imagery, an arc cloud is visible as an arc of low clouds spreading out from a thunderstorm. If the skies are cloudy behind the arc, or if the arc is moving quickly, high wind gusts are likely behind the gust front. Sometimes a gust front can be seen on weather radar, showing as a thin arc or line of weak radar echos pushing out from a collapsing storm. The thin line of weak radar echoes is known as a fine line. Occasionally, winds caused by the gust front are so high in velocity that they also show up on radar. This cool outdraft can then energize other storms which it hits by assisting in updrafts. Gust fronts colliding from two storms can even create new storms. Usually, however, no rain accompanies the shifting winds. 
An expansion of the rain shaft near ground level, in the general shape of a human foot, is a telltale sign of a downburst. \"Gustnadoes\", short-lived vertical circulations near ground level, can be spawned by outflow boundaries.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20562313", "title": "1924 British Mount Everest expedition", "section": "Section::::After the expedition.:Odell's sighting of Mallory and Irvine.\n", "start_paragraph_id": 60, "start_character": 0, "end_paragraph_id": 60, "end_character": 811, "text": "They scrambled up the small hillock to take photographs of the remaining route, much as the French did in 1981, when they too were blocked from further progress. As to which step they were seen on, Conrad Anker has stated that \"it's hard to say because Odell was looking at it obliquely ... you're at altitude, the clouds were coming in\" but that he believes \"they were probably in the vicinity of the First Step when they turned back, because the First Step itself is very challenging and the Second Step is more challenging... [T]o put them where they might have fallen in the evening and where Mallory's body is resting, because it's a traversing route, he couldn't have fallen off either the First or Second Step and ended up where he was at, they were well to the East of that descending the Yellow Band\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17863375", "title": "Scud (cloud)", "section": "Section::::Formation.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 338, "text": "Pannus clouds can often be mistaken for a developing tornado, landspout, or waterspout. The difference is determinable by observing the presence or absence of rotation (not just movement) of the scud clouds. If rotation is present, then a tornado, landspout, or waterspout is possible, and the more intense the rotation, the more likely.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "617947", "title": "Weather lore", "section": "Section::::Reliability.:Sayings which may be locally accurate.:Cloud movement.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 1442, "text": "This rule may be true under a few special circumstances, otherwise it is false. By standing with one's back to the ground-level wind and observing the movement of the clouds, it is possible to determine whether the weather will improve or deteriorate. For the Northern Hemisphere, it works like this: If the upper-level clouds are moving from the right, a low-pressure area has passed and the weather will improve; if from the left, a low pressure area is arriving and the weather will deteriorate. (Reverse for the Southern Hemisphere.) This is known as the \"crossed-winds\" rule. Clouds traveling parallel to but against the wind may indicate a thunderstorm approaching. Outflow winds typically blow opposite to the updraft zone, and clouds carried in the upper level wind will appear to be moving against the surface wind. However, if such a storm is in the offing, it is not necessary to observe the cloud motions to know rain is a good possibility. The nature of airflows \"directly at\" a frontal boundary can also create conditions in which lower winds contradict the motions of upper clouds, and the passage of a frontal boundary is often marked by precipitation. 
Most often, however, this situation occurs in the lee of a low pressure area, to the north of the frontal zones and convergence region, and does not indicate a change in weather, but rather, that the weather, fair or showery, will remain so for a period of hours at least.\n", "bleu_score": null, "meta": null } ] } ]
null
4qrtr7
How did it come about that we do not call Japan an Empire today even though its current head of state Akihito is still officially referred to as "Emperor"?
[ { "answer": "I will speak in very broad generalizations here.\n\nJapan was a very famous, literal empire in the '40s, and after their capitulation in WWII, their empire was dismantled, both their territorial holdings, and most of the functioning government.\nThe emperor was not dethroned, for political reasons. That's why Japan has an emperor \"left over\" so to speak. \n\n\"Empire\" is a vaguely defined word that comes with serious baggage. The big problem is that most emperors and kings of the world aren't actually called \"emperor\" or \"king\", these are translations of foreign titles of foreign institutions. But they're translations of convenience, kings are monarchs that rule big lands, emperors rule even bigger lands. \n\nHowever, Western concept of the emperor is fairly cohesive. We draw our definition from the Roman emperors, who were the martial and administrative nuclei of the overwhelmingly powerful Roman Empire (For the most part). But because the formative years of mass politics were in the 19th and 20th centuries, we also associate the word empire with the vast, powerful, wildly expansionist powers of that era (Who were mostly ruled by emperors using that title to evoke Roman magnificence)\n\nNow back to Japan. Japan has had a figure that we call emperor since at least 500 AD (The earliest emperors are sketchily recorded). While the Roman emperors were below gods and politically dominant, the emperors of Japan were heads of the Shinto faith, and for quite a long time merely figureheads for *shoguns. The Japanese word for the Emperor of Japan is *tenno\n\nQuite obviously, these offices aren't the same. But in the 19th century, a Japanese state became a vast, powerful, expansionist power. It's head of state was the *tenno, who Westerners then called emperor. \n\nWhy are there no more empires in the West? Most of the Western emperors had their empires dismantled, or were replaced by republicans (any non-monarchical governments). \n\nThe British monarch is a secular, apolitical office, and its imperial title was 'Emperor/Empress of India\". When India became independent in '48, they lost that title. No more emperor, and with decolonisation, no imperium to rule.\n\nThe Russian monarch was a semi-religious title with absolutist political power, but the last Emperor was overthrown in violence. The Russian Revolution was organised around explicitly deposing the Emperor of Russia to be replaced by a People's Government, and they succeeded. The great imperium gained by Tsarist Russia was mostly still controlled by Russians though.\n\nThe Japanese monarch is a religious title, with constitutional powers. After WWII the secular governing body of Japan was deposed, and the remaining political power of the emperor was removed. But the US occupying force did not want to depose the emperor, because they believed the Japanese would not accept a foreign power abolishing their Religious leader. The *tenno remained.\n\nFinally, it should be said the empires and imperialism do not require emperors. With Britain and Japan, for example, the administrators and governors of their empire was not actually the emperor themselves. You've probably heard of people referring to an \"American empire\", or a \"business empire\", and these phrases are simply meant to evoke a powerful institution with a heavily centralized nucleus of power. They do not require literal emperors.\n\nBasically, an empire is as much a construct of our thinking as it is an actual title. 
Japan today does not act like an \"empire\", so we don't call it that. The Emperor of Japan was not the same as a Western emperor, we just translate *tenno as emperor. \n\n", "provenance": null }, { "answer": "The Japanese Constitution of 1947, written by the US Occupation (SCAP - Supreme Commander of Allied Powers) of Japan and re-written and approved by a postwar Japanese elected body, formally declared the name of Japan to be the \"State of Japan\" (Nihon-koku) instead of the previous \"Empire of Greater Japan\" ( Dai Nippon Teikoku).\n\nSince the Emperor is formally designated in the 1947 constitution as a constitutional monarch not possessing soverignty, only ceremonial and symbolic importance, it's relatively easy not to think of Japan as an Empire, since there is no meaningful sense in which the current Emperor rules it.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "10110", "title": "Emperor of Japan", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 698, "text": "Currently, the Emperor of Japan is the only head of state in the world with the English title of \"emperor\". The Imperial House of Japan is the oldest continuing monarchical house in the world. The historical origins of the emperors lie in the late Kofun period of the 3rd–7th centuries AD, but according to the traditional account of the \"Kojiki\" (finished 712) and \"Nihon Shoki\" (finished 720), Japan was founded in 660 BC by Emperor Jimmu, who was said to be a direct descendant of the sun-goddess Amaterasu. The current emperor is Naruhito. He acceded to the Chrysanthemum Throne upon the abdication of his father, the now-Emperor Emeritus Akihito on 1 May 2019 at 00:00 local time (15:00 UTC).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10110", "title": "Emperor of Japan", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 436, "text": "The Emperor of Japan is the head of the Imperial Family and the head of state of Japan. Under the 1947 constitution, he is defined as \"the symbol of the State and of the unity of the people.\" Historically, he is also the highest authority of the Shinto religion. In Japanese, the emperor is called , literally \"heavenly sovereign\". In English, the use of the term ( or ) for the emperor was once common, but is now considered obsolete.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10110", "title": "Emperor of Japan", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 905, "text": "The role of the Emperor of Japan has historically alternated between a largely ceremonial symbolic role and that of an actual imperial ruler. Since the establishment of the first shogunate in 1199, the emperors have rarely taken on a role as supreme battlefield commander, unlike many Western monarchs. Japanese emperors have nearly always been controlled by external political forces, to varying degrees. For example, between 1192 and 1867, the \"shōguns\", or their \"shikken\" regents in Kamakura (1203–1333), were the \"de facto\" rulers of Japan, although they were nominally appointed by the emperor. After the Meiji Restoration in 1867, the emperor was the embodiment of all sovereign power in the realm, as enshrined in the Meiji Constitution of 1889. 
Since the enactment of the 1947 Constitution, the role of emperor has been to act as a ceremonial head of state without even nominal political powers.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30858300", "title": "Government of Japan", "section": "Section::::The Emperor.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 484, "text": "The Imperial House of Japan is said to be the oldest continuing hereditary monarchy in the world. According to the Kojiki and Nihon Shoki, Japan was founded by the Imperial House in 660 BC by Emperor Jimmu (神武天皇). Emperor Jimmu was the first Emperor of Japan and the ancestor of all of the Emperors that followed. He is, according to Japanese mythology, the direct descendant of Amaterasu (天照大御神), the sun goddess of the native Shinto religion, through Ninigi, his great-grandfather.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "16208786", "title": "Reigning Emperor", "section": "Section::::History.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 259, "text": "Attaching the title \"Emperor\" and his Japanese era name has formed a posthumous name, from \"Emperor Meiji\" to \"Emperor Taishō\" and \"Emperor Shōwa\", so doing it to refer to still living Emperor Emeritus Akihito and the Reigning Emperor Naruhito is a faux pas.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10110", "title": "Emperor of Japan", "section": "Section::::Succession.\n", "start_paragraph_id": 78, "start_character": 0, "end_paragraph_id": 78, "end_character": 820, "text": "The origins of the Japanese imperial dynasty are obscure, and it bases its position on the claim that it has \"reigned since time immemorial\". There are no records of any Emperor who was not said to have been a descendant of other, yet earlier Emperor ( \"bansei ikkei\"). There is suspicion that Emperor Keitai (c. AD 500) may have been an unrelated outsider, though the sources (Kojiki, Nihon-Shoki) state that he was a male-line descendant of Emperor Ōjin. However, his descendants, including his successors, were according to records descended from at least one and probably several imperial princesses of the older lineage. The tradition built by those legends has chosen to recognize just the putative male ancestry as valid for legitimizing his succession, not giving any weight to ties through the said princesses.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2921680", "title": "Controversies regarding the role of the Emperor of Japan", "section": "Section::::Shōchō.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 651, "text": "The use of the Japanese word shōchō (象徴), meaning symbol, to describe the emperor is unusual and, depending upon one's viewpoint, conveniently or frustratingly vague. The emperor is neither head of state nor sovereign, as are many European constitutional monarchs, although in October 1988 Japan's Ministry of Foreign Affairs claimed, controversially, that the emperor is the country's sovereign in the context of its external relations. Nor does the emperor have an official priestly or religious role. Although he continues to perform ancient Shinto rituals, such as ceremonial planting of the rice crop in spring, he does so in a private capacity.\n", "bleu_score": null, "meta": null } ] } ]
null
21bx2o
why do companies with a large amount of cash still issue debt?
[ { "answer": "Taking on debt spreads financial risk across a longer period of time.\n\nIf I shell out 100% today, I'm out 100% today, and 100% tomorrow of my investment goes south. I'm screwed the day after tomorrow.\n\nIf I shell out 2% today, I'm out 2% today and 2% for the next 4 years, so if things don't work out, I can continue to operate and pay my bills.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1478555", "title": "Sukuk", "section": "Section::::Challenges, criticism and controversy.:Challenges.\n", "start_paragraph_id": 102, "start_character": 0, "end_paragraph_id": 102, "end_character": 293, "text": "There have been at least two cases of companies seeking to restructure their debt (i.e. pay creditors less), claiming that debt they had issued was not in compliance with sharia. In a 2009 court filing Investment Dar, a Kuwaiti company claimed a transaction \"was taking deposits at interest\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3473088", "title": "Tendency of the rate of profit to fall", "section": "Section::::Mainstream economics.:Diminishing returns.:Larry Summers.\n", "start_paragraph_id": 514, "start_character": 0, "end_paragraph_id": 514, "end_character": 922, "text": "In 2014, for example, it was reported that the world's corporations had accumulated $7 trillion in cash reserves, with the United States at $2 trillion. In turn, a large chunk of these cash surpluses was used for stock buybacks. In mid-2018, Steven Rattner explained, that the US buybacks were \"really a consequence of the vast cash reserves — $2.4 trillion and rising — held by American companies.\" Since 2009, US companies also piled up large debts, which funded in total circa $4.7 trillion spent on buybacks and $3.4 trillion on dividends. According to Goldman Sachs, US companies were set to authorize $1 trillion worth of stock buybacks in 2018, and Europe was also joining in the spree. Buybacks in Japan were estimated at 6 trillion yen in 2016 (=US$55.1 billion) and 5.8 trillion yen in 2017 (=US$51.7 billion). In China, the value of buybacks in the first three quarters of 2018 was estimated at US$3.5 billion.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7048889", "title": "Debt buyer (United States)", "section": "Section::::History.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 553, "text": "The debt buying industry in the United States began as a result of the savings and loan crisis (S&L crisis) in which from 1986 and 1995, 1,043 out of the 3,234 American savings and loan associations, failed and hundreds of banks were closed by the Federal Savings and Loan Insurance Corporation (FSLIC) and the Resolution Trust Corporation (RTC). The Federal Deposit Insurance Corporation (FDIC), which insures deposits up to a certain amount, received the assets of the bank to cover the expenses associated with repaying the closed banks' depositors.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22099091", "title": "Subprime mortgage crisis solutions debate", "section": "Section::::Solvency.:Nationalization.:Arguments for nationalization or recapitalization.\n", "start_paragraph_id": 60, "start_character": 0, "end_paragraph_id": 60, "end_character": 566, "text": "These factors (among others) are why insolvent financial institutions have historically been taken over by regulators. Further, loans to a struggling bank increase assets and liabilities, not equity. 
Capital is therefore \"tied up\" on the insolvent bank's balance sheet and cannot be used as productively as it could be at a healthier financial institution. Banks have taken significant steps to obtain additional capital from private sources. Further, the U.S. and other countries have injected capital (willingly or unwillingly) into larger financial institutions.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7048889", "title": "Debt buyer (United States)", "section": "Section::::Types of collection agencies.\n", "start_paragraph_id": 69, "start_character": 0, "end_paragraph_id": 69, "end_character": 330, "text": "Due to the varying size of debt buying organizations, not all organizations have the capital required to purchase large portfolios directly from the debt issuer. Historically, smaller debt-buying firms would purchase their debt accounts from a larger buyer after that larger buyer had already attempted to collect on the account.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18190185", "title": "2000s commodities boom", "section": "Section::::Late-2000s economic fallout.\n", "start_paragraph_id": 101, "start_character": 0, "end_paragraph_id": 101, "end_character": 310, "text": "Many firms, individuals, and hedge funds went bankrupt or suffered heavy losses due to purchasing commodities at high prices only to see their values decline sharply in mid to late 2008. Many manufacturing companies were also crippled by the rising cost of oil and other commodities such as transition metals.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "888187", "title": "War chest", "section": "Section::::In business.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 292, "text": "Today companies can use accumulated cash or rely on quickly raised debt which costs less to carry when you don't need it. This is not always a reasonable substitute, as the credit available to a company typically drops as a result of the same actions that require the war chest to be opened.\n", "bleu_score": null, "meta": null } ] } ]
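A minimal numeric sketch of the point made in the first answer of the record above — that paying over time preserves a cash buffer if the investment goes bad. The figures (a $100 project, a 5-year loan at 6%, a $120 cash pile) are hypothetical and only illustrate the trade-off; they are not taken from the answer.

```python
# Toy comparison: pay for a $100 project in cash, or finance it over 5 years.
# All numbers are made up for illustration. The point is the cash buffer kept
# on hand each year if the project turns out to earn nothing at all.

cash_on_hand = 120.0      # hypothetical starting cash pile
project_cost = 100.0      # hypothetical project cost
rate, years = 0.06, 5     # hypothetical loan terms

# Level annual payment on an amortizing loan: P * r / (1 - (1 + r)^-n)
annual_payment = project_cost * rate / (1 - (1 + rate) ** -years)

cash_if_paid_upfront = cash_on_hand - project_cost
cash_if_financed = [cash_on_hand - annual_payment * (y + 1) for y in range(years)]

print(f"Annual loan payment: ${annual_payment:.2f}")
print(f"Cash left if paid upfront: ${cash_if_paid_upfront:.2f}")
for y, c in enumerate(cash_if_financed, start=1):
    print(f"Cash left after year {y} of financing: ${c:.2f}")
```

Under these made-up numbers the financed firm still holds roughly $96 after the first year, versus $20 if it had paid upfront; the cost of that breathing room is about $19 of interest over the five years, which is the trade-off the answer is describing.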
null
32nk7u
why "everything" is one word and "every time" is two
[ { "answer": "Every time I hear this everyday question I wonder will I be asked it every day.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "17181902", "title": "Everything", "section": "Section::::Scope.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 207, "text": "In ordinary conversation, \"everything\" usually refers only to the totality of things relevant to the subject matter. When there is no expressed limitation, \"everything\" may refer to the universe, the world.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "43669554", "title": "Something (concept)", "section": "Section::::Anything.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 975, "text": "One can make the statement that \"anything\" is a specific word where \"everything\" can be seen as a general word. Still, both meanings may readily be understood by everyone, while their definitions will equally contain some aspects of murkiness as to what is included and what is not. First of all, \"anything\" does not need to be covered by an actual something, since an act of god or fate, a coincident or an unintended consequence can also be included in the list of \"anything\". Also, the question whether an \"actual\" nothing can also be used to take up the place of \"anything\" is harder to debate at the abstract level and requires actual input to declare whether this is true or false. Examples of this position are that not the amount of money, but rather the lack of money can make us rise and shine early from bed to go to work, and that not the abundance of food, but rather hunger and the lack of food make us hunt and till the soil. See also: Much Ado About Nothing.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "698149", "title": "Fallacy of four terms", "section": "Section::::Definition.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 339, "text": "The word \"nothing\" in the example above has two meanings, as presented: \"nothing is better\" means the thing being named has the highest value possible; \"better than nothing\" only means that the thing being described has some value. Therefore, \"nothing\" acts as two different terms in this example, thus creating the fallacy of four terms.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "187013", "title": "And/or", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 362, "text": "And/or (also and or) is a grammatical conjunction used to indicate that one or more of the cases it connects may occur. 
For example, the sentence \"He will eat cake, pie, and/or brownies\" indicates that although the person may eat any of the three listed desserts, the choices are not mutually exclusive; the person may eat one, two, or all three of the choices.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13389856", "title": "Everyday (Buddy Holly song)", "section": "Section::::Artistic license.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 252, "text": "The word \"everyday\" is an adjective (meaning \"commonplace\", \"ordinary\", or \"normal\"), whereas in the context of the song the phrase \"every day\" (meaning \"each day\") is clearly meant: \"Every day seems a little longer / Every day it's a-gettin' closer.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "163901", "title": "Information society", "section": "Section::::Second and third nature.\n", "start_paragraph_id": 46, "start_character": 0, "end_paragraph_id": 46, "end_character": 616, "text": "\"Second nature\" refers a group of experiences that get made over by culture. They then get remade into something else that can then take on a new meaning. As a society we transform this process so it becomes something natural to us, i.e. second nature. So, by following a particular pattern created by culture we are able to recognise how we use and move information in different ways. From sharing information via different time zones (such as talking online) to information ending up in a different location (sending a letter overseas) this has all become a habitual process that we as a society take for granted.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "43669554", "title": "Something (concept)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 504, "text": "Something and anything are concepts of existence in ontology, contrasting with the concept of nothing. Both are used to describe the understanding that what exists is not nothing without needing to address the existence of everything. The philosopher, David Lewis, has pointed out that these are necessarily vague terms, asserting that \"ontological assertions of common sense are correct if the quantifiers—such words as \"something\" and \"anything\"—are restricted roughly to ordinary or familiar things.\"\n", "bleu_score": null, "meta": null } ] } ]
null
6y1avo
why do some people find it hard to eat enough, while others over-eat?
[ { "answer": "There are two commandants here, one being on how you were taught to think about food, and the other simply being your body itself, and how it reacts with food.\n\nSome people were raised to think food = happiness. If they were sad they were given food. If they were bored they were given food. They turn to food because of the way they were raised to think that food is the answer. People who don't eat enough probably were not raised that way and realize that food is only food, something we need to eat to live.. and might even find eating boring, or a chore.\n\nSome people who may have tried to starve themselves in the past to be thin might have bodies that now crave more food.\n\nSome people over ate and then their stomach got larger so they are always hungry and they create a cycle of never feeling satisfied and keep over eating. They perhaps don't have the will power to stop.", "provenance": null }, { "answer": "Leptin sensitivity, insulin sensitivity, ghrelin and other hormones cause different appetite levels for different people. That's an oversimplification but basically leptin is one of the main hormones that puts the \"brakes\" on hunger. Thinner people have high leptin sensitivity and therefore recognize feelings of fullness more readily after eating. Overweight people develop resistance to leptin, leading them to feel hungry for longer. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "9005643", "title": "Caregiver", "section": "Section::::Technique.:Eating assistance.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 636, "text": "Difficulty eating is most often caused by difficulty swallowing. This symptom is common in people after a stroke, people with Parkinson's disease or who have multiple sclerosis, and people with dementia. The most common way to help people with trouble swallowing is to change the texture of their food to be softer. Another way is to use special eating equipment to make it easier for the person to eat. In some situations, caregivers can be supportive by providing assisted feeding in which the person's independence is respected while the caregiver helps them take food in their mouth by placing it there and being patient with them.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21551874", "title": "Food studies", "section": "Section::::Food insecurity and health outcomes.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 1167, "text": "In America, almost 50 million people are considered food insecure. This is because they do not have the means to buy healthy food, therefore, lead an unhealthy lifestyle. At least 1.4 times more children who are food insecure are likely to have asthma, compared to food-secure children. And older Americans who are food-insecure will tend to have limitations in their daily activities. When a household is lacking the means (money) to buy proper food, their health ultimately suffers. Supplemental Nutrition Assistance Program (SNAP, formerly known as the Food Stamp Program) is put in place to help families in need to get the proper nutrition they need in order to live a healthy lifestyle. There are 3 points that make a household eligible for SNAP. One, is their gross monthly income must be 130% of the federal poverty level. The second point they have to meet is being below poverty. 
And the last thing is they have to have assets of less than $2,000 except that households with at least one senior and households that include at least one person with a disability can have more assets. Multiple studies have shown SNAP as being successful in reducing poverty.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "47677036", "title": "Child nutrition in Australia", "section": "Section::::The Results of Poor Nutrition.:Diseases.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 257, "text": "An inadequate diet provides an opportunity for several diseases to manifest within humans. This is due to the fact that a diet that does not adopt all five of the food groups can leave children malnourished and unable to develop at a steady or normal rate.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37024980", "title": "School meal programs in the United States", "section": "Section::::Nutritional guidelines.:Unhealthy meals and malnutrition.\n", "start_paragraph_id": 64, "start_character": 0, "end_paragraph_id": 64, "end_character": 427, "text": "Unhealthy school lunches contribute to malnutrition in both the short term and the long term. In many cases, unhealthy adult eating patterns can be traced back to unhealthy school lunches, because children learn eating habits from social settings such as school. A 2010 study of 1,003 middle-school students in Michigan found that those who ate school lunches were significantly more likely to be obese than those who did not.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "311916", "title": "Binge eating", "section": "Section::::Effects.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 325, "text": "Most people who have eating binges try to hide this behavior from others, and often feel ashamed about being overweight or depressed about their overeating. Although people who do not have any eating disorder may occasionally experience episodes of overeating, frequent binge eating is often a symptom of an eating disorder.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9005643", "title": "Caregiver", "section": "Section::::Technique.:Eating assistance.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 687, "text": "A healthy diet includes everything to meet a person's food energy and nutritional needs. People become at risk for not having a healthy diet when they are inactive or bedbound; living alone; sick; having difficulty eating; affected by medication; depressed; having difficulty hearing, seeing, or tasting; unable to get food they enjoy; or are having communication problems. A poor diet contributes to many health problems, including increased risk of infection, poor recovery time from surgery or wound healing, skin problems, difficulty in activities of daily living, fatigue, and irritability. Older people are less likely to recognize thirst and may benefit from being offered water.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1780780", "title": "Child Nutrition Act", "section": "Section::::Importance.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 670, "text": "It is important for food programs such as these in schools because some students may receive all their meals from school. 
According to the CDC, a poor diet can lead to energy imbalance (e.g., eating more calories than one expends through physical activity) and can increase one’s risk for overweight and obesity. Without a well balanced diet it could cause a child's brain to not develop normally (Berger, 172). Children may be malnourished and could possibly suffer from Protein-calorie malnutrition (Berger 172). In the long run if children do suffer from lack of nutrients it will not only impede brain growth but effect their ability to learn as well (Berger, 172).\n", "bleu_score": null, "meta": null } ] } ]
null
1y8jph
How can you "see" your breath when it's cold?
[ { "answer": " > You may already know that when you breathe in, your body takes in oxygen from the air. When you breathe out, your lungs expel carbon dioxide back into the air. But the breath you breathe out contains more than just carbon dioxide.\n > \n > When you exhale (breathe out), your breath also contains moisture. Because your mouth and lungs are moist, each breath you exhale contains a little bit of water in the form of water vapor (the gas form of water).\n > \n > For water to stay a gas in the form of water vapor, it needs enough energy to keep its molecules moving. Inside your lungs where it’s nice and warm, this isn’t a problem.\n > \n > When you exhale and it’s cold outside, though, the water vapor in your breath loses its energy quickly. Rather than continuing to move freely, the molecules begin to pack themselves closely together. As they do so, they slow down and begin to change into either liquid or solid forms of water.\n > \n > This scientific process is called condensation. When you exhale when it’s cold outside, the water vapor in your breath condenses into lots of tiny droplets of liquid water and ice (solid water) that you can see in the air as a cloud, similar to fog.\n > \n > When it’s warm out, though, the invisible water vapor gas stays invisible, because the warm air provides energy that allows the water vapor to remain a gas. As temperatures drop, it’s more likely that you’ll be able to see your breath.\n > \n > There’s no exact temperature at which condensation will occur. Many environmental factors other than temperature can play a role in condensation, including relative humidity (the amount of moisture in the air). When it falls below 45° F, though, you can usually expect to be able to see your breath.\n > \n > - See more at: _URL_0_", "provenance": null }, { "answer": "The tempature drop from body temp to outside temp causes the gaseous water from your breath to condense to liquid forming tiny water droplets that refract light differently and cause the visual effect that we see when we breath out in the cold. ", "provenance": null }, { "answer": "Exhaled air contains water vapor, at a relatively high percentage. Since the air inside the lungs is quite warm, the partial pressure of water in the lungs can be high without saturating the air in the lungs, and condensation does not occur. But in the cold winter air, the air can hold very little water without condensation. Thus as the warm, water-laden exhaled air cools, the partial pressure of water vapor exceeds the saturated vapor pressure in the cold air, and some of the water will condense. The white cloud seen is due to the condensed water vapor.\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "446596", "title": "Diving reflex", "section": "Section::::Physiological response.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 481, "text": "In humans, the diving reflex is not induced when limbs are introduced to cold water. Mild bradycardia is caused by subjects holding their breath without submerging the face in water. When breathing with the face submerged, the diving response increases proportionally to decreasing water temperature. However, the greatest bradycardia effect is induced when the subject is holding his breath with his face wetted. 
Apnea with nostril and facial cooling are triggers of this reflex.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "89547", "title": "Water vapor", "section": "Section::::Properties.:General discussion.\n", "start_paragraph_id": 49, "start_character": 0, "end_paragraph_id": 49, "end_character": 372, "text": "Exhaled air is almost fully at equilibrium with water vapor at the body temperature. In the cold air the exhaled vapor quickly condenses, thus showing up as a fog or mist of water droplets and as condensation or frost on surfaces. Forcibly condensing these water droplets from exhaled breath is the basis of exhaled breath condensate, an evolving medical diagnostic test.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22509369", "title": "Iron phosphide", "section": "Section::::Hazards and mitigation.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 264, "text": "In case of inhalation, the person should be moved to fresh air or given artificial respiration if not breathing. In case of ingestion, the person's mouth should be rinsed with water unless unconscious. In case of eye contact, immediate eye flushing is necessary. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "305649", "title": "Oceanic dolphin", "section": "Section::::Biology.:Anatomy.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 260, "text": "Breathing involves expelling stale air from the blowhole, forming an upward, steamy spout, followed by inhaling fresh air into the lungs; a spout only occurs when the warm air from the lungs meets the cold external air, so it may only form in colder climates.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "326837", "title": "Toothed whale", "section": "Section::::Biology.:Anatomy.\n", "start_paragraph_id": 169, "start_character": 0, "end_paragraph_id": 169, "end_character": 343, "text": "Breathing involves expelling stale air from their one blowhole, forming an upward, steamy spout, followed by inhaling fresh air into the lungs. Spout shapes differ among species, which facilitates identification. The spout only forms when warm air from the lungs meets cold air, so it does not form in warmer climates, as with river dolphins.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3802867", "title": "Inert gas asphyxiation", "section": "Section::::Process.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 1025, "text": "When humans breathe in an asphyxiant gas, such as pure nitrogen, helium, neon, argon, sulfur hexafluoride, methane, or any other physiologically inert gas(es), they exhale carbon dioxide without re-supplying oxygen. Physiologically inert gases (those that have no toxic effect, but merely dilute oxygen) are generally free of odor and taste. As such, the human subject detects little abnormal sensation as the oxygen level falls. This leads to asphyxiation (death from lack of oxygen) without the painful and traumatic feeling of suffocation (the hypercapnic alarm response, which in humans arises mostly from carbon dioxide levels rising), or the side effects of poisoning. In scuba diving rebreather accidents, there is often little sensation, however, a slow decrease in oxygen breathing gas content has effects which are quite variable. 
By contrast, suddenly breathing pure inert gas causes oxygen levels in the blood to fall precipitously, and may lead to unconsciousness in only a few breaths, with no symptoms at all.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5888087", "title": "Thoracic wall", "section": "Section::::Function.:Diving reflex.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 940, "text": "When not breathing for long and dangerous periods of time in cold water, a person's body undergoes great temporary changes to try to prevent death. It achieves this through the activation of the mammalian diving reflex, which has 3 main properties. Other than Bradycardia and Peripheral vasoconstriction, there is a blood shift which occurs only during very deep dives that affects the thoracic cavity (a chamber of the body protected by the thoracic wall.) When this happens, organ and circulatory walls allow plasma/water to pass freely throughout the thoracic cavity, so its pressure stays constant and the organs aren't crushed. In this stage, the lungs' alveoli fill up with blood plasma, which is reabsorbed when the organism leaves the pressurized environment. This stage of the diving reflex has been observed in humans (such as world champion freediver Martin Štěpánek) during extremely deep (over 90 metres or 300 ft) free dives.\n", "bleu_score": null, "meta": null } ] } ]
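The answers in the record above all come down to comparing the water vapor carried in warm exhaled air with how much vapor cold outside air can hold before condensing. Below is a rough back-of-the-envelope version of that comparison, using the Magnus approximation for saturation vapor pressure; the coefficients are one commonly quoted set, and the breath temperature and humidity are assumed values for illustration.

```python
import math

def saturation_vapor_pressure_hpa(temp_c: float) -> float:
    """Magnus approximation for saturation vapor pressure over water, in hPa."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

# Assumed conditions, for illustration only:
breath_temp_c = 34.0     # exhaled air is a little below core body temperature
breath_rh = 0.95         # exhaled air is close to saturated
outside_temp_c = 0.0     # a cold winter day

# Vapor pressure actually carried in the breath (hPa)
vapor_in_breath = breath_rh * saturation_vapor_pressure_hpa(breath_temp_c)

# Maximum vapor pressure the cold outside air can hold before condensing (hPa)
limit_outside = saturation_vapor_pressure_hpa(outside_temp_c)

print(f"Vapor pressure in exhaled breath: {vapor_in_breath:.1f} hPa")
print(f"Saturation limit at {outside_temp_c:.0f} C: {limit_outside:.1f} hPa")
# Simplified criterion: if the cooled breath would carry more vapor than the
# cold air can hold, the excess has to condense into the visible cloud.
print("Visible breath likely:", vapor_in_breath > limit_outside)
```

With these assumptions the breath carries roughly 50 hPa of water vapor while air at 0 °C saturates near 6 hPa, so most of the exhaled moisture has to condense; the warmer the outside air, the smaller that gap, which is why the cloud disappears in summer.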
null
36ziww
How was the Dutch military rebuilt after the Second World War and what were the consequences for the Indonesian War of Independence only 2 years later?
[ { "answer": "The forces that were used in the Dutch East Indies were part of two groups. There was the \"Mariniersbrigade\" which was trained in the United States during the Second World War. It consisted of a few Dutch volunteers who were preparing for the war versus Japan, their primair goal was to liberate the Dutch East Indies. However, when Japan surrendered in 1945 and Soekarno called for independence, their purpose changed. Now, the brigade was used by the Dutch government for the war versus Indonesia.\n\nThe more famous Koninklijk Nederlands-Indisch Leger (KNIL) or in English the Royal Netherlands East Indies Army, was used during the Second World War versus Japan, in contrast to the Mariniersbrigade. Though no Dutch soldiers fought the Germans after the surrender that followed the bombing of Rotterdam, the war in Asia continued, with military oppositon towards the Japanese. Though many KNIL soldiers (they were Dutch but also native men!) were taken by the Japanese as a prisoner of war, many were still able to flee and continued the fight versus Japan, sometimes cooperating with the British, Australians and of course the Americans. \n\nThus the armies that were involved was well trained, it was not necessary to build anything from scratch. Nor did the hunger winter influenced the soldiers, since most were already in Asia when the hunger winter happened. \n\nSources: \nKarel Davids and Marjolein 't Hart, De wereld & Nederland (Amsterdam 2011). \nPierre Heijboer, De Politionele Acties (Haarlem 1979).\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "14643", "title": "History of Indonesia", "section": "Section::::The emergence of Indonesia.:Japanese occupation.\n", "start_paragraph_id": 76, "start_character": 0, "end_paragraph_id": 76, "end_character": 714, "text": "The Japanese invasion and subsequent occupation during World War II ended Dutch rule and encouraged the previously suppressed Indonesian independence movement. In May 1940, early in World War II, the Netherlands was occupied by Nazi Germany. The Dutch East Indies declared a state of siege and in July redirected exports for Japan to the US and Britain. Negotiations with the Japanese aimed at securing supplies of aviation fuel collapsed in June 1941, and the Japanese started their conquest of Southeast Asia in December of that year. That same month, factions from Sumatra sought Japanese assistance for a revolt against the Dutch wartime government. The last Dutch forces were defeated by Japan in March 1942.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4602123", "title": "Military history of the Netherlands", "section": "Section::::Cold War.\n", "start_paragraph_id": 203, "start_character": 0, "end_paragraph_id": 203, "end_character": 354, "text": "After the Second World War, the Dutch were first involved in a colonial war against the nationalists in Indonesia. As a result, the home forces were much neglected and had to rearm by begging for (or simply taking) surplus allied equipment, such as the RAM tank. In 1949 Bernard Montgomery judged the Royal Netherlands Army as simply \"unfit for battle\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14643", "title": "History of Indonesia", "section": "Section::::The emergence of Indonesia.:Indonesian National Revolution.\n", "start_paragraph_id": 82, "start_character": 0, "end_paragraph_id": 82, "end_character": 1487, "text": "Dutch efforts to re-establish complete control met resistance. 
At the end of World War II, a power vacuum arose, and the nationalists often succeeded in seizing the arms of the demoralised Japanese. A period of unrest with city guerrilla warfare called the Bersiap period ensued. Groups of Indonesian nationalists armed with improvised weapons (like bamboo spears) and firearms attacked returning Allied troops. 3,500 Europeans were killed and 20,000 were missing, meaning there were more European deaths in Indonesia after the war than during the war. After returning to Java, Dutch forces quickly re-occupied the colonial capital of Batavia (now Jakarta), so the city of Yogyakarta in central Java became the capital of the nationalist forces. Negotiations with the nationalists led to two major truce agreements, but disputes about their implementation, and much mutual provocation, led each time to renewed conflict. Within four years the Dutch had recaptured almost the whole of Indonesia, but guerrilla resistance persisted, led on Java by commander Nasution. On 27 December 1949, after four years of sporadic warfare and fierce criticism of the Dutch by the UN, the Netherlands officially recognised Indonesian sovereignty under the federal structure of the United States of Indonesia (RUSI). On 17 August 1950, exactly five years after the proclamation of independence, the last of the federal states were dissolved and Sukarno proclaimed a single unitary Republic of Indonesia.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26964392", "title": "Bersiap", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 294, "text": "The period ended with the departure of the British military in 1946, by which time the Dutch had rebuilt their military capacity. Meanwhile, the Indonesian revolutionary fighters were well into the process of forming a formal military. The last Japanese troops had been evacuated by July 1946.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23476997", "title": "Dutch East Indies", "section": "Section::::Government.:Armed forces.\n", "start_paragraph_id": 72, "start_character": 0, "end_paragraph_id": 72, "end_character": 928, "text": "Following World War II, a reconstituted KNIL joined with Dutch Army troops to re-establish colonial \"law and order\". Despite two successful military campaigns in 1947 and 1948, Dutch efforts to re-establish their colony failed and the Netherlands recognised Indonesian sovereignty in December 1949. The KNIL was disbanded by 26 July 1950 with its indigenous personnel being given the option of demobilising or joining the Indonesian military. At the time of disbandment the KNIL numbered 65,000, of whom 26,000 were incorporated into the new Indonesian Army. The remainder were either demobilised or transferred to the Netherlands Army. Key officers in the Indonesian National Armed Forces that were former KNIL soldiers include: Suharto second president of Indonesia, A.H. Nasution, commander of the Siliwangi Division and Chief of Staff of the Indonesian army and A.E. Kawilarang founder of the elite special forces Kopassus.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1199200", "title": "Royal Netherlands East Indies Army", "section": "Section::::World War II.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 967, "text": "Dutch forces in the Netherlands East Indies were severely weakened by the defeat and occupation of the Netherlands itself, by Nazi Germany, in 1940. 
The KNIL was cut off from external Dutch assistance, except by Royal Netherlands Navy units. The KNIL, hastily and inadequately, attempted to transform into a modern military force able to protect the Dutch East Indies from foreign invasion. By December 1941, Dutch forces in Indonesia numbered around 85,000 personnel: regular troops consisted of about 1,000 officers and 34,000 enlisted soldiers, of whom 28,000 were indigenous. The remainder were made up of locally organised militia, territorial guard units and civilian auxiliaries. The KNIL air force, \"Militaire Luchtvaart KNIL\" (Royal Netherlands East Indies Air Force (\"ML-KNIL\")) numbered 389 planes of all types, but was largely outclassed by superior Japanese planes. The Royal Netherlands Navy Air Service, or MLD, also had significant forces in the NEI.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1486117", "title": "History of the Netherlands (1900–present)", "section": "Section::::Post-war years.:Indonesia.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 1715, "text": "Allied forces liberated parts of the Dutch East Indies in mid-1945. However the Japanese-installed local leadership declared independence as Indonesia, and controlled the main islands. A confusing phase followed. Its massive oil reserves provided about 14 percent of the prewar Dutch national product and supported a large population of ethnic Dutch government officials and businessmen in Jakarta and other major cities. In 1945, the Netherlands could not regain these islands on its own; had to depend on British military action and American financial grants. By the time Dutch soldiers returned, an independent government under Sukarno, was in power. The Dutch in the East Indies, and at home, were practically unanimous (except for the Communists) that Dutch power and prestige and wealth depended on an extremely expensive war to regain the islands. Compromises were negotiated, were trusted by neither side. When the Indonesian Republic successfully suppressed a large-scale communist revolt, the United States realized that it needed the nationalist government as an ally in the Cold War. Dutch possession was an obstacle to American Cold War goals, so Washington forced the Dutch to grant full independence. A few years later, Sukarno seized all Dutch properties and expelled all ethnic Dutch—over 300,000—as well as several hundred thousand ethnic Indonesians who supported the Dutch cause. In the aftermath, the Netherlands prospered greatly in the 1950s and 1960s but nevertheless public opinion was bitterly hostile to the United States for betrayal. Washington remained baffled why the Dutch were so inexplicably enamored of an obviously hopeless cause. Western New Guinea remained Dutch (until 1961).\n", "bleu_score": null, "meta": null } ] } ]
null
bv6kgx
how do tariffs work?
[ { "answer": "So a 5% tariff is essentially an additional 5% tax that companies have to pay upon importing the product. The tariff is supposed to discourage companies from buying from a specific country. In reality, that doesn't happen. That 5% just makes customers pay 5% more because that is cheaper and/or easier than finding another source for the product.", "provenance": null }, { "answer": "So let's say that I'm building widgets in my factory in Canada. These are fine widgets. I make them for $70 each, and they sell (to stores) for $80. There are also factories in America that produce and sell widgets at around the same price. That puts us in competition; after all, people only need to buy so many widgets per year, but they do need widgets. You can't *not* have a widget, after all.\n\nAmerican shops buy widgets at around $80, then sell them on to the customer at $100. Everyone gets a profit, and because there's very little difference in cost, it doesn't really matter whether the widgets used are Canadian or American. Some people prefer one; some the other.\n\nBut say the President wants to boost the American widget industry. He can either do that by putting money into US widgets (either directly or via things like lowering their taxes to make it a more profitable venture), or by making Canadian widgets less attractive. He chooses the latter. He decides that from now on, anyone importing Canadian widgets is going to have to pay an extra $20 to get their widgets across the border. Here's the logic:\n\nAmerican stores now have a choice between buying American widgets for $80, or Canadian widgets for $100. Given that the market pretty much lets them sell widgets at or around $100, they're suddenly making *much* less profit on Canadian widgets. To make more profit on widgets, they either need to increase the price they sell Canadian widgets at (making them less attractive to consumers, driving them towards buying American widgets and increasing demand), or they need to stop buying Canadian widgets for resale, meaning that they'll have to buy American widgets instead. In the long term, if the price of importing widgets from Canada becomes too high, companies might choose to just produce widgets in the US rather than sending their widget-making jobs abroad. That's jobs for the US workforce, which means taxes for the US government. Either way, it's a win for the hardy American widget manufacturer *and* the wider population.\n\nIn theory.\n\n*Except.*\n\nThere are a couple of things that can go wrong here. The first is that Canada does the same thing to American widgets, but also puts tariffs on American doohickeys and thingumabobs. Now the doohickey and thingumabob industry are pissed at you because you've just dragged them into a trade war that they wanted no part of; by trying to help one industry, you've hurt another.\n\nThe second is that some stores may not have access to American widgets to sell. If you sell the kind of widget that they only make in Canada, you're pretty much boned. All of a sudden, you're now not making a profit unless you sell your widgets at $120. You can't afford to keep the store open without it, but because people need widgets, they have to pay extra. Now you're just hurting the American consumer, because they're paying more for their widgets.\n\nThe third is that American stores don't *really* want to lose money, and may see a business opportunity. 
They may just start selling Canadian widgets at $120, but they may increase the price of American widgets too -- because after all, if you need a widget and the Canadians can't drop their prices (because of the tariffs keeping it artificially high), you're still going to have to spend the money. Why not make them spend $120 on a Canadian widget (giving you $20 profit) or $110 on American widgets (making you $30 profit)?\n\nThis is the most ELI5 version. There are lots of other things that can happen too. (Notably, a tariff often has the effect of devaluing the currency of the target population. That means that goods are cheaper when you buy them abroad, which may -- in some cases -- temporarily make it *more* profitable to buy Canadian widgets if the Canadian dollar takes a hit.)", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "608992", "title": "Tarpaulin", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 461, "text": "A tarpaulin ( , ), or tarp, is a large sheet of strong, flexible, water-resistant or waterproof material, often cloth such as canvas or polyester coated with polyurethane, or made of plastics such as polyethylene. In some places such as Australia, and in military slang, a tarp may be known as a hootch. Tarpaulins often have reinforced grommets at the corners and along the sides to form attachment points for rope, allowing them to be tied down or suspended.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "608992", "title": "Tarpaulin", "section": "Section::::Uses.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 436, "text": "Tarpaulins are used in many ways to protect persons and things from wind, rain, and sunlight. They are used during construction or after disasters to protect partially built or damaged structures, to prevent mess during painting and similar activities, and to contain and collect debris. They are used to protect the loads of open trucks and wagons, to keep wood piles dry, and for shelters such as tents or other temporary structures.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "151694", "title": "Tar (computing)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 767, "text": "In computing, tar is a computer software utility for collecting many files into one archive file, often referred to as a tarball, for distribution or backup purposes. The name is derived from \"(t)ape (ar)chive\", as it was originally developed to write data to sequential I/O devices with no file system of their own. The archive data sets created by tar contain various file system parameters, such as name, time stamps, ownership, file access permissions, and directory organization. The command line utility was first introduced in the Version 7 Unix in January 1979, replacing the tp program. The file structure to store this information was standardized in POSIX.1-1988 and later POSIX.1-2001, and became a format supported by most modern file archiving systems.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36729683", "title": "Tarsnap", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 205, "text": "Tarsnap is a secure online backup service for UNIX-like operating systems, including BSD, Linux, and OS X. It was created in 2008 by Colin Percival. 
Tarsnap encrypts data, and then stores it on Amazon S3.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "151694", "title": "Tar (computing)", "section": "Section::::Limitations.:Tarbomb.\n", "start_paragraph_id": 42, "start_character": 0, "end_paragraph_id": 42, "end_character": 482, "text": "A tarbomb, in hacker slang, is a tar file that contains many files that extract into the working directory. Such a tar file can create problems by overwriting files of the same name in the working directory, or mixing one project's files into another. It is at best an inconvenience to the user, who is obliged to identify and delete a number of files interspersed with the directory's other contents. Such behavior is considered bad etiquette on the part of the archive's creator.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "151694", "title": "Tar (computing)", "section": "Section::::Uses.:Tarpipe.\n", "start_paragraph_id": 33, "start_character": 0, "end_paragraph_id": 33, "end_character": 300, "text": "A tarpipe is the method of creating an archive on the standard output file of the tar utility and piping it to another tar process on its standard input, working in another directory, where it is unpacked. This process copies an entire source directory tree including all special files, for example:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6942239", "title": "Hammock camping", "section": "Section::::Suspension systems, tarpaulins, and amenities.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 290, "text": "Some tarps have an asymmetrical pattern which matches the shape of the hammock, but the majority of hammock campers use a hex-shaped tarpaulin, many of which have a catenary shape for strength against wind and reduction in size and weight. The diamond-shape tarpaulin is also used by some.\n", "bleu_score": null, "meta": null } ] } ]
null
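The widget explanation in the tariff answer above reduces to simple per-unit margin arithmetic. Below is a minimal sketch of that arithmetic, assuming only the hypothetical prices the answer itself uses ($80 wholesale, a flat $20 tariff, retail prices of $100–$120); the `margin` helper and the constants are illustrative, not part of any real pricing model.

```python
# Hypothetical prices taken from the widget example above (illustrative only).
WHOLESALE = 80   # what a US store pays for either widget before any tariff
TARIFF = 20      # flat per-unit tariff on imported Canadian widgets

def margin(retail, wholesale, tariff=0):
    """Store profit per widget: retail price minus wholesale cost and any tariff paid."""
    return retail - (wholesale + tariff)

print(margin(100, WHOLESALE))           # 20 -> American widget at the old $100 retail price
print(margin(100, WHOLESALE, TARIFF))   # 0  -> Canadian widget: the tariff wipes out the margin
print(margin(120, WHOLESALE, TARIFF))   # 20 -> Canadian widget after the store raises its price
print(margin(110, WHOLESALE))           # 30 -> American widget priced up alongside it
```

The last two lines reproduce the answer's point: once retail prices drift upward, the store earns $20 on a Canadian widget at $120 and $30 on an American widget at $110, so the consumer pays more either way.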
68jjsx
when a movie is said to have begun "filming", typically how long is this process vs. the rest of the movie making process?
[ { "answer": "Well that depends. If you have a stellar editor, than they might be able to finish up rather quickly, and you have a stellar crew, they might finish a scene in one wrap. But generally speaking, actually filming a movie takes alot longer than most people think. You usually end up filming one scene in a 12 hour work day. Think of scene in a movie you saw recently. It seems like the actor must have performed that scene very naturally in one take, but in reality, he probably said the same lines 100 times, each being shot from various angles and acted differently. Then comes the editing, which picks out the best version and angle of the previous 100 shots. The filming aspect could generally last anywhere from 1 to 6 months depending on the length and complexity of the film, and then the editing an additional 1 to 3. Of course these are just ballpark estimates as some films could take much longer for both filming and editing. (A film heavy in cgi effects would take much longer to edit) and of course these all depend on the scale of the production. (A set with 200 staff will be able to produce the same project much quicker than a set with 20 staff). But none of this is even taking to account the PR and marketing that goes in to play throughout the process, and most finalized films will take longer to come to the big screen because they will compete in various film competitions and do test screenings first. But to answer your question, the duration of each aspect depends on alot of different variables, making it a very difficult questions to answer. I would like to see a comment from someone who has worked on a numerous amount of feature films with a large production staff to see if they could give some average numbers at that size.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "11334798", "title": "The Back of Beyond", "section": "Section::::Production.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 284, "text": "The film took 3 years to make: one year of thinking and planning, one year of production, and one year to edit and finish it. The film was scripted in advance, though changes were made during filming and production. Of the three years, only six weeks were spent shooting on location.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "49320076", "title": "Priyanka (2016 film)", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 236, "text": "Filming began in the November month of 2014 and took a single stretch of 32 days to complete. However, the film could not release for a long time. After multiple announcements of the release dates, the film released on 5 February 2016.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1343178", "title": "The Alamo (1960 film)", "section": "Section::::Production.:Filming.\n", "start_paragraph_id": 59, "start_character": 0, "end_paragraph_id": 59, "end_character": 258, "text": "Filming ended on December 15. A total of 560,000 feet of film was produced for 566 scenes. Despite the scope of the filming, it lasted only three weeks longer than scheduled. 
By the end of development, the film had been edited to three hours and 13 minutes.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18884586", "title": "Ara (film)", "section": "Section::::Production.:Shooting and editing.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 258, "text": "Shooting began on May 10, 2007, and both shooting and editing were completed in only 13 days, due to the being done on a daily basis as the shooting went on (a standard feature film usually takes from one month up to a year for the editing to be completed).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "27602393", "title": "Monsters (2010 film)", "section": "Section::::Production.:Editing and effects.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 726, "text": "Every night after shooting, editor Colin Goudie and his assistant Justin Hall would download the footage so the memory sticks could be cleared and ready for the next day. While new footage was being captured, the previously filmed footage was edited at the production team's hotel. After filming concluded, the crew had over 100 hours of footage. The original cut was over four hours long but was trimmed to 94 minutes after eight months of editing. Edwards originally had the ending of the film both at the beginning and the end. He and the film's producers disagreed about the placement, so he decided to put the chronological ending of the film at the beginning and end the film immediately after Andrew and Samantha kiss.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17477511", "title": "Nekromantik 2", "section": "Section::::Production.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 530, "text": "The shooting of the film occurred in September and October 1990. The editing of the film was completed by April 1991. The film was originally planned to last 85 minutes, but the print shown at the Berlin premiere lasted 111 minutes. It was soon shortened to 104 minutes, after removing \"unimportant bits and pieces\" from various scenes. Reportedly, no scene of the film was completely removed. David Kerekes commented that it could stand to be further shortened, since several sequences were, in his view, protracted and tedious.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "45195386", "title": "The Dam Keeper", "section": "Section::::Production.:Pre-production.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 979, "text": "Official production began in early 2013. Although Kondo and Tsutsumi had planned to carry out most of the film's production within the three months that they took off from Pixar, this schedule was developed around an early version of the film that only ran for eight minutes. As the film's length expanded, eventually reaching eighteen minutes, the production period stretched into a total of nine months. \"We just couldn't foresee the length of the story we wanted to tell in the beginning\", Tsutsumi reflected. \"[But] if you look at the math, three months of eight minutes to nine months of 18 minutes is not too bad.\" Commenting on the film's burgeoning run time, Kondo explained that eight minutes seemed insufficient to convey the kind of story that he and Tsutsumi wanted to tell - one in which a character's perception of life significantly changes. 
However, Kondo hopes that in the future, they will be able to make films with \"just as much emotion in a shorter format.\"\n", "bleu_score": null, "meta": null } ] } ]
null
18ryla
Who paid for the early running water and electricity systems in America? How did they work?
[ { "answer": "This is not my area of expertise, but my grandfather had a role in rural electrification in Kansas, so I know a bit. \n\nDuring the Roosevelt administration a program of rural electrification, supported by co-ops (many which still exist in some form today) brought power to the farms of the Midwest. It was a large undertaking, with one of the aims being to staunch the flow of people leaving rural communities for cities by creating jobs and improving quality if life. \n\n_URL_0_\n_URL_1_", "provenance": null }, { "answer": "You will find \"Beneath the Metropolis: The Natural and Man-Made Underground of the World's Great Cities\" by Alex Marshall rather interesting. He looks at twelve cities and looks in depth at the historical development of their sewage and fresh water infrastructure. From what I remember about the chapter on Rome, the Imperial government would build the aqueducts to bring fresh water to the city, and an under ground sewage system to drain the city. Water was distributed to public fountains free of charge, but wealthier Romans could build pipes to deliver water directly to their house. The elite could have indoor plumbing, if they paid for it themselves. In the three chapters about US cities and the chapters on Paris and London, the water and sewer systems were all government funded. Mainly to improve public health. \n Electricity is a much more recent development. In the United States, privately owned companies built and maintained the electrical generating and distribution systems. These privately owned companies were tightly regulated by Public Utility Commissions very early. They wanted to wire the big cities, where there was a dense concentration of customers, but they were reluctant to electrify the sparsely populated rural areas. It took a New Deal program to break this log-jam, the Rural Electrification Agency, during the mid 1930s.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "15713153", "title": "Bethlehem Waterworks", "section": "Section::::Description and history.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 236, "text": "The system is believed to be the first pump-powered water supply to be implemented in what is now the United States. The town of Boston, Massachusetts had a municipal water supply as early as 1652, but it was purely powered by gravity.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "115451", "title": "Monticello, Kentucky", "section": "Section::::History.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 464, "text": "Electricity was available in 1905 and city water in 1929. Manufacturing dominated the economy from the late 1950s and 1960s until the late 20th and early 21st century. In 1973, Belden Corporation (wire and cable) employed 300 people; Gamble Brothers (wood products) employed 161 people, Waterbury Garment (clothing) employed 271 people, and Monticello Manufacturing (clothing) employed 240 people. All four of these companies no longer do business in Monticello. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19523143", "title": "Bullendale", "section": "Section::::Phoenix Mine, Battery and Power Plant.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 214, "text": "In 1896 a new water race was built, enabling the water to be used directly for power once again. 
The electric system continued to be used as auxiliary until about 1901 when the dynamos were used for the last time.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1633240", "title": "Boston Elevated Railway", "section": "Section::::History.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 528, "text": "In the late 19th century, the electric power industry was in its infancy; the power grid as we know it today simply did not exist. The railway company constructed its own power stations; by 1897, these included distributed generation stations in downtown Boston, Allston, Cambridge (near Harvard), Dorchester, Charlestown, East Cambridge, and East Boston. By 1904, the system had 36 megawatts of generating capacity, of track for over 1550 street cars (mostly closed but some open), and of elevated track for 174 elevated cars.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "125745", "title": "Battersea Power Station", "section": "Section::::History.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 636, "text": "Until the late 1930s electricity was supplied by municipal undertakings. These were small power companies that built power stations dedicated to a single industry or group of factories, and sold any excess electricity to the public. These companies used widely differing standards of voltage and frequency. In 1925 Parliament decided that the power grid should be a single system with uniform standards and under public ownership. Several of the private power companies reacted to the proposal by forming the London Power Company (LPC). They planned to heed parliament's recommendations and build a small number of very large stations.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14105333", "title": "Electric power system", "section": "Section::::History.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 1595, "text": "By 1888, the electric power industry was flourishing, and power companies had built thousands of power systems (both direct and alternating current) in the United States and Europe. These networks were effectively dedicated to providing electric lighting. During this time the rivalry between Thomas Edison and George Westinghouse's companies had grown into a propaganda campaign over which form of transmission (direct or alternating current) was superior, a series of events known as the \"War of Currents\". In 1891, Westinghouse installed the first major power system that was designed to drive a synchronous electric motor, not just provide electric lighting, at Telluride, Colorado. On the other side of the Atlantic, Mikhail Dolivo-Dobrovolsky of AEG and Charles Eugene Lancelot Brown of Maschinenfabrik Oerlikon, built the very first long-distance (175 km, a distance never tried before) high-voltage (15 kV, then a record) three-phase transmission line from Lauffen am Neckar to Frankfurt am Main for the Electrical Engineering Exhibition in Frankfurt, where power was used light lamps and move a water pump. In the US the AC/DC competition came to an end when Edison General Electric was taken over by their chief AC rival, the Thomson-Houston Electric Company, forming General Electric. In 1895, after a protracted decision-making process, alternating current was chosen as the transmission standard with Westinghouse building the Adams No. 
1 generating station at Niagara Falls and General Electric building the three-phase alternating current power system to supply Buffalo at 11 kV.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "112815", "title": "Parkersburg, Iowa", "section": "Section::::History.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 285, "text": "The first electric power was furnished by the Parkersburg Electric Light & Power Company, composed of local citizens who built the plant in 1898 at a cost of ten thousand dollars. Parkersburg's roads were paved around 1920. The first water tower only had a capacity of 40,000 gallons.\n", "bleu_score": null, "meta": null } ] } ]
null
3rt1jr
Why did the British Empire collapse while the Soviet Union remained a superpower after WW2?
[ { "answer": "Britain wasn't exactly \"devastated\" by the war in the same way as the USSR. German planes actually did relatively little damage to Britain's infrastructure, which remained largely intact and increased in productive capacity throughout the war. But Britain financed her role in the war by *heavy* borrowing; when the war ended, she could not maintain the expense of an overseas empire and service her now-enormous debt at the same time. So she was devastated financially.\n\nThe USSR was \"devastated\" in the more conventional sense, having an enormous chunk of her agricultural lands taken away and losing 25 million people (the high estimate of the death toll of the Great Purge is about 300,000, so not very significant compared to WWII). But her industrial capacity was not greatly affected, as the USSR managed to evacuate most of their factories to the unreachable Urals early in the war; and the USSR got back all its lost agricultural land and then some when the war ended (they also more than made up for their lost population by absorbing Poland, half of Germany and other parts of Eastern Europe). Moreover, while they did do some conventional borrowing, most of the aid they received from the West was Lend Lease material from the US or outright \"gifts\" from Britain, so they did not find themselves facing a crushing national debt after the war.", "provenance": null }, { "answer": "So /u/ThePutback wrote about the differences in how the nations were \"devastated\" in the war, but I wanted to expand on another difference between the British Empire and the Soviet Union:\n\nThe Soviet Union was the ideological leader of an increasingly significant portion of the world in the wake of WW2.\n\nSo while the British Empire physically possessed numerous colonies and protectorates across the world, the Soviet Union had the virtual **ideological ownership** of many nations and revolutionary groups all around the world. \n\nThis only grew as Europe de-colonized rapidly after WW2 when many new governments and revolutionary groups aligned themselves with Eastern bloc (regardless of how much adherence they had to Marxist/communist ideology). \n\nWhether it was the split between the Koreas immediately after WW2, or the ideological struggle between the Vietnams split after the French withdrew in the 1950s, the wave of pan-Arabism in the 1950s and 1960s that was closely aligned with the Soviet Union, or the formation of People's Democratic Republic of Yemen after the British protectorate there ended in the late 60s, or the fight over Angola after independence from Portugal in the 70s... the Soviet Union possessed a massive amount of power with a large bloc of nations in the world throughout all the decades of the Cold War.\n\nThe British simply didn't have the manpower or financial situation to maintain such an extensive network of colonies, especially as de-colonization often ended up in bloody affairs in many places around the world. Furthermore, the backbone of the British Empire, the Royal Navy, could no longer compete with the US or even the Soviet navies.\n\nIn fact, by the end of WW2, the UK had already ceded domination of the seas to the United States and navies are very expensive. 
For instance, the US maintained at times during the Cold War over 16 fleet aircraft carriers simultaneously (over 24 of them in different configurations and forms in 1960 alone), and by the end of the Cold War in 1991 had 14 supercarriers in simultaneous service.\n\nIn contrast, the UK retired its last fleet carriers using conventional catapult and arresting gear configurations in the 1970s. No new designs were bought or commissioned, and the UK nearly sold its remaining Harrier-equipped light carriers until the Falklands War stopped their sale. Add on the humiliation of the Suez Crisis and the UK finally pulling its last carrier out of Hong Kong in the 1970s, and the end of the \"East of Suez\" era for British power projection was evident. And, without the ability to project power to defend its colonies and protectorates abroad, decolonization only hastened, and many of those protectorates became aligned with the US, which filled the role that the British once did (such as the end of protectorate status for the Trucial States, which would form Qatar, Bahrain, and the UAE, nations that have US military bases/forces stationed there).", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "78366", "title": "Superpower collapse", "section": "Section::::British Empire.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 951, "text": "The consequence of fighting two World Wars in a relatively short amount of time, along with the emergence of the United States and the Soviet Union rise to superpower status after the end of World War II, both of which were hostile to British imperialism and along with the change in ideology led to a rapid wave of decolonization all over the world in the decades after World War II. The Suez Crisis of 1956 is generally considered the beginning of the end of Britain's period as a superpower, although other commentators have pointed to World War I, the Depression of 1920-21, the Partition of Ireland, the return of the pound sterling to the gold standard at its prewar parity in 1925, the loss of wealth from World War II, the end of Lend-Lease Aid from the United States in 1945, the postwar , the Winter of 1946–47, the beginning of decolonization, and the independence of India as key points in Britain's decline and loss of superpower status.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31731", "title": "Foreign relations of the United Kingdom", "section": "Section::::History.:Since 1945.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 744, "text": "The British had built up a very large worldwide British Empire, which peaked in size in 1922, after more than half a century of unchallenged global supremacy. The cumulative costs of fighting two world wars, however, placed a heavy burden upon the UK economy, and after 1945 the British Empire gradually began to disintegrate, with many territories granted independence. By the mid-to-late 1950s, the UK's status as a superpower had been largely diminished by the rise of the United States and the Soviet Union. Many former colonial territories joined the \"Commonwealth of Nations,\" an organisation of fully independent nations now with equal status to the UK. 
Britain finally turned its attention to the continent, joining the European Union.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "362048", "title": "Consequences of Nazism", "section": "Section::::Western Europe.\n", "start_paragraph_id": 38, "start_character": 0, "end_paragraph_id": 38, "end_character": 395, "text": "Britain and France, two of the victors, were exhausted and bankrupted by the war, and Britain lost its superpower status. With Germany and Japan in ruins as well, the world was left with two dominant powers, the United States and the Soviet Union. Economic and political reality in Western Europe would soon force the dismantling of the European colonial empires, especially in Africa and Asia.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15316", "title": "Imperialism", "section": "Section::::Imperialism by country.:Britain.\n", "start_paragraph_id": 54, "start_character": 0, "end_paragraph_id": 54, "end_character": 602, "text": "World War II has further weakened Britain's position in the world, especially financially. Decolonization movements proliferated throughout the Cold War, resulting in Indian independence and partition in 1947 and the establishment of independent states throughout Africa. British imperialism continued for a few years, notably with its involvement in the Iranian coup d'état of 1953 and in Egypt during the Suez Crisis in 1956. However, with the United States and Soviet Union emerging from World War II as the sole superpowers, Britain's role as a worldwide power declined significantly and rapidly. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4305070", "title": "History of Western civilization", "section": "Section::::Fall of the western empires: 1945–1999.\n", "start_paragraph_id": 214, "start_character": 0, "end_paragraph_id": 214, "end_character": 1002, "text": "Following World War II, the great colonial empires established by the Western powers beginning in early modern times began to collapse. There were several reasons for this. Firstly, World War II had devastated European economies and had forced governments to spend great deals of money, making the price of colonial administration increasingly hard to manage. Secondly, the two new superpowers following the war, the United States and Soviet Union were both opposed to imperialism, so the now weakened European Empires could generally not look to the outside for help. Thirdly, Westerners increasingly were not interested in maintaining and even opposed the existence of empires. The fourth reason was the rise of independence movements following the war. The future leaders of these movements had often been educated at colonial schools run by Westerners where they adopted Western ideas like freedom, equality, self-determination and nationalism, and which turned them against their colonial rulers.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4721", "title": "British Empire", "section": "Section::::Decolonisation and decline (1945–1997).\n", "start_paragraph_id": 79, "start_character": 0, "end_paragraph_id": 79, "end_character": 1592, "text": "Though Britain and the empire emerged victorious from the Second World War, the effects of the conflict were profound, both at home and abroad. Much of Europe, a continent that had dominated the world for several centuries, was in ruins, and host to the armies of the United States and the Soviet Union, who now held the balance of global power. 
Britain was left essentially bankrupt, with insolvency only averted in 1946 after the negotiation of a $US 4.33 billion loan from the United States, the last instalment of which was repaid in 2006. At the same time, anti-colonial movements were on the rise in the colonies of European nations. The situation was complicated further by the increasing Cold War rivalry of the United States and the Soviet Union. In principle, both nations were opposed to European colonialism. In practice, however, American anti-communism prevailed over anti-imperialism, and therefore the United States supported the continued existence of the British Empire to keep Communist expansion in check. The \"wind of change\" ultimately meant that the British Empire's days were numbered, and on the whole, Britain adopted a policy of peaceful disengagement from its colonies once stable, non-Communist governments were established to assume power. This was in contrast to other European powers such as France and Portugal, which waged costly and ultimately unsuccessful wars to keep their empires intact. Between 1945 and 1965, the number of people under British rule outside the UK itself fell from 700 million to five million, three million of whom were in Hong Kong.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31723", "title": "History of the United Kingdom", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 1614, "text": "Britain was no longer a military or economic superpower, as seen in the Suez Crisis of 1956. Britain no longer had the wealth to maintain an empire, so it granted independence to almost all its possessions. The new states typically joined the Commonwealth of Nations. The postwar years saw great hardships, alleviated somewhat by large-scale financial aid from the United States, and some from Canada. Prosperity returned in the 1950s. Meanwhile, in 1945–50 the Labour Party built a welfare state, nationalized many industries, and created the National Health Service. The UK took a strong stand against Communist expansion after 1945, playing a major role in the Cold War and the formation of NATO as an anti-Soviet military alliance with West Germany, France, the U.S., Canada and smaller countries. NATO remains a powerful military coalition. The UK has been a leading member of the United Nations since its founding, as well as numerous other international organizations. In the 1990s neoliberalism led to the privatisation of nationalized industries and significant deregulation of business affairs. London's status as a world financial hub grew continuously. Since the 1990s large-scale devolution movements in Northern Ireland, Scotland and Wales have decentralized political decision-making. Britain has wobbled back and forth on its economic relationships with Western Europe. It joined the European Economic Community in 1973, thereby weakening economic ties with its Commonwealth. However, the Brexit referendum in 2016 committed the UK to leave the European Union; negotiations are currently underway.\n", "bleu_score": null, "meta": null } ] } ]
null
2qtms9
why do progressive countries put their focus and resources on free healthcare and free education and not on free food, free clothing and free shelter?
[ { "answer": "Give a man a fish and he will eat for a day, teach a man to fish and you feed him for a lifetime.\n\nBasically education and health care is a much more cost effective way to reach the same end goal. Plus if you provide shelter food etc free to everyone then lots of people might just stop working. ", "provenance": null }, { "answer": "probably because with healthcare and education being readily available to the masses, theres a lot less need for shelter and food for the poor.", "provenance": null }, { "answer": "The logic seems to be that education and healthcare are expensive, so it has to be something provided by the government to make up for that. Free education has its merits for making a better society, but the others don't really make sense. Mostly they are decided by the voters, who tend to vote for selfish reasons.\n\nIt should also be noted that \"Progressive countries\" don't provide top notch healthcare or education in the same way a prison doesn't provide the best food.", "provenance": null }, { "answer": "The reason is that most progressive states have already taken care of their citizens' need for food, clothing, and shelter. People in those countries are either able to buy their own food, clothing, and shelter easily or there are government programs that provide food, clothing and/or shelter.\n\nI would also say that the states aren't providing \"free\" healthcare and education, but rather they are socializing it. People still have to pay taxes to support those systems, but in return no individual person has to pay a lot of money out of pocket when they use those services.\n\nYou could also look at it more cynically and say that governments aren't concerned with their citizens' needs, but rather strengthening the state economically and militarily. Schools and healthcare for productive citizens do a better job of that than providing aid to less productive, poorer citizens. Realistically, though, I think governments try to help with all of the things you listed, but they can't do it all because they have limited resources.\n\nEdit: Here are some examples of subsidized food, clothing, and shelter in the US. Food in the US is very cheap because of government subsidies. A lot of it goes towards unhealthy food that uses a lot of corn, but it's still cheap. The US also gives out food stamps. \n\nFor shelter, there are lots of homeless shelters. They sometimes get filled up, especially during the winter, but for the most part shelter is available. There are other issues with homeless shelters that may make being on the street preferable for some people, but they exist. There's also government subsidized housing through either Section 8 subsidies. \n\nI don't know about any clothing programs specifically, but that's probably because clothes are easy to get here at charities and thrift stores. Heavy winter clothes may be harder to come by, but an old pair of jeans and a t-shirt are not.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "8695082", "title": "Health policy", "section": "Section::::Background.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 697, "text": "Other countries have an explicit policy to ensure and support access for all of its citizens, to fund health research, and to plan for adequate numbers, distribution and quality of health workers to meet healthcare goals. 
Many governments around the world have established universal health care, which takes the burden of healthcare expenses off of private businesses or individuals through pooling of financial risk. There are a variety of arguments for and against universal healthcare and related health policies. Healthcare is an important part of health systems and therefore it often accounts for one of the largest areas of spending for both governments and individuals all over the world.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "52546879", "title": "Universal Health Coverage Day", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 440, "text": "Universal health coverage has been included in the new Sustainable Development Goals for 2015-2030, adopted by the United Nations. In many nations, inclusive healthcare is very rudimentary and does not include heroic interventions or long term care. WaterAid reports that national infrastructure in many nations cannot support first world healthcare delivery mechanisms because it may not even provide potable water, let alone electricity.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13061359", "title": "International healthcare accreditation", "section": "Section::::Accreditation services.\n", "start_paragraph_id": 42, "start_character": 0, "end_paragraph_id": 42, "end_character": 368, "text": "For example, some countries, such as the USA, perform very poorly when it comes to providing anything close to universal access to healthcare of adequate quality to the population living within their own borders. Others, such as the United Kingdom and Australia, have created state-funded systems which provide everything without the assistance of the private sector.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1043143", "title": "Single-payer healthcare", "section": "Section::::Description.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 530, "text": "Single-payer healthcare systems pay for all covered healthcare-related services by a single government or government-related source. It is a strategy employed by governments to achieve several goals, including universal healthcare, decreased economic burden of health care, and improved health outcomes for the population. Universal health care worldwide was established as a goal of the World Health Organization in 2010 and adopted by the United Nations General Assembly in 2015 for the 2030 Agenda for Sustainable Development.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7699151", "title": "The Market for Liberty", "section": "Section::::Summary.:Part II – A Laissez-Faire Society.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 1127, "text": "Chapter 5, \"A Free and Healthy Economy\", begins by noting the difficulties people have in picturing a society radically different from their own. It concludes that poverty would be better addressed by a laissez-faire society for many reasons, including the fact that unemployment is caused by the government, that untaxed businesses would have more profits to reinvest in productivity-enhancing technology, that private charities are more efficient than government, that parents would be more likely to avoid having excess children in the absence of social safety nets, etc. It argues that a plethora of choices in education would emerge in a free market. 
It also notes that the focus of media in a laissez-faire society would shift from covering government to covering business and individuals and that abuses would be checked by reporters looking for stories on aggression or fraud. The chapter argues that the quality of health care could be more efficiently kept at an adequate level through reputation, standards instituted by insurance companies, etc. It also discusses how currency could be provided without government.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "811714", "title": "Comparison of the healthcare systems in Canada and the United States", "section": "Section::::Price of health care and administration overheads.\n", "start_paragraph_id": 54, "start_character": 0, "end_paragraph_id": 54, "end_character": 641, "text": "Through all entities in its public–private system, the US spends more per capita than any other nation in the world, but is the only wealthy industrialized country in the world that lacks some form of universal healthcare. In March 2010, the US Congress passed regulatory reform of the American \"health insurance\" system. However, since this legislation is not fundamental \"healthcare\" reform, it is unclear what its effect will be and as the new legislation is implemented in stages, with the last provision in effect in 2018, it will be some years before any empirical evaluation of the full effects on the comparison could be determined.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "35458791", "title": "List of countries with universal health care", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 547, "text": "The logistics of universal healthcare vary by country. Some programs are paid for entirely out of tax revenues. In others tax revenues are used either to fund insurance for the very poor or for those needing long term chronic care. In some cases such as the UK, government involvement also includes directly managing the health care system, but many countries use mixed public-private systems to deliver universal health care. In most European countries, universal healthcare entails a government-regulated network of private insurance companies.\n", "bleu_score": null, "meta": null } ] } ]
null
2vm3oj
Why do certain instruments have to transpose to concert pitches? For example, why does a trumpet play a C instead of concert B-flat? Was this some sort of musical evolution?
[ { "answer": "I can answer this for the brass more convincingly than for the woodwinds, perhaps someone else can fill in those blanks. \n\nBack before the valve was invented, composers would write music in a certain key. The trombone has a slide which enables it to change length at will, meaning it can play in Eb as easily as in F. But the Trumpet and Horn couldn't change their length as easily, meaning they only had the notes in their overtone series to work with--major chords built upon the fundamental, mostly. In order to play more than a note or two in any other key, the music had to either be written REALLY high--where the notes of the overtone series get closer together--or a horn of a different length was needed. So horn and trumpet players would need several horns to play in any piece that changed to keys that were not closely related. \n\nNow in order to make things the music easier to read, the tradition was that the fundamental pitch of each horn (F on the F horn, Bb on the Bb horn, etc) would be notated in music as a C. This made it so that the player could easily just pick up a longer or shorter horn, and the written music would look the same, and the physical sensation of playing the horn--such as distance between overtone notes--would be the same (or nearly so), but notes of the correct key would be coming out of the bell.\n\nAs instrument design progressed, horns and trumpets came with extra tuning slides of various lengths, so that a whole new horn was not needed to play in other keys, but rather you could take a short tuning slide out and put a longer one in, putting the horn in a different key. These slides were called crooks because by the time of Weber and Wagner, composers were changing keys so quickly and fluently, that the slides had to be shaped with a crook in them to hang over the arm of the performer to facilitate quicker slide changes. \n\nWith the invention of the valve in 1815, and its subsequent adoption/popularization, this practice fell out of favor (much to Brahms' chagrin: he loved the sound of the natural horn, and hated the valve horn). So the tuba doesn't have this tradition of transposition, but rather it reads the notes on the page as they are, and the player will play that note based upon the pitch of their instrument. The trumpet and horn, though kept the tradition of transposition even after the invention of the valve. The Bb trumpet became standard. Modern players play a lot on both Bb and C trumpets and need to be able to read many different transpositions on both instruments. The horns have kinda settled on the F horn, but need to be able to read in any transposition. \n\nHowever, one caveat must be included here. When the valve came to be, inventors like Adolph Sax, created whole families of instruments that were meant to sound the same over the entire range of music. The saxophone family alternates Bb and Eb instruments (Bb for soprano, Eb for Alto, Bb for Tenor, Eb for Baritone, Bb for Bass, etc). This made it so that a player of one instrument could pick up another instrument and play the same fingerings for the same written music and be okay. The third space C is fingered the same on all the saxophones, and depending on which saxophone it is, you'll get either a Bb or an Eb coming out of the instrument. \n\nSax also invented the saxhorn, which accounts for euphoniums, alto horns, and some tubas, also alternating (Eb for alto/tenor horns, Bb for baritone/euphonium, Eb for bass tuba, BBb for contrabass tuba, etc). 
This is particularly useful in the British Brass Band world, where everyone (except the bass trombone for some reason) reads treble clef parts in Bb or in Eb. Again, the idea here is that a euphonium player can move to tenor horn and read the music the same way with the same fingerings and be in the right key without having to think about the key of the piece AND the key of the horn AND the fingerings that go with both. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1969913", "title": "B-flat major", "section": "", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 220, "text": "Many transposing instruments are pitched in B-flat major, including the clarinet, trumpet, tenor saxophone, and soprano saxophone. As a result, B-flat major is one of the most popular keys for concert band compositions.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "415183", "title": "Minor third", "section": "", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 309, "text": "The sopranino saxophone and E♭ clarinet sound in the concert pitch ( C ) a minor third higher than the written pitch; therefore, to get the sounding pitch one must transpose the written pitch up a minor third. Instruments in A – most commonly the A clarinet, sound a minor third lower than the written pitch.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "47140228", "title": "German horn", "section": "Section::::Related horns.:Mellophone.\n", "start_paragraph_id": 41, "start_character": 0, "end_paragraph_id": 41, "end_character": 443, "text": "In orchestral or concert band settings, regular concert horns are normally preferred to mellophones because of their tone, which blends better with woodwinds and strings, and their greater intonational subtlety—since the player can adjust the tuning by hand. For these reasons, mellophones are played more usually in marching bands and brass band ensembles, occasionally in jazz bands, and almost never in orchestral or concert band settings.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "47140228", "title": "German horn", "section": "Section::::Related horns.:Mellophone.\n", "start_paragraph_id": 40, "start_character": 0, "end_paragraph_id": 40, "end_character": 651, "text": "As they are pitched in F or G and their range overlaps that of the horn, mellophones can be used in place of the horn in brass and marching band settings. Mellophones are, however, sometimes unpopular with horn players because the mouthpiece change can be difficult and requires a different embouchure. Because the bore is more cylindrical than the orchestral horn the \"feel\" of the mellophone can be foreign to a horn player. Another unfamiliar aspect of the mellophone is that it is designed to be played with the right hand instead of the left (although it can be played with the left). Intonation can also be an issue when playing the mellophone.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "162401", "title": "Power chord", "section": "Section::::Analysis.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 695, "text": "When two or more notes are played through a distortion process that non-linearly transforms the audio signal, additional partials are generated at the sums and differences of the frequencies of the harmonics of those notes (intermodulation distortion). 
When a typical chord containing such intervals (for example, a major or minor chord) is played through distortion, the number of different frequencies generated, and the complex ratios between them, can make the resulting sound messy and indistinct. This effect is accentuated as most guitars are tuned based on equal temperament, with the result that minor thirds are narrower, and major thirds wider, than they would be in just intonation.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26795", "title": "Saxophone", "section": "Section::::Description.:Pitch and range.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 695, "text": "Because all saxophones use the same key arrangement and fingering to produce a given notated pitch, it is not difficult for a competent player to switch among the various sizes when the music has been suitably transposed, and many do so. Since the baritone and alto are pitched in E, players can read concert pitch music notated in the bass clef by reading it as if it were treble clef and adding three sharps to the key signature. This process, referred to as \"clef substitution\", makes it possible for the Eb instruments to play from parts written for baritone horn, bassoon, euphonium, string bass, trombone, or tuba. This can be useful if a band or orchestra lacks one of those instruments.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "61009", "title": "Mellotron", "section": "Section::::Operation.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 538, "text": "Another factor in the Mellotron's sound is that the individual notes were recorded in isolation. For a musician accustomed to playing in an orchestral setting, this was unusual, and meant that they had nothing against which to intonate. Noted cellist Reginald Kirby refused to downtune his cello to cover the lower range of the Mellotron, and so the bottom notes are actually performed on a double bass. According to Mellotron author Nick Awde, one note of the string sounds contains the sound of a chair being scraped in the background.\n", "bleu_score": null, "meta": null } ] } ]
null
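The transposition answer above describes a fixed mapping from written pitch to sounding (concert) pitch: an instrument "in Bb" sounds a major second below what is written, an instrument "in F" a perfect fifth below, and so on. Here is a small sketch of that mapping, limited to a few common cases for illustration; the offset table is an assumption drawn from the answer, not an exhaustive or authoritative list, and octaves are ignored (only pitch classes are returned).

```python
# Semitone offsets from written pitch to sounding (concert) pitch.
# Assumed values for a few common transposing instruments: a written C on a
# B-flat trumpet sounds a Bb (two semitones lower), on an F horn an F, etc.
TRANSPOSITION = {
    "C instrument (flute, oboe)": 0,
    "Bb trumpet / Bb clarinet": -2,   # sounds a major 2nd below written
    "F horn": -7,                     # sounds a perfect 5th below written
    "Eb alto saxophone": -9,          # sounds a major 6th below written
}

NOTES = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"]

def concert_pitch(written_note: str, instrument: str) -> str:
    """Return the sounding pitch class for a written note on a given instrument."""
    i = NOTES.index(written_note)
    return NOTES[(i + TRANSPOSITION[instrument]) % 12]

print(concert_pitch("C", "Bb trumpet / Bb clarinet"))  # Bb
print(concert_pitch("C", "F horn"))                    # F
print(concert_pitch("C", "Eb alto saxophone"))         # Eb
```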
w45af
Is Civil War revisionist history going on?
[ { "answer": "How do you want your question to be answered here? Are you asking for a run down of the historical threads weaving together? Are you looking for a \"proximate cause\"?\n\nDescribing the Civil war as \"about harbor dominance\" is akin to saying WWI was \"about naval dominance\".", "provenance": null }, { "answer": "There's a complex set of reasons, slavery foremost among them but hardly in isolation, that precipitated the Civil War. My knowledge of that period isn't as comprehensive as it is for other historical eras, but I've never heard of the Southern ports themselves as a *casus belli.*\n\nThat said, the Union blockade of Southern ports during the war itself was a key portion of the Northern grand strategy. Your friend may have conflated that aspect of the war with the war's origins.", "provenance": null }, { "answer": "Actually, this is exactly why the Midwest was willing to fight. It's not that they had all the ports, that's kind of silly, but the Mississippi was the **lifeblood** of the midwestern economy (float the food and livestock down to new orleans), and without that they were screwed (this was also before the trans-continental railroad, which the South also opposed for similar reasons).\n\nOne of the first threats the South made was to shut down or heavily tax the Mississippi, and the Midwest was rather pissed about that (their only real beef with the South I believe), as their livelihoods relied on that trade as much as the South's relied on slavery.\n\nBut otherwise, the civil war had many causes. I like to think of it as the South losing dominance due to a fixed economy, while the North and Midwest were booming due to improved technology, and general growth of trade.\n\nAlso, the constitution was not really capable of handling as many states as were added smoothly. The South needed massive amounts of land for its agriculture, while the North needed much less for its industry. Expanding the number of slave states was actually quite important for its very survival.\n\nFinally, the South was dependent on low import tariffs for finished goods, while the rapidly growing economy of the north required high import tariffs for finished goods to allow them to compete while they scaled upwards. This led to the Treaty of Abominations of .. whenever.. which lead to the First Secession Crisis. Personally I find this to be the biggest giveaway about the real cause of the war, that this was the first thing that truly outraged the South so much it wished to Secede immediately.\n\nBy the way, ironically this whole event was for nothing, because during the Civil War, the main hope of the South was that England would step in, as it was felt England could not survive 3 years without Southern cotton (their industry was largely built upon it). The really funny bit is, England, rather pissed about this situation, ended up shifting its cotton sources to Egypt and India around that time, so given 5 years, the South would have gone into a deep depression either way, and this would have been solved some other way.", "provenance": null }, { "answer": "Oh man, the US Civil War has seen revisionism since before it was over.", "provenance": null }, { "answer": "Technically both of you are correct in some sense. Like any war, there are going to be multiple reasons as to why people decide to engage in armed conflict. \n\nLincoln's stated goal was to keep the South in the Union. 
He never saw the Southern states as legally separate, but rather as a group with rebellious leaders who were making an illegal decision under the Constitution.\n\nJefferson Davis in his inaugural address stated that the war was over slavery, something that he changed once the war was a lost cause; this is actually one of the roots of much of the Lost Cause writing that continues to be popular today. (Some quick quotes follow to show this detail. At the beginning of the war, states' rights meant slavery; of course, this changed later.)\n\n\"It is not safe ... to trust $800 million worth of Negroes in the hands of a power which says that we do not own the property. ... So we must get out ...\" -- The Daily Constitutionalist, Augusta, Ga., Dec. 1, 1860\n\n\"(Northerners) have denounced as sinful the institution of slavery. ... We, therefore, the people of South Carolina ... have solemnly declared that the Union heretofore existing between this State and other States of North America dissolved.\" -- from \"Declaration of the Causes of Secession\"\n\n\"As long as slavery is looked upon by the North with abhorrence ... there can be no satisfactory political union between the two sections.\" -- New Orleans Bee, Dec. 14, 1860\n\n\"Our new government is founded upon ... the great truth that the Negro is not equal to the white man; that slavery, subordination to the superior race is his natural and moral condition.\" -- Alexander Stephens, vice president of the Confederacy, March 21, 1861\n\nThe midwestern states became interested in the conflict for a multitude of other reasons. Namely, the settlers were often very involved with the idea of Popular Sovereignty (a false choice in itself), and often the residents of those states would select their side based on personal convictions. \n\nNow back to your question: it had nothing to do with ports. The ports issue comes into play when discussing the Anaconda Plan, in which the Union blockaded the Southern ports and slowly strangled the South while sweeping in from the west (Grant's army). ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "352697", "title": "Radical Republicans", "section": "Section::::Historiography.\n", "start_paragraph_id": 34, "start_character": 0, "end_paragraph_id": 34, "end_character": 1184, "text": "In the aftermath of the Civil War and Reconstruction, new battles took place over the construction of memory and the meaning of historical events. The earliest historians to study Reconstruction and the Radical Republican participation in it were members of the Dunning School, led by William Archibald Dunning and John W. Burgess. The Dunning School, based at Columbia University in the early 20th century, saw the Radicals as motivated by an irrational hatred of the Confederacy and a lust for power at the expense of national reconciliation. According to Dunning School historians, the Radical Republicans reversed the gains Abraham Lincoln and Andrew Johnson had made in reintegrating the South, established corrupt shadow governments made up of Northern carpetbaggers and Southern scalawags in the former Confederate states, and to increase their power, foisted political rights on the newly-freed slaves that they were allegedly unprepared for or incapable of utilizing. 
For the Dunning School, the Radical Republicans made Reconstruction a dark age that only ended when Southern whites rose up and reestablished a \"home rule\" free of Northern, Republican, and black influence.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1405473", "title": "Lost Cause of the Confederacy", "section": "Section::::History.:19th century.:Reunification of North and South.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 658, "text": "In exploring the literature of reconciliation, historian William Tynes Cowa wrote, \"The cult of the Lost Cause was part of a larger cultural project: the reconciliation of North and South after the Civil War.\" He says that a typical image in postwar fiction was a materialistic, rich Yankee man marrying an impoverished spiritual Southern bride as a symbol of happy national reunion. Examining films and visual art, Gallagher identifies the theme of \"white people North and South [who] extol the \"American\" virtues both sides manifested during the war, to exalt the restored nation that emerged from the conflict, and to mute the role of African Americans\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2048508", "title": "Harold Hyman", "section": "Section::::Evaluations.\n", "start_paragraph_id": 38, "start_character": 0, "end_paragraph_id": 38, "end_character": 329, "text": "Bodenhamer (2012) says, \"The best guide to the constitutional changes brought by the Civil War and Reconstruction is Harold Hyman, \"A More Perfect Union: The Impact of the Civil War and Reconstruction on the Constitution\" (1973). Mayer (2001) says Hyman, \"wrote the definitive work on loyalty tests throughout American history.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "55040", "title": "Reconstruction era", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 428, "text": "Three visions of Civil War memory appeared during Reconstruction: the reconciliationist vision, which was rooted in coping with the death and devastation the war had brought; the white supremacist vision, which included segregation and the preservation of the traditional cultural standards of the South; and the emancipationist vision, which sought full freedom, citizenship, and Constitutional equality for African Americans.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30761079", "title": "American Civil War Centennial", "section": "Section::::Centennial Commissions.:Legacy.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 676, "text": "One major legacy of the Civil War Centennial was the creation of an infrastructure of Civil War reenactment. At least two major Civil War battlefields, Pea Ridge National Military Park in Arkansas and Wilson's Creek National Battlefield in Missouri, were added to the roster of parklands administered by the National Park Service during the Centennial years. Civil War-related State parks, such as Perryville Battlefield State Historic Site in Kentucky, also trace their heritage back to the Centennial years. 
In addition, much of the current interpretive infrastructure of other major American Civil War battlefields dates back to planning decisions made in the early 1960s.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "408840", "title": "Origins of the American Civil War", "section": "Section::::Onset of the Civil War and the question of compromise.:Revisionists.\n", "start_paragraph_id": 249, "start_character": 0, "end_paragraph_id": 249, "end_character": 945, "text": "Revisionism challenged the view that fundamental and irreconcilable sectional differences made the outbreak of war inevitable. It scorned a previous generation's easy identification of the Northern cause with abolition, but it continued a tradition of hostility to the Reconstruction measures that followed the war. The Civil War became a needless conflict brought on by a blundering generation that exaggerated sectional differences between North and South. Revisionists revived the reputation of the Democratic party as great nationalists before the war and as dependable loyalists during it. Revisionism gave Lincoln's Presidency a tragic beginning at Fort Sumter, a rancorous political setting of bitter factional conflicts between radicals and moderates within Lincoln's own party, and an even more tragic ending. The benevolent Lincoln died at the moment when benevolence was most needed to blunt radical designs for revenge on the South.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8436554", "title": "A Nation Torn", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 304, "text": "A Nation Torn, by Delia Ray, is a child-oriented history of how the American Civil War began. It is in the history series \"A Young Reader's History of the Civil War\" and was first published in 1990 \"A Nation Torn\" describes the events from 1861 to the first battle of the Civil War at Charleston Harbor.\n", "bleu_score": null, "meta": null } ] } ]
null
s21tq
Is there an effect the moon has on the atmosphere similar to the effect it has on the ocean by creating the tides?
[ { "answer": "These comments are a bit misleading. The atmospheric tide caused by the sun is overwhelmingly due to thermal tides, much unlike the gravitational tides caused by the moon and to a lesser extent, the sun. The moon's (and the sun's) gravity has little to do with atmospheric tides.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "30718", "title": "Tide", "section": "Section::::Physics.:Amplitude and cycle time.\n", "start_paragraph_id": 73, "start_character": 0, "end_paragraph_id": 73, "end_character": 968, "text": "The theoretical amplitude of oceanic tides caused by the Moon is about at the highest point, which corresponds to the amplitude that would be reached if the ocean possessed a uniform depth, there were no landmasses, and the Earth were rotating in step with the Moon's orbit. The Sun similarly causes tides, of which the theoretical amplitude is about (46% of that of the Moon) with a cycle time of 12 hours. At spring tide the two effects add to each other to a theoretical level of , while at neap tide the theoretical level is reduced to . Since the orbits of the Earth about the Sun, and the Moon about the Earth, are elliptical, tidal amplitudes change somewhat as a result of the varying Earth–Sun and Earth–Moon distances. This causes a variation in the tidal force and theoretical amplitude of about ±18% for the Moon and ±5% for the Sun. If both the Sun and Moon were at their closest positions and aligned at new moon, the theoretical amplitude would reach .\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10151726", "title": "Atmospheric tide", "section": "Section::::Lunar atmospheric tides.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 325, "text": "Atmospheric tides are also produced through the gravitational effects of the Moon. \"Lunar (gravitational) tides\" are much weaker than \"solar (thermal) tides\" and are generated by the motion of the Earth's oceans (caused by the Moon) and to a lesser extent the effect of the Moon's gravitational attraction on the atmosphere.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31156672", "title": "Supermoon", "section": "Section::::Effects on Earth.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 367, "text": "Scientists have confirmed that the combined effect of the Sun and Moon on the Earth's oceans, the tide, is when the Moon is either new or full. and that during lunar perigee, the tidal force is somewhat stronger, resulting in perigean spring tides. However, even at its most powerful, this force is still relatively weak, causing tidal differences of inches at most.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "801420", "title": "Atmospheric physics", "section": "Section::::Atmospheric tide.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 424, "text": "i) Atmospheric tides are primarily excited by the Sun's heating of the atmosphere whereas ocean tides are primarily excited by the Moon's gravitational field. 
This means that most atmospheric tides have periods of oscillation related to the 24-hour length of the solar day whereas ocean tides have longer periods of oscillation related to the lunar day (time between successive lunar transits) of about 24 hours 51 minutes.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10151726", "title": "Atmospheric tide", "section": "Section::::General characteristics.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 500, "text": "BULLET::::1. Atmospheric tides are primarily excited by the Sun's heating of the atmosphere whereas ocean tides are excited by the Moon's gravitational pull and to a lesser extent by the Sun's gravity. This means that most atmospheric tides have periods of oscillation related to the 24-hour length of the solar day whereas ocean tides have periods of oscillation related both to the solar day as well as to the longer lunar day (time between successive lunar transits) of about 24 hours 51 minutes.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19331", "title": "Moon", "section": "Section::::Earth-Moon system.:Tidal effects.\n", "start_paragraph_id": 82, "start_character": 0, "end_paragraph_id": 82, "end_character": 982, "text": "In a like manner, the lunar surface experiences tides of around amplitude over 27 days, with two components: a fixed one due to Earth, because they are in synchronous rotation, and a varying component from the Sun. The Earth-induced component arises from libration, a result of the Moon's orbital eccentricity (if the Moon's orbit were perfectly circular, there would only be solar tides). Libration also changes the angle from which the Moon is seen, allowing a total of about 59% of its surface to be seen from Earth over time. The cumulative effects of stress built up by these tidal forces produces moonquakes. Moonquakes are much less common and weaker than are earthquakes, although moonquakes can last for up to an hour – significantly longer than terrestrial quakes – because of the absence of water to damp out the seismic vibrations. The existence of moonquakes was an unexpected discovery from seismometers placed on the Moon by Apollo astronauts from 1969 through 1972.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5672031", "title": "Orbit of the Moon", "section": "Section::::Tidal evolution.\n", "start_paragraph_id": 32, "start_character": 0, "end_paragraph_id": 32, "end_character": 791, "text": "The gravitational attraction that the Moon exerts on Earth is the cause of tides in the sea; the Sun has a smaller tidal influence. If Earth had a global ocean of uniform depth, the Moon would act to deform both the solid Earth (by a small amount) and the ocean in the shape of an ellipsoid with the high points roughly beneath the Moon and on the opposite side of Earth. However, because of the presence of the continents, Earth's much faster rotation and varying ocean depths, this simplistic visualisation does not happen. Although the tidal flow period is generally synchronized to the Moon's orbit around Earth, its relative timing varies greatly. In some places on Earth, there is only one high tide per day, whereas others such as Southampton have four, though this is somewhat rare.\n", "bleu_score": null, "meta": null } ] } ]
null
219hmv
if hypothetically every u.s. state successfully seceded into a sovereign state, would the federal government reign over the last remaining state? how would this work?
[ { "answer": "This scenario would never happen, the federal government wouldn't allow it.\n\nBut I'll play it out. No, the federal government would dissolve. If the states decided to break apart the federal government would fall apart because it exists as a unified front for the states.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1724761", "title": "List of U.S. state partition proposals", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 331, "text": "The clause has served this same function since then whenever a proposal to partition an existing state or states has come before Congress. New breakaway states are permitted to join the Union, but only with the proper consents. Of the 37 states admitted to the Union by Congress, three were set off from an already existing state:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18618239", "title": "U.S. state", "section": "", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 503, "text": "The Constitution grants to Congress the authority to admit new states into the Union. Since the establishment of the United States in 1776, the number of states has expanded from the original 13 to 50. Alaska and Hawaii are the most recent states admitted, both in 1959. The Constitution is silent on the question of whether states have the power to secede (withdraw) from the Union. Shortly after the Civil War, the U.S. Supreme Court, in \"Texas v. White\", held that a state cannot unilaterally do so.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18557327", "title": "Tidelands", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 220, "text": "For other states that were formerly independent, such as the Thirteen Colonies, there was no explicit retention of state sovereignty and the federal government had long asserted its own sovereignty over their tidelands.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17832214", "title": "Secession in the United States", "section": "Section::::Partition of a state.\n", "start_paragraph_id": 71, "start_character": 0, "end_paragraph_id": 71, "end_character": 248, "text": "Of the new states admitted to the Union by Congress, three were set off from already existing states, while one was established upon land claimed by an existing state after existing for several years as a \"de facto\" independent republic. They are:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24930751", "title": "State governments of Mexico", "section": "Section::::State governments.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 550, "text": "All states are both independent & autonomous in their internal administration. The federal government cannot intervene in any particular state's affairs unless there is a full cessation of government powers and through previous study, recommendation and/or approval of the Congress of the Union. The states cannot make an alliance with any foreign power or with any other state. They cannot unilaterally declare war against a foreign nation unless their territory is invaded & cannot wait for the Congress of the Union to issue a declaration of war.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1724761", "title": "List of U.S. 
state partition proposals", "section": "", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 564, "text": "The following is a list of substantive proposals (both successful and unsuccessful) put forward since the nation's founding to partition or set-off a portion of an existing U.S. state (or states) in order that the region might either join another state or create a new state. Proposals to secede from the Union are not included, nor are proposals to create states from either organized incorporated or unorganized U.S. territories. Land cessions made by several individual states to the Federal government during the 18th and 19th centuries are not listed either.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22498677", "title": "Texas secession movements", "section": "Section::::Secession in the United States.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 661, "text": "Discussion about the right of U.S. states to secede from the union began shortly after the American Revolutionary War. The United States Constitution does not address secession. Each of the colonies originated by separate grants from the British Crown and had evolved relatively distinct political and cultural institutions prior to national independence. Craig S. Lerner has written that the Constitution's Supremacy Clause weighs against a right of secession, but that the Republican Guarantee Clause can be interpreted to indicate that the federal government has no right to keep a state from leaving as long as it maintains a republican form of government.\n", "bleu_score": null, "meta": null } ] } ]
null
3p3gb4
How did Greece not see the impending threat of the Roman Empire and do something to stop it?
[ { "answer": "First off, there was no such thing as \"Greece\" whilst the Roman Republic was on the rise. \n\nThere was a multitude of indepenent Greek city states, such as Athens, Corinth or Sparta. Sometimes, these would align in leagues like the Achaean or Aetolian leagues, to defend themselves against some kind of threat or threaten some kind of common enemy. (Often, another league.) There was Macedon, the homeland of Philip III and Alexander the Great, which had more or less asserted itself as Greece's overlord when those two ruled, but now was just one of many Hellenistic successor states to the great empire of Alexander. It frequently tried to re-assert its dominance over various cities or parts of the Greek peninsula, leading to many a war. There was also Epirus, an up-and-coming kingdom on the eastern coast of Adriatic, which though it had never been a part of Alexander's domain, firmly stood in the influence of Greek and Hellenistic thought and culture. Finally, we must not forget the Greek diaspora: many a Greek colony had been founded even before Alexander started conquering, on the coasts of the Black Sea, Sicily, southern Italy and even as far afield as France (Massilia) and Spain. (Emporion.) \n\nSouthern Italy is particularly relevant to your question: it was so thoroughly dominated by Greek colonies that it was called *Megálē Hellás* or *Magna Graecia*: Great(er) Greece. It was this part of the Greek world that first came into contact with the Romans.\n\nNow, as should be clear, none of these parts and entities I named were united. They were forever fighting one another, or the other Successor Kingdoms, or the Carthaginians, or various local peoples. Alliances and leagues were made and broken with dizzying frequency, and nobody was able to assert himself as the undisputed master of the Greek world as Alexander had been. (And even then, he had not ruled over all Greeks.)\n\nSo who was going to stop the Romans? Not the southern Greek city states, who were far more worried about their neighbours or the Antigonid Macedonians. Not those of Sicily, for they were preoccupied with Carthage. Not the Macedonians themselves, for they were struggling to hold on to their dominions and at times to even survive against Epirus to their west. Let alone the Ptolemies or Seleucids all the way in the east. \n\nFinally, the Greeks in *Magna Graecia* did try to stop the Romans, shortly after they had defeated the Samnites. This was very early on in the history of the Roman empire, you should note. There hardly was an empire to speak of. Rome was a regional Italic power, very successful in conquering and integrating their neighbours, but definitely not something that looked like it could threaten the Greek world as a whole.\n\nDespite this, the city of Tarās(Tarentum) asked and got the help of Pyrrhus of Epirus, one of the best generals of his day and commander of one of the finest professional armies in the Hellenistic tradition. All that was \"more technologically advanced and had a better knowledge of war\" was represented in Pyrrhus' army.\n\nAnd he lost the war.\n\nPyrrhus beat the Romans twice, but it's those battles that give us the term \"Phyrric Victory.\" He also got entangled with the Carthaginians in Sicily, and the third battle against the Romans at Beneventum, though still not tactically decisive, proved too much. He withdrew to Greece. Rome was left to rule Italy. 
Despite all his apparent advantages, Greek military sophistication had proven unable to decisively defeat Rome.\n\nThere would be further attempts at intervention. During the Second Punic War, when Hannibal was marching up and down Italy, Carthage and Macedon tried to ally against Rome, so as to negate the potential threat. But other Greek cities of the Aetolian League saw Macedon as the greater threat to their survival, and allied with Rome. They occupied the Macedonians until the Romans could win their war against Carthage. Later, when Rome could focus its full attention east, two further Macedonian wars followed and saw the (at this time highly experienced, thanks to decades of war against the Carthaginians) Roman armies again victorious against their Greek counterparts.\n\nWhilst this was going on, the Seleucid empire under Antiochus the Great tried to intervene also. He sent armies west to Greece, but the Romans again proved too strong, and his empire was facing many other troubles in the east. Also, the Romans still had other Greeks on their side in all these conflicts. There's a reason that \"divide et impera\" is a Latin phrase.\n\nBy the time Pompey the Great was marching up and down the Eastern Mediterranean, making and unmaking kings and incorporating provinces left and right, it was far too late for anyone to do anything about Rome's rise. \n\nIn summary: \"Greece\" was never united, and never able to truly make a common cause against Rome. Nonetheless, many of the most powerful Greek kings and kingdoms of their day did try to smack down Rome before it became too powerful. All lost.\n\nAs Philip Sabin puts it in his *Lost Battles*:\n > Lendon has highlighted very well how Greek authors such as Polybius tended to ascribe military success to better tactics, techniques or equipment, whereas Roman writers like Caesar or Livy were more concerned with the superior bravery and *virtus* of the victorious soldiers. The evidence of both Greek and Roman battles lends much more support to the Roman interpretation.\n\n(The Lendon he refers to here is: Lendon, E. *Soldiers & Ghosts: A History of Battle in Classical Antiquity.* New Haven: Yale University Press (1999) )\n\nIn other words: being technologically advanced and having a better knowledge of war doesn't seem to really have helped the Greeks. Roman diplomacy, the quality of Roman armies (at a particular high during this time, thanks to the Punic wars, according to Goldsworthy) and the reserves of manpower Rome had due to their militia system and many allies proved far more decisive.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "15440356", "title": "History of the Cyclades", "section": "Section::::Roman and Byzantine Empires.:The Cyclades in Rome’s orbit.\n", "start_paragraph_id": 80, "start_character": 0, "end_paragraph_id": 80, "end_character": 882, "text": "The reasons for Rome's intervention in Greece from the 3rd century BC are many: a call for help from the cities of Illyria; the fight against Philip V of Macedon, whose naval policy troubled Rome and who had been an ally of Hannibal’s; or assistance to Macedon’s adversaries in the region (Pergamon, Rhodes and the Achaean League). After his victory at Battle of Cynoscephalae, Flaminius proclaimed the “liberation” of Greece. Neither were commercial interests absent as a factor in Rome's involvement. Delos became a free port under the Roman Republic's protection in 167 BC. 
Thus Italian merchants grew wealthier, more or less at the expense of Rhodes and Corinth (finally destroyed the same year as Carthage in 146 BC). The political system of the Greek city, on the continent and on the islands, was maintained, indeed developed, during the first centuries of the Roman Empire.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "500639", "title": "Greco-Turkish War (1919–1922)", "section": "Section::::Background.:The Greek community in Anatolia.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 506, "text": "Through its failure, the Greek invasion may have instead exacerbated the atrocities that it was supposed to prevent. Arnold J. Toynbee blamed the policies pursued by Great Britain and Greece, and the decisions of the Paris Peace conference as factors leading to the atrocities committed by both sides during and after the war: \"The Greeks of 'Pontus' and the Turks of the Greek occupied territories, were in some degree victims of Mr. Venizelos's and Mr. Lloyd George's original miscalculations at Paris.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "455379", "title": "Hellenistic period", "section": "Section::::Rise of Rome.\n", "start_paragraph_id": 113, "start_character": 0, "end_paragraph_id": 113, "end_character": 812, "text": "Widespread Roman interference in the Greek world was probably inevitable given the general manner of the ascendancy of the Roman Republic. This Roman-Greek interaction began as a consequence of the Greek city-states located along the coast of southern Italy. Rome had come to dominate the Italian peninsula, and desired the submission of the Greek cities to its rule. Although they initially resisted, allying themselves with Pyrrhus of Epirus, and defeating the Romans at several battles, the Greek cities were unable to maintain this position and were absorbed by the Roman republic. Shortly afterwards, Rome became involved in Sicily, fighting against the Carthaginians in the First Punic War. The end result was the complete conquest of Sicily, including its previously powerful Greek cities, by the Romans.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "838382", "title": "Macedonian Wars", "section": "Section::::Second Macedonian war (200 to 196 BC).\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 1079, "text": "This represented the most significant threat to the century-old political order that had kept the Greek world in relative stability, and in particular represented a major threat to the smaller Greek kingdoms which had remained independent. As Macedonia and the Seleucid Empire were the problem, and Egypt the cause of the problem, the only place to turn was Rome. This represented a major change, as the Greeks had recently shown little more than contempt towards Rome, and Rome little more than apathy towards Greece. Ambassadors from Pergamon and Rhodes brought evidence before the Roman Senate that Philip V of Macedon and Antiochus III of the Seleucid Empire had signed the non-aggression pact. Although the exact nature of this treaty is unclear, and the exact Roman reason for getting involved despite decades of apathy towards Greece (the relevant passages on this from our primary source, Polybius, have been lost), the Greek delegation was successful. 
Initially, Rome didn't intend to fight a war against Macedon, but rather to intervene on their behalf diplomatically.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10255756", "title": "Serpent Column", "section": "Section::::History.:The significance of the Battle of Plataea.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 498, "text": "The Greek victories at Plataea and contemporaneous naval battle at Mycale had the result that never again would the Persian Empire launch an attack on mainland Greece. Afterwards, Persia pursued its policies by diplomacy, bribery and cajolement, playing one city state against another. But, by these victories, and through the Delian League, Athens was able to consolidate its power in the flowering of Athenian democracy in 5th century Athens, under the leadership of Pericles, son of Xanthippus.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11936957", "title": "Classical Greece", "section": "Section::::4th century BC.:The rise of Athens.:Athenian hegemony halted.\n", "start_paragraph_id": 84, "start_character": 0, "end_paragraph_id": 84, "end_character": 635, "text": "The main reasons for the eventual failure were structural. This alliance was only valued out of fear of Sparta, which evaporated after Sparta's fall in 371 BC, losing the alliance its sole 'raison d'etre'. The Athenians no longer had the means to fulfill their ambitions, and found it difficult merely to finance their own navy, let alone that of an entire alliance, and so could not properly defend their allies. Thus, the tyrant of Pherae was able to destroy a number of cities with impunity. From 360 BC, Athens lost its reputation for invincibility and a number of allies (such as Byzantium and Naxos in 364 BC) decided to secede.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "367431", "title": "Sicilian Expedition", "section": "Section::::Athenian reaction.\n", "start_paragraph_id": 60, "start_character": 0, "end_paragraph_id": 60, "end_character": 742, "text": "The defeat caused a great shift in policy for many other states, as well. States which had until now been neutral joined with Sparta, assuming that Athens's defeat was imminent. Many of Athens' allies in the Delian League also revolted, and although the city immediately began to rebuild its fleet, there was little they could do about the revolts for the time being. The expedition and consequent disaster left Athens reeling. Some 10,000 hoplites had perished and, though this was a blow, the real concern was the loss of the huge fleet dispatched to Sicily. Triremes could be replaced, but the 30,000 experienced oarsmen lost in Sicily were irreplaceable and Athens had to rely on ill-trained slaves to form the backbone of her new fleet.\n", "bleu_score": null, "meta": null } ] } ]
null
5dm7c5
has obama done a good job as potus?
[ { "answer": "Reddit is a terrible place to find this out. \nAs for my opinion (which is all you'll get here, with biased sources), Obama is the 5th-best president in our country's history. All the good he has done must be accompanied with the reminder that he has faced the most blatant and extreme obstructionism in the history of the office.", "provenance": null }, { "answer": "I do not think that he has done a good job. I do commend him for going all out for what he believes, what else can you ask for in a president. We just have completely different ideas on how to run the country. A few things I don't like..\n\nAllowing ISIS to rise by pulling out of Iraq too early, we never should have been there in the first place but once we were, leaving a broken state up for grabs was a bad idea.\n\nObamacare. A mess of a bill that was passed in a rush and now insurance premiums are going through the roof. Everyone predicted that would happen and I really believe that it was just an attempt to move us towards universal healthcare by spiking the prices. I would remove state lines on health insurance.\n\nInsane debt hike: Obama has spent more than every other president combined. This is partially due to Obamacare as well, another reason why I hate it. We are in insane debt and we cannot keep going at this rate.\n\nMeddling in lower level court cases. Obama makes statements about court cases, solely involving \"white on black\" crime, and seems to always be on the side of the victim before a court case has even been settled. I think he has dramatically increased racial tensions. If he really needs to meddle in the judicial branch at least wait until the case has been finished.\n\nZero illegal immigration reform and has even tried to promote amnesty. Even Bernie is against weak borders", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "34733384", "title": "William A. Darity Jr.", "section": "Section::::Research.:Notable studies.:Unemployment rates.\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 473, "text": "He has also been referenced by the media as an expert concerning Barack Obama's political strategies as related to unemployment ratings and economic policy. Darity criticized Obama's October 2011 economic strategy as \"bribing the private sector to put people back to work. I was hoping that there would be some effort to create a plan, there would be some effort to have a plan for direct job creation, where the federal government would directly put people back to work.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25325879", "title": "Economic policy of the Barack Obama administration", "section": "Section::::Views before election.:Views related to income inequality.:Labor rights.\n", "start_paragraph_id": 165, "start_character": 0, "end_paragraph_id": 165, "end_character": 583, "text": "Obama supports the Employee Free Choice Act, a bill that adds penalties for labor violations and which would circumvent the secret ballot requirement to organize a union. Obama promises to sign the EFCA into law. He is also a co-sponsor of the \"Re-empowerment of Skilled and Professional Employees and Construction Tradesworkers\" or RESPECT act (S. 
969) that aims to overturn the National Labor Relations Board's \"Kentucky River\" decision that redefined many employees lacking the authority to hire, fire, or discipline, as \"supervisors\" who are not protected by federal labor laws.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23150158", "title": "Bob McDonald (businessman)", "section": "Section::::Career.:U.S. Secretary of Veterans Affairs.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 765, "text": "Obama cited McDonald's business background with P&G and experience revitalizing organizations in his decision. Obama said, \"[W]hat especially makes Bob the right choice to lead the VA now is his three decades of experience in building and managing one of the world's most recognized companies, Procter & Gamble. The VA is not a business, but it is one of our largest departments... And the workload at the VHA alone is enormous...\" Obama added, \"Bob is an expert at making organizations better. In his career he's taken over struggling business units... putting an end to what doesn't work; adopting the best practices that do; restructuring, introducing innovations, making operations more efficient and effective. In short, he's about delivering better results.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "410131", "title": "Nick Rahall", "section": "Section::::Political issues.:Endorsement of Barack Obama.\n", "start_paragraph_id": 50, "start_character": 0, "end_paragraph_id": 50, "end_character": 449, "text": "In 2008, Rahall endorsed Barack Obama, saying Obama understands the needs and aspirations of West Virginians. He was also Chair of the Arab Americans for Obama group. Explaining his position, Rahall cited Senator Byrd, who said \"I work for no President. I work with Presidents.\" In an interview with Keith Olbermann, Rahall said that Obama had the courage and conviction to win the presidency, and that the then-senator was a true agent for change.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "29494442", "title": "Adam Hanft", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 227, "text": "Obama for America cited Hanft as an early “tech leader” who endorsed Obama in his 2008 run for office. He went on to be an unpaid digital adviser to the campaign. Hanft also advised the FCC on its “Future of Media” initiative.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "469017", "title": "Edwin Edwards", "section": "Section::::2014 Congressional election.\n", "start_paragraph_id": 107, "start_character": 0, "end_paragraph_id": 107, "end_character": 780, "text": "An April 2014 article in \"Politico\" that discussed his chances noted that he was \"still sharp as a razor\" and \"in remarkably vigorous health\". He pronounced himself \"disappointed\" with President Obama for \"sitting\" on the Keystone Pipeline and has listed his campaign priorities as \"Building support for a high-speed rail system between Baton Rouge and New Orleans and emphasizing the good aspects of Obamacare, while doing what I can to change or amend the provisions that I think are onerous.\" He said that he would have voted against the Affordable Care Act, but criticized Governor Jindal for not accepting the Medicaid expansion. 
If elected, he hoped to serve on the Committee on Transportation and Infrastructure, to spur the construction of elevated roadways in the state.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25325879", "title": "Economic policy of the Barack Obama administration", "section": "Section::::Views before election.:Lobbying and campaign finance reform.\n", "start_paragraph_id": 196, "start_character": 0, "end_paragraph_id": 196, "end_character": 223, "text": "According to his website, Obama would create an online database of lobbying reports, campaign finance filings and ethics records, and would create an independent watchdog agency to oversee congressional ethical violations.\n", "bleu_score": null, "meta": null } ] } ]
null
lyx2m
Why did the Great War last so long? Why did people in the West keep fighting?
[ { "answer": "Why has any democracy carried on with a war that was going only so-so? Look at the Vietnam war, I think it took 6+ years of fighting before a simple majority of the country thought the US should get out.\n\nTo answer your question in a word, pride. Humans don't do a good job admitting that we might be wrong. Especially when our brother or father or husband might have died at the front. Do we really want to accept that they died just to restore \"status quo ante bellum?\" \n\nPlus I don't think there was ever a point in which there wouldn't be a \"losing\" side in the negotiations. Even if there had been successful negotiations in 1916 or 1917, Germany still would have likely been forced to give up some colonies. Now as a German citizen who's lost loved ones, that's a tough proposition to swallow.\n\nPeople hate compromising and surrendering, no matter how rational it might be. War is like gambling, much to gain, much to be lost, and if you're not careful you'll bankrupt yourself and everyone around you trying to win.", "provenance": null }, { "answer": "I'll try to tackle the first question. Most children, in the UK for example, were raised with Victorian and Edwardian values and propaganda, reflecting the times ideology which in turn influenced the officers and privates to think and feel that what they were doing was their duty. To them, they were protecting their family, their homeland and even the allies of their homeland. This was a whole other era than today. Most of the young men had grown up with exaggerated war stories from the past, about kings and generals, redcoats in Africa etc. and that influenced their later way of thinking. Many did this for honor, as they felt it. But there were other factors. Some did it because they didn't want to be condemned by their social circles or to let down their fellow comrades. The actual motivation for the individual soldier on both side is different from individual to individual, of course, but many based it on personal feelings. Not hatred towards the enemy, but for a personal honor and to conquer fear. \n\nThat didn't mean that there weren't any protests or mutinies because of the high amount of casualties. While the British army did reach highs of disappointment and demoralization, it never broke out in a full-scale mutiny. It kept itself disciplined during the duration of the war. The French, of course, had its famous mutiny of 1917, after continuous horrendous casualties during the Nivelle offensives. \n\n > Even as early as August 1914, if you were reading newspapers in England or its Dominions or France, you knew exactly how bloody the war was with stories of ditches being crossed with bridges of corpses.\n\nThat doesn't mean that everybody read them. The letters soldiers wrote home from the front where usually quite reductive from realistic depictions of life at the front. The soldiers felt a constant isolation and that anyone who wasn't there could never understand what they have to endure. Some felt that nobody even cared in the first place. In England, for example, there was even a remarkable display of ignorance by the relatives to dead soldiers about the actual conditions of the war. They even sent requests to commanders about their dead relatives personal watches, money and other personal items. French and German civilians didn't know that well either. French soldiers could get letters from the home front by concerned old women asking things like \"You don't fight if it's raining, right?\". 
The people who imagined the trenches didn't see the images that we see today. They often imagined organized, cozy and even lovely trenches. There's a great illustration from *L'illustration* showing the contemporary image of the trenches from the eyes of somebody who hadn't been there. \n\nSources: *Eye-Deep in Hell: Trench Warfare in World War I* by John Ellis.", "provenance": null }, { "answer": "I can speak to the first question from a Canadian point of view. There are a few reasons I can think of, some obvious, some not so much.\n\nThe first and most obvious reason is patriotism. Now this will require some deconstruction. There are two kinds of patriotism I'm talking about. The \"We must stand up for the British Empire\" kind of patriotism, and the sort of patriotism that is about gaining acceptance into the club. From Southern Ontario and east there was a very strong British connection, both through settlement and culture. There was an idea among some that Canada had to prove it was almost \"more British than British\"; some believed it had to pro-actively support GB to maintain standing in the empire. In Western Canada there was a different patriotism, a kind that was more about being \"let into the club\" so to speak. A great many new immigrants enlisted in the armed forces because it was a traditionally patriotic thing to do, and they wanted to join the club, jump in with both feet and all that.\n\nAnother reason is that the horrors of trench life were not as we think of them today. Yes, it was awful, and yes, many people were dying, but based on letters home and diaries we see that people didn't see war as a constant bloodbath, at least from a personal point of view. For example, I've read letters home from a soldier who enlisted in 1914, and then spent most of 1915 and into 1916 training, going to NCO school, being an instructor at machine gun school, etc. The actual time spent at the front accounted for little of the time spent in the forces. I believe soldiers only spent 2 weeks at the front at a time, and then went through a rotation as reserves, then training, etc. A lot of people have the idea that young soldiers walked off the boat and directly into the meat grinder, and that simply wasn't the case.\n\nThere was also the idea that Allied soldiers actually were fighting to rid the world of German militarism. People act as though the First World War snuck up on an unsuspecting Europe, and that is simply not the case. I was reading newspapers from the early 1900s last week and almost every one had some sort of reference to German militarism, or an interview with a German General where he scoffs at the idea of a war, or something of that nature. After 10-15 years of having it constantly drilled into your head, you're going to believe that German militarism is a real threat and can only be put down by rifle and bayonet.\n\nThere are the age-old ideas of young men wanting to go off to war for adventure, looking sharp in a uniform, because their friends were doing it, etc. This cannot be discounted. No one would sign their attestation papers if it listed how they would die above where you sign. \"Hmm, let's see, have my legs/nuts blown clean off then drown in the mud at Passchendaele? Sounds good, sign me up.\" That wouldn't happen, people wouldn't stand for it. No one who enlists thinks they're going to die a horrible death. Those thoughts come later. 
Not just that, but many young men in Canadian cities were choosing between their mundane jobs at the bakery or train yard and a job with the Canadian Army. They'd see the soldiers looking tough as hell marching through the streets with the bagpipes going and they wanted to join up. They were full of piss and vinegar, risks be damned. There is a reason why the modern Canadian Forces hit their recruiting target for the first time in 30 years during the campaign in Kandahar. Like it or not, war/risk/\"adventure\" draws people in.\n\nNow, this does not get into the idea of the conscription crisis in Canada, which is a whole other can of worms. To answer your question though, the fact is, in 1917 many people did refuse to keep fighting, and they voted with their feet by not enlisting. Long story short, some Canadians wanted to fight, others thought it was silly. It caused a large rift in the country politically, a rift that in many ways we're still dealing with.\n\n", "provenance": null }, { "answer": "So people have suggested some answers to the first question, and that's pretty straightforward, because pretty much no one anticipated the war that they actually got. So it's not too tough to explain why people initially thought the war would be fast, even if bloody.\n\nThe other issue is, as you pointed out in your question, why they KEPT fighting. I think it's because of a discourse of nationalism in a situation of total war. From at least the late 19th century and maybe even earlier, you see a kind of zero-sum nationalism at work. Each nation tells itself that other nations are out to get them and that the conflicts between nations are essentially fights to the death. For example, advertisements in Britain that said \"Every German job is a British job lost; every German article manufactured is a British article lost.\" The French were terrified of the way that Germany's population was growing faster than their own, and they saw this as leading to their eventual demise. The Germans were terrified of Russia's potential to overwhelm them. In all these cases, nations saw other nations as mortal enemies. Why this is so is a bigger question which we can get into, but let's leave it aside for now.\n\nThe result of this kind of discourse of cataclysmic conflict is that once the war starts, it is sold to people as a total conflict in which victory means ultimate national triumph and a secure and prosperous future, while defeat means utter annihilation. Consider the \"Rape of Belgium.\" British political leaders ostensibly went to war with Germany when Germany violated Belgian neutrality. Britain had promised to protect Belgian sovereignty back in the 1830s (I think), and so was obligated to respond. However, the German violation of Belgium was sold to the British public as a \"rape\" in which German soldiers were utterly barbarous Huns that raped the women and bayonetted babies. This got the British public behind the war--but at the cost of creating a situation that political leaders could not easily undo. If the enemy really is the barbarous Hun out to destroy civilization and everything good, then how can you negotiate with them? How can you make peace with the enemy who is the ultimate Other? In this way, I would argue, the political leadership of many countries talked themselves into a corner. They created the conditions for a war that could only end in total victory or total defeat--but they lacked the weapon systems to achieve this. 
Thus, we have a long, bloody stalemate.\n\nSome books you can check out:\n\nHow it started:\nJames Joll, *The Origins of the First World War* (1984)\nDavid Herrmann, *The Arming of Europe* (1997)\nNiall Ferguson, *The Pity of War* (2000) (I think Ferguson has become a bit of a hack, but this earlier work is quite useful)\n\nTwo great books on Verdun, Ousby in particular would be really useful for answering your question:\nAlistair Horne, *The Price of Glory* (1962)\nIan Ousby, *The Road to Verdun* (2002)\n\nSome really important recent scholarship on gender and the war, in particular on things like the \"Rape of Belgium\":\nSusan Grayzel, *Women's Identities at War: Gender, Motherhood, and Politics in Britain and France during the First World War* (1999)\nNicoletta Gullace, *The Blood of Our Sons* (2000?)", "provenance": null }, { "answer": "All the answers so far have been fantastic. I'm going to look at it from another perspective.\n\nWar is a difficult thing to end. With the build-up of troops, materials and manpower, the industries are booming, money is flowing, and thousands of jobs are created. Imagine the number of guns that needed to be crafted for the newly raised regiments. That's work for iron miners, lumberjacks, steel mills, lumber yards, and a foundry/craftsman. The bonuses of a war are hard to ignore. \n\nSo with this locomotive of the economy building up a head of steam, the politicians, who likely owned the lumber and steel mills, would have a personal reason to keep the war going.\n\nI especially agree with the idea of patriotism, \"We must stand up for the British Empire\". Coming from South Africa, we didn't have to fight. We could have sat back and watched as Europe tore itself apart. But as of 1910, we had been given near independence by being named a dominion. This dominionship had a clause in it that we should come to the aid of the Commonwealth whenever called. That is why we fought.\n\nThis is probably why the Canadians fought, and the Australians and New Zealanders (ANZACs in Gallipoli), and other dominion countries.\n\nOn another tangent, it was a commonplace situation for entire streets or villages of young men to sign up together, in Pals battalions as they were called. This was mostly in the beginning, when the call for help had only just been sent out, and pride in the country and the desire to fight was strong. This created a huge draw on the population's desire to fight. Their sons, husbands, boyfriends, neighbours, teachers, were all going off to war, to fight the Hun! Be proud.\n\nAfter this period of time, this pride turned more towards revenge. Thousands of young men, killed in the trenches, in attacks over no-man's land, for some spit of French or Belgian soil. Anger and hatred rose up in the populations. They wanted their side to not only win against, but utterly defeat, their enemy.\n\nIt is my opinion of the people at the time that they wanted the war to end, and the only way for it to end in a sufficient manner would be to destroy the Germans.\n\n(As a side note, I had a great uncle who fought at the Battle of Delville Wood, and was shot through the arm. When he got home, he was given a doubled salary and a hero's welcome. Just an example of the thoughts of the people at the time.)\n\nSources:\n\n_URL_2_\n_URL_3_\n_URL_0_\n_URL_1_", "provenance": null }, { "answer": "Strange that nobody draws the obvious parallel to the American Civil War: early 19th century military romanticism, mixed with an unfamiliarity with industrial warfare leading to mass casualties, and post-war popular literature that sobered the population up to reality. It was a major reason for American neutrality during the war, and a major reason behind the hesitant response of the western democracies to Germany in the '30s.", "provenance": null }
In England, for example, there was even a remarkable display of ignorance by the relatives to dead soldiers about the actual conditions of the war. They even sent requests to commanders about their dead relatives personal watches, money and other personal items. French and German civilians didn't know that well either. French soldiers could get letters from the home front by concerned old women asking things like \"You don't fight if it's raining, right?\". The people who imagined the trenches didn't see the images that we see today. They often imagined organized, cozy and even lovely trenches. There's a great illustration from *L'illustration* showing the contemporary image of the trenches from the eyes of somebody who hadn't been there. \n\nSources: *Eye-Deep in Hell: Trench Warfare in World War I* by John Ellis.", "provenance": null }, { "answer": "I can speak to the first question from a Canadian point of view. There are a few reasons I can think of, some obvious, some not so much.\n\nThe first and most obvious reason is patriotism. Now this will require some deconstruction. There is two kinds of patriotism I'm talking about. The \"We must stand up for the British Empire\" kind of patriotism, and the sort of patriotism that is about gaining acceptance into the club. From Southern Ontario and east there was a very strong British connection, both through settlement and culture. There was almost an idea among some that Canada had to prove it was almost \"more British than British\", some believed it had to pro-actively support GB to maintain standing in the empire. In Western Canada there was a different patriotism, a kind that was more about being \"let into the club\" so to speak. A great many new immigrants enlisted in the armed forces because it was a traditionally patriotic thing to do, and they wanted to join the club, jump in with both feet and all that.\n\nAnother reason is that the horrors of trench life were not as we think of them today. Yes, it was awful, and yes, many people were dying, but based on letters home and diaries we see that people didn't see war as a constant bloodbath, at least from a personal point of view. For example, I've read letters home from a soldier who enlisted in 1914, and then spent most of 1915 and into 1916 training, going to NCO school, being an instructor at machine gun school, etc. The actual time spent in the front accounted for little of the time spent in the forces. I believe soldiers only spent 2 weeks at the front at a time, and then went through a rotation as reserves, then training, etc. A lot of people have the idea that young soldiers walked off the boat and directly into the meat grinder, and that simply wasn't the case.\n\nThere was also the idea that Allied soldiers actually were fighting to rid the world of German militarism. People act as though the First World War snuck up on an unsuspecting Europe and that is simply not the case. I was reading newspapers from the early 1900s last week and almost every one had some sort of reference to German militarism, or an interview with a German General where he scoffs at the idea of a war, or something of that nature. After 10-15 years of having it constantly drilled into your head, you're going to believe that German militarism is a real threat and can only be put down by rifle and bayonet.\n\nThere are the age old ideas of young men wanting to go off to war for adventure, looking sharp in a uniform, because their friends were doing it, etc. This cannot be discounted. 
no one would sign their attestation papers if it listed how they would die above where you sign. \"Hmm, let's see, have my legs/nuts blown clean off then drown in the mud at Passchendaele? Sounds good, sign me up.\" That wouldn't happen, people wouldn't stand for it. No who enlists thinks they're going to die a horrible death. Those thoughts come later. Not just that, but many young men in Canadian cities were choosing between their mundane jobs at the bakery and train yard, and with the Canadian Army. They'd see the soldiers looking tough as hell marching through the streets with the bagpipes going and they wanted to join up. They were full of piss and vinegar, risks be damned. There is a reason why the modern Canadian Forces hit their recruiting target for the first time in 30 years during the campaign in Kandahar. Like it or not, war/risk/\"adventure\" draws people in.\n\nNow, this does not get into the idea of the conscription crisis in Canada, which is a whole other can of worms. To answer your question though, the fact is, in 1917 many people did refuse to keep fighting, and they voted with their feet by not enlisting. Long story short, some Canadians wanted to fight, others though it was silly. It caused a large rift in the country politically, a rift that in many ways we're still dealing with.\n\n", "provenance": null }, { "answer": "So people have suggested some answers to the first question, and that's pretty straightforward, because pretty much no one anticipated the war that they actually got. So it's not too tough to explain why people initially thought the war would be fast, even if bloody.\n\nThe other issue is as you pointed out in your question, why they KEPT fighting. I think it's because of a discourse of nationalism in a situation of total war. From at least the late 19th century and maybe even earlier, you see a kind of zero-sum nationalism at work. Each nation tells itself that other nations are out to get them and that the conflicts between nations are essentially fights to the death. For example, advertisements in Britain that say \"Every German job is a British job lost; every German article manufactured is a British article lost.\" The French were terrified of the way that Germany's population was growing faster than their own, and they saw this as leading to their eventual demise. The Germans were terrified of Russia's potential to overwhelm them. In all these cases, nations saw other nations as mortal enemies. Why this is so is a bigger question which we can get into, but let's leave it aside for now.\n\nThe result of this kind of discourse of cataclysmic conflict is that once the war starts, it is sold to people as a total conflict in which victory means ultimate national triumph and a secure and prosperous future, while defeat means utter annihilation. Consider the \"Rape of Belgium.\" British political leaders ostensibly went to war with Germany when Germany violated Belgian neutrality. Britain had promised to protect Belgian sovereignty back in the 1830s (I think), and so was obligated to respond. However, the German violation of Belgium was sold to the British public as a \"rape\" in which German soldiers were utterly barbarous Huns that raped the women and bayonetted babies. This got the British public behind the war--but at the cost of creating a situation that political leaders could not easily undo. If the enemy really is the barbarous Hun out to destroy civilization and everything good, then how can you negotiate with them? 
How can you make peace with the enemy who is the ultimate Other? In this way, I would argue, the political leadership of many countries talked themselves into a corner. They created the conditions for a war that could only end in total victory or total defeat--but they lacked the weapon systems to achieve this. Thus, we have a long, bloody stalemate.\n\nSome books you can check out:\n\nHow it started:\nJames Joll, *The Origins of the First World War* (1984)\nDavid Herrmann, *The Arming of Europe* (1997)\nNiall Ferguson, *The Pity of War* (2000) (I think Ferguson has become a bit of a hack, but this earlier work is quite useful)\n\nTwo great books on Verdun; Ousby in particular would be really useful for answering your question:\nAlistair Horne, *The Price of Glory* (1962)\nIan Ousby, *The Road to Verdun* (2002)\n\nSome really important recent scholarship on gender and the war, in particular on things like the \"Rape of Belgium\":\nSusan Grayzel, *Women's Identities at War: Gender, Motherhood, and Politics in Britain and France during the First World War* (1999)\nNicoletta Gullace, *The Blood of Our Sons* (2000?)", "provenance": null }, { "answer": "All the answers so far have been fantastic. I'm going to look at it from another perspective.\n\nWar is a difficult thing to end. With the build-up of troops, materials, manpower, the industries are booming, money is flowing, thousands of jobs are created. Imagine the number of guns needed to be crafted for the newly raised regiments? That's work for iron miners, lumberjacks, steel mills, lumber yards, and a foundry/craftsman. The bonuses of a war are hard to ignore. \n\nSo with this locomotive of the economy building up a head of steam, the politicians, who likely own the lumber and steel mills, would have a personal reason to keep the war going.\n\nI especially agree with the idea of patriotism: \"We must stand up for the British Empire\". Coming from South Africa, we didn't have to fight. We could have sat back, and watched as Europe tore itself apart. But as of 1910, we had been given near independence, by being named a dominion. This dominionship had a clause in it, that we should come to the aid of the Commonwealth whenever called. That is why we fought.\n\nThis is probably why the Canadians fought, and the Australians and New Zealanders (ANZACs in Gallipoli), and other dominion countries.\n\nOn another tangent, it was a commonplace situation where entire streets or villages of young men would sign up together, in Pals battalions as they were called. This was mostly in the beginning, when the call for help had only just been sent out, and pride in the country and the desire to fight was strong. This created a huge draw on the population's desire to fight. Their sons, husbands, boyfriends, neighbours, teachers, were all going off to war, to fight the Hun! Be proud.\n\nAfter this period of time, this pride turned more towards revenge. Thousands of young men, killed in the trenches, in attacks over no-man's land, for some spit of French or Belgian soil. Anger and hatred rose up in the populations. They wanted their side to not only win against, but defeat utterly, their enemy.\n\nIt is my opinion that the people at the time wanted the war to end, and the only way for it to end in a sufficient manner would be to destroy the Germans.\n\n(As a side note, I had a great uncle who fought at the Battle of Delville Wood, and was shot through the arm. When he got home, he was given a doubled salary and a hero's welcome. 
Just an example of the thoughts of the people at the time)\n\nSources:\n\n_URL_2_\n_URL_3_\n_URL_0_\n_URL_1_", "provenance": null }, { "answer": "Strange that nobody draws the obvious parallel to the American Civil War. Early 19th century military romanticism, mixed with an unfamiliarity with industrial warfare leading to mass casualties, and post-war popular literature that sobers the population up to reality. It was a major reason for American neutrality during the war, and a major reason behind the hesitant response of the western democracies to Germany in the '30s.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "282291", "title": "Aftermath of World War I", "section": "Section::::Social trauma.\n", "start_paragraph_id": 113, "start_character": 0, "end_paragraph_id": 113, "end_character": 455, "text": "The experiences of the war in the west are commonly assumed to have led to a sort of collective national trauma afterward for all of the participating countries. The optimism of 1900 was entirely gone and those who fought became what is known as \"the Lost Generation\" because they never fully recovered from their suffering. For the next few years, much of Europe mourned privately and publicly; memorials were erected in thousands of villages and towns.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "28959158", "title": "Jay Fox", "section": "Section::::Biography.:Home, Washington.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 224, "text": "The Great War came and brought with it a pressure to conform and unite against an outside enemy. Couple this with quarrelling among the colonists on the wellbeing of their community brought an end to Home in the late 1910s.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2595713", "title": "The Incredible Tide", "section": "Section::::Backstory.:The Peace Union.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 303, "text": "Although the reasons behind the war are not clear, a global superpower known as the Peace Union waged war against the West, finally using a doomsday weapon that destroyed their enemies, but destroyed themselves as well. The Peace Union does not exist anymore, but its leftover artifacts are everywhere.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21791624", "title": "Native Americans in the American Civil War", "section": "Section::::Overview.:Problems in the Midwest and West.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 989, "text": "The west was mostly peaceful during the war due to the lack of U.S. occupation troops. The federal government was still taking control of native land, and there were continuous fights. From January to May 1863, there were almost continuous fights in the New Mexico territory, as part of a concerted effort by the Federal government to contain and control the Apache; in the midst of all this, President Abraham Lincoln met with representatives from several major tribes, and informed them he felt concerned they would never attain the prosperity of the white race unless they turned to farming as a way of life. The fighting led to the Sand Creek Massacre caused by Colonel J. M. Chivington, of the Colorado Territorial Militia, whom settlers asked to retaliate against natives. 
With 900 volunteer militiamen, Chivington attacked a peaceful village of some 500 or more Arapaho and Cheyenne natives, killing women and children as well as warriors. There were few survivors of the massacre.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "863", "title": "American Civil War", "section": "Section::::Union victory and aftermath.:Results.\n", "start_paragraph_id": 199, "start_character": 0, "end_paragraph_id": 199, "end_character": 449, "text": "The causes of the war, the reasons for its outcome, and even the name of the war itself are subjects of lingering contention today. The North and West grew rich while the once-rich South became poor for a century. The national political power of the slaveowners and rich Southerners ended. Historians are less sure about the results of the postwar Reconstruction, especially regarding the second-class citizenship of the Freedmen and their poverty.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "161323", "title": "Military history of the United States", "section": "Section::::American Civil War (1861–1865).\n", "start_paragraph_id": 35, "start_character": 0, "end_paragraph_id": 35, "end_character": 632, "text": "As the fighting between the two capitals stalled, the North found more success in campaigns elsewhere, using rivers, railroads, and the seas to help move and supply their larger forces, putting a stranglehold on the South—the Anaconda Plan. The war spilled across the continent, and even to the high seas. After four years of appallingly bloody conflict, with more casualties than all other U.S. wars combined, the North's larger population and industrial might slowly ground the South down. The resources and economy of the South were ruined, while the North's factories and economy prospered filling government wartime contracts.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23565406", "title": "Indian Peace Commission", "section": "Section::::Establishment.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 884, "text": "During the 1860s, national preoccupation with the ongoing American Civil War and the withdrawal of troops to fight it, had weakened the US government's control of the west. This, in addition to corruption throughout the Bureau of Indian Affairs, and the continued migration of the railroad and white settlers westward, led to a general restlessness and eventually armed conflict. Following the Sand Creek Massacre on November 29, 1864, where troops under John Chivington killed and mutilated more than a hundred friendly Cheyenne and Arapaho, half or more women and children, hostilities intensified. Congress dispatched an investigation into the conditions of Native American peoples under Senator James R. Doolittle. After two years of inquiry, Doolittle's 500-page report condemned the actions of Chivington and blamed tribal hostilities on the \"aggressions of lawless white men\".\n", "bleu_score": null, "meta": null } ] } ]
null
1npm6r
why is there still a slight delay on my tv even when i have game mode turned on? compared to crt tv's seeming to be instant?
[ { "answer": "Why was this downvoted?", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "7306", "title": "ColecoVision", "section": "Section::::Hardware.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 468, "text": "All first-party cartridges and most third-party software titles feature a 12-second pause before presenting the game select screen. This delay results from an intentional loop in the console's BIOS to enable on-screen display of the ColecoVision brand. Companies like Parker Brothers, Activision, and Micro Fun bypassed this loop, which necessitated embedding portions of the BIOS outside the delay loop, further reducing storage available to actual game programming.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11700586", "title": "TV Powww", "section": "Section::::Gameplay.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 569, "text": "One of the pitfalls of the gameplay was that, due to broadcasting technicalities, there was significant lag in the transmission of a television signal. The player would experience this lag when playing at home, which likely made playing the game somewhat more difficult. (For similar reasons, such a game would be impossible in digital television without the use of a second video chat feed for the player, due to the time it takes to process and compress the video stream; most stations also mandate a seven-second delay to prevent obscenities from reaching the air.)\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21631471", "title": "Display lag", "section": "Section::::Display lag versus response time.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 575, "text": "LCD screens with a high response-time value often do not give satisfactory experience when viewing fast-moving images (they often leave streaks or blur; called ghosting). But an LCD screen with both high response time and significant display lag is unsuitable for playing fast-paced computer games or performing fast high-accuracy operations on the screen, due to the mouse cursor lagging behind. Manufacturers only state the response time of their displays and do not inform customers of the display lag value, which might vary depending on various screen options selected.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1674561", "title": "Head-up display (video gaming)", "section": "Section::::HUDs and burn-in.\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 478, "text": "Prolonged display (that stays on the screen in a fixed position, remaining static) of HUD elements on certain CRT-based screens may cause permanent damage in the form of burning into the inner coating of the television sets, which is impossible to repair. Players who pause their games for long hours without turning off their television or putting it on standby risk harming their TV sets. 
Plasma TV screens are also at risk, although the effects are usually not as permanent.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21631471", "title": "Display lag", "section": "Section::::Game mode.\n", "start_paragraph_id": 25, "start_character": 0, "end_paragraph_id": 25, "end_character": 685, "text": "Many televisions, scalers and other consumer-display devices now offer what is often called a \"game mode\" in which the extensive preprocessing responsible for additional lag is specifically sacrificed to decrease, but not eliminate, latency. While typically intended for videogame consoles, this feature is also useful for other interactive applications. Similar options have long been available on home audio hardware and modems for the same reason. Connection through VGA cable or component should eliminate perceivable input lag on many TVs even if they already have a game mode. Advanced post-processing is non existent on analog connection and the signal traverses without delay.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "515770", "title": "Refresh rate", "section": "Section::::Cathode ray tubes.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 683, "text": "CRT refresh rates have historically been an important factor in electronic game programming. Traditionally, one of the principles of video/computer game programming is to avoid altering the computer's video buffer except during the vertical retrace. This is necessary to prevent flickering graphics (caused by altering the picture in mid-frame) or screen tearing (caused by altering the graphics faster than the electron beam can render the picture). Some video game consoles such as the Famicom/Nintendo Entertainment System did not allow any graphics changes except during the retrace (the period when the electron guns shut off and return to the upper left corner of the screen).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21631471", "title": "Display lag", "section": "Section::::Effects of display lag on users.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 315, "text": "If the game's controller produces additional feedback (rumble, the Wii Remote's speaker, etc.), then the display lag will cause this feedback to not accurately match up with the visuals on-screen, possibly causing extra disorientation (e.g. feeling the controller rumble a split second before a crash into a wall).\n", "bleu_score": null, "meta": null } ] } ]
null
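A note on the record above: the "Display lag" excerpts explain that a TV's game mode only reduces, rather than eliminates, the processing delay, while a CRT paints the incoming signal essentially as it arrives. The sketch below is only illustrative — every per-stage number is an assumed ballpark figure, not a measurement of any real TV — but it shows why anywhere from a few milliseconds to a few tens of milliseconds can survive even with game mode enabled.

```python
# Illustrative latency budget for a flat panel in game mode vs. a CRT.
# All per-stage numbers are assumed ballpark values, not measurements.

FRAME_MS = 1000 / 60  # one frame at 60 Hz lasts about 16.7 ms

lcd_game_mode_stages = {
    "frame buffering for scaling": 0.5 * FRAME_MS,  # part of a frame is still buffered
    "residual post-processing":    2.0,             # game mode trims this, rarely to zero
    "pixel response time":         5.0,             # liquid crystals take time to switch
}

crt_stages = {
    "electron-beam scan-out": 0.0,  # the beam draws the incoming signal directly
}

def total_latency_ms(stages):
    """Sum the per-stage delays in milliseconds."""
    return sum(stages.values())

print(f"LCD (game mode): ~{total_latency_ms(lcd_game_mode_stages):.1f} ms")
print(f"CRT:             ~{total_latency_ms(crt_stages):.1f} ms")
```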
1c5t0i
Fictional historical people
[ { "answer": "Prester John. 500 years of European stories about a Christian monarch in India, or Central Asia, or Ethiopia. The Wikipedia entry is pretty good: _URL_0_\n\n(I've been fascinated by that guy since I first heard of him, but can't claim any expertise about him.)", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "55621327", "title": "The Wandering Man (Akunin)", "section": "Section::::Historical basis.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 412, "text": "The prototypes of real historical figures operate in the story. The main figure, Wanderer - is one of the most mysterious personalities during the last years of the Russian Empire – Grigori Rasputin. The faithful fan of the Wanderer \"Fanny Zarubina\" (who is derisively called \"The Cow\") is Anna Vyrubova, the lady-in-waiting, the closest and most devoted friend of the last Russian Empress Alexandra Feodorovna.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1030000", "title": "Chapbook", "section": "Section::::Content.\n", "start_paragraph_id": 35, "start_character": 0, "end_paragraph_id": 35, "end_character": 985, "text": "Historical stories set in a mythical and fantastical past were popular. The selection is interesting. Charles I, and Oliver Cromwell do not appear as historical figures in the Pepys collection, and Elizabeth I only once. The Wars of the Roses and the English Civil War do not appear at all. Henry VIII and Henry II appear in disguise, standing up for the right with cobblers and millers and then inviting them to Court and rewarding them. There was a pattern of high born heroes overcoming reduced circumstances by valour, such as St George, Guy of Warwick, Robin Hood (who at this stage has yet to give to the poor what he was stealing from the rich), and heroes of low birth who achieve status through force of arms, such as Clim of Clough, and William of Cloudesley. Clergy often appear as figures of fun, and stupid countrymen were also popular (e.g., \"The Wise Men of Gotham\"). Other works were aimed at regional and rural audience (e.g., \"The Country Mouse and the Town Mouse\").\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "60603911", "title": "Culture of England", "section": "Section::::Folklore.\n", "start_paragraph_id": 66, "start_character": 0, "end_paragraph_id": 66, "end_character": 1100, "text": "Some folk figures are based on semi-historical or historical people whose stories have been passed down the centuries; Lady Godiva for instance was said to have ridden naked on horseback through Coventry, Hereward the Wake was a heroic English figure resisting the Norman invasion, Herne the Hunter is an equestrian ghost associated with Windsor Forest and Great Park (whose tale bears the common European folkloric motif of the Wild Hunt) and Mother Shipton is the archetypal witch. The chivalrous bandit, such as Dick Turpin, is a recurring character. There are various still surviving national and regional folk activities, such as Morris dancing, Maypole dancing, Rapper sword in the North East, Long Sword dance in Yorkshire, Mummers Plays, bottle-kicking in Leicestershire, and cheese-rolling at Cooper's Hill. There is no official national costume, but a few costumes are well established, such as the Pearly Kings and Queens associated with cockneys, the Royal Guard, the Morris costume and Beefeaters. 
The utopian vision of a traditional England is sometimes referred to as \"Merry England\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26607946", "title": "George S. Stuart", "section": "Section::::\"Historical Figures\".\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 723, "text": "He has created more than 400 \"Historical Figures\" in groups to complement his performances. The groups include, American Revolutionary and Civil Wars (Samuel Adams to Abraham Lincoln), English Monarchies (Henry VII to Edward VII), Bourbon Dynasty (Henry IV to Charles X), Czarist Russia (Ivan IV to Joseph Stalin) Manchu Dynasty (Nurhaci to Mao Tse-Tung, Renaissance & Reformation (various rulers and clergy), Conquest of the Americas (Columbus to John Fremont), Really Awful People (history's infamous), Warriors of the Ages, Germanic Myth & Legend (northern pantheon) and his earliest works. Stuart's favorite figurine is that of Lincoln, which he describes as \"...the most enjoyable thing I ever did. Truly compelling.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "879065", "title": "Angle of Repose", "section": "Section::::Historical characters.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 286, "text": "The novel is thickly populated with real historical personages. A \"Who's Who\" of American geologists and other western individuals of the late 19th century make their appearance, including John Wesley Powell, Clarence King, Samuel Franklin Emmons, Henry Janin, and Rossiter W. Raymond.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "722616", "title": "Historical fantasy", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 672, "text": "Historical fantasy is a sub-genre of fantasy that encompasses the Middle Ages, as well as sometimes other eras, and simply represents fictitious versions of historic events. This sub-genre is common among role-playing games and high fantasy literature. It can include various elements of medieval European culture and society, including a monarchical government, feudal social structure, medieval warfare, and mythical entities common in European folklore. Works of this genre may have plots set in biblical times or classical antiquity. They often have plots based very loosely on mythology or legends of Greek-Roman history, or the surrounding cultures of the same era.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "102475", "title": "Historical mystery", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 1172, "text": "The historical mystery or historical whodunit is a subgenre of two literary genres, historical fiction and mystery fiction. These works are set in a time period considered historical from the author's perspective, and the central plot involves the solving of a mystery or crime (usually murder). Though works combining these genres have existed since at least the early 20th century, many credit Ellis Peters's \"Cadfael Chronicles\" (1977–1994) for popularizing what would become known as the historical mystery. The increasing popularity and prevalence of this type of fiction in subsequent decades has spawned a distinct subgenre recognized by the publishing industry and libraries. \"Publishers Weekly\" noted in 2010 of the genre, \"The past decade has seen an explosion in both quantity and quality. 
Never before have so many historical mysteries been published, by so many gifted writers, and covering such a wide range of times and places.\" Editor Keith Kahla concurs, \"From a small group of writers with a very specialized audience, the historical mystery has become a critically acclaimed, award-winning genre with a toehold on the \"New York Times\" bestseller list.\"\n", "bleu_score": null, "meta": null } ] } ]
null
27n2kd
what would happen if you pulled your keys out of the ignition while driving?
[ { "answer": "Cars have interlocks on the ignition to prevent this. If the interlock is broken and you actually CAN yank the key out while driving, probably nothing.", "provenance": null }, { "answer": "I had a car with a broken interlock system. I could start the car then put the key in my pocket. I can't remember if I could turn the car off without putting the key back in.", "provenance": null }, { "answer": "Unsure about the engine but you wouldn't be able to steer as the steering wheel will lock.\n\n", "provenance": null }, { "answer": "70s and 80s GM cars had a weak retention pin, I remember it was always funny to reach over and rip the keys out of your friend's ignition while they were driving to see their reaction. Stupid, but funny.", "provenance": null }, { "answer": "Some asshole actually pulled my keys out while I was driving once. My steering wheel and brakes locked up and I had to pull my e-brake to get the car to stop. It was terrifying and I was pissed.", "provenance": null }, { "answer": "The last car I had, a '01 Mitsubishi Eclipse (automatic), I could take the keys out with absolutely no effect on the car. I always thought it was a little strange.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2109407", "title": "Car key", "section": "Section::::Types.:Transponder.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 697, "text": "When the key is turned in the ignition cylinder, the car's computer transmits a radio signal to the transponder circuit. The circuit has no battery; it is energized by the radio signal itself. The circuit typically has a computer chip that is programmed to respond by sending a coded signal back to the car's computer. If the circuit does not respond or if the code is incorrect, the engine will not start. Many cars immobilize if the wrong key is used by intruders. Chip Keys successfully protect cars from theft in two ways: forcing the ignition cylinder won't start the car, and the keys are difficult to duplicate. This is why chip keys are popular in modern cars and help decrease car theft.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6747895", "title": "1941 Ford", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 410, "text": "The \"ignition key\" for these cars was actually used to operate a bolt lock which, on one end, unlocked the steering column (a feature destined to return, mandated, decades later), and on the other end unblocked the ignition switch, allowing it to be operated. Starting the car was then accomplished by pressing a pushbutton on the dashboard, another feature destined to return with the advent of \"smart keys\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "56610108", "title": "Disappearance of Gary Mathias", "section": "Section::::Investigation.:Discovery of the car.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 273, "text": "The keys were not present, suggesting at first that the car had been abandoned because it might not have been functioning properly, with the intention of returning later with help. But when police hot-wired the car, it started immediately. 
The gas tank was a quarter full.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1297380", "title": "Police car", "section": "Section::::Equipment.:Police-specific equipment.\n", "start_paragraph_id": 73, "start_character": 0, "end_paragraph_id": 73, "end_character": 422, "text": "BULLET::::- Runlock: This allows the vehicle's engine to be left running without the keys being in the ignition. This enables adequate power, without battery drain, to be supplied to the vehicle's equipment at the scene of an incident. The vehicle can only be driven off after re-inserting the keys. If the keys are not re-inserted, the engine will switch off if the handbrake is disengaged or the footbrake is activated.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "533426", "title": "Car alarm", "section": "Section::::Arming and disarming of car alarms.:OEM alarms.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 209, "text": "Some vehicles will disarm if the ignition is turned on; often when the vehicle is equipped with a key-based immobilizer and an alarm, the combination of the valid key code and the ignition disarms the system.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12530893", "title": "Police vehicles in the United Kingdom", "section": "Section::::Runlock system.\n", "start_paragraph_id": 48, "start_character": 0, "end_paragraph_id": 48, "end_character": 873, "text": "Most cars and police motorcycles are fitted with a 'Runlock' system. This allows the vehicle's engine to be left running without the keys being in the ignition. This enables adequate power, without battery drain, to be supplied to the vehicle's equipment at the scene of an incident. The vehicle can only be driven after re-inserting the keys. If the keys are not re-inserted, the engine will switch off if the handbrake is disengaged or the footbrake is activated; or the sidestand is flipped up in the case of a motorcycle. Runlock is also commonly used when an officer is required to quickly decamp from a vehicle or to keep the vehicle Mobile data terminal running. By enabling Runlock, the car's engine can be left running without the risk of someone stealing the vehicle: if the vehicle is driven normally, it will shut down, unless the Runlock system is turned off.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1584835", "title": "Smart key", "section": "Section::::How it works.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 270, "text": "Vehicles with a smart-key system can disengage the immobilizer and activate the ignition without inserting a key in the ignition, provided the driver has the key inside the car. On most vehicles, this is done by pressing a starter button or twisting an ignition switch.\n", "bleu_score": null, "meta": null } ] } ]
null
a2o54u
how do machines generate cold air
[ { "answer": "Machines cannot make cold. What they do is remove some of the heat from the air and recirculate that air. They do this over and over again. ", "provenance": null }, { "answer": "You cannot generate cold air. But you can move heat from one radiator to another one. A cooling unit like the one you have in a refrigerator, air conditioner or car has two radiators, one hot and one cold. When you compress a gas it becomes hotter, and when you release the pressure of the gas it becomes colder. You can use this to move heat around. If you first compress a gas it becomes hot, and you can then pipe it through the radiator on the hot side to cool it down. Once the gas reaches ambient temperature you can release the pressure through a pressure-release valve; this will cool down the gas, which you can now send through the radiator on the cold side. In this way you have moved heat from the cold side to the hot side.", "provenance": null }, { "answer": "Air conditioning and refrigerators use a property of gas known as Gay-Lussac's Law. This basically states that as you increase pressure on a gas, the temperature increases. Conversely, as you decrease the pressure of a gas, it absorbs energy and becomes cooler.\n\nSo AC and refrigerators work by compressing a coolant. This compression generates heat that is dispersed by a fan that runs outside air over the compressor. The coolant is compressed so that it actually becomes a liquid. The liquid coolant is then pumped inside the area that is to be cooled. The coolant is then allowed to become gas again in an evaporator coil. As the liquid becomes gas, it absorbs heat (becoming cooler). Another fan passes air over the coil and the heat absorbed by the change in state of the coolant cools down the air. It's this cool air that is circulated in the fridge or inside of the house. From the evaporator coil, the coolant is then pumped back outside to the compressor to continue the cycle.\n\nFrom a thermodynamic standpoint, heat is generated by compressing the coolant outside and the same amount of heat is absorbed from the air being blown over the evaporator coil inside. This results in cooler air inside the dwelling or refrigerator.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "784830", "title": "Chiller", "section": "Section::::Use in industry.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 595, "text": "Air-cooled and evaporative cooled chillers are intended for outdoor installation and operation. Air-cooled machines are directly cooled by ambient air being mechanically circulated directly through the machine's condenser coil to expel heat to the atmosphere. Evaporative cooled machines are similar, except they implement a mist of water over the condenser coil to aid in condenser cooling, making the machine more efficient than a traditional air-cooled machine. No remote cooling tower is typically required with either of these types of packaged air-cooled or evaporatively cooled chillers.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "16348839", "title": "Ultra-low volume", "section": "Section::::Ultra low volume fogging machines.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 292, "text": "Ultra low volume (ULV) fogging machines are cold fogging machines that use large volumes of air at low pressures to transform liquid into droplets that are dispersed into the atmosphere. 
This type of fogging machine can produce extremely small droplets with diameters ranging from 1–150 µm. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "172111", "title": "Washing machine", "section": "Section::::Wash cycles.:Washing.\n", "start_paragraph_id": 131, "start_character": 0, "end_paragraph_id": 131, "end_character": 234, "text": "Many machines are cold-fill, connected to cold water only, which they heat to operating temperature. Where water can be heated more cheaply or with less carbon dioxide emission than by electricity, cold-fill operation is inefficient.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5837003", "title": "Solar air conditioning", "section": "Section::::Solar open-loop air conditioning using desiccants.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 507, "text": "Air can be passed over common, solid desiccants (like silica gel or zeolite) or liquid desiccants (like lithium bromide/chloride) to draw moisture from the air to allow an efficient mechanical or evaporative cooling cycle. The desiccant is then regenerated by using solar thermal energy to dehumidfy, in a cost-effective, low-energy-consumption, continuously repeating cycle. A photovoltaic system can power a low-energy air circulation fan, and a motor to slowly rotate a large disk filled with desiccant.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "55771608", "title": "Spring-powered aircraft", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 430, "text": "A Spring-powered aircraft is an aircraft powered by a mechanical device capable of storing energy. The most popular version of a spring-powered aircraft are model toy planes driven by a rubber cord, which is twisted by turning the propellor. When leaving the hand from the propellor, it starts rotating and drives the plane. Most planes of this type have to be thrown by the operator, but some can start directly from the ground.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "321546", "title": "Espresso machine", "section": "Section::::Drive mechanism.:Air-pump-driven.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 704, "text": "In recent years air-pump-driven espresso machines have emerged. These machines use compressed air to force the hot water through the coffee grounds. The hot water is typically added from a kettle or a thermo flask. The compressed air comes from either a hand-pump, N or cartridges or an electric compressor. One of the advantages of the air-pump-driven machines is that they are much smaller and lighter than electric machines. They are often handheld and portable. The first air-pump-driven machine was the AeroPress, which was invented by Alan Adler, an American inventor, and introduced in 2005. Handpresso Wild, invented by Nielsen Innovation SARL, a French innovation house, was introduced in 2007.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6889398", "title": "Air brake (road vehicle)", "section": "Section::::Design and function.:Supply system.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 935, "text": "The air compressor is driven by the engine either by crankshaft pulley via a belt or directly from the engine timing gears. It is lubricated and cooled by the engine lubrication and cooling systems. 
Compressed air is first routed through a cooling coil and into an air dryer which removes moisture and oil impurities and also may include a pressure regulator, safety valve and smaller purge reservoir. As an alternative to the air dryer, the supply system can be equipped with an anti-freeze device and oil separator. The compressed air is then stored in a supply reservoir (also called a wet tank) from which it is then distributed via a four-way protection valve into the primary reservoir (rear brake reservoir) and the secondary reservoir (front/trailer brake reservoir), a parking brake reservoir, and an auxiliary air supply distribution point. The system also includes various check, pressure limiting, drain and safety valves.\n", "bleu_score": null, "meta": null } ] } ]
null
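To put rough numbers on the "compress it and it gets hot, let it expand and it gets cold" explanation in the record above, here is a minimal toy calculation. It assumes an ideal gas compressed and expanded adiabatically with an assumed pressure ratio of 8; real refrigerators also rely on the refrigerant condensing and evaporating, so this only illustrates the temperature swing, not a complete vapor-compression cycle.

```python
# Toy model: temperature change of an ideal gas under adiabatic compression/expansion.
# Real refrigerants also change phase (liquid <-> gas); this only shows the basic swing.

GAMMA = 1.4  # heat-capacity ratio, assumed value for a diatomic gas such as air

def adiabatic_temperature(t_initial_k, pressure_ratio, gamma=GAMMA):
    """Temperature (kelvin) after an adiabatic pressure change.

    pressure_ratio is final pressure divided by initial pressure.
    Uses T2 = T1 * (P2/P1)^((gamma - 1) / gamma).
    """
    return t_initial_k * pressure_ratio ** ((gamma - 1.0) / gamma)

ROOM_K = 295.0  # roughly 22 degrees C

# Compress room-temperature gas 8x: it ends up well above ambient,
# so it can dump heat to the outside air through the "hot" radiator.
hot_side = adiabatic_temperature(ROOM_K, 8.0)

# Let gas that has already cooled back to ambient expand 8x:
# it drops well below ambient, ready to soak up heat at the "cold" radiator.
cold_side = adiabatic_temperature(ROOM_K, 1.0 / 8.0)

print(f"after compression: {hot_side - 273.15:6.1f} C")
print(f"after expansion:   {cold_side - 273.15:6.1f} C")
```

The point of the sketch is only the asymmetry it demonstrates: heat is shed on the hot side at high pressure, so the subsequent expansion starts from ambient temperature and ends below it.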
1761w3
Are there any creatures that use endothermic reactions for defense?
[ { "answer": "Lots of things are exothermic. Combustion, for instance. Very few biological reactions are strongly endothermic enough to be useful as a defense mechanism.\n\n\"Get back or I'll freeze you to death!\"", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "41938747", "title": "The Sixth Extinction: An Unnatural History", "section": "Section::::Summary of chapters.:Chapter 10: The New Pangaea.\n", "start_paragraph_id": 32, "start_character": 0, "end_paragraph_id": 32, "end_character": 970, "text": "Kolbert points out that there is an evolutionary arms race, in which each species must be equipped to defend against their potential predators, and need to be more fit than their competition. A species has no defense if it encounters a new fungus, virus, or bacterium. This can be extremely deadly, as it was in the case of American bats killed by the psycrophilic fungus \"Geomyces destructans\". Another example of this occurred in the 1800s. The American chestnut was the dominant deciduous tree in eastern American forests. Then, a fungus (\"Cryphonectria parasitica\") started to cause chestnut blight. It was nearly 100 percent lethal. This fungus was unintentionally imported to the US by humans. Kolbert then explains that global trade and travel are creating a virtual \"Pangaea\", in which species of all kinds are being redistributed beyond historical geographic barriers. This furthers the first chapter's idea that invasive species are a mechanism of extinction.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6627913", "title": "Automimicry", "section": "Section::::Mimicry of distasteful members of the same species.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 1833, "text": "The existence of automimicry in the form of non-toxic mimics of toxic members of the same species (analogous to Batesian mimicry) poses two challenges to evolutionary theory: how can automimicry be maintained, and how can it evolve? For the first question, as long as prey of the species are, on average, unprofitable for predators to attack, automimicry can persist. If this condition is not met, then the population of the species rapidly crashes. The second question is more difficult, and can also be rephrased as being about the mechanisms that keep warning signals honest. If signals were not honest, they would not be evolutionarily stable. If costs of using toxins for defence affects members of a species, then cheats might always have higher fitness than honest signallers defended by costly toxins. A variety of hypotheses have been put forth to explain signal honesty in aposematic species. First, toxins may not be costly. There is evidence that in some cases there is no cost, and that toxic compounds may actually be beneficial for purposes other than defence. If so, then automimics may simply be unlucky enough not to have gathered enough toxins from their environment. A second hypothesis for signal honesty is that there may be frequency-dependent advantages to automimicry. If predators switch between host plants that provide toxins and plants that do not, depending on the abundance of larvae on each type, then automimicry of toxic larvae by non-toxic larvae may be maintained in a balanced polymorphism. A third hypothesis is that automimics are more likely to die or to be injured by a predator's attack. 
If predators carefully sample their prey and spit out any that taste bad before doing significant damage (\"go-slow\" behaviour), then honest signallers would have an advantage over automimics that cheat.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "38652305", "title": "Predatory imminence continuum", "section": "Section::::Development of the predatory imminence continuum.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 3240, "text": "The development of the predatory imminence continuum began with the description of species-specific defence reactions. Species-specific defence reactions are innate responses demonstrated by an animal when they experience a threat. Since survival behaviours are so vital for an animal to acquire and demonstrate rapidly, it has been theorized that these defence reactions would not have time to be learned and therefore, must be innate. While these behaviours are species-specific, there are three general categories of defence reactions - fleeing, freezing, and threatening. Species-specific defence reactions are now recognized as being organized in a hierarchical system where different behaviours are exhibited, depending on the level of threat experienced. However, when this concept was first proposed, the dominant species-specific defence reaction in a certain context was thought to be controlled by operant conditioning. That is, if a species-specific defence reaction was unsuccessful in evading or controlling conflict, the hierarchical system would be rearranged because of the punishment, in the form of failure, experienced by an animal. It would then be unlikely for that species-specific defence reaction to be used in a similar situation again; instead, an alternative behaviour would be dominant. However, if the dominant behaviour was successful it would remain the recurring behaviour for that situation. After experimentation, this theory was met with much opposition, even by the person who proposed it. One point of opposition was found through the use of shock on rats and the species-specific defence reaction of freezing. This experiment found that while punishment did seem to affect freezing, it was not through response weakening but through the evoking of different levels of the behaviour. Other criticisms for this theory focused on the inability for species-specific defence reactions to effectively rearrange in this manner in natural situations. It has been argued that there would not be enough time for punishment, in the form of an animal being unsuccessful in its defence, to reorder the hierarchy of species-specific defence reactions. The rejection of the operant conditioning mechanism for the reorganization of species-specific defence reactions, led to the development of the predatory imminence continuum. The organization of defensive behaviours can be attributed to the level of threat an animal perceives itself to be in. This theory is one of adaptiveness, as the dominant defence reaction is the behaviour which is most effective in allowing the survival of the animal and the one which is most effective in preventing an increasing level of threat, also known as increasing imminence. The probability of being killed by a predator, known as predatory imminence, is what is responsible for the expressed defensive behaviour. The predatory imminence is dependent on many factors such as the distance from a predator, the potential for escape, and the likelihood of meeting a predator. 
Three general categories of defensive behaviours, based on increasing predatory imminence, have been identified. These are labelled as pre-encounter, post-encounter, and circa-strike defensive behaviours.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "58687", "title": "Aggression", "section": "Section::::Ethology.:Between species and groups.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 872, "text": "An animal defending against a predator may engage in either \"fight or flight\" or \"tend and befriend\" in response to predator attack or threat of attack, depending on its estimate of the predator's strength relative to its own. Alternative defenses include a range of antipredator adaptations, including alarm signals. An example of an alarm signal is nerol, a chemical which is found in the mandibular glands of \"Trigona fulviventris\" individuals. Release of nerol by T. fulviventris individuals in the nest has been shown to decrease the number of individuals leaving the nest by fifty percent, as well as increasing aggressive behaviors like biting. Alarm signals like nerol can also act as attraction signals; in T. fulviventris, individuals that have been captured by a predator may release nerol to attract nestmates, who will proceed to attack or bite the predator.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5042518", "title": "Tralomethrin", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 453, "text": "Tralomethrin has potent insecticidal properties; it kills by modifying the gating kinetics of the sodium channels in neurons, increasing the length of time the channel remains open after a stimulus, thereby depolarizing the neuron for a longer period of time. This leads to uncontrolled spasming, paralysis, and eventual death. Insects with certain mutations in their sodium channel gene may be resistant to tralomethrin and other similar insecticides.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "245868", "title": "Cypermethrin", "section": "Section::::Environmental effects.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 388, "text": "Cypermethrin is a broad-spectrum insecticide, which means it kills beneficial insects as well as the targeted insects. Fish are particularly susceptible to cypermethrin, but when used according directions, application around residential sites poses little risk to aquatic life. Resistance to cypermethrin has developed quickly in insects exposed frequently and can render it ineffective.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "52128194", "title": "Charles Moureu", "section": "Section::::Research.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 419, "text": "The autoxidation reactions that Moureu and Dufraisse described occur spontaneously in most organic products in the presence of oxygen from the air and certain catalysts. Autoxidation affects virtually all living organisms. Moureu identified catalysts that could trigger such reactions, and other compounds that could slow or inhibit such reactions. He called the inhibitors \"antioxygens\", now known as \"antioxidants\". \n", "bleu_score": null, "meta": null } ] } ]
null
1yin4z
Does the 'space' inside a black hole move faster than the speed of light?
[ { "answer": "In the case of light, I think it's easier to think of the gravity of a black hole as an well the light has to climb out of.\n\nOn Earth, when you throw a ball straight up into the air, it slows down until it stops (and then falls back to Earth). That process of slowing down is the ball exchanging its energy of motion (kinetic energy) for gravitational potential energy (climbing up the gravity well).\n\nLight actually does the same thing. Every photon of light has energy proportional to its frequency. As light moves up a gravity well, it gets *red-shifted*, which means its energy decreases. This is energy it has \"used\" to climb the gravity well.\n\nIn a black hole, if light is on a path to leave the black hole, then it's traveling up the gravity well. As it travels up, it gets red-shifted until all of its energy is used up just trying to get to the event horizon. So it will cease to exist before it can escape.", "provenance": null }, { "answer": "*EDIT: Multiple edits in many places. Apologies for that.*\n\nI wish people would stop thinking in terms of speed when it comes to black holes. It's a very confusing way to describe it. Seems simple at first, but it leads you into error later.\n\nA much more useful way to think of a BH is via topology. When inside the event horizon, no matter which way you're looking at, you're looking at the center. Spacetime itself is so twisted, knotted into itself, that all trajectories inside the event horizon, no matter how you draw them, eventually end in the center. There is no way up - worse, *there is no up*. There is only down, and down, and down.\n\n > does it mean that the space through which the light is traveling, actually moves faster than the speed of light?\n\nSpace is not something that can \"move\". Putting these two notions together makes no sense.\n\nYou're thinking as if you're swimming upriver, but you're overwhelmed by the speed of water. It's not like that. Space is not water.\n\n > I 'know' that nothing is faster than the speed of light - that is the maximum speed.\n\nIt's not as simple as that. Most people think of the speed of light as some sort of cosmic police that doesn't let you go faster than c. But that's not how it works.\n\nReality is, as you get closer and closer to c, the relations of space and time become \"distorted\". Time appears to stretch out, and space appears to be compressed. The closer you get to c, the stronger the distortion. The reason why you can't reach c is that, if you did, time would stretch out to infinity, space would compress down to nothing, and all sorts of divisions by zero would come out of the math. Space and time would be like nothing you could ever imagine. Math (the equations of motion) would be \"broken\".\n\nFurther, speed of light is a \"limit\", an obstacle for space exploration, only for the bystanders back on Earth. But the rocket traveling at relativistic speeds has a different experience:\n\nFor you, sitting here on Earth, it seems like my rocket travels at \"only\" 0.9999999...c towards the Andromeda Galaxy, and would reach it in 2.5 million years.\n\nBut for me, inside the rocket, because of space-time relativistic distortion, the journey to Andromeda could be very short indeed; maybe a few decades, or years, or days, or even a few seconds. 
It all depends on how fast I accelerate.\n\nIt appears to be a very long journey from where you're sitting, but it's pretty short (time-wise) for me - *and we are both correct!*\n\nSo be careful when thinking of c as a \"limit\"; it's a complex and subtle issue. True, you can never measure speed higher than c, no matter what you do - that's one of the few things that are absolute in this Universe. But you can travel as far as you want, in as short a time as you want; relativity itself allows you to do that.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1508445", "title": "Ergosphere", "section": "Section::::Rotation.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 669, "text": "As a black hole rotates, it twists spacetime in the direction of the rotation at a speed that decreases with distance from the event horizon. This process is known as the Lense–Thirring effect or frame-dragging. Because of this dragging effect, an object within the ergosphere cannot appear stationary with respect to an outside observer at a great distance unless that object were to move at faster than the speed of light (an impossibility) with respect to the local spacetime. The speed necessary for such an object to appear stationary decreases at points further out from the event horizon, until at some distance the required speed is that of the speed of light.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "215706", "title": "Supermassive black hole", "section": "Section::::Evidence.:Doppler measurements.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 835, "text": "Some of the best evidence for the presence of black holes is provided by the Doppler effect whereby light from nearby orbiting matter is red-shifted when receding and blue-shifted when advancing. For matter very close to a black hole the orbital speed must be comparable with the speed of light, so receding matter will appear very faint compared with advancing matter, which means that systems with intrinsically symmetric discs and rings will acquire a highly asymmetric visual appearance. This effect has been allowed for in modern computer generated images such as the example presented here, based on a plausible model for the supermassive black hole in Sgr A* at the centre of our own galaxy. However the resolution provided by presently available telescope technology is still insufficient to confirm such predictions directly.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7521471", "title": "Optical black hole", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 286, "text": "An optical black hole is a phenomenon in which slow light is passed through a Bose–Einstein condensate that is itself spinning faster than the local speed of light within to create a vortex capable of trapping the light behind an event horizon just as a gravitational black hole would.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3072290", "title": "Photon sphere", "section": "", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 756, "text": "A rotating black hole has two photon spheres. As a black hole rotates, it drags space with it. The photon sphere that is closer to the black hole is moving in the same direction as the rotation, whereas the photon sphere further away is moving against it. 
The greater the angular velocity of the rotation of a black hole, the greater the distance between the two photon spheres. Since the black hole has an axis of rotation, this only holds true if approaching the black hole in the direction of the equator. If approaching at a different angle, such as one from the poles of the black hole to the equator, there is only one photon sphere. This is because approaching at this angle the possibility of traveling with or against the rotation does not exist.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "456715", "title": "Kerr metric", "section": "Section::::Ergosphere and the Penrose process.\n", "start_paragraph_id": 42, "start_character": 0, "end_paragraph_id": 42, "end_character": 375, "text": "A black hole in general is surrounded by a surface, called the event horizon and situated at the Schwarzschild radius for a nonrotating black hole, where the escape velocity is equal to the velocity of light. Within this surface, no observer/particle can maintain itself at a constant radius. It is forced to fall inwards, and so this is sometimes called the \"static limit\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26930551", "title": "Sigma (cosmology)", "section": "Section::::History.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 332, "text": "They then realised that the black holes must have something to do with a galaxy's formation, so they turned to something they thought was useless: the speed of the stars around the edge of the galaxy. This is sigma, the speed of the stars at the edge of the galaxy supposedly unaffected by the mass of the black hole at the centre.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9223226", "title": "Gullstrand–Painlevé coordinates", "section": "Section::::Speeds of light.\n", "start_paragraph_id": 38, "start_character": 0, "end_paragraph_id": 38, "end_character": 378, "text": "BULLET::::- At the event horizon, formula_30 the speed of light shining outward away from the center of black hole is formula_31 It can not escape from the event horizon. Instead, it gets stuck at the event horizon. Since light moves faster than all others, matter can only move inward at the event horizon. Everything inside the event horizon is hidden from the outside world.\n", "bleu_score": null, "meta": null } ] } ]
null
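The second answer in the record above claims that a traveller who accelerates hard enough could reach the Andromeda Galaxy in decades of on-board time, even though observers on Earth see the trip take about 2.5 million years. Below is a minimal sketch of that claim, using the standard constant-proper-acceleration ("relativistic rocket") formulas from special relativity and assuming a comfortable 1 g of acceleration to the midpoint followed by 1 g of deceleration (the acceleration value and flight plan are my assumptions, not the answer's).

```python
import math

C = 299_792_458.0        # speed of light, m/s
LIGHT_YEAR = 9.4607e15   # metres in one light year
ONE_G = 9.81             # assumed proper acceleration, m/s^2
YEAR_S = 365.25 * 24 * 3600

def ship_time_years(distance_m, accel=ONE_G):
    """On-board (proper) time: accelerate to the midpoint, then decelerate to rest.

    Each half of the trip uses tau = (c/a) * acosh(1 + a*d/c^2).
    """
    half = distance_m / 2.0
    tau_half = (C / accel) * math.acosh(1.0 + accel * half / C**2)
    return 2.0 * tau_half / YEAR_S

def earth_time_years(distance_m, accel=ONE_G):
    """Elapsed time measured by someone who stays at home, same flight plan."""
    half = distance_m / 2.0
    t_half = math.sqrt((half / C) ** 2 + 2.0 * half / accel)
    return 2.0 * t_half / YEAR_S

d_andromeda = 2.5e6 * LIGHT_YEAR  # roughly 2.5 million light years

print(f"ship clock : {ship_time_years(d_andromeda):8.1f} years")
print(f"Earth clock: {earth_time_years(d_andromeda):8.3e} years")
```

With these assumptions the ship clock comes out to roughly 28 years while the Earth clock reads about 2.5 million years, which is where the answer's "a few decades" and "we are both correct" come from; a higher acceleration shrinks the on-board time further.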
cgt7zp
Did the Ottoman Caliph actually hold power over the Muslim ummah, or was it just a ceremonious title?
[ { "answer": "I'll try to answer your question, sorry if it doesn't satisfy you\n\nIt was mostly the latter. First of all. generally speaking (with exceptions, like in the Muslim kingdoms/communities in the Indian Ocean), the classical concept of a single caliph for the whole Islamic community had had no force since the 13th century with the destruction of Abbasid Baghdad in 1258. There were, to be sure, self-aggrandizing rulers from various parts of the Muslim world including who sometimes included \"caliph\" among their list of titles. Yet when they did so, it was in a manner virtually interchangeable with \"sultan\", lacking any implication of descent from earlier caliphs or an exclusive claim to the title or to universal authority. This general disregard for the past prestige of the caliphate seems to have been shared by intellectuals as well as politicians. In the centuries after the fall of Baghdad, Muslim scholars from the extreme traditionalist Ibn Taymiyya to the hyper-rationalist Ibn Khaldun, seem to have reached a basic consensus that the caliphate as an institution no longer had relevance for the times in which they lived.\n\nIt was still quite the case when the Ottomans came along. In the first place, There is no evidence to suggest that Selim aspired to the office of caliph before his conquest of Egypt, remaining silent about both in all of his pre-conquest correspondence with the Mamluks. Nor did he make a visible effort to claim these offices immediately following his conquest, failing to make any systematic mention of them in the various *fethnāmes* sent to announce his victory to officials and dignitaries of his realm, to vassals and foreign rulers, and to his heir, the future Sultan Suleiman. Instead, the title was acclaimed to him by people living entirely outside the territories under his control. The first public acknowledgement of Selim as *Khadim al-Haramayn* (Custodian of the Holy Mosque, an equally prestigious title) would come four years later from the Sharif of Mecca. Immediately following Selim’s conquest of Egypt, while he was still in Cairo, the Sharif sent an official delegation headed by his own son to meet the sultan and, in a gesture heavy with political significance, to hand over to him the keys to Mecca. It was only after this public act that Selim, in a letter to the ruler of Shirvan, seems to have for the first time referred to the \"*khilāfet-i ālīye\"* (exalted caliphate) and to the insertion of his own name in the *hutbe* of Mecca, reasoning that the failure of the Mamluks to protect the hajj routes had made it incumbent on him to assume this role. Then, in the following year, it was an Indian Ocean Muslim, Malik Ayaz of Gujarat, who as the governor of Diu became the first foreign ruler to spontaneously acknowledge Selim as “Caliph of the Faith” in a letter congratulating him on his victory\n\nDespite its mostly ceremonial title however, The Ottomans did sometimes use it in a politicized manner to mobilize or to correspond with Muslim communities outside the empire. 
One such example is in the 16th century, when the Ottoman Empire influenced and interacted with Muslim powers in the Indian Ocean to fight the Portuguese, which I will briefly describe.\n\nThe Treaty of Tordesillas led to King Manuel I of Portugal claiming a new imperial title: \"Lord of the Conquest, Navigation and Commerce of Ethiopia, Arabia, Persia and India\", as he hoped to be recognized as these rulers’ superior, a \"king of kings\" or universal emperor, whose authority transcended the physical possession of any specific territory. Because Portuguese ambitions were concerned first and foremost with the transit spice trade, and since trade routes to and from the Red Sea were trafficked principally by Muslim merchants, Portuguese claims would be measured by Portugal’s ability to prevent Indian Ocean Muslims from travelling to the Red Sea for trade and *hajj*. For the first time in history a non-Muslim power emerged in the Indian Ocean that was not only capable of preventing disparate Muslim communities from maintaining contact with one another by means of *hajj* but was actually compelled to do so according to the terms of its own claims to universal sovereignty. The Portuguese then started to raid and blockade parts of the Red Sea, and as a result of this sustained and organized violence, the Muslim communities of the Indian Ocean were primed for a radical re-politicization of the ideal of the Muslim *Umma*. With this in mind, the spontaneous acclamation of Selim as both \"Caliph\" and \"Servant of the Two Holy Cities\" was a potential two-edged sword. On the one hand, the notion that the Ottoman sultan was now responsible for protecting Muslim merchants and pilgrims throughout the Indian Ocean (and presumably, elsewhere too) implied that his legitimacy could be called into question by events far outside of the empire’s borders, beyond his control, and possibly even unknown to him, if he failed to fulfill these obligations. On the other hand, if the sultan were able to guarantee the safety of the Indian Ocean hajj, or at least make a credible effort to do so, he might expect a measure of allegiance in return from Muslims throughout maritime Asia, regardless of whether or not they were actually Ottoman subjects. The Ottomans had been nurturing the title of caliph since early in the reign of Süleyman. These efforts were directly connected with the consolidation of Ottoman rule in Egypt and were spearheaded by the grand vizier Ibrahim Pasha, Süleyman’s closest confidant and a leading proponent of Ottoman military engagement in the Indian Ocean. In 1538, a fleet of over seventy ships eventually did set sail to India from Suez, beginning the history of direct Ottoman military involvement in maritime Asia. With the departure of this fleet, a full-blown revival of the concept of the Universal Caliphate in a thoroughly Ottoman guise began to take shape in the following decades. The clearest evidence for this dates from the 1560s and emerges from a series of diplomatic exchanges between the Ottomans and the Sultan of Aceh, Ali Ala’ad-Din Ri’ayat Syah. In the first letter of this exchange, sent from Aceh in 1564 and addressed to Suleiman the Magnificent, the Ottoman sultan is repeatedly addressed as \"caliph\" by Ali Ala’ad-Din Ri’ayat Syah and is assured that in this capacity his name is read in the *hutbe* in all the mosques of Aceh.
Moreover, the letter indicates that Suleiman is being similarly named in the *hutbes* of Sri Lanka, Calicut, and the Maldives, all places strategically located along the maritime trunk routes to the Red Sea, and these communities’ recognition of the sultan as defender of the universal *Umma* was now expressed in direct expectation of weapons, ships, and technical expertise in return, in order to continue the ongoing fight against the Portuguese \n\n**Sources:**\n\n*Tordesillas and the Ottoman Caliphate: Early Modern Frontiers and the Renaissance of an Ancient Islamic Institution* and *The Ottoman Age of Exploration* by Giancarlo Casale\n\n*Legitimizing The Order: The Ottoman Rhetoric of State Power* by Hakan T. Karateke and Maurus Reinkowski", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "29339048", "title": "Abolition of the Ottoman sultanate", "section": "Section::::End of Ottoman Empire.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 586, "text": "The Ottoman Dynasty embodied the Ottoman Caliphate since the fourteenth century, starting with the reign of Murad I. The Ottoman Dynasty kept the title Caliph, power over all Muslims, as Mehmed's cousin Abdülmecid II took the title. The Ottoman Dynasty left as a political-religious successor to Muhammad and a leader of the entire Muslim community without borders in a post Ottoman Empire. Abdülmecid II's title was challenged in 1916 by the leader of the Arab Revolt King Hussein bin Ali of Hejaz, who denounced Mehmet V, but his kingdom was defeated and annexed by Ibn Saud in 1925.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22278", "title": "Ottoman Empire", "section": "Section::::Government.\n", "start_paragraph_id": 62, "start_character": 0, "end_paragraph_id": 62, "end_character": 1550, "text": "The highest position in Islam, \"caliphate\", was claimed by the sultans starting with Murad I, which was established as the Ottoman Caliphate. The Ottoman sultan, \"pâdişâh\" or \"lord of kings\", served as the Empire's sole regent and was considered to be the embodiment of its government, though he did not always exercise complete control. The Imperial Harem was one of the most important powers of the Ottoman court. It was ruled by the Valide Sultan. On occasion, the Valide Sultan would become involved in state politics. For a time, the women of the Harem effectively controlled the state in what was termed the \"Sultanate of Women\". New sultans were always chosen from the sons of the previous sultan. The strong educational system of the palace school was geared towards eliminating the unfit potential heirs, and establishing support among the ruling elite for a successor. The palace schools, which would also educate the future administrators of the state, were not a single track. First, the Madrasa (') was designated for the Muslims, and educated scholars and state officials according to Islamic tradition. The financial burden of the Medrese was supported by vakifs, allowing children of poor families to move to higher social levels and income. 
The second track was a free boarding school for the Christians, the \"Enderûn\", which recruited 3,000 students annually from Christian boys between eight and twenty years old from one in forty families among the communities settled in Rumelia or the Balkans, a process known as Devshirme (').\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2081846", "title": "Ottoman Caliphate", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 496, "text": "The Ottoman Caliphate (1517–1924), under the Ottoman dynasty of the Ottoman Empire, was the last Sunni Islamic caliphate of the late medieval and the early modern era. During the period of Ottoman growth, Ottoman rulers claimed caliphal authority since Murad I's conquest of Edirne in 1362. Later Selim I, through conquering and unification of Muslim lands, became the defender of the Holy Cities of Mecca and Medina which further strengthened the Ottoman claim to caliphate in the Muslim world.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4392115", "title": "Sacred Relics (Topkapı Palace)", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 257, "text": "With the conquest of the Arabic world by Sultan Selim I (1517), the Caliphate passed from the vanquished Abbasids to the Ottoman sultans. The Islamic prophet Muhammad’s mantle, which was kept by the last Abbasid Caliph Mutawakkil III, was given to Selim I.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "804036", "title": "Caliphate", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 319, "text": "), the Umayyad Caliphate (661–750) and the Abbasid Caliphate (750–1258). In the fourth major caliphate, the Ottoman Caliphate, the rulers of the Ottoman Empire claimed caliphal authority from 1517. During the history of Islam, a few other Muslim states, almost all hereditary monarchies, have claimed to be caliphates.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "53021034", "title": "Selim I", "section": "Section::::Biography.:Conquest of the Middle East.:Syria, Palestine, Egypt and the Arabian Peninsula.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 480, "text": "The last Abbasid caliph, al-Mutawakkil III, was residing in Cairo as a Mamluk puppet at the time of the Ottoman conquest. He was subsequently sent into exile in Istanbul. In the eighteenth century a story emerged claiming that he had officially transferred his title to the Caliphate to Selim at the time of the conquest. In fact, Selim did not make any claim to exercise the sacred authority of the office of caliph, and the notion of an official transfer was a later invention.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36089827", "title": "Islam in the Ottoman Empire", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 784, "text": "Islam was the official religion of the Ottoman Empire. The highest position in Islam, \"caliphate\", was claimed by the sultan, after the defeat of the Mamluks which was established as Ottoman Caliphate. The Sultan was to be a devout Muslim and was given the literal authority of the Caliph. Additionally, Sunni clerics had tremendous influence over government and their authority was central to the regulation of the economy. 
Despite all this, the Sultan also had a right to decree, enforcing a code called Kanun (law) in Turkish. Additionally, there was a supreme clerical position called the Sheykhulislam (\"Sheykh of Islam\" in Arabic). Minorities, particularly Christians and Jews but also some others, were mandated to pay the jizya, the poll tax as mandated by traditional Islam.\n", "bleu_score": null, "meta": null } ] } ]
null
2c7ive
Britons during the Anglo-Saxon period?
[ { "answer": "A book which will answer most, if not all, of your questions is T.M. Charles-Edwards' magisterial new book (now released in paperback, although it's still something of a doorstop at 795 pages) [*Wales and the Britons 350-1064*](_URL_0_), (Oxford, 2013). As you can see from the contents page, this is a detailed study of what you want to know. Well worth the odd £20-£25 to have at hand.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "29039811", "title": "Common Brittonic", "section": "Section::::History.:Diversification.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 423, "text": "The Anglo-Saxon invasion of Britain during the 6th century marked the beginning of a decline in the language, as it was gradually replaced by Old English. Some Brittonic speakers migrated to Armorica and Galicia. By 700, Brittonic was mainly restricted to North West England and Southern Scotland, Wales, Cornwall and Devon, and Brittany. In these regions, it evolved into Cumbric, Welsh, Cornish and Breton, respectively.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13485", "title": "History of England", "section": "Section::::The Anglo-Saxon migration.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 1132, "text": "In the wake of the breakdown of Roman rule in Britain from the middle of the fourth century, present day England was progressively settled by Germanic groups. Collectively known as the \"Anglo-Saxons\", these were Angles and Saxons from what is now the Danish/German border area and Jutes from the Jutland peninsula. The Battle of Deorham was a critical in establishing Anglo-Saxon rule in 577. Saxon mercenaries existed in Britain since before the late Roman period, but the main influx of population probably happened after the fifth century. The precise nature of these invasions is not fully known; there are doubts about the legitimacy of historical accounts due to a lack of archaeological finds. Gildas Sapiens's \"De Excidio et Conquestu Britanniae\", composed in the 6th century, states that when the Roman army departed the Isle of Britannia in the 4th century CE, the indigenous Britons were invaded by Picts, their neighbours to the north (now Scotland) and the Scots (now Ireland). Britons invited the Saxons to the island to repel them but after they vanquished the Scots and Picts, the Saxons turned against the Britons.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2645367", "title": "History of Anglo-Saxon England", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 775, "text": "The Anglo-Saxons were the members of Germanic-speaking groups who migrated to the southern half of the island of Great Britain from nearby northwestern Europe and their cultural descendants. 
Anglo-Saxon history thus begins during the period of Sub-Roman Britain following the end of Roman control, and traces the establishment of Anglo-Saxon kingdoms in the 5th and 6th centuries (conventionally identified as seven main kingdoms: Northumbria, Mercia, East Anglia, Essex, Kent, Sussex, and Wessex), their Christianisation during the 7th century, the threat of Viking invasions and Danish settlers, the gradual unification of England under the Wessex hegemony during the 9th and 10th centuries, and ending with the Norman conquest of England by William the Conqueror in 1066.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30894567", "title": "Anglo-Saxon burial mounds", "section": "Section::::Introduction of burial mounds.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 432, "text": "The Anglo-Saxon peoples had migrated to Britain during the fifth century CE, settling primarily along the eastern areas of what is now England. They were adherents of a pagan religion. The practice of Anglo-Saxon barrow burials had been adopted by the Merovingian dynasty Franks, who lived in what is now France, from the mid fifth century CE. It was from these Merovingian Franks that the Anglo-Saxons likely adopted the practice.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37780", "title": "Anglo-Saxons", "section": "Section::::Early Anglo-Saxon history (410–660).\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 591, "text": "The early Anglo-Saxon period covers the history of medieval Britain that starts from the end of Roman rule. It is a period widely known in European history as the Migration Period, also the \"Völkerwanderung\" (\"migration of peoples\" in German). This was a period of intensified human migration in Europe from about 400 to 800. The migrants were Germanic tribes such as the Goths, Vandals, Angles, Saxons, Lombards, Suebi, Frisii, and Franks; they were later pushed westwards by the Huns, Avars, Slavs, Bulgars, and Alans. The migrants to Britain might also have included the Huns and Rugini.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "35573014", "title": "Witchcraft in Anglo-Saxon England", "section": "Section::::Background.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 208, "text": "The period of Anglo-Saxon England lasted from circa 410 through to 1066 AD, during which individuals considered to be \"Anglo-Saxon\" in culture and language dominated the country's demographics and politics. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22368461", "title": "Anglo-Saxon settlement of Britain", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 579, "text": "The Anglo-Saxon settlement of Britain describes the process which changed the language and culture of most of what became England from Romano-British to Germanic. The Germanic-speakers in Britain, themselves of diverse origins, eventually developed a common cultural identity as Anglo-Saxons. This process occurred from the mid-fifth to early seventh centuries, following the end of Roman rule in Britain around the year 410. The settlement was followed by the establishment of Anglo-Saxon kingdoms in the south and east of Britain, later followed by the rest of modern England.\n", "bleu_score": null, "meta": null } ] } ]
null