Columns: id (string, length 5–6) · input (string, length 3–301) · output (list) · meta (null)
vhf8y
Is there any scientific truth to the idea that you shouldn't sit too close to the TV?
[ { "answer": "No. However focusing on one spot for a long period of time can cause eyestrain, however that is only temporary.\n\nYou may find this interesting.\n_URL_0_", "provenance": null }, { "answer": "As others have already said, focusing on one spot for a long period of time can cause [eyestrain](_URL_0_), but this condition is usually temporary so this wouldn't necessarily damage your vision. \n\nHowever, in the early days of television, it was possible for EM radiation (other than visible light) to exit the [cathode ray tube](_URL_1_) behind the screen that was producing the image. This radiation can cause eye damage at close range, and was the reason early adopters of television were warned to not sit too close to the television. However, since the early 1970s, the FDA has monitored television radiation, and the dangers of sitting too close to televisions has been lowered. ", "provenance": null }, { "answer": "Yes, and modern flat screen LCD and plasma TVs are much more dangerous that older ones.\n\nTheir higher centre of gravity and narrower base means that they can, [and sometimes do](_URL_0_), fall on children sitting too close.", "provenance": null }, { "answer": "Children sit close to the TV so that they can see it better, especially if they are having trouble seeing it from far away. If a child has to sit too close to a screen to see it clearly, that is a sign that they may need corrective lenses. As a result of this, a reverse correlation was made that they now need glasses because they were initially sitting too close to the TV. \n\nI worked for an optometrist for 4 years, and this urban legend was one of her pet peeves.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "473284", "title": "Cultivation theory", "section": "Section::::Key terms in cultivation analysis.:Television reality.\n", "start_paragraph_id": 68, "start_character": 0, "end_paragraph_id": 68, "end_character": 408, "text": "The work of several researchers support the concept of television reality as a consequence of heavy viewing. According to Wyer and Budesheim's research, television messages or information, even when they are not necessarily considered truthful, can still be used in the process of constructing social judgments. Furthermore, indicted invalid information may still be used in subsequent audience's judgments.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "473284", "title": "Cultivation theory", "section": "Section::::Criticisms.:Humanist critique.\n", "start_paragraph_id": 124, "start_character": 0, "end_paragraph_id": 124, "end_character": 529, "text": "The theory has also received criticism for ignoring other issues such as the perceived realism of the televised content, which could be essential in explaining people's understanding of reality. Wilson, Martins, & Markse (2005) argue that attention to television might be more important to cultivating perceptions than only the amount of television viewing. In addition, C. R. 
Berger (2005) writes that because the theory ignores cognitive processes, such as attention or rational thinking style, it is less useful than desired.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8878225", "title": "Fantaserye and telefantasya", "section": "Section::::Criticisms.:Social constructionism.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 338, "text": "In this case, television forms a reality in the minds of the people, specifically, as an \"ideal\" world on where to live in. The people perceive it and then in turn form the notion that \"this is how life should be.\" This happens, when, in fact, it should not be, because the television is not the reality but life is the ultimate reality.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "50018370", "title": "Cross-device tracking", "section": "Section::::Privacy and surveillance concerns.:Panoptic surveillance and the commodification of users' digital identity.\n", "start_paragraph_id": 37, "start_character": 0, "end_paragraph_id": 37, "end_character": 836, "text": "The television, along with the remote control, is also argued to be conditioning humans into habitually repeating that which they enjoy without experiencing genuine surprise or even discomfort, a critique of the television similar to that of those made against information silos on social media sites today. In essence, this technological development led to \"egocasting\": a world in which people exert extreme amounts of control over what they watch and hear. As a result, users deliberately avoid content they disagree with in any form––ideas, sounds, or images. In turn, this siloing can drive political polarization and stoke tribalism. Plus, companies like TiVO analyze how TV show watchers use their remote and DVR capability to skip over programming, such as advertisements––a privacy concern users may lack awareness of as well.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "256764", "title": "Lee de Forest", "section": "Section::::Quotes.\n", "start_paragraph_id": 96, "start_character": 0, "end_paragraph_id": 96, "end_character": 234, "text": "BULLET::::- \"So I repeat that while theoretically and technically television may be feasible, yet commercially and financially, I consider it an impossibility; a development of which we need not waste little time in dreaming.\" – 1926\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2048718", "title": "On the Origin of the \"Influencing Machine\" in Schizophrenia", "section": "Section::::The Influencing Machine in art and media.:Literature.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 629, "text": "Activist Jerry Mander's book argues for the complete removal of television from our lives because of its ill effects. Mander gives the example of Tausk's \"Influencing machine\" as being a parallel for television: \"Doubtless you have noticed that this 'influencing machine' sounds an awful lot like television ... In any event, there is no question that television does what the schizophrenic fantasy says it does. It places in our minds images of reality which are outside our experience. The pictures come in the form of rays from a box. They cause changes in feeling and ... 
utter confusion as to what is real and what is not.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "483655", "title": "Artificial human companion", "section": "Section::::Introduction.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 205, "text": "It is Masthoff's contention that it is possible to develop an interactive, personalized form of television that would allow the viewer to engage in natural conversation and learn from these conversations.\n", "bleu_score": null, "meta": null } ] } ]
null
2n2x4s
how is it that a vehicle can be good at towing but shit at carrying heavy loads?
[ { "answer": "A vehicle can have a strong engine, trans and drive train to pull a heavy tow.\n\nIt needs a strong suspension system to carry a heavy load.\n\n", "provenance": null }, { "answer": "Because the trailers wheels take the brunt of the load weight. Imagine having a load spread across dozens of axles and tires. Much easier to pull. Put that load over a single axle it puts a lot of strain on the only load moving axle. It has to hold the weight up as well as move the vehicle forward. ", "provenance": null }, { "answer": "Try pulling something somewhat heavy. Now try carrying it. Get it?", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1164322", "title": "Tow truck", "section": "Section::::Operations.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 563, "text": "The military also deploys tow trucks for recovery of stranded vehicles. In the US Army, a variant of the HEMTT truck is used for this purpose, the M984 wrecker. For recovery in combat situations while under fire, many armies with large vehicle fleets also deploy armoured recovery vehicles. These vehicles fulfill a similar role, but are resistant to heavy fire and capable of traversing rough terrain with their tracks, as well as towing vehicles beyond the weight limits of wheeled wreckers, such as tanks (many are based on tank designs for this very reason).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "27945098", "title": "Timeline of United States inventions (1890–1945)", "section": "Section::::Progressive Era (1890–1919).\n", "start_paragraph_id": 209, "start_character": 0, "end_paragraph_id": 209, "end_character": 573, "text": "BULLET::::- A tow truck is a vehicle used to transport motor vehicles to another location, generally a repair garage, or to recover vehicles which are no longer on a drivable surface. Vehicles are often towed in the case of breakdowns or collisions, or may be impounded for legal reasons. The tow truck was invented in 1916 by Ernest Holmes, Sr., of Chattanooga, Tennessee. He was a garage worker who was inspired to create the invention after he was forced to pull a car out of a creek using blocks, ropes, and six men. An improved design led him to manufacture wreckers.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8554000", "title": "Towing", "section": "Section::::Towing of vehicles.\n", "start_paragraph_id": 61, "start_character": 0, "end_paragraph_id": 61, "end_character": 394, "text": "Hitch tow trucks are mostly sized for cars and light duty trucks. Larger versions, with a long, weighted body and heavier duty engines, transmissions, and tow hooks, may be used for towing of disabled buses, truck tractors, or large trucks. The artificial sizing and weighting must be designed to withstand the greater weight of the towed vehicle, which might otherwise tip the tow truck back.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8554000", "title": "Towing", "section": "Section::::Towing capacity.\n", "start_paragraph_id": 42, "start_character": 0, "end_paragraph_id": 42, "end_character": 396, "text": "Towing capacity is a measure describing the upper limit to the weight of a trailer a vehicle can tow and may be expressed in pounds or kilograms. Some countries require that signs indicating the maximum trailer weight (and in some cases, length) be posted on trucks and buses close to the coupling device. 
Towing capacity may be lower as declared due to limitation imposed by the cooling system.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1164322", "title": "Tow truck", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 408, "text": "A tow truck (also called a wrecker, a breakdown truck, recovery vehicle or a breakdown lorry) is a truck used to move disabled, improperly parked, impounded, or otherwise indisposed motor vehicles. This may involve recovering a vehicle damaged in an accident, returning one to a drivable surface in a mishap or inclement weather, or towing or transporting one via flatbed to a repair shop or other location.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8554000", "title": "Towing", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 526, "text": "Towing may be as simple as a tractor pulling a tree stump. The most familiar form is the transport of disabled or otherwise indisposed vehicles by a tow truck or \"wrecker.\" Other familiar forms are the tractor-trailer combination, and cargo or leisure vehicles coupled via ball or pintle and gudgeon trailer-hitches to smaller trucks and cars. In the opposite extreme are extremely heavy duty tank recovery vehicles, and enormous ballast tractors involved in heavy hauling towing loads stretching into the millions of pounds.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3389208", "title": "Heavy rescue vehicle", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 911, "text": "A heavy rescue vehicle is a type of specialty emergency medical services or firefighting apparatus. They are primarily designed to provide the specialized equipment necessary for technical rescue situations, as well as search and rescue within structure fires. They carry an array of special equipment such as the Jaws of life, wooden cribbing, generators, winches, hi-lift jacks, cranes, cutting torches, circular saws and other forms of heavy equipment unavailable on standard trucks. This capability differentiates them from traditional pumper trucks or ladder trucks designed primarily to carry firefighters and their entry gear as well as on-board water tanks, hoses and equipment for fire extinguishing and light rescue. Most heavy rescue vehicles lack on-board water tanks and pumping gear, owing to their specialized role, but some do carry on-board pumps in order to broaden their response capability.\n", "bleu_score": null, "meta": null } ] } ]
null
z7wal
What is the chemical reaction that occurs when you put ice cream into soft drink?
[ { "answer": "No chemical reaction... the fat and sugars from the ice cream increases the surface tension on the bubbles, which means they don't burst. You see a bunch of bubbles because you've just introduced something with a bunch of nucleation sites. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "48212", "title": "Ice cream", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 657, "text": "Ice cream (derived from earlier iced cream or cream ice) is a sweetened frozen food typically eaten as a snack or dessert. It may be made from dairy milk or cream, or soy, cashew, coconut or almond milk, and is flavored with a sweetener, either sugar or an alternative, and any spice, such as cocoa or vanilla. Colourings are usually added, in addition to stabilizers. The mixture is stirred to incorporate air spaces and cooled below the freezing point of water to prevent detectable ice crystals from forming. The result is a smooth, semi-solid foam that is solid at very low temperatures (below ). It becomes more malleable as its temperature increases.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4604645", "title": "Coffee preparation", "section": "Section::::Presentation.:Cold drinks.\n", "start_paragraph_id": 131, "start_character": 0, "end_paragraph_id": 131, "end_character": 323, "text": "BULLET::::- Affogato is a cold drink, often served as dessert, consisting of a scoop of ice cream or gelato topped with an espresso shot. Often, the drinker is served the ice cream and espresso in separate cups, and will mix them at the table so as to prevent the ice cream from entirely melting before it can be consumed.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30420606", "title": "Bacon ice cream", "section": "Section::::Heston Blumenthal.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 372, "text": "Traditional ice cream is frozen egg custard with flavours added. Blumenthal whisks egg yolks with sugar until the sugar interacts with the proteins in the yolk, creating a network of proteins. The entire substance turns white, at which point flavouring can be added and cooked in. While stirring the mixture, Blumenthal cools it as fast as possible using liquid nitrogen.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19619306", "title": "List of coffee drinks", "section": "Section::::Iced.:Other.\n", "start_paragraph_id": 101, "start_character": 0, "end_paragraph_id": 101, "end_character": 400, "text": "Originating in Australia and similar to the Mazagran, the minimal Ice Shot is a single shot of fresh espresso poured into an ordinary latté glass that has been filled with ice. The hot coffee, in melting some of the ice is diluted, re-freezing to a granita-like texture. The addition of a single scoop of ice-cream on top is a popular variant. No milk, sugar, extra flavouring or cream are involved.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "48212", "title": "Ice cream", "section": "Section::::Ingredients and standard quality definitions.\n", "start_paragraph_id": 65, "start_character": 0, "end_paragraph_id": 65, "end_character": 690, "text": "BULLET::::- The Ice cream mix is defined as the pasteurized mix of cream, milk and other milk products that are not yet frozen. 
It may contain eggs, artificial or non-artificial flavours, cocoa or chocolate syrup, a food color, an agent that adjusts the pH level in the mix, salt, a stabilizing agent that doesn’t exceed 0.5% of the ice cream mix, a sequestering agent which preserves the food colour, edible casein that doesn’t exceed 1% of the mix, propylene glycol mono fatty acids in an amount that will not exceed 0.35% of the ice cream mix and sorbitan tristearate in an amount that will not exceed 0.035% of the mix. The ice cream mix may not include less than 36% solid components.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1092688", "title": "Fried ice cream", "section": "", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 448, "text": "The dessert is commonly made by taking a scoop of ice cream frozen well below the temperature at which ice cream is generally kept, possibly coating it in raw egg, rolling it in cornflakes or cookie crumbs, and briefly deep frying it. The extremely low temperature of the ice cream prevents it from melting while being fried. It may be sprinkled with cinnamon and sugar and a touch of peppermint, though whipped cream or honey may be used as well.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1207268", "title": "Ice cream maker", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 314, "text": "An ice cream maker has to simultaneously freeze the mixture while churning it so as to aerate the mixture and keep the ice crystals small (less than 50 μm). As a result, most ice creams are ready to consume immediately. However, those containing alcohol must often be chilled further to attain a firm consistency.\n", "bleu_score": null, "meta": null } ] } ]
null
1ivwrw
Besieging castles (pre cannons)
[ { "answer": "I'm not an expert on sieges, but here's what I can offer:\n\n1. It would depend entirely on the technology available to both besiegers and besieged, as well as what was besieged. The First Siege of Rome during the Gothic war lasted a little over a year. When the Mongols under Batu Khan besieged Kiev in 1240 it took about a week. Another factor would be how much help the people inside the castle had on the outside, the size of both forces, morale, etc. I don't think there can be an average.\n\n2. If you're hitting a wall with a trebuchet, then you are most likely aiming it wrong. These were designed to launch objects over walls, not into them. The projectile had a very high arc and could easily clear walls several stories high. For the other two, it would depend on the wall thickness/build, the number of siege weapons attacking a section, and the objects being used. Again, I wouldn't say there is a set time to penetrate a wall. Too many variables.\n\n3. Again, a lot of variables make this hard to quantify. I'm sure there are treatises available in which people set a science to determining this, but those would just be theories based on exhaustible circumstances. Nothing could cover every siege. A main factor would be the technology available. If the attackers could easily create several points of entry into a fortification, then they might not strive for as large of a numerical advantage then if the defenders could easily turn away the siege engines. Another could be morale. A large defending force that has been out of food for weeks could perform much worse than a smaller force with plenty of provisions. In addition, a group of people that have been trapped in poor conditions for over a year might not have that same spirit as men who have only been there for a week.\n\n5. Fire was a pretty nifty thing to use. Siege weapons were built of wood and other flammable materials, so unless they were protected very thoroughly (covered with animal hides, for instance) they could easily go up in smoke. Using fire would be easiest against towers and rams, since they actually came up to the walls and gates. In order to reach the ranged siege weapons, however, the defenders would most likely have to launch a sally. Some men from inside would exit the fortification under favorable circumstances (night time, through a gate people weren't watching very well, etc) and try to cause damage amongst the siege lines. They could burn tents and siege weapons, kill men who weren't aware, and in general make a lot of chaos arrive in already tense conditions. This of course did not always work, and could easily end with the death or capture of all defenders involved.\n\nI'm sure more people could expand on this, or correct me where I'm wrong. I'm just drawing from some general knowledge of the subject, so if you need specific sources I can't help you there.", "provenance": null }, { "answer": "1. A siege went on as long as it needed to. The two main limiting factors were A) food - both those inside and outside and B) reinforcements arriving. As long as a fortification can maintain a food supply, they are in great shape to continue withstanding the siege. The sieging force would be able to last due to living off the land or merchants/hunters showing up with food for sale/barter. Now, another limiting factor on average siege length is reinforcing numbers. A sieging army would generally retreat and break off the siege in the event another force showed up. 
Remember, the goal is to take the keep/castle/fort but leave it intact for future use as well as to ensure you take as few casualties as possible. You don't want to lose troops forcing a siege if you can avoid it. Capitulating was a more common reason for a siege to end.\n\n2. The ultimate goal of these engines, while destructive, was not necessarily aimed at penetrating and taking down sections of the wall. While this may be the case in many games, it is not advantageous to destroy a fortification/wall just to gain access. Many things would be launched within the walls to demoralize the enemy forces, attempting to push them towards capitulation. There are excerpts that show that the Mongols would launch dead animals or fallen soldiers as a form of mental warfare as well as throwing the threat of disease and plague. \n\n3. While undermining was a prospect, it isn't going to be as popular a method unless you want to force the fortification sooner rather than starve them out. As u/GaiusCassius stated, many variables need to be taken into account. One of the main things to remember is that castle walls should have foundations that go down pretty far - I know a lot of Czech castles would have the foundations 10-20 feet under the surface. Now please note that a lot of keeps/donjons are preferred to have foundations built on solid bedrock (Pernstejn castle is an example regarding the keep). And you have to take into account the soil composition as well as the availability of lumber to shore up the tunnel walls from collapse.\n\n4. There is no set number to make this a 'golden rule' for defense. A lot has to do with the design of the castle, the number and type of towers, food stores and resources available, is there a barbican or a moat (wet or dry) as well as the proximity to additional friendly troops. It is accepted that you could defend a fortification with fewer people than the sieging force, you still have to have enough to defend all possible avenues of breach - is there only one approach for a ram, or do you have exposed walls that could be reach by tower or ladder? Spearmen would be the most prevalent forces to repel attackers OVER walls because of the additional reach. Halberds, spears and pole arms would see extensive use in fortification defense due to ease of use and simple construction.\n\n5. Defenses available would depend upon the time period, but generally the best things to have would be food and clean water. As long as you can outlast the besieging force, you were able to repel many attacks. Trebuchets and catapults are best defended against by trying to hide and avoid the projectiles. Rams would be restricted due to fortification design to reduce the number and direction of approaches to any gates. Castle design works to make sharp turns, under barbicans with murder holes, all in an attempt to destroy the ram before it made it all the way to the keep. Siege towers, while popularized by the Total War series and RotK, are expensive to construct, bulky and difficult to keep moving. The best defenses available would be a combination of catapults to throw boulders, fire arrows as well as boiling oil to set the base on fire. Again, some form of pole arm is best to repel attacks that make it to the wall. \n\ntl;dr - there are easy things that can be done to protect against sieges but they get glorified by Hollywood and games to make it seem easy and a fast thing. 
Remember, sieges take time and had fewer forced/pitched battles and more breaking-off of the siege or capitulation by the defending force.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "50023290", "title": "Counter-castle", "section": "Section::::Purpose.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 283, "text": "Siege castles are only evident from the period of the Late Middle Ages onwards. They were usually built as a temporary fortifications using wood and earth above the castle to be captured and within sight and the range of their guns. From this location the target would be bombarded.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3171368", "title": "Burg Castle (Solingen)", "section": "Section::::Decay.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 361, "text": "In 1632 Swedish soldiers laid siege to the castle. After the Thirty years war, in 1648, Imperial troops destroyed the fortifications of the castle including the keep, walls, and gates. In 1700, the main building was partially reconstructed and subsequently used for administrative purposes. 1849, the castle was sold to be scrapped, decayed, and became a ruin.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2683441", "title": "Hohenasperg", "section": "Section::::History.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 282, "text": "Between 1634 and 1635, during the Thirty Years War, the castle was defended against imperial troops by a garrison of Protestants from Württemberg, reinforced by Swedish forces. The siege ended finally with the surrender to the imperial troops, who occupied the fortress until 1649.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13796167", "title": "Santa Bárbara Castle", "section": "Section::::History.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 241, "text": "The castle was bombarded in 1691 by a French squadron. During the War of the Spanish Succession, it was held by the English for three years. In 1873, it was bombarded, along with the city, by the \"cantonalistas\" from the frigate \"Numancia\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "46627916", "title": "Reszel Castle", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 298, "text": "The castle, an Ordensburg fortress, was built in between 1350-1401 by the Teutonic Order. The castle was frequently looted, besieged and gained by Polish and Teutonic forces. In the nineteenth century the castle was adapted into a prison, the castle was fully renovated after the Second World War.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "51371787", "title": "List of Device Forts", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 732, "text": "The Device Forts, also known as Henrician castles and blockhouses, were a series of artillery fortifications built to defend the coast of England and Wales by Henry VIII. They ranged from large stone castles, to small blockhouses and earthwork bulwarks. Armed with artillery, the forts were intended to be used against enemy ships before they could land forces or attack vessels lying in harbour. 
The castles were commanded by captains appointed by the Crown, overseeing small garrisons of professional gunners and soldiers, who would be supplemented by local militia in an emergency. The Device programme was hugely expensive, costing a total of £376,000, much of it raised from the proceeds of the Dissolution of the Monasteries.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1016554", "title": "Calshot Castle", "section": "Section::::History.:16th century.:Construction.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 478, "text": "The castle initially had a garrison of eight gunners, five soldiers and a lieutenant, under the command of a captain. In the late 1540s, it was heavily armed by the standards of the time, with 36 pieces of artillery. In the 1580s, the castle caught fire and the timber needed for the repairs required 127 trees to be sent from the New Forest. The work was carried out in 1584, prompted by the threat of a Spanish invasion, but by that time its garrison had shrunk to eight men.\n", "bleu_score": null, "meta": null } ] } ]
null
ch7j4u
Why do we discuss Ancient Greece in terms of city-states?
[ { "answer": "Many Greek city-states had slightly different setups, but their basic structure was the same: a single *astu* (city-center) with administrative control over a *chora* (territory). Athens controlled a large *chora* (Attica) as the result of *synoekismos,* or synoecism as the Brits would say. This is the process by which, usually early on in the Iron Age, various smaller population centers came together (literally \"housed\" together) to form a larger political unit: a *polis*, comprised of *astu* and *chora.* Why one center became the *astu* vs another was usually dependent on population, military might, religious importance, or the like. In Attica, the smaller population centers were organized into districts (\"demes\"). Some examples are the Piraeus (the harbor), or Sunion (a temple center in the south of Attica). These were not \"cities\" in that they had no independent authority in the business of Athens the polis. You should think of them as neighborhoods within a modern city, with some localized civic or religious structures.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "193686", "title": "Otto of Greece", "section": "Section::::Early life and reign.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 869, "text": "Athens was chosen as the Greek capital for historical and sentimental reasons, not because it was a large city. At the time, it was a town consisting of only 400 houses at the foot of the Acropolis. A modern city plan was laid out, and public buildings erected. The finest legacy of this period are the buildings of the University of Athens (1837, under the name Othonian University), the Athens Polytechnic University (1837, under the name Royal School of Arts), the National Gardens of Athens (1840), the National Library of Greece (1842), the Old Royal Palace (now the Greek Parliament Building, 1843), and the Old Parliament Building (1858). Schools and hospitals were established all over the (still small) Greek dominion, Due to the negative feelings of the Greek people toward non-Greek rule, historical attention to this aspect of his reign has been neglected.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12108", "title": "Greece", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 1222, "text": "Greece is considered the cradle of Western civilisation, being the birthplace of democracy, Western philosophy, Western literature, historiography, political science, major scientific and mathematical principles, Western drama and notably the Olympic Games. From the eighth century BC, the Greeks were organised into various independent city-states, known as \"poleis\" (singular \"polis\"), which spanned the entire Mediterranean region and the Black Sea. Philip of Macedon united most of the Greek mainland in the fourth century BC, with his son Alexander the Great rapidly conquering much of the ancient world, from the eastern Mediterranean to India. Greece was annexed by Rome in the second century BC, becoming an integral part of the Roman Empire and its successor, the Byzantine Empire, in which Greek language and culture were dominant. Rooted in the first century A.D., the Greek Orthodox Church helped shape modern Greek identity and transmitted Greek traditions to the wider Orthodox World. Falling under Ottoman dominion in the mid-15th century, the modern nation state of Greece emerged in 1830 following a war of independence. 
Greece's rich historical legacy is reflected by its 18 UNESCO World Heritage Sites.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1216", "title": "Athens", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 583, "text": "Classical Athens was a powerful city-state that emerged in conjunction with the seagoing development of the port of Piraeus. A center for the arts, learning and philosophy, home of Plato's Academy and Aristotle's Lyceum, it is widely referred to as the cradle of Western civilization and the birthplace of democracy, largely because of its cultural and political impact on the European continent, and in particular the Romans. In modern times, Athens is a large cosmopolitan metropolis and central to economic, financial, industrial, maritime, political and cultural life in Greece.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20141462", "title": "European balance of power", "section": "Section::::History.:Antiquity to Westphalia.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 743, "text": "The emergence of city-states (\"poleis\") in ancient Greece marks the beginning of classical antiquity. The two most important Greek cities, the Ionian-democratic Athens and the Dorian-aristocratic Sparta, led the successful defense of Greece against the invading Persians from the east, but then clashed against each other for supremacy in the Peloponnesian War. The Kingdom of Macedon took advantage of the following instability and established a single rule over Greece. Desire to form a universal monarchy brought Alexander the Great to annex the entire Persian Empire and begin a hellenization of the Macedonian possessions. At his death in 323 BC, his reign was divided between his successors and several hellenistic kingdoms were formed.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "66540", "title": "Ancient Greece", "section": "Section::::Politics and society.:Political structure.\n", "start_paragraph_id": 59, "start_character": 0, "end_paragraph_id": 59, "end_character": 1150, "text": "Ancient Greece consisted of several hundred relatively independent city-states (\"poleis\"). This was a situation unlike that in most other contemporary societies, which were either tribal or kingdoms ruling over relatively large territories. Undoubtedly the geography of Greece—divided and sub-divided by hills, mountains, and rivers—contributed to the fragmentary nature of ancient Greece. On the one hand, the ancient Greeks had no doubt that they were \"one people\"; they had the same religion, same basic culture, and same language. Furthermore, the Greeks were very aware of their tribal origins; Herodotus was able to extensively categorise the city-states by tribe. Yet, although these higher-level relationships existed, they seem to have rarely had a major role in Greek politics. The independence of the \"poleis\" was fiercely defended; unification was something rarely contemplated by the ancient Greeks. 
Even when, during the second Persian invasion of Greece, a group of city-states allied themselves to defend Greece, the vast majority of \"poleis\" remained neutral, and after the Persian defeat, the allies quickly returned to infighting.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13212", "title": "History of Europe", "section": "Section::::Classical antiquity.\n", "start_paragraph_id": 34, "start_character": 0, "end_paragraph_id": 34, "end_character": 539, "text": "The Greeks and the Romans left a legacy in Europe which is evident in European languages, thought, visual arts and law. Ancient Greece was a collection of city-states, out of which the original form of democracy developed. Athens was the most powerful and developed city, and a cradle of learning from the time of Pericles. Citizens' forums debated and legislated policy of the state, and from here arose some of the most notable classical philosophers, such as Socrates, Plato, and Aristotle, the last of whom taught Alexander the Great.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "66540", "title": "Ancient Greece", "section": "Section::::Geography.:Regions.\n", "start_paragraph_id": 50, "start_character": 0, "end_paragraph_id": 50, "end_character": 372, "text": "The territory of Greece is mountainous, and as a result, ancient Greece consisted of many smaller regions each with its own dialect, cultural peculiarities, and identity. Regionalism and regional conflicts were a prominent feature of ancient Greece. Cities tended to be located in valleys between mountains, or on coastal plains, and dominated a certain area around them.\n", "bleu_score": null, "meta": null } ] } ]
null
7xthvq
How did Multi-track sound recording come to be and subsequently widely used?
[ { "answer": "During World War II, some technically-minded people listening to German radio were puzzled: was Hitler really demanding that the Berlin Symphony Orchestra play Beethoven symphonies at 3am in the morning? It was a puzzle, because the sound was pristine, without the clicks and pops you get from a vinyl disc - at that time, the only option for playing pre-recorded material that the Americans and English knew about. They couldn't figure out how the Germans were doing it until, after the war had been essentially won, and the Allies gained access to the premises of Radio Frankfurt in Bad Neuheim, which had been broadcasting the Beethoven symphonies. One American Major who was a classical music fan and curious about German radio, Jack Mullin, decided to head to Bad Neuheim, and discovered that they had a new method of recording to Magnetophon - to magnetic tape - which crucially had an AC bias that enabled almost pristine recording quality.\n\nPrevious to this, recording had essentially been straight to vinyl disc, which was more limited in a variety of ways - once the groove had been laid down by the transcription needle, it was that way forever. However, magnetic tape could be altered - you could tape over it. \n\nAfter the war, Mullin worked with the company Ampex to replicate the technology they'd seen in Frankfurt. Initially using reels of tape taken from Bad Neuheim, and on prototype technology, they pre-recorded radio shows for Bing Crosby, a major star at the time; after Crosby gave the company $50,000 with no strings attached to perfect the technology, they released the first commercially available American tape recording devices in 1948, which basically instantly became the standard. \n\nCrosby gave an Ampex tape machine to his friend Les Paul, a jazz guitarist (and inveterate tinkerer who worked with the Gibson guitar company on the guitar brand that bears his name). Paul, like Mullin, had noticed the German broadcasts at 3am; he was working at Armed Forces Radio in Europe during the war, and he couldn't figure out how the Germans were doing it. So upon getting an Ampex tape machine, Paul was very keen to play with the technology, and discovered that 'overdubbing' was possible; you could record yourself playing along with a previous recording - as a singer you could harmonise with yourself. So you get recordings like Les Paul's recordings with Mary Ford (e.g., ['How High The Moon' from 1951](_URL_0_)), which were exceptionally popular, and which showed Paul playing several guitar parts at once, and Ford harmonising with herself. \n\nThis was effectively multi-track recording in one sense - there are multiple tracks of music recorded one after another - but it's not multi-track recording in the modern-sense, because there weren't multiple separate tracks of tape for each instrument - there was just one tape, with different performances literally dubbed over pre-existing performances.\n\nAn Ampex employee, Ross Snyder, heard these Les Paul and Mary Ford records and thought that the overdubbing method they used lead to a decline in sound quality, and so he aimed at developing a tape machine that had multiple tape heads that were in sync with each other; this was a complicated project with a lot of technological constraints - getting different tracks to line up with each other and making sure that the tape was at the right speed etc was something of a problem. But in 1956, the Ampex Sel-Sync was put on the market, an eight-track recording device. 
The Sel-Sync itself was generally seen as impractical for recording studios (it was enormous and heavy and glitchy), but other multi-track recording devices came into vogue in the late 1950s. \n\nIn England, someone at EMI who had seen the Magnetophons at Bad Neuheim had also had the same idea as Mullin, and they had unveiled the EMI BTR1 in 1947, a single track tape machine. After some improvements to the design (the BTR2 in 1952), and due to the coming demand for stereo recording, EMI put together a BTR3 in 1956 which had two tracks; however, these were not used for multi-track recording, per se, until just before the arrival of the Beatles in 1962, when a modified BTR3 at Abbey Road dubbed the 'Twin-Track' was used in order to do rudimentary multi-tracking. However, in late 1963 the Beatles began using a Telefunken M1 four-track recording machine made in Germany (which had been on the market since 1957, but which EMI experimented with extensively before allowing its use (Telefunken had also built the Magnetophons in WWII). In 1965 the Beatles started using a Studer J37 four-tape track recorder which was more suited to studio multi-tracking experimentation, and (a modified version of which) was used on *Revolver* and *Sgt Peppers* - the albums where multi-track recording most obviously became an artform in itself, with musicians creating sounds using the studio and the multi-track recorder as an instrument in itself.\n\nEdit: To deal with your second question - for a sound engineer it would have depended on what kind of music they were recording. Firstly, in the mid-to-late 1950s, Frank Sinatra was still recording basically on a single microphone, with the orchestra simply softer than him in the background. In contrast, in more or less the same time period, there was a maze of cables on the floor of the Motown studio at Hitsville USA (thus why the studio was dubbed the 'snakepit') indicating that there were leads and microphones going to different instruments set at different volumes - recording desks that mixed the volume and frequency of different instruments with each other already existed and were more common in pop rather than in orchestral contexts (especially because the levels of rock instruments could vary so much, and needed further control). Basically, once multi-track recording became a thing, it enabled engineers to record as pristine and perfect a sound of an individual instrument as they could, and multi-track recording multiplied the time it took for a band to record a song in the studio, with each part being looked over in detail to try and perfect it; often, when a band was playing together before the era of multi-tracking (or in the early years of multi-tracking), the band recording was basically it, perhaps with overdubs. But as things progressed, musicians would first lay down a basic track and then elaborate on it later, or record instruments one by one. So this meant that the original basic track didn't have to be as perfectly balanced and mixed as it had to be in the pre-multi-track days, as it was likely going to be elaborated on, and that musicians and producers could think more carefully about how the music sounded on record in context, and arrange the music accordingly. 
Geoff Emerick, who was the engineer on much of the Beatles' records, speaks of his disillusionment on working on the Beatles' stuff in the later era when multi-tracking was a thing in his book *Here, There And Everywhere* - he thought that the broad array of possible options and the pursuit of perfection led to a sort of paralysis, and made working in the studio with the Beatles much more difficult.\n\nSources: \n\n* *Recording The Beatles* by Brian Kehew and Kevin Ryan\n* *Perfecting Sound Forever: The Story Of Recorded Music* by Greg Milner \n* *Here There And Everywhere* by Geoff Emerick", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "255133", "title": "Multitrack recording", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 674, "text": "Multitrack recording (MTR)—also known as multitracking, double tracking, or tracking—is a method of sound recording developed in 1955 that allows for the separate recording of multiple sound sources or of sound sources recorded at different times to create a cohesive whole. Multitracking became possible in the mid-1950s when the idea of simultaneously recording different audio channels to separate discrete \"tracks\" on the same reel-to-reel tape was developed. A \"track\" was simply a different channel recorded to its own discrete area on the tape whereby their relative sequence of recorded events would be preserved, and playback would be simultaneous or synchronized.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9703269", "title": "History of multitrack recording", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 1324, "text": "Multitrack recording of sound is the process in which sound and other electro-acoustic signals are captured on a recording medium such as magnetic tape, which is divided into two or more audio tracks that run parallel with each other. Because they are carried on the same medium, the tracks stay in perfect synchronisation, while allowing multiple sound sources to be recorded asynchronously. The first system for creating stereophonic sound (using telephone technology) was demonstrated by Clément Ader in Paris in 1881. The pallophotophone, invented by Charles A. Hoxie and first demonstrated in 1922, recorded optically on 35 mm film, and some versions used a format of as many as twelve tracks in parallel on each strip. The tracks were recorded one at a time in separate passes and were not intended for later mixdown or stereophony; as with later half-track and quarter-track monophonic tape recording, the multiple tracks simply multiplied the maximum recording time possible, greatly reducing cost and bulk. British EMI engineer Alan Blumlein patented systems for recording stereophonic sound and surround sound on disc and film in 1933. The history of modern multitrack audio recording using magnetic tape began in 1943 with the invention of stereo tape recording, which divided the recording head into two tracks.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4556078", "title": "History of sound recording", "section": "Section::::Magnetic recording.:Multitrack recording.\n", "start_paragraph_id": 64, "start_character": 0, "end_paragraph_id": 64, "end_character": 980, "text": "The next major development in magnetic tape was multitrack recording, in which the tape is divided into multiple tracks parallel with each other. 
Because they are carried on the same medium, the tracks stay in perfect synchronization. The first development in multitracking was stereo sound, which divided the recording head into two tracks. First developed by German audio engineers ca. 1943, 2-track recording was rapidly adopted for modern music in the 1950s because it enabled signals from two or more separate microphones to be recorded simultaneously, enabling stereophonic recordings to be made and edited conveniently. (The first stereo recordings, on disks, had been made in the 1930s, but were never issued commercially.) Stereo (either true, two-microphone stereo or multimixed) quickly became the norm for commercial classical recordings and radio broadcasts, although many pop music and jazz recordings continued to be issued in monophonic sound until the mid-1960s.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9703269", "title": "History of multitrack recording", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 1609, "text": "The next major development in multitrack recording came in the mid-1950s, when the Ampex corporation devised the concept of 8-track recording, utilizing its \"Sel-Sync\" (Selective Synchronous) recording system, and sold the first such machine to musician Les Paul. However, for the next 35 years, multitrack audio recording technology was largely confined to specialist radio, TV and music recording studios, primarily because multitrack tape machines were both very large and very expensive - the first Ampex 8-track recorder, installed in Les Paul's home studio in 1957, cost a princely US$10,000 - roughly three times the US average yearly income in 1957, and equivalent to around $90,000 in 2016. However, this situation changed radically in 1979 with the introduction of the TASCAM Portastudio, which used the inexpensive compact audio cassette as the recording medium, making good-quality 4-track (and later 8-track) multitrack recording available to the average consumer for the first time. Ironically, by the time the Portastudio had become popular, electronics companies were already introducing digital audio recording systems, and by the 1990s, computer-based digital multitrack recording systems such as Pro Tools and Cubase were being adopted by the recording industry, and soon became standard. By the early 2000s, rapid advances in home computing and digital audio software were making digital multitrack audio recording systems available to the average consumer, and high-quality digital multitrack recording systems like GarageBand were being included as a standard feature on home computers.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9703269", "title": "History of multitrack recording", "section": "Section::::Impact on popular music.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 542, "text": "The artistic potential of the multitrack recorder came to the attention of the public in the 1960s, when artists such as the Beatles and the Beach Boys began to multitrack extensively, and from then on virtually all popular music was recorded in this manner. The technology developed very rapidly during these years. 
At the start of their careers, the Beatles and Beach Boys each recorded live to mono, two-track (the Beatles), or three-track (the Beach Boys); by 1965 they used multitracking to create pop music of unprecedented complexity.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "255133", "title": "Multitrack recording", "section": "Section::::Process.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 656, "text": "With the introduction of SMPTE timecode in the early 1970s, engineers began to use computers to perfectly synchronize separate audio and video playback, or multiple audio tape machines. In this system, one track of each machine carried the timecode signal, while the remaining tracks were available for sound recording. Some large studios were able to link multiple 24-track machines together. An extreme example of this occurred in 1982, when the rock group Toto recorded parts of \"Toto IV\" on three synchronized 24-track machines. This setup theoretically provided for up to 69 audio tracks, which is far more than necessary for most recording projects.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9703269", "title": "History of multitrack recording", "section": "Section::::Overview.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 662, "text": "The next evolution was 4-track recording, which was the studio standard through the mid 1960s. Many of the most famous recordings by The Beatles and The Rolling Stones were recorded on 4-track, and the engineers at London's Abbey Road Studios became particularly adept at the technique called \"reduction mixes\" in the UK and \"bouncing down\" in the United States, in which multiple tracks were recorded onto one 4-track machine and then mixed together and transferred (bounced down) to one track of a second 4-track machine. In this way, it was possible to record literally dozens of separate tracks and combine them into finished recordings of great complexity.\n", "bleu_score": null, "meta": null } ] } ]
null
9gftt7
what would happen if there was a second big bang somewhere else outside of our own expanding universe?
[ { "answer": "If it was outside of our universe it would also be beyond everything that we are capable of observing or understanding, so we would have no way of knowing that it even happened. ", "provenance": null }, { "answer": " > Would the space from each just become one and the furthest stars\n\nAs far as we know there is no such thing. Our universe appears to be infinite in extent, there is no boundary or edge to overlap. Similarly the idea of \"space outside our universe\" is incoherent.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "48903", "title": "Nucleosynthesis", "section": "Section::::History of nucleosynthesis theory.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 1239, "text": "The Big Bang itself had been proposed in 1931, long before this period, by Georges Lemaître, a Belgian physicist, who suggested that the evident expansion of the Universe in time required that the Universe, if contracted backwards in time, would continue to do so until it could contract no further. This would bring all the mass of the Universe to a single point, a \"primeval atom\", to a state before which time and space did not exist. Hoyle is credited with coining the term \"Big Bang\" during a 1949 BBC radio broadcast, saying that Lemaître's theory was \"based on the hypothesis that all the matter in the universe was created in one big bang at a particular time in the remote past.\" It is popularly reported that Hoyle intended this to be pejorative, but Hoyle explicitly denied this and said it was just a striking image meant to highlight the difference between the two models. Lemaître's model was needed to explain the existence of deuterium and nuclides between helium and carbon, as well as the fundamentally high amount of helium present, not only in stars but also in interstellar space. As it happened, both Lemaître and Hoyle's models of nucleosynthesis would be needed to explain the elemental abundances in the universe.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "573880", "title": "Fine-tuned Universe", "section": "Section::::Alien design.\n", "start_paragraph_id": 42, "start_character": 0, "end_paragraph_id": 42, "end_character": 248, "text": "The Designer Universe theory of John Gribbin suggests that the universe could have been made deliberately by an advanced civilization in another part of the Multiverse, and that this civilization may have been responsible for causing the Big Bang.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4116", "title": "Big Bang", "section": "Section::::Features of the model.:Expansion of space.\n", "start_paragraph_id": 34, "start_character": 0, "end_paragraph_id": 34, "end_character": 563, "text": "The Big Bang is not an explosion of matter moving outward to fill an empty universe. Instead, space itself expands with time everywhere and increases the physical distance between two comoving points. In other words, the Big Bang is not an explosion \"in space\", but rather an expansion \"of space\". 
Because the FLRW metric assumes a uniform distribution of mass and energy, it applies to our universe only on large scales—local concentrations of matter such as our galaxy are gravitationally bound and as such do not experience the large-scale expansion of space.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "206122", "title": "Big Crunch", "section": "Section::::Overview.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 282, "text": "A more specific theory called \"Big Bounce\" proposes that the universe could collapse to the state where it began and then initiate another Big Bang, so in this way the universe would last forever, but would pass through phases of expansion (Big Bang) and contraction (Big Crunch). \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1037059", "title": "Plane (esotericism)", "section": "Section::::Emanation vs. Big Bang.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 312, "text": "Most cosmologists today believe that the universe expanded from a singularity approximately 13.8 billion years ago in a 'smeared-out singularity' called the Big Bang, meaning that space itself came into being at the moment of the big bang and has expanded ever since, creating and carrying the galaxies with it.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "192904", "title": "Ultimate fate of the universe", "section": "Section::::Theories about the end of the universe.:Big Crunch.\n", "start_paragraph_id": 40, "start_character": 0, "end_paragraph_id": 40, "end_character": 870, "text": "This scenario allows the Big Bang to occur immediately after the Big Crunch of a preceding universe. If this happens repeatedly, it creates a cyclic model, which is also known as an oscillatory universe. The universe could then consist of an infinite sequence of finite universes, with each finite universe ending with a Big Crunch that is also the Big Bang of the next universe. Theoretically, the cyclic universe could not be reconciled with the second law of thermodynamics: entropy would build up from oscillation to oscillation and cause heat death. Current evidence also indicates the universe is not closed. This has caused cosmologists to abandon the oscillating universe model. A somewhat similar idea is embraced by the cyclic model, but this idea evades heat death because of an expansion of the branes that dilutes entropy accumulated in the previous cycle.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1864889", "title": "Cosmology", "section": "Section::::Disciplines.:Physical cosmology.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 575, "text": "Subsequent modelling of the universe explored the possibility that the cosmological constant, introduced by Einstein in his 1917 paper, may result in an expanding universe, depending on its value. Thus the Big Bang model was proposed by the Belgian priest Georges Lemaître in 1927 which was subsequently corroborated by Edwin Hubble's discovery of the redshift in 1929 and later by the discovery of the cosmic microwave background radiation by Arno Penzias and Robert Woodrow Wilson in 1964. These findings were a first step to rule out some of many alternative cosmologies.\n", "bleu_score": null, "meta": null } ] } ]
null
1vc9m0
the four tigers of asia
[ { "answer": "You could post this question in /r/AskSocialScience if it hasn't been asked there already.", "provenance": null }, { "answer": "The view in the early 2000s, which summarized a lot of research in the 1990s, [was that](_URL_0_):\n\n > Asian growth, impressive as it was, could mostly be explained by such bread-and-butter economic forces as high savings rates, good education, and the movement of underemployed peasants into the modern sector. What they found was that once you took account of the growth in these measurable inputs, you could explain most, and in some cases all, of the growth in output. What Young and Lau found was, if you like, that Asian growth has so far been mainly a matter of perspiration rather than inspiration--of working harder, not smarter. These results were and are controversial--partly because many people don't want to believe them and are eager to accept contrary calculations--but their basic message has held up quite well under repeated challenges. \n\nFurther,\n\n > The other unwelcome implication of the perspiration theory was that the pace of Asia's growth was likely to slow. You can get a lot of economic growth by increasing labor force participation, giving everyone a basic education, and tripling the investment share of GDP, but these are one-time, unrepeatable changes. So the perspiration theory suggested that sooner or later Asia's growth would slow down--sooner in the case of the original Asian tigers like Singapore, which is already investing half its GDP; later in low-wage countries like China that still have vast reserves of underemployed rural labor to exploit. \n\nIn short: the four Tigers had high savings rates (which translated into high rates of investment in capital), underwent massive efforts to educate their populace, opened up to trade, and moved a lot of people out of low-output industries and into high-output industries. \n\nIt was fundamentally the same story as always. While at the frontier of growth (like in the US and EU), economic growth comes from innovation and technological progress, growth in poor areas can be effective achieved through re-allocating existing resources to more productive uses.\n\nSome modern re-examination of the East Asian miracle has identified more sources of productivity growth (working smarter), but the dominant consensus is that we can explain the Asian miracle through the standard fare of capital accumulation, openness to trade, education, and reallocation of resources to more productive ends. \n\n(This isn't quite ELI5, it's more explain like I'm a freshman econ student, but it's the answer I would have given to you in /r/asksocialscience. Let me know if any of this needs clarification.)", "provenance": null }, { "answer": "1) They realised the importance of people and their education.\n\n They realised they were small islands, that their strongest assets were their people and their development was pegged to the countries development. All 4 countries have always invested and promoted human and physical growth. The importance placed on education and skill development has always been high and gender equality in terms of receiving education.\n\n2) They stayed free from debt and kept their money(currency) strong as possible.\n \n They borrowed little money from outside, kept their exchange rate strong and mostly stable. They prevented exchange rate appreciation by having somewhat of a fixed currency. 
Due to them being strong trading hubs and heavily involved with exports, they managed to keep themselves away from having too big a budget deficit.\n\n3) They are key Trade hubs\n\n Singapore for instance is in the centre of shipping trade between India and China. It has always managed to encourage free trade and has signed numerous [FTA](_URL_0_)\"free trade agreements\" that allow it to be a choice for trade and business.\n\n4) They managed to keep their economies growing steadily \n\n The governments have worked to keep certain industries such as Finance, export etc strong and encourage & develop these. This has allowed them to grow at a steady and good pace and surpass other regions.\n\n\nSpecifically in Singapore's context, good governance (debatable I know), low crime and high influx of professional talent are more reasons as to why it has managed to succeed thus far. \n\nSource: I am from Singapore.\n\n\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "38198492", "title": "Silk Way Rally", "section": "Section::::Rally Logo.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 522, "text": "\"Widely known as one of the largest predators, tigers are ranged both in Russia and China, where this species is even considered sacred. Moreover, the form of Chinese character 王, which means “a king”, reminds us of stripe pattern on the tiger’s forehead. White tigers, in particular, adapt easily to any environment if there is enough free space, being as well the largest representatives of their breed. In Chinese mythology the white tiger symbolizes courage and strong spirit that protects one from external threats. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3907582", "title": "Danaus chrysippus", "section": "Section::::Geographic range.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 511, "text": "The plain tiger is found across the entirety of Africa, where the predominant subspecies is \"D. c. alcippus\". Its range extends across the majority of Asia throughout Indian subcontinent, as well as many south Pacific islands. The plain tiger is even present in parts of Australia. \"D. c. chrysippus\" is most common throughout Asia and in some select regions in Africa, while \"D. c. orientis\" is present in more tropical African regions as well as some African islands, including Madagascar and the Seychelles.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "543466", "title": "Siberian tiger", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 1102, "text": "The Siberian tiger (\"Panthera tigris tigris\") is a tiger population in the Russian Far East and Northeast China, and possibly North Korea. It once ranged throughout the Korean Peninsula, north China, Russian Far East, and eastern Mongolia. Today, this population inhabits mainly the Sikhote Alin mountain region in southwest Primorye Province in the Russian Far East. In 2005, there were 331–393 adult and subadult Siberian tigers in this region, with a breeding adult population of about 250 individuals. The population had been stable for more than a decade due to intensive conservation efforts, but partial surveys conducted after 2005 indicate that the Russian tiger population was declining. An initial census held in 2015 indicated that the Siberian tiger population had increased to 480–540 individuals in the Russian Far East, including 100 cubs. 
This was followed up by a more detailed census which revealed there was a total population of 562 wild Siberian tigers in Russia. As of 2014, about 35 individuals were estimated to range in the international border area between Russia and China.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "221151", "title": "Bengal tiger", "section": "Section::::Taxonomy.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 443, "text": "The validity of several tiger subspecies in continental Asia was questioned in 1999. Morphologically, tigers from different regions vary little, and gene flow between populations in those regions is considered to have been possible during the Pleistocene. Therefore, it was proposed to recognise only two subspecies as valid, namely \"P. t. tigris\" in mainland Asia, and \"P. t. sondaica\" in the Greater Sunda Islands and possibly in Sundaland.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30075", "title": "Tiger", "section": "Section::::Distribution and habitat.\n", "start_paragraph_id": 58, "start_character": 0, "end_paragraph_id": 58, "end_character": 311, "text": "In East Asia, the tiger inhabits Korean pine and temperate broadleaf and mixed forests in the Amur-Ussuri region of Primorsky Krai and Khabarovsk Krai in far eastern Siberia. Riparian forests are important habitats for both ungulates and tigers as they provide food and water, and serve as dispersal corridors.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30075", "title": "Tiger", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 1147, "text": "The tiger once ranged widely from the Eastern Anatolia Region in the west to the Amur River basin, and in the south from the foothills of the Himalayas to Bali in the Sunda islands. Since the early 20th century, tiger populations have lost at least 93% of their historic range and have been extirpated in Western and Central Asia, from the islands of Java and Bali, and in large areas of Southeast and South Asia and China. Today's tiger range is fragmented, stretching from Siberian temperate forests to subtropical and tropical forests on the Indian subcontinent and Sumatra. The tiger is listed as Endangered on the IUCN Red List since 1986. As of 2015, the global wild tiger population was estimated to number between 3,062 and 3,948 mature individuals, down from around 100,000 at the start of the 20th century, with most remaining populations occurring in small pockets isolated from each other. Major reasons for population decline include habitat destruction, habitat fragmentation and poaching. This, coupled with the fact that it lives in some of the more densely populated places on Earth, has caused significant conflicts with humans.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6098736", "title": "South China Tigers", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 244, "text": "For the naming of the team, South China tigers are considered as the most distinctive of all tiger subspecies. The population once numbered more than 4,000 in the wild, distributed from Hunan, Jiangxi in the north to as far south as Hong Kong.\n", "bleu_score": null, "meta": null } ] } ]
null
5ix0x2
why is there so much nudity in classical paintings?
[ { "answer": "The human form was what they were presenting. So, you paint/sculpt nudes. And modern prudish society is actually pretty recent, seasonal, and geographical. You won't find many equatorial societies wearing layers, fur and all of the added insulation. \n\nIt's not everything, but it's a start.", "provenance": null }, { "answer": "So with the spread of humanism ushered in by the beginning of the renaissance, people started to take a keen interest in humanity. They started thinking that people were incredible and the individual was exceptionally important. As a result, we start seeing changes in a number of artistic modes of expression. To answer your question though, painters and sculptors wanted to idealize the human form which took on a whole new significance in renaissance art. The natural human figure presented an expression of beauty, perfection, and humanity.\n\nTL;DR - Round about 1400 or so, people started thinking of humans as hot shit. As a result they celebrated the ideal human form - a nude figure.", "provenance": null }, { "answer": "In periods that were more conservative, it was a way to have eroticism in an acceptable form. When you look at some of the biblical allegories etc it is pretty obvious that they were more about sex than Jesus. \n\nAs long as the subject was a classical theme, nudity was acceptable. That is why Manet's [Luncheon in the Grass ](_URL_0_) was scandalous. If it had just featured the naked ladies no problem, but the fact that there were also clothed contemporary people in the scene made it shocking. ", "provenance": null }, { "answer": "The classical period I'm assuming you are referring to is that of the Renaissance. In Europe a few people experienced a return to the Romantic style of art. The Romantic style, popular in Rome, celebrated the human body in its ideal form. Hence, well-shaped nude bodies.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "8625424", "title": "Nudity in religion", "section": "Section::::Indian religions.:Hinduism.:Philosophical basis.:Material basis.\n", "start_paragraph_id": 38, "start_character": 0, "end_paragraph_id": 38, "end_character": 518, "text": "In comparison in the material aspect nudity is considered an art. This view is supported by Sri Aurobindo in his book \"The Renaissance in India\". He says about Hinduism in the book – \"Its spiritual extremism could not prevent it from fathoming through a long era the life of the senses and its enjoyments, and there too it sought the utmost richness of sensuous detail and the depths and intensities of sensual experience. Yet it is notable that this pursuit of the most opposite extremes never resulted in disorder…\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3085396", "title": "Portrait painting", "section": "Section::::History.:Renaissance.\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 536, "text": "Partly out of interest in the natural world and partly out of interest in the classical cultures of ancient Greece and Rome, portraits—both painted and sculpted—were given an important role in Renaissance society and valued as objects, and as depictions of earthly success and status. 
Painting in general reached a new level of balance, harmony, and insight, and the greatest artists (Leonardo, Michelangelo, and Raphael) were considered \"geniuses\", rising far above the tradesman status to valued servants of the court and the church.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14775345", "title": "Nude (art)", "section": "Section::::Issues.:Sexuality.\n", "start_paragraph_id": 50, "start_character": 0, "end_paragraph_id": 50, "end_character": 1071, "text": "Kenneth Clark noted that sexuality was part of the attraction to the nude as a subject of art, stating \"no nude, however abstract, should fail to arouse in the spectator some vestige of erotic feeling, even though it be only the faintest shadow—and if it does not do so it is bad art and false morals.\" According to Clark, the explicit temple sculptures of tenth-century India \"are great works of art because their eroticism is part of their whole philosophy.\" Great art can contain significant sexual content without being obscene. However sexually explicit works of fine art produced in Europe before the modern era, such as Gustav Courbet's \"L'Origine du monde\", were not intended for public display. The judgement of whether a particular work is artistic or pornographic is ultimately subjective and has changed through history and from one culture to another. Some individuals judge any public display of the unclothed body to be unacceptable, while others may find artistic merit in explicitly sexual images. Public reviews of art may or may not address the issue.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15455036", "title": "Battle of the Nudes (engraving)", "section": "Section::::Context and reception.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 330, "text": "On the other hand, it has been suggested that Leonardo da Vinci may have had Pollaiuolo partly in mind when he wrote that artists should not:make their nudes wooden and without grace, so that they seem to look like a sack of nuts rather than the surface of a human being, or indeed a bundle of radishes rather than muscular nudes\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "334825", "title": "Ancient art", "section": "Section::::Middle East, Mediterranean, and India.:Rome.\n", "start_paragraph_id": 43, "start_character": 0, "end_paragraph_id": 43, "end_character": 1475, "text": "In Greece and Rome, wall painting was not considered as high art. The most prestigious form of art besides sculpture was panel painting, i.e. tempera or encaustic painting on wooden panels. Unfortunately, since wood is a perishable material, only a very few examples of such paintings have survived, namely the Severan Tondo from circa 200 AD, a very routine official portrait from some provincial government office, and the well-known Fayum mummy portraits, all from Roman Egypt, and almost certainly not of the highest contemporary quality. The portraits were attached to burial mummies at the face, from which almost all have now been detached. They usually depict a single person, showing the head, or head and upper chest, viewed frontally. The background is always monochrome, sometimes with decorative elements. In terms of artistic tradition, the images clearly derive more from Greco-Roman traditions than Egyptian ones. They are remarkably realistic, though variable in artistic quality, and may indicate the similar art which was widespread elsewhere but did not survive. 
A few portraits painted on glass and medals from the later empire have survived, as have coin portraits, some of which are considered very realistic as well. Pliny the Younger complained of the declining state of Roman portrait art, \"The painting of portraits which used to transmit through the ages the accurate likenesses of people, has entirely gone out…Indolence has destroyed the arts.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5502117", "title": "Poor Man's Bible", "section": "Section::::Typologies.:Mural.\n", "start_paragraph_id": 99, "start_character": 0, "end_paragraph_id": 99, "end_character": 750, "text": "Murals were a common form of wall decoration in ancient Rome. The earliest Christian mural paintings come from the catacombs of Rome. They include many representations of Christ as \"the Good Shepherd\", generally as a standardised image of a young, beardless man with a sheep on his shoulders. Other popular subjects include the \"Madonna and Child\", Jonah being thrown into the sea, the three young men in the furnace and the \"Last Supper\". In one remarkable mural, in the Catacomb of the Aurelii, is the earliest image of Jesus, as he came to be commonly depicted, as a bearded, Jewish man in long robes. In this particular image he is preaching, not to a group of people but to a flock of sheep and goats, representing the faithful and the wayward.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1243208", "title": "Toplessness", "section": "Section::::In popular culture and the arts.:The arts.\n", "start_paragraph_id": 81, "start_character": 0, "end_paragraph_id": 81, "end_character": 231, "text": "As a result of the Renaissance, in many European societies artists were strongly influenced by classical Greek styles and culture. As a result, images of nude and semi-nude subjects in many forms proliferated in art and sculpture.\n", "bleu_score": null, "meta": null } ] } ]
null
62xwkn
why do people pass out/feel like they're about to, if they suddenly go from very warm water to very cold water or vice versa?
[ { "answer": "It's because of the mammalian diving reflex. We experience bradycardia (slow heart rate) and vasoconstriction. Essentially all of our oxygenated blood is shunted to vital areas (Brain, heart, and lungs)only. This conserves oxygen allowing us to endure the cold water for a longer period of time for the best possible chance at survival. Unfortunately in non life threatening situations it can make is feel like we're going to pass out. This \"hack\" is actually used as a first line treatment by physicians when a patient has a rapid heart rate.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "3543130", "title": "Underwater diving", "section": "Section::::Physiological constraints on diving.:Exposure.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 831, "text": "Cold shock response is the physiological response of organisms to sudden cold, especially cold water, and is a common cause of death from immersion in very cold water, such as by falling through thin ice. The immediate shock of the cold causes involuntary inhalation, which if underwater can result in drowning. The cold water can also cause heart attack due to vasoconstriction; the heart has to work harder to pump the same volume of blood throughout the body, and for people with heart disease, this additional workload can cause the heart to go into arrest. A person who survives the initial minute after falling into cold water can survive for at least thirty minutes provided they do not drown. The ability to stay afloat declines substantially after about ten minutes as the chilled muscles lose strength and co-ordination.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "54637386", "title": "Physiology of underwater diving", "section": "Section::::Exposure.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 887, "text": "Cold shock response is the physiological response of organisms to sudden cold, especially cold water, and is a common cause of death from immersion in very cold water, such as by falling through thin ice. The immediate shock of the cold causes involuntary inhalation, which if underwater can result in drowning. The cold water can also cause heart attack due to vasoconstriction; the heart has to work harder to pump the same volume of blood throughout the body, and for people with heart disease, this additional workload can cause the heart to go into arrest. A person who survives the initial minute of trauma after falling into icy water can survive for at least thirty minutes provided they don't drown. However, the ability to perform useful work like staying afloat declines substantially after ten minutes as the body protectively cuts off blood flow to \"non-essential\" muscles.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17484978", "title": "Cold shock response", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 688, "text": "In humans, cold shock response is perhaps the most common cause of death from immersion in very cold water, such as by falling through thin ice. The immediate shock of the cold causes involuntary inhalation, which if underwater can result in drowning. The cold water can also cause heart attack due to vasoconstriction; the heart has to work harder to pump the same volume of blood throughout the body. For people with existing cardiovascular disease, the additional workload can result in cardiac arrest. 
Inhalation of water (and thus drowning) may result from hyperventilation. Some people are much better able to survive swimming in very cold water due to body or mental conditioning.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4886790", "title": "Winter swimming", "section": "Section::::Health risks.\n", "start_paragraph_id": 37, "start_character": 0, "end_paragraph_id": 37, "end_character": 587, "text": "Winter swimming can be dangerous to people who are not used to swimming in very cold water. After submersion in cold water the cold shock response will occur, causing an uncontrollable gasp for air. This is followed by hyperventilation, a longer period of more rapid breathing. The gasp for air can cause a person to ingest water, which leads to drowning. As blood in the limbs is cooled and returns to the heart, this can cause fibrillation and consequently cardiac arrest. The cold shock response and cardiac arrest are the most common causes of death related to cold water immersion.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "446596", "title": "Diving reflex", "section": "Section::::Physiological response.:Thermal balance responses.\n", "start_paragraph_id": 32, "start_character": 0, "end_paragraph_id": 32, "end_character": 1048, "text": "Cold shock response is the initial reaction to immersion in cold water. It generally starts with a gasp reflex in response to sudden and rapid chilling of the skin, and if the head is immersed there is a risk of inhaling water and drowning. This is followed by a reflexive hyperventilation, with a risk of panic and fainting if not controlled. Cold induced vasoconstriction causes the heart to work harder and the additional work can overload a weak heart, with a possible consequence of cardiac arrest. Cold incapacitation is the next stage, and generally occurs within 5 to 15 minutes in cold water. Blood flow to the extremities is reduced by vasoconstriction as the body attempts to reduce heat loss from the vital organs of the core. This accelerates the cooling of the periphery, and reduces the functionality of the muscles and nerves. The duration of exposure to produce hypothermia varies with health, body mass and water temperature. It generally takes in the order of 30 minutes for an unprotected person in water to become hypothermic.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17484978", "title": "Cold shock response", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 419, "text": "Hypothermia from exposure to cold water is not as sudden as is often believed. A person who survives the initial minute of trauma (after falling into icy water), can survive for at least thirty minutes provided they don't drown. However, the ability to perform useful work (for example to save oneself) declines substantially after ten minutes (as the body protectively cuts off blood flow to \"non-essential\" muscles).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "146879", "title": "Hypothermia", "section": "Section::::Causes.:Water immersion.\n", "start_paragraph_id": 40, "start_character": 0, "end_paragraph_id": 40, "end_character": 749, "text": "The actual cause of death in cold water is usually the bodily reactions to heat loss and to freezing water, rather than hypothermia (loss of core temperature) itself. 
For example, plunged into freezing seas, around 20% of victims die within two minutes from cold shock (uncontrolled rapid breathing, and gasping, causing water inhalation, massive increase in blood pressure and cardiac strain leading to cardiac arrest, and panic); another 50% die within 15–30 minutes from cold incapacitation (inability to use or control limbs and hands for swimming or gripping, as the body \"protectively\" shuts down the peripheral muscles of the limbs to protect its core). Exhaustion and unconsciousness cause drowning, claiming the rest within a similar time.\n", "bleu_score": null, "meta": null } ] } ]
null
a4crh2
Were people in the old west (1865-1890) able to listen to classical music? Would the average joe be familiar with composers such as Mozart, Beethoven, Chopin, etc.?
[ { "answer": "Depends on who that \"average joe\" was. People played much more music for their own pleasure than we do today; small towns did have \"opera houses\" -- which while they didn't often play Verdi, did have aspirations to European culture. So, for example, we have records that in 1877, an opera by Balfe, *The Bohemian Girl,* was performed at the Belvidere Theatre in Central City, Colorado. This today little known light opera seems to have been a favorite in the old West opera houses, along with *The Mikado.*\n\nConsider the piano player-- he might be self taught, or he might be an Eastern swell who'd come out west, doing his Teddy Roosevelt. A remarkably diverse range of men and women sought adventure and fortune on the American frontier-- the French nobleman, the Marquis de Morès, a graduate of St Cyr (the French military academy-- he was a classmate of Petain)- came to the Dakotas to make a bundle in cattle ranching. He and his wealthy American wife would have been familiar with the \"greatest hits\" of contemporary European culture; you can say \"he's not an 'average Joe\", but . . .\n\nWhat I'd say is: revisit your idea of who \"average Joe\" is in the \"old West\". He might be a former Confederate soldier. He might be a former slave. He might be a Chinese railroad worker. He might be British with a fancy title. He might be a Methodist who'd know all the verses of popular hymns but have no idea of Mozart. He might be a Mexican or South American working on ranches (essentially all of the ranching technology and terminology that we think of as \"western\" is Spanish in origin, that \"buckaroo\" is actually a *vaquero)* and playing guitar for himself and his friends.\n\nSome of these people couldn't read . . . some of them could play Chopin. It's a very diverse crew, much more so than legend would have it.\n\n & #x200B;\n\nsources:\n\n[Marquis De Mores: Dakota Capitalist, French Nationalist](_URL_4_)\n\n[Chateau de Mores State Historic Site](_URL_3_)\n\nPrairie Fever: [British Aristocrats in the American West, 1830–1890](_URL_2_)\n\n[Kwangtung to Big Sky: The Chinese in Montana, 1864-1900](_URL_5_)\n\n[Colorado's Historic Opera Houses](_URL_1_)\n\n[Spanish Influence in the United States: Economic Aspects](_URL_6_)\n\n[The History of the Vaquero](_URL_0_)\n\n & #x200B;\n\n & #x200B;", "provenance": null }, { "answer": "Thanks this helps a lot!", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "479513", "title": "Old-time music", "section": "Section::::Regional styles.:Native American old-time music.\n", "start_paragraph_id": 32, "start_character": 0, "end_paragraph_id": 32, "end_character": 334, "text": "Old-time music has been adopted by a few Native American musicians; Walker Calhoun (1918-2012) of Big Cove, in the Qualla Boundary (home to the Eastern Band of Cherokee Indians, just outside the Great Smoky Mountains National Park in western North Carolina) played three-finger-style banjo, to which he sang in the Cherokee language.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "341033", "title": "Eugene Istomin", "section": "Section::::Career.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 324, "text": "In the 1980s and 1990s, he toured 30 American cities—largely in the Midwest—in a twelve-ton truck with his own Steinway pianos and piano tuner. It was the expression of a lifelong conviction that classical music belonged to the ordinary American. 
In this same vein, he was an ardent fan of the Detroit Tigers baseball team.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13723241", "title": "Barzillai Lew", "section": "Section::::After the American Revolutionary War.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 1066, "text": "Barzillai, Dinah, and several of their sons and daughters sang and played wind and stringed instruments all over New England. They were noted throughout the 19th and 20th centuries as well-educated, skilled, and talented musicians. It was said \"no family in Middlesex County from Lowell to Cambridge could produce so much good music.\" They formed a complete band in their family and were employed to play at assemblies in Portland, Maine, Boston, Massachusetts, other large cities and towns, as well as commencement exercises at several New England colleges. They kept an elegant coach and fine span of horses and came on the Sabbath to the Pawtucket Society Church in as much style as any family in the town of Dracut. Dinah Bowman Lew may have been the first African-American woman pianist in American history. Barzillai Lew died in Dracut on January 18, 1822, and was buried in Clay Pit Cemetery. Years later, Dinah Bowman Lew petitioned and received from the Commonwealth of Massachusetts a pension for her husband's military service in the American Revolution.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "32163151", "title": "Bobby Kimmel", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 292, "text": "He became aware of the folk music greats such as Doc Watson, Lightnin Hopkins, Merle Travis, Mississippi John Hurt, as well as contemporaries like Dick Rosmini, Steve Mann and Ry Cooder. Phonorecords from his father's music store at this time contributed invaluably to his musical education.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9738793", "title": "Eva Heinitz", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 373, "text": "While at the University of Washington, Heinitz was a excellent (and patient) teacher who introduced young Americans to the joys of early music and the viola da gamba. In 1964 she took on a group of students from Dr. Wallace Goleeke’s Ingraham High School Madrigal Singers, teaching them to play in a viol ensemble of soprano, alto, tenor, and bass Renaissance instruments.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "171080", "title": "Music of the United States", "section": "Section::::Education and scholarship.\n", "start_paragraph_id": 99, "start_character": 0, "end_paragraph_id": 99, "end_character": 2163, "text": "Early 20th scholarly analysis of American music tended to interpret European-derived classical traditions as the most worthy of study, with the folk, religious, and traditional musics of the common people denigrated as low-class and of little artistic or social worth. American music history was compared to the much longer historical record of European nations, and was found wanting, leading writers like the composer Arthur Farwell to ponder what sorts of musical traditions might arise from American culture, in his 1915 \"Music in America\". In 1930, John Tasker Howard's \"Our American Music\" became a standard analysis, focusing on largely on concert music composed in the United States. 
Since the analysis of musicologist Charles Seeger in the mid-20th century, American music history has often been described as intimately related to perceptions of race and ancestry. Under this view, the diverse racial and ethnic background of the United States has both promoted a sense of musical separation between the races, while still fostering constant acculturation, as elements of European, African, and indigenous musics have shifted between fields. Gilbert Chase's \"America's Music, from the Pilgrims to the Present\", was the first major work to examine the music of the entire United States, and recognize folk traditions as more culturally significant than music for the concert hall. Chase's analysis of a diverse American musical identity has remained the dominant view among the academic establishment. Until the 1960s and 1970s, however, most musical scholars in the United States continued to study European music, limiting themselves only to certain fields of American music, especially European-derived classical and operatic styles, and sometimes African American jazz. More modern musicologists and ethnomusicologists have studied subjects ranging from the national musical identity to the individual styles and techniques of specific communities in a particular time of American history. Prominent recent studies of American music include Charles Hamm's \"Music in the New World\" from 1983, and Richard Crawford's \"America's Musical Life\" from 2001.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6442424", "title": "Ivan Davis", "section": "Section::::References.\n", "start_paragraph_id": 33, "start_character": 0, "end_paragraph_id": 33, "end_character": 320, "text": "BULLET::::- \"The Penguin Dictionary of Musical Performers. A biographical guide to significant interpreters of classical music – singers, solo instrumentalists, conductors, orchestras and string quartets – ranging from the seventeenth century to the present day,\" by Arthur David Jacobs (1922–1996) London: Viking, 1990\n", "bleu_score": null, "meta": null } ] } ]
null
411ick
with women's equality in military combat roles being normalized, why is it still absurd that women play with men in the nfl and other major league sports?
[ { "answer": "Because physically a woman in the NFL would not be able to stop most of the males.\n\nNow, if you have some sort of 6'4 275 she-hulk that would work.", "provenance": null }, { "answer": "Women have the *opportunity* to serve in combat roles if they are physically qualified. Although the military's physical fitness standards are tough, you don't have to be a beefcake--most soldiers wouldn't make very good football players. Thus few women attain the physical characteristics for play at the highest levels of certain sports.\n\nThe fact that there is usually segregation at the lower levels of sports doesn't help, either--it takes years of training to make it to the major leagues, and women typically play either on women's teams or not at all. So there is not much of a breeding ground to propel those women who *could* rival the major male athletes in those sports to actually do so.", "provenance": null }, { "answer": "Women are currently allowed to play in the NFL. There just hasn't been a woman good enough at football to qualify for a team yet.\n\n > I checked with league spokesman Greg Aiello, who said, \"The NFL has no male-only rule. All human beings are eligible, as long as they are three years out of high school and have a usable football skill set.\" Prep and college football have experienced huge controversies about whether girls and women can play. There's never going to be huge controversy in the NFL, because the decision is already made -- women are welcome.\n\n_URL_0_", "provenance": null }, { "answer": "Perhaps your believing propaganda a little too much. In non contact sports it would be devastating. Contact sports would be fatal. _URL_0_", "provenance": null }, { "answer": "The best way to get women into pro sports would be to have boys and girls play mixed leagues as kids. The better girls would develop more against tougher competition.\n\nIf your goal was get a girl in the NBA, I feel mixed youth leagues would be a great method to improve the talent of top tier girls (would hurt middle and low tier girls)", "provenance": null }, { "answer": "There are still lots of people that think it's absurd to let women into the infantry (myself included). I spent 6 years in the Army and met plenty of females that were great pilots or mechanics. I don't think I met a single female that could reasonably carry the weight that infantry troops do every day. Even someone as undeniably fit and badass as Rhonda Rousey would have trouble carrying an 100lb load while on patrol through Afghanistan.\n\n_URL_0_", "provenance": null }, { "answer": "I think the NFL is a bad example. \n\nBiologically, women are different from men. This isn't a dispute. It's accepted as fact. \n\nThere are certainly sports that could be integrated, but football isn't one of them. It's too physical. If you take a fit man and a fit woman, the man will always be stronger. Maybe you could get away with a female kicker, but that's about it. \n\nNow, if you look at a sport like bowling, it could be integrated across the board. Women are just as capable of perfecting their bowling skills. An elite male bowler would probably bowl faster than an elite female bowler, but since that's not taken into account, it shouldn't matter. \n\nSimilarly, a sport like baseball might one day be integrated. There is less and less contact between players every year thanks to new rules. If a woman could perfect the skill set required to bat and throw, she should be able to compete. 
I think you'd see more men than woman, but an athletic woman *could* compete in baseball. Eventually. \n\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "7239865", "title": "Women in combat", "section": "Section::::Issues.:Social concerns.\n", "start_paragraph_id": 72, "start_character": 0, "end_paragraph_id": 72, "end_character": 312, "text": "Finally, there is the argument that by not incorporating women into combat, the American government is failing to tap into another source of soldiers for military combat operations. This argument claims that the government is creating a military that treats women as second-class citizens and not equals of men.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "49797635", "title": "Honorary male", "section": "Section::::1900s to present.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 657, "text": "Women in the military face a similar problem. Recent wars in Iraq and Afghanistan have allowed women combat roles. However, in order for women in the military to be accepted and considered successful, they feel they must become \"one of the guys.\" Otherwise they face sexual and gender based ridicule that, in some cases, led to women ending their military careers. Feminist theorist Cynthia Enloe argues that the institution of the military is not comparable to those of education or business because of its inherent violent and hyper-masculine characteristics. She states that this environment is so harmful for women that they can never fully assimilate.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34631034", "title": "Combat Exclusion Policy", "section": "Section::::History.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 447, "text": "Women serving in the U.S. military in the past have often seen combat despite the Combat Exclusion Policy. Due to a shortage of troops, women were temporarily attached to direct combat units slipping in through a bureaucratic loophole. Although they were not supposed to be in positions that engaged in direct combat, and were ineligible for combat pay, thousands of women have engaged the enemy directly in Operations Iraqi and Enduring Freedom.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4824659", "title": "Rostker v. Goldberg", "section": "Section::::Case arguments.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 1322, "text": "The Army and Marine Corps precluded the use of women in combat as a matter of established policy, and both the Navy and the Air Force restricted women's participation in combat. Even the president, who had originally suggested that women be included, expressed his intent to continue the current military policy excluding women from combat. Since the purpose of registration was to prepare for a draft of combat troops, and since women are excluded from combat, Congress concluded that they would not be needed in the event of a draft, and therefore decided funds should not be used to register them. As one Senator said, “It has been suggested that all women be registered, but only a handful actually be inducted in an emergency. 
The Committee finds this a confused and ultimately unsatisfactory solution.\" As the Senate Committee recognized a year before, \"training would be needlessly burdened by women recruits who could not be used in combat.\" All in all, the proponents of the current MSSA advocated not using government funds to register people who were excluded from the job anyway. The main point of those who favored the registration of females was that females were in favor of it because of gender equality principles; women, as full citizens, ought to have the same civic duties and responsibilities as men.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "35101121", "title": "Unit cohesion in the United States military", "section": "Section::::Women in combat.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 558, "text": "Brian Mitchell, in his article \"Women Make Poor Soldiers\" (excerpted from his 1989 book \"\"Weak Link: The Feminization of the American Military\"\"), expressed concern that placing women in combat lowers unit cohesion, either due to sexual relationships taking priority over group loyalty, or because men would feel obliged to be more protective of women than other men. Mitchell's view was harshly criticized in a New York Times review, which stated the book was \"spoiled by intemperate allegations and a supercilious tone\" and lacked sourcing for statements.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30864687", "title": "Feminisation of the workplace", "section": "Section::::Categories of feminization.:Feminization of sports.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 353, "text": "In the United States, women are seen as 'ill-equipped' to participate in sports, and their involvement was viewed as unfeminine and undesirable. The reasons why women experience less academic advantage from sports than men do focus on the clash between expectations for women, athletes, and the stigma for female athletes who are seen to be unfeminine.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "265901", "title": "Women's sports", "section": "Section::::1960s–2010s.\n", "start_paragraph_id": 82, "start_character": 0, "end_paragraph_id": 82, "end_character": 288, "text": "As of 2013, the only sports that men, but not women play professionally in the United States are football, baseball, and Ultimate Frisbee. Although basketball, soccer and hockey have female sports leagues, they are far behind in terms of exposure and funding compared to the men's teams.\n", "bleu_score": null, "meta": null } ] } ]
null
33sfcm
what is the body trying to accomplish when we dry heave?
[ { "answer": "Your body is attempting to vomit but not succeeding. That's pretty much it. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2281037", "title": "Emaciation", "section": "Section::::Treatment.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 1360, "text": "Other than treating, curing or remedying the underlying cause of emaciation, it as a symptom is treated by regaining the weight and restoring the tissues. This is done through renourishment, or reintroducing nourishing liquids and foods to the body while increasing the intake of food energy. The process, usually begun in an individual deprived of food for a period of time, must be done slowly to avoid complications such as regurgitation and vomiting. It begins with spoonfuls of water and salted broth, advancing to increased amounts of clear liquids including broth, tea and fruit juices. This soon is advanced to full liquid diet such as milk (if no lactose intolerance is present) and cream-based soups. Once solid food is introduced, an emaciated individual is usually given up to eight small meals per day, at two-hour intervals. Meals may consist of a small milkshake to minor portions of meat with a starchy side item. For the purposes of weight gain and tissue rebuilding, the diet will be focused on proteins, fats and carbohydrates that are rich in vitamins and minerals, and relatively high in energy. Oily foods and high-fiber foods like grains and certain vegetables are discouraged because they are difficult to digest, and filling while lower in energy. Treatment of emaciation also includes much sleep, rest and relaxation, and counseling.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3275308", "title": "Dry enema", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 317, "text": "A dry enema is an alternative technique for cleansing the human rectum either for reasons of health, or for sexual hygiene. It is accomplished by squirting a small amount of sterile lubricant into the rectum, resulting in a bowel movement more quickly and with less violence than can be achieved by an oral laxative.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "57774597", "title": "Food powder", "section": "Section::::Formation.:Dehydration.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 535, "text": "Drying (dehydrating) is one of the oldest and easiest methods of food preservation. Dehydration is the process of removing water or moisture from a food product by heating at right temperature as well as containing air movement and dry air to absorb and carry the released moisture away. Reducing the moisture content of food prevents the growth of microorganisms such as bacteria, yeast and molds and slows down enzymatic reactions that take place within food. The combination of these events helps to prevent spoilage in dried food.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "28753330", "title": "Hemilepistus reaumuri", "section": "Section::::Ecology.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 373, "text": "The bulk of the water intake of \"Hemilepistus reaumuri\" is by taking up water vapour from saturated air and by eating damp sand. Water loss is minimised by the rectal epithelium, which absorbs water, ensuring that the faeces is drier than the food the animal consumed. 
Evaporation of water through the permeable exoskeleton may, however, provide a valuable cooling effect.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3275308", "title": "Dry enema", "section": "Section::::Techniques.:Suppositories.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 481, "text": "A rudimentary form of \"dry\" enema is the use of a non-medicated glycerin suppository. However, due to the relative hardiness of the suppository - necessary for its insertion into the human body - before the glycerin can act, it must be melted by the heat of the body, and hence it does not take effect for up to an hour. Often the hygroscopic glycerin irritates the sensitive membranes of the rectum resulting in forceful expulsion of the suppository without any laxative effects.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1058672", "title": "Ataxia–telangiectasia", "section": "Section::::Symptoms.:Feeding, swallowing, and nutrition.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 597, "text": "Involuntary movements may make feeding difficult or messy and may excessively prolong mealtimes. It may be easier to finger feed than use utensils (e.g., spoon or fork). For liquids, it is often easier to drink from a closed container with a straw than from an open cup. Caregivers may need to provide foods or liquids so that self-feeding is possible, or they may need to feed the person with A–T. In general, meals should be completed within approximately 30 minutes. Longer meals may be stressful, interfere with other daily activities, and limit the intake of necessary liquids and nutrients.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3233381", "title": "Phyllomedusa", "section": "Section::::Secretion.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 263, "text": "Some \"Phyllomedusa\" species produce a waxy secretion that reduces the evaporative water loss of their bodies. If they begin to dry out, they move their limbs over their backs, where the secretory glands are, and spread the lipid secretion over their entire skin.\n", "bleu_score": null, "meta": null } ] } ]
null
avvo8z
Why were there only two American Aces in the Vietnam War?
[ { "answer": "Remember that becoming an ace requires downing five enemy aircraft, and to down enemy aircraft the enemy needs to have some in the first place.\n\n & #x200B;\n\nThe VPAF had only started to receive jet fighters in 1964, and their aircraft were generally less numerous and less capable than what the Americans were fielding. Because the aims of the VPAF were exclusively defensive in nature and their supply of both manpower and materiel was very limited, they would deploy their aircraft very conservatively. They generally would only sortie their fighters when the situation was very much in their favor, and tactics were designed around ambushing strike formations, with forcing the attackers to drop their ordnance early and abort the mission being just as effective as downing an American aircraft. A more typical tactic involved having fighters appear away from a strike package to draw away escorts and then having another group pop in under the radar and make a pass on the strike aircraft themselves. While the typical strike aircraft (F-105) was nominally faster than the MiG-17, it could only outrun the MiG-17 once it had jettisoned its payload and thus ruined the mission. The MiGs, on the other hand, didn't exactly stay around to fight - all they had to do was ruin the mission, so a typical \"attack\" could consist of one pass on the strike aircraft followed by a rapid withdrawal by both the attacking and decoy aircraft.\n\nBecause of that, opportunities for American fighters to actually engage the VPAF were few and far between. Making this worse, particularly early on, was the tendency of American forces to rotate pilots through the theater to spread experience and the generally poor air-to-air combat training provided early in the war. So while encountering an enemy fighter was already a rare thing for American pilots in Vietnam, the rotation of aircrews meant that it would be rare to stay in the theater long enough to encounter enough VPAF fighters to even have the potential to make ace, and poor air-to-air combat training meant that you were less likely to be successful at downing an enemy fighter before it was able to disengage and escape.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "26832607", "title": "Post–World War II air-to-air combat losses", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 436, "text": "The Vietnam War saw a move away from cannon fire to air-to-air missiles. Although US forces maintained air supremacy throughout the war, there were still occasional dogfights and US and North Vietnamese aces. The North Vietnamese side claimed the Vietnam People's Air Force had 17 aces throughout the war, including Nguyen Van Coc, who is also the top ace of Vietnam War with 9 kills: seven acknowledged by the United States Air Force.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5686573", "title": "Charles Older", "section": "Section::::Military service.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 393, "text": "He became a pilot in the Marine Corps Reserve, but resigned to join the American Volunteer Group, better known as the Flying Tigers, to fight the Japanese prior to the United States entry into World War II. A member of the 3rd Pursuit Squadron (the \"Hell's Angels\"), he is credited with 10 victories, making him a double ace. 
By the end of the war, he had been promoted to lieutenant colonel.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21817943", "title": "17th Weapons Squadron", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 485, "text": "During World War II, the 17th Pursuit Squadron participated in the defense of the Philippines flying the Curtiss P-40 Warhawk and garnering the first American Ace of World War II. Wiped out during the Battle of the Philippines, some of its squadron members endured the Bataan Death March. Reactivated during the Vietnam War, the squadron went on to fly Republic F-105F Thunderchief Wild Weasel aircraft, and in Operation Desert Storm flying the General Dynamics F-16C Fighting Falcon.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "225878", "title": "Operation Rolling Thunder", "section": "Section::::Legacy.\n", "start_paragraph_id": 91, "start_character": 0, "end_paragraph_id": 91, "end_character": 275, "text": "From April 1965 to November 1968, in 268 air battles conducted over North Vietnam, VPAF claimed to have shot down 244 US or ARVN's aircraft, and they lost 85 MiGs. During the war, 13 VPAF's flying aces attained their status while flying the MiG-21 (cf. three in the MiG-17).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11759", "title": "McDonnell Douglas F-4 Phantom II", "section": "Section::::Operational history.:United States Navy.\n", "start_paragraph_id": 61, "start_character": 0, "end_paragraph_id": 61, "end_character": 667, "text": "On 10 May 1972, Lieutenant Randy \"Duke\" Cunningham and Lieutenant (junior grade) William P. Driscoll flying an F-4J, call sign \"Showtime 100\", shot down three MiG-17s to become the first American flying aces of the war. Their fifth victory was believed at the time to be over a mysterious North Vietnamese ace, Colonel Nguyen Toon, now considered mythical. On the return flight, the Phantom was damaged by an enemy surface-to-air missile. To avoid being captured, Cunningham and Driscoll flew their burning aircraft using only the rudder and afterburner (the damage to the aircraft rendered conventional control nearly impossible), until they could eject over water.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1699540", "title": "David McCampbell", "section": "Section::::United States Navy.:World War II.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 801, "text": "On October 24, 1944, in the initial phase of the Battle of Leyte Gulf, in the Philippines, he became the only American airman to achieve \"ace in a day\" status twice. McCampbell and his wingman attacked a Japanese force of 60 aircraft. McCampbell shot down nine, 7 Zeros and 2 Oscars, setting a U.S. single mission aerial combat record. During this same action, his wingman downed another six Japanese warplanes. When he landed his Grumman F6F Hellcat aboard USS \"Langley\" (the flight deck of \"Essex\" wasn't clear), his six machine guns had just two rounds remaining, and his airplane had to be manually released from the arrestor wire due to complete fuel exhaustion. 
Commander McCampbell received the Medal of Honor for both actions, becoming the only Fast Carrier Task Force pilot to be so honored.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7479805", "title": "List of Korean War flying aces", "section": "Section::::List of aces.:Soviet Union.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 835, "text": "Various sources claim between 43 and 60 pilots from the Soviet Union attained ace status in the war. Most sources claim around 50 pilots attained ace status during the Korean War, of whom many are very controversial. Research by the USAF named 52 pilots who may have had legitimate claim to the title. Little is known of some of the pilots and their combined tally is incompatible with the number of aircraft the USAF claims to have lost in the war. Subsequent independent sources generally agree the number of aces claimed was around 52, but 15 names differ among the lists, particularly lower-scoring pilots. The number of victories for virtually all of the ace pilots is subject to dispute. Listed are names of 67 Soviet pilots attributed as aces in various sources. Of these, the ace status of 30 are in question among historians.\n", "bleu_score": null, "meta": null } ] } ]
null
205yq4
Why did the brain of most animals evolve in the head and not in the torso?
[ { "answer": "It would be more protected, yes, but we keep our seeing, hearing, smelling, and tasting organs in our heads. If our brains were in our torsos, our senses of sight, smell, and hearing would be much less efficient (because the nerve signals would have to travel farther along pathways), and this would make us less able to survive. It's better to see a predator and run away before being attacked than it is to be attacked at all, even if your brain is better protected.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "46425008", "title": "Outline of the human brain", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 336, "text": "Human brain – central organ of the nervous system located in the head of a human being, protected by the skull. It has the same general structure as the brains of other mammals, but with a more developed cerebral cortex than any other, leading to the evolutionary success of widespread dominance of the human species across the planet.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19378", "title": "Mind", "section": "Section::::Relation to the brain.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 647, "text": "In animals, the brain, or \"encephalon\" (Greek for \"in the head\"), is the control center of the central nervous system, responsible for thought. In most animals, the brain is located in the head, protected by the skull and close to the primary sensory apparatus of vision, hearing, equilibrioception, taste and olfaction. While all vertebrates have a brain, most invertebrates have either a centralized brain or collections of individual ganglia. Primitive animals such as sponges do not have a brain at all. Brains can be extremely complex. For example, the human brain contains around 86 billion neurons, each linked to as many as 10,000 others.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8474825", "title": "Cranial vault", "section": "Section::::Development.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 357, "text": "The size and shape of the brain and the surrounding vault remain quite plastic as the brain grows in childhood. In several ancient societies, head shape was altered for aesthetic or religious reasons by binding cloth or boards tightly around the head during infancy. It is not known whether such artificial cranial deformation has an effect in brain power.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3717", "title": "Brain", "section": "Section::::Anatomy.:Evolution.:Vertebrates.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 678, "text": "All vertebrate brains share a common underlying form, which appears most clearly during early stages of embryonic development. In its earliest form, the brain appears as three swellings at the front end of the neural tube; these swellings eventually become the forebrain, midbrain, and hindbrain (the prosencephalon, mesencephalon, and rhombencephalon, respectively). At the earliest stages of brain development, the three areas are roughly equal in size. 
In many classes of vertebrates, such as fish and amphibians, the three parts remain similar in size in the adult, but in mammals the forebrain becomes much larger than the other parts, and the midbrain becomes very small.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2452832", "title": "Evolution of human intelligence", "section": "Section::::History.:\"Homo\".\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 386, "text": "The evolution of a larger brain created a problem for early humans, however. A larger brain requires a larger skull, and thus requires the female to have a wider birth canal for the newborn's larger skull to pass through. But if the female's birth canal grew too wide, her pelvis would be so wide that she would lose the ability to run, which was a necessary skill 2 million years ago.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10085369", "title": "Jebel Irhoud", "section": "Section::::Human remains.:Morphology.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 656, "text": "When comparing the fossils with those of modern humans, the main difference is the elongated shape of the fossil braincase. According to the researchers, this indicates that brain shape, and possibly brain functions, evolved within the \"Homo sapiens\" lineage and relatively recently. Evolutionary changes in brain shape are likely to be associated with genetic changes of the brain's organization, interconnection and development and may reflect adaptive changes in the way the brain functions. Such changes may have caused the human brain to become rounder and two regions in the back of the brain to become enlarged over thousands of years of evolution.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1209545", "title": "Head", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 364, "text": "Heads develop in animals by an evolutionary trend known as cephalization. In bilaterally symmetrical animals, nervous tissues concentrate at the anterior region, forming structures responsible for information processing. Through biological evolution, sense organs and feeding structures also concentrate into the anterior region; these collectively form the head.\n", "bleu_score": null, "meta": null } ] } ]
null
2c3m4g
why can some animals give birth without help when humans can't?
[ { "answer": "Humans can, it's just not ideal. The problem is we're bipedal animals evolved from quadrupedal animals. Plus, we've got active brains which require bigger skulls than other animals. Women's wombs just didn't keep up as well during our evolution to adjust for larger skulls and a bipedal lifestyle. But even with these problems, more than enough children were born and survived to adulthood without any problems that subpar care during birthing as seen in less advanced societies still isn't an evolutionary disadvantage.", "provenance": null }, { "answer": "The only true assets of humanity are its intelligence and its social/communication skills. Those two assets require one big ass brain, proportionally. That giant brain is what makes our birth so difficult and also why we have such high mortality rates surrounding it.\n\nThe other factor that makes birth so dangerous is that humans evolved to walk bipedally, for reasons of vision and locomotion. Bipedal stature makes the hips a lot more narrow, and in turn, gives the baby less room to escape the birth canal. Evolution decided that risk was worth it.\n\nThe things that make humans have difficult births were the exact same things that allow us to help each other with them. So we traded safety for knowledge, and we assist each other because we're smart enough to do so.\n\nTL;DR- Your cat can have 4 kittens on her own just fine because the cost of that is cat-intelligence. The cost of human intelligence is a difficult birth and a long childhood. Reproduction and evolution think that this is a reasonable price.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2368360", "title": "Obstetrical dilemma", "section": "Section::::Evolution of human birth.:Adaptations to ensure live birth.:Social assistance.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 893, "text": "Human infants are also almost always born with assistance from other humans because of the way that the pelvis is shaped. Since the pelvis and opening of birth canal face backwards, humans have difficulty giving birth themselves because they cannot guide the baby out of the canal. Non-human primates seek seclusion when giving birth because they do not need any help due to the pelvis and opening being more forward. Human infants depend on their parents much more and for much longer than other primates. Humans spend a lot of their time caring for their children as they develop whereas other species stand on their own from when they are born. The faster an infant develops, the higher the reproductive output of a female can be. So in humans, the cost of slow development of their infants is that humans reproduce relatively slowly. This phenomenon is also known as cooperative breeding.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "209203", "title": "List of domesticated animals", "section": "Section::::Semidomesticated, routinely captive-bred, or domestication status unclear.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 493, "text": "Due to the somewhat unclear outlines of what, precisely, constitutes domestication, there are some species that may or may not be fully domesticated. There are also species that are extensively used or kept as pets by humans, but are not significantly altered from wild-type animals. Most animals on this second table are at least somewhat altered from wild animals by their extensive interactions with humans. 
Many could not be released into the wild, or are in some way dependent on humans.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "36253964", "title": "Origin of speech", "section": "Section::::Speculative scenarios.:Twentieth century speculations.:Scenarios involving mother-infant interactions.:Co-operative breeding.\n", "start_paragraph_id": 131, "start_character": 0, "end_paragraph_id": 131, "end_character": 1262, "text": "Evolutionary anthropologist Sarah Hrdy observes that only human mothers among great apes are willing to let another individual take hold of their own babies; further, we are routinely willing to let others babysit. She identifies lack of trust as the major factor preventing chimp, bonobo or gorilla mothers from doing the same: \"If ape mothers insist on carrying their babies everywhere ... it is because the available alternatives are not safe enough.\" The fundamental problem is that ape mothers (unlike monkey mothers who may often babysit) do not have female relatives nearby. The strong implication is that, in the course of \"Homo\" evolution, allocare could develop because \"Homo\" mothers did have female kin close by — in the first place, most reliably, their own mothers. Extending the Grandmother hypothesis, Hrdy argues that evolving \"Homo erectus\" females necessarily relied on female kin initially; this novel situation in ape evolution of mother, infant and mother's mother as allocarer provided the evolutionary ground for the emergence of intersubjectivity. She relates this onset of \"cooperative breeding in an ape\" to shifts in life history and slower child development, linked to the change in brain and body size from the 2 million year mark.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "67992", "title": "Isabella Rossellini", "section": "Section::::Stage and live performance.\n", "start_paragraph_id": 31, "start_character": 0, "end_paragraph_id": 31, "end_character": 313, "text": "Some animals reproduce with male and female; some animals change sex – they start female and they end male or vice-versa. Some fish do that. Some animals are hermaphrodites – they don't need anybody, they have both vaginas and penises. Then we have animals that don't need sex at all, they just clone themselves.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2368360", "title": "Obstetrical dilemma", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 669, "text": "The obstetrical dilemma is a hypothesis to explain why humans often require assistance from other humans during childbirth to avoid complications, whereas most non-human primates give birth alone with relatively little difficulty. The obstetrical dilemma posits that this is due to the biological trade-off imposed by two opposing evolutionary pressures in the development of the human pelvis. As human ancestor species (hominids) developed bipedal locomotion (the ability to walk upright), decreasing the size of the bony birth canal, they also developed ever larger skulls, which required a wider obstetrical pelvic area to accommodate this trend in hominid infants.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "53543", "title": "Domestication of the horse", "section": "Section::::Methods of domestication.\n", "start_paragraph_id": 69, "start_character": 0, "end_paragraph_id": 69, "end_character": 505, "text": "However, there is disagreement over the definition of the term \"domestication\". 
One interpretation of \"domestication\" is that it must include physiological changes associated with being selectively bred in captivity, and not merely \"tamed.\" It has been noted that traditional peoples worldwide (both hunter-gatherers and horticulturists) routinely tame individuals from wild species, typically by hand-rearing infants whose parents have been killed, and these animals are not necessarily \"domesticated.\" \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4243426", "title": "Environmental issues in Thailand", "section": "Section::::Wildlife.\n", "start_paragraph_id": 124, "start_character": 0, "end_paragraph_id": 124, "end_character": 367, "text": "The practice of keeping wild animals as pets threatens several species. Baby animals are typically captured and sold, which often requires killing the mother. Once in captivity and out of their natural habitat, many pets die or fail to reproduce. Affected populations include the Asiatic black bear, Malayan sun bear, white-handed lar, pileated gibbon and binturong.\n", "bleu_score": null, "meta": null } ] } ]
null
33v3pw
The brain is remarkably energy efficient. Is there any limit to the efficiency with which information is computed?
[ { "answer": "Information is a very important concept in physics (particularly in thermodynamics). At the most basic level information in physics is quantified as [entropy](_URL_2_), and the relationship between energy, temperature, and entropy can be understood in part through viewing [Boltzmann's constant](_URL_3_) as a (temperature dependent) conversion factor between energy and information.\n\nOnce you understand that energy and information are closely related and that relationship depends on temperature, you can go a step further and look at [Landauer's principle](_URL_0_) which describes how to apply this conversion factor to finding a lower limit of the energy required to erase 1 bit of information - E=k\\*T ln 2 (where E is energy, k is boltzmann's constant and T is temperature). That means there is a minimum energy cost associated with every \"lossy\" logical operation (AND, OR, XOR, etc.- any operation where the number of output bits is fewer than the number of input bits) a computer performs, at a given temperature.\n\nThe universe as a whole is permeated by the [cosmic microwave background radiation](_URL_6_), which is low-frequency microwave radiation at around 2.7 K. That's probably the most energy-efficient temperature for your computer to be at, because you can cool down to that temperature for free (just have your computer sit in deep space), and going colder than that would require you to use active heat pumps (which would almost certainly take more energy than you'd save on the more efficient computations). \n\nSo, [plugging and chugging](_URL_1_), that comes out to about 2.58×10^-23 Joules per bit deletion. Just for brevity let's assume that each processor clock is equivalent to one bit deletion (not true but close enough for an order-of-magnitude estimate), and assume your processor was running at 4GHz (why bother running a super-efficient computer if it's not fast enough to run Crysis?) That implies a minimum required power of about [0.1 picoWatts](_URL_4_). By contrast, an i7-4790K consumes about [150 Watts](_URL_7_) - about 15 orders of magnitude more. So while there are indeed physical limits on the efficiency of processing information, current engineering is not even remotely close to those limits. There are much higher and more relevant limits that relate specifically to how information is processed with current technology (moving electrons around).\n\nIf you're interested in how physics and information relate, and specifically how those concepts impact computing, [this](_URL_5_) is a good entry point page for learning more. If you liked the topics I wrote about above (which are pretty much just the third 'physical limits' bullet spelled out a bit more verbosely), you'd probably also really enjoy the other links on that page. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "374298", "title": "G factor (psychometrics)", "section": "Section::::Neuroscientific findings.\n", "start_paragraph_id": 73, "start_character": 0, "end_paragraph_id": 73, "end_character": 407, "text": "Some research suggests that aside from the integrity of white matter, also its organizational efficiency is related to intelligence. 
The hypothesis that brain efficiency has a role in intelligence is supported by functional MRI research showing that more intelligent people generally process information more efficiently, i.e., they use fewer brain resources for the same task than less intelligent people.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1305044", "title": "Neuroscience and intelligence", "section": "Section::::Humans.:Neural efficiency.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 626, "text": "The neural efficiency hypothesis postulates that more intelligent individuals display less activation in the brain during cognitive tasks, as measured by Glucose metabolism. A small sample of participants (N=8) displayed negative correlations between intelligence and absolute regional metabolic rates ranging from -0.48 to -0.84, as measured by PET scans, indicating that brighter individuals were more effective processors of information, as they use less energy. According to an extensive review by Neubauer & Fink a large number of studies (N=27) have confirmed this finding using methods such as PET scans, EEG and fMRI.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "33115171", "title": "Koomey's law", "section": "Section::::Slowing and end of Koomey's law.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 404, "text": "By the second law of thermodynamics and Landauer's principle, irreversible computing cannot continue to be made more energy efficient forever. As of 2011, computers have a computing efficiency of about 0.00001%. Assuming that the energy efficiency of computing will continue to double every 1.57 years, the Landauer bound will be reached in 2048. Thus, after about 2048, Koomey's law can no longer hold.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15445", "title": "Entropy (information theory)", "section": "Section::::Efficiency.\n", "start_paragraph_id": 90, "start_character": 0, "end_paragraph_id": 90, "end_character": 359, "text": "Efficiency has utility in quantifying the effective use of a communication channel. This formulation is also referred to as the normalized entropy, as the entropy is divided by the maximum entropy formula_53. Furthermore, the efficiency is indifferent to choice of (positive) base , as indicated by the insensitivity within the final logarithm above thereto.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "767123", "title": "The Singularity Is Near", "section": "Section::::Content.:Computational capacity.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 779, "text": "Since Kurzweil believes computational capacity will continue to grow exponentially long after Moore's Law ends it will eventually rival the raw computing power of the human brain. Kurzweil looks at several different estimates of how much computational capacity is in the brain and settles on 10 calculations per second and 10 bits of memory. He writes that $1,000 will buy computer power equal to a single brain \"by around 2020\" while by 2045, the onset of the Singularity, he says the same amount of money will buy one billion times more power than all human brains combined today. 
Kurzweil admits the exponential trend in increased computing power will hit a limit eventually, but he calculates that limit to be trillions of times beyond what is necessary for the Singularity.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17454631", "title": "Selfish brain theory", "section": "Section::::The explanatory power of the Selfish Brain theory.:Energy procurement by the brain.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 570, "text": "The brain can cover its energy needs (particularly those of the cerebral hemispheres) either by allocation or nutrient intake. The corresponding signal to the subordinate regulatory system originates in the cerebral hemispheres. The most phylogenetically recent part of the brain is characterized by a high plasticity and a high capacity to learn with this process. It is always able to adapt its regulatory processes by processing responses from the periphery, memorizing the results of individual feedback loops and behaviors, and anticipating any possible build-ups.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "39177819", "title": "Cognitive computer", "section": "Section::::Intel Loihi chip.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 342, "text": "Intel's self-learning neuromorphic chip, named Loihi, perhaps named after the Hawaiian seamount Loihi, offers substantial power efficiency designed after the human brain. Intel claims Loihi is about 1000 times more energy efficient than the general-purpose computing power needed to train the neural networks that rival Loihi's performance. \n", "bleu_score": null, "meta": null } ] } ]
null
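A minimal Python sketch of the back-of-the-envelope Landauer estimate in the answer above (record 33v3pw). It assumes, as that answer does, one bit erasure per clock cycle at 4 GHz and the 2.7 K cosmic microwave background as the operating temperature; the 150 W comparison figure quoted there for an i7-4790K is reused here, so the numbers are illustrative rather than authoritative.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T_CMB = 2.7          # cosmic microwave background temperature, K

# Landauer's principle: minimum energy to erase one bit at temperature T
energy_per_bit = K_B * T_CMB * math.log(2)      # ~2.6e-23 J

# Assume one bit erasure per clock cycle at 4 GHz (the answer's simplification)
clock_hz = 4e9
landauer_power = energy_per_bit * clock_hz      # ~1e-13 W, i.e. roughly 0.1 pW

cpu_power_w = 150.0                             # quoted power draw of an i7-4790K
print(f"Landauer energy per bit at {T_CMB} K: {energy_per_bit:.2e} J")
print(f"Thermodynamic floor at {clock_hz/1e9:.0f} GHz: {landauer_power:.2e} W")
print(f"Headroom vs. a {cpu_power_w:.0f} W CPU: {cpu_power_w / landauer_power:.1e}x")
```

Running it reproduces the answer's figures: about 2.6e-23 J per erased bit, a floor near 0.1 pW, and roughly fifteen orders of magnitude between that floor and a current desktop processor.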
8yxc0h
why do some acne medications cause an "initial breakout," making your skin worse, before making it better?
[ { "answer": "Hi, it’s because your cells are turning over rapidly and pushing acne that is below the surafce to the top. This is quite normal. Now if at any point your acne become severe and it seems that the breakout is not normal for YOU, go to a derm just to make sure you are not allergic to the product. It is a good idea to start slow and if the side effects are too much, cut back", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "229985", "title": "Isotretinoin", "section": "Section::::Adverse effects.:Skin.\n", "start_paragraph_id": 25, "start_character": 0, "end_paragraph_id": 25, "end_character": 561, "text": "Acne usually flares up 2–3 weeks into the treatment and is usually mild and tolerable. Occasionally this flare-up is severe, necessitating oral antiobiotics such as erythromycin. A short course of oral prednisolone may be required. Some dermatologists favour a few weeks of pre-treatment with oral antibiotics before commencing isotretinoin to reduce the chance of a severe flare. A \"stepped\" course may also be used to reduce the chance of this initial flare, by which the initial dose is low (e.g. 0.5 mg/kg) and subsequently increased throughout the course.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2404514", "title": "Proactiv", "section": "Section::::Safety and efficacy.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 633, "text": "The US Food and Drug Administration (FDA) warned in 2014 that over-the-counter acne products containing benzoyl peroxide and/or salicylic acid, including Proactiv, can cause severe irritation, as well as rare but life-threatening allergic reactions. Consumers were advised to stop using the products if they experience hives or itching, and to seek emergency medical attention if they feel faint, or experience throat tightness, breathing problems, or swelling of the eyes, face, lips or tongue. The FDA noted that it remains unclear whether the reactions are caused by the active ingredients, inactive ingredients or a combination.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "54114", "title": "Vitamin A", "section": "Section::::Metabolic functions.:Dermatology.\n", "start_paragraph_id": 60, "start_character": 0, "end_paragraph_id": 60, "end_character": 928, "text": "For the treatment of acne, the most prescribed retinoid drug is 13-cis retinoic acid (isotretinoin). It reduces the size and secretion of the sebaceous glands. Although it is known that 40 mg of isotretinoin will break down to an equivalent of 10 mg of ATRA — the mechanism of action of the drug (original brand name Accutane) remains unknown and is a matter of some controversy. Isotretinoin reduces bacterial numbers in both the ducts and skin surface. This is thought to be a result of the reduction in sebum, a nutrient source for the bacteria. Isotretinoin reduces inflammation via inhibition of chemotactic responses of monocytes and neutrophils. Isotretinoin also has been shown to initiate remodeling of the sebaceous glands; triggering changes in gene expression that selectively induce apoptosis. Isotretinoin is a teratogen with a number of potential side-effects. 
Consequently, its use requires medical supervision.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3392594", "title": "Angular cheilitis", "section": "Section::::Causes.:Drugs.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 652, "text": "Several drugs may cause AC as a side effect, by various mechanisms, such as creating drug-induced xerostomia. Various examples include isotretinoin, indinavir, and sorafenib. Isotretinoin (Accutane), an analog of vitamin A, is a medication which dries the skin. Less commonly, angular cheilitis is associated with primary hypervitaminosis A, which can occur when large amounts of liver (including cod liver oil and other fish oils) are regularly consumed or as a result from an excess intake of vitamin A in the form of vitamin supplements. Recreational drug users may develop AC. Examples include cocaine, methamphetamines, heroin, and hallucinogens.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18832275", "title": "Drug eruption", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 469, "text": "In medicine, a drug eruption is an adverse drug reaction of the skin. Most drug-induced cutaneous reactions are mild and disappear when the offending drug is withdrawn. These are called \"simple\" drug eruptions. However, more serious drug eruptions may be associated with organ injury such as liver or kidney damage and are categorized as \"complex\". Drugs can also cause hair and nail changes, affect the mucous membranes, or cause itching without outward skin changes.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7702007", "title": "Drug-induced lupus erythematosus", "section": "Section::::Treatment.\n", "start_paragraph_id": 35, "start_character": 0, "end_paragraph_id": 35, "end_character": 385, "text": "It is important to recognize early that these drugs are causing DIL like symptoms and discontinue use of the drug. Symptoms of drug-induced lupus erythematosus generally disappear days to weeks after medication use is discontinued. Non-steroidal anti-inflammatory drugs (NSAIDs) will quicken the healing process. Corticosteroids may be used if more severe symptoms of DIL are present.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "58190167", "title": "Pharmacokinetics of progesterone", "section": "Section::::Routes of administration.:Intramuscular injection.:Oil solution.\n", "start_paragraph_id": 66, "start_character": 0, "end_paragraph_id": 66, "end_character": 306, "text": "Intramuscular progesterone often causes pain when injected. It irritates tissues and is associated with injection site reactions such as changes in skin color, pain, redness, transient indurations (due to inflammation), ecchymosis (bruising/discoloration), and others. Rarely, sterile abscesses can occur.\n", "bleu_score": null, "meta": null } ] } ]
null
4xnima
why do professional swimmers wear 2 caps when competing?
[ { "answer": "I'm guessing, but most likely, the increased weight doesn't matter compared to the reduced drag from smooth-ass head.\n\nEdit: stack exchange is 100% better than reddit for these things.\n_URL_0_", "provenance": null }, { "answer": "So I was actually wondering this out loud the other day while I was watching the Olympics with my wife, and not 10 seconds later the commentator on the TV actually explained it. He said occasionally you will see some swimmers wear their goggles with the strap on the outside of their swimming cap, but most wear them with the strap on the inside. Some swimmers wear their goggles with the strap in contact with their hair, and then a single swim cap over top, but other swimmers may feel that their hair is too slippery for the goggles' strap to stay in place, so they wear a swim cap, then the goggles with the strap over top of the first swim cap, and then a second swim cap to make sure their goggles can't slip off. He seemed to know what he was talking about, so I trust his explanation.", "provenance": null }, { "answer": "While diving in goggles and caps tend to fall off also while swimming caps tend to fall off bringing the goggles down with them. Wearing two caps helps to keep everything stay put and avoid the drag caused by your hair. This is also why goggle straps are typically placed under the cap. \n\nSource: swimmer. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "177213", "title": "Wetsuit", "section": "Section::::Uses.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 526, "text": "Unlike triathlons, which allow swimmers to wear wetsuits when the water is below a certain temperature (the standard is at the surface or up to for unofficial events.), most open water swim races either do not permit the use of wetsuits (usually defined as anything covering the body above the waist or below the knees), or put wetsuit-clad swimmers in a separate category and/or make them ineligible for race awards. This varies by locales and times of the year, where water temperatures are substantially below comfortable.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31524", "title": "Triathlon", "section": "Section::::Triathlon and fitness.:Swimming.\n", "start_paragraph_id": 64, "start_character": 0, "end_paragraph_id": 64, "end_character": 1204, "text": "Because open water swim areas are often cold and because wearing a wetsuit provides a competitive advantage, specialized triathlon wetsuits have been developed in a variety of styles to match the conditions of the water. For example, wetsuits that are sleeveless and cut above the knee are designed for warmer waters, while still providing buoyancy. Wetsuits are legal in sanctioned events at which the surface water temperature is or less. In non-sanctioned events or in \"age group\" classes where most racers are simply participating for the enjoyment of the sport instead of vying for official triathlon placing, wetsuits can often be used at other temperatures. Race directors will sometimes discourage or ban wetsuits if the water temperature is above due to overheating that can occur while wearing a wetsuit. Other rules have been implemented by race organizers regarding both wetsuit thickness as well as the use of \"swim skins;\" which need to be considered by those participating in future triathlons. Some triathlon sanctioning bodies have placed limits on the thickness of the wetsuit material. 
Under ITU and some national governing bodies' rules no wetsuit may have a thickness of more than .\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19339249", "title": "Swim briefs", "section": "Section::::Use and popularity.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 485, "text": "Swim briefs are worn by professional and recreational athletes in many water sports. They are the standard for competitive diving and water polo. They are preferred in competitive swimming for the reduction of the water's drag on the swimmer, although jammers and bodyskins are sometimes worn instead of the swim brief. Participants in sports that require a wetsuit such as waterskiing, scuba diving, surfing, and wakeboarding often wear swim briefs as an undergarment to the wetsuit.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1335181", "title": "Competitive swimwear", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 569, "text": "Some swimsuits are designed specifically for swimming competitions where they may be constructed of a special low resistance fabric that reduces skin drag. For some kinds of swimming and diving, special bodysuits called \"diveskins\" are worn. These suits are made from spandex and provide little thermal protection, but they do protect the skin from stings and abrasion. Most competitive swimmers also wear special swimsuits including partial bodysuits, racerback styles, jammers and racing briefs to assist their glide through the water thus gaining a speed advantage.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "27974815", "title": "Victoria Police Search and Rescue Squad", "section": "Section::::Equipment.:Diving equipment.\n", "start_paragraph_id": 38, "start_character": 0, "end_paragraph_id": 38, "end_character": 392, "text": "A 'personal issue' wetsuit protects each diver from injury and retains enough warmth to withstand even the coldest water. An inflatable 'Fenzy' assists ascent to the surface through a compressed air cylinder, which automatically inflates the vest. Standard diving accessories include swim fins, facemask, diving knives, weight belt and a lifeline between the diver and the on-land attendant.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "177213", "title": "Wetsuit", "section": "Section::::Development of suit design.:Return of single-backed neoprene.\n", "start_paragraph_id": 54, "start_character": 0, "end_paragraph_id": 54, "end_character": 399, "text": "Some triathlon wetsuits go further, and use rubber-molding and texturing methods to roughen up the surface of the suit on the forearms, to increase forward drag and help pull the swimmer forwards through the water. Extremely thin 1 mm neoprene is also often used in the under-arm area, to decrease stretch resistance and reduce strain on the swimmer when they extend their arms out over their head.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1335181", "title": "Competitive swimwear", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 675, "text": "Unlike regular swimsuits, which are designed mainly for the aesthetic appearances, swimsuits designed to be worn during competitions are manufactured to assist the athlete in swim competitions. They reduce friction and drag in the water, increasing the efficiency of the swimmer's forward motion. 
The tight fits allow for easy movement and are said to reduce muscle vibration, thus reducing drag. This also reduces the possibility that a high forwards dive will remove a divers swimwear. Starting around 2000, in an effort to improve the effectiveness of the swimsuits, engineers have taken to designing them to replicate the skin of sea-based animals, sharks in particular.\n", "bleu_score": null, "meta": null } ] } ]
null
bftfgp
What should one look out for when selecting historical works to read, especially ones on controversial matters?
[ { "answer": "Brilliant question. \n\nSo it can often be very hard to know quite what to look out for with works of history, particularly when you are new to the field and can’t spot the problems. \n\nAcademic qualification is perhaps a good starting place, yet, as you note, many books are written by journalists without formal training in academic history. This is not in itself an indicator of bad history as many such writers do produce very high quality work (Max Hastings for example). Instead I would look more broadly for what I suppose you could call the competency and dependability of the writer. \n\nThere are a number of ways you can check this. Firstly, reviews are brilliant. I would recommend checking feedback a particular book has received since publication, academic journals, and some websites newspapers will run review sections which can often prove helpful. Often books will also feature appraisals on the front/rear cover of the text itself, however I would approach these with caution - remember this is advertising, and I would be sceptical of such appraisals unless you are familiar with the quality of work of the individual writing it. \n\nNext, it is worth considering various matters, publisher, format, style, quality of print, etc. These are all often overlooked but can indicate the quality of a work of history. It has become increasingly easy to self publish bad history in recent years, either online or in print, so it helps to be wary. If something looks off, be wary. \n\nIn a similar vein, read the synopsis of a book, this will usually give you a condensed view into the book, its themes and central arguments. Do these seem plausible? Well presented? \n\nTry reading the introduction. This is of course not always possible, especially if not buying a book within a physical bookshop, but is very handy. The introduction, like the synopsis, will offer a glimpse into the soul of the book. Here you should be able to see the quality of the writing which will be displayed throughout, and this can be a brilliant test of character. \n\nAt the end of the day, these tips can help, but are not full proof. There is always going to be bad history out there which is not that credible. Often it can sneak through undetected, and can appear entirely legitimate to the untrained eye. In fact, even ‘good history’ is not without its flaws sometimes, and often a brilliant work may be betrayed by its style. It is often hard to tell, and if you are to spend much time reading history, you will certainly come across some questionable, however, there are some general rules which you should consider when reading which may help when you do come across such things. \n\nAlways engage critically when you read history. Remember that the author is constructing an argument when they write, and has a point of view they want to convince you of. When writing history, the historians can choose which facts to select, and how to present them, all history is biased and subjective in some way. However, this does not mean all history is created equal, some is better than others. Some writers will make their subjectivity clear, and acknowledge their role in constructing the narrative you will be reading, while others will not, they may present as objective undeniable fact that which they have subjectively interpreted. Keep this in mind while reading. Ask questions of the author, does a claim make sense? Is it backed up convincingly? Does the evidence they cite actually support their claims? 
Why have they portrayed something in a certain light? Personally I make notes in the margins of history books I am reading through to aid in this process. These issues can at first be tough, but as your wealth of knowledge grows, it does become easier. If something sticks out, pick at it much like a loose thread, and see where it leads. \n\nWith this in mind I would like to note that having a particular stance when history is not necessarily a bad thing, it depends upon how this manifests itself. Objectivity relates more to being “fair to the evidence” than being neutral to the topic, if you catch my meaning. To be fair to a figure like Lenin does not mean to be without judgement, but rather to have an opinion predicated upon the evidence. However, evidence is often complicated and vast, and can support many conclusions. One could be positive, or negative, about Lenin, and while neither can be considered undeniably true, both can ‘objective’. \n\nAs I hope to have pointed out, it can all be rather complicated, and it’s something we all struggle with. Good luck!", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "25265597", "title": "Philippe Nys", "section": "Section::::Works.:Editorial Activities.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 474, "text": "This series provides vital references for students, architects, town planners, historians and philosophers as well as for readers interested in gardens, garden-makers and collectors. By publishing fundamental works of the past – forgotten, unknown or un-translated – along with French and foreign, historic and contemporary, aesthetic and theoretical works, we hope to reveal the complexity and hidden wealth of a prime source of poetic imagination. Eight titles published.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20255180", "title": "Scholars' Facsimiles & Reprints", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 321, "text": "Works are selected for publication because of their bibliographical rarity and scholarly importance as primary sources. The publications list is focused on English and American literature and history, philosophy, psychology, religion, maritime history, and women's studies, from the Renaissance through the 19th century.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "35603457", "title": "Heart Flesh Degeneration", "section": "Section::::Review.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 840, "text": "\"In addition to the well-founded historical knowledge of the author of this book developed through meticulous work, it is the language that is especially impressive. Dynamic and provided with numerous openings, the reader not only gets insights into historical processes but also in a possible world of thought of the perpetrators. And that is exactly what has impressed me about this book so much. As a reader you have to constantly look out for the author. He takes one by the hand and leads you to places and events, which one would never have exposed himself/herself to. With a very slightly translucent, sage and sometimes even witty language, he seduces one to lay down a little bit the emotional defenses that one has rightly placed for such literature and achieves even in causing feelings of disbelief and dismay in jaded people. 
\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7851087", "title": "Getty Research Institute", "section": "Section::::Programs.:Publications.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 259, "text": "Here are selected books published by GRI, by the Getty Research Institute for the History of Art and the Humanities, by the Getty Center for the History of Art and the Humanities, by the Getty Information Institute, or by the Art History Information Program.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "52197693", "title": "Time Bites: Views and Reviews", "section": "Section::::Reception.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 769, "text": "Booklist wrote \"Most of her conversational, fast-moving, often wry inquiries into literature, politics, and ethics were originally published in England, hence little known in America, a lack redressed in this generous and pleasurable collection. Knowing books as intimately as she does, and caring deeply about reading and writing, Lessing pens critical essays that are vibrant and illuminating, with quotable lines on every page.\" and the Library Journal stated \"Lessing comments cleverly on the classic novelists (e.g., Leo Tolstoy), but some of the most interesting pieces are centered on less well known or virtually forgotten writers. There are quite a few essays on the Sufi author Idries Shah (1924-96); other topics Lessing covers range from politics to cats.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8042940", "title": "List of fallacies", "section": "Section::::Further reading.\n", "start_paragraph_id": 179, "start_character": 0, "end_paragraph_id": 179, "end_character": 363, "text": "The following is a sample of books for further reading, selected for a combination of content, ease of access via the internet, and to provide an indication of published sources that interested readers may review. The titles of some books are self-explanatory. Good books on critical thinking commonly contain sections on fallacies, and some may be listed below.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "166565", "title": "Gabriel Naudé", "section": "Section::::\"Advice on establishing a library\".\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 1011, "text": "Naudé devotes an entire chapter to book selection, remarked upon throughout. The first authors who need to be purchased are those considered experts in their respective fields. No matter whether they are ancient or modern works, if a book is held in high regard by practitioners of a particular field then it should be present in any collection. In addition, any well known interpretations or commentaries that exist are a necessity. Naudé suggested purchasing books in the original languages because meaning can often be lost in translation. He is strongly against censorship of any kind. Naudé believes that every book has a reader regardless of the subject; and that information should be free and available. Readers could always find use of a book, even if it is to refute the ideas presented on its pages. Certain books are popular at times but later forgotten; he argued that it would be beneficial to a library if there were multiple copies of these books to accommodate the popular tastes of the times.\n", "bleu_score": null, "meta": null } ] } ]
null
1ib0ed
Were number systems written down before words and language?
[ { "answer": "OP, if you don't get a satisfactory answer here, try cross-posting to /r/AskAnthropology", "provenance": null }, { "answer": "Sumerian cuneiform has its origin in accounting systems, particularly those used by temples to keep track of goods like cattle and grain. A sign could represent a type of good, and the number with a system of tick marks next to it. Cuneiform symbols originated as mnemonics/pictograms, which were subsequently abstracted, and the representational/mnemonic function of the symbols became more sophisticated over time, developing into a more complete system. [This website](_URL_0_) has a more detailed explanation of how the system developed.\n\nSome kind of abstract numerical representation system does seem to predate what we think of as written language, but the development of written language was very gradual--it wouldn't be accurate to mark a strict cutoff before which proto-writing could be considered merely a mnemonic system and after which it could be considered \"fully-fledged writing\". John Hayes, in the *Manual of Sumerian Grammar and Texts*, points out that written Sumerian in all likelihood is an incomplete representation of the language as it was spoken, especially phonologically.\n\nWriting may have developed quite differently when it arose in other places--the earliest Chinese writing is from inscriptions on bones used in pyromancy, not accounting. The exact relationship of Egyptian hieroglyphics to cuneiform is still debated, and I don't know if anything is known about the development of Mayan writing, so it's hard to say if the method of development of writing in Mesopotamia out of an accounting system represents a general tendency or a peculiarity.\n\nAnd, of course, the written accounting tools used by the ancient Mesopotamians were still pretty basic--mathematical notation and double-entry bookkeeping are both much more recent inventions. What they had were pretty much just counting systems.\n\nHope some of that helps.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "625125", "title": "Letter case", "section": "Section::::History.\n", "start_paragraph_id": 100, "start_character": 0, "end_paragraph_id": 100, "end_character": 369, "text": "Traditionally, certain letters were rendered differently according to a set of rules. In particular, those letters that began sentences or nouns were made larger and often written in a distinct script. There was no fixed capitalisation system until the early 18th century. The English language eventually dropped the rule for nouns, while the German language keeps it.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "27001600", "title": "Śaṅkaranārāyaṇa", "section": "Section::::Mathematical achievements.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 281, "text": "The system was a spoken one in the sense that consonants and vowels which are not vocalised have no numerical value. The system is a place-value system with zero. In fact many different \"words\" could represent the same number and this was highly useful for works written in verse.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22980", "title": "Phoneme", "section": "Section::::Correspondence between letters and phonemes.\n", "start_paragraph_id": 58, "start_character": 0, "end_paragraph_id": 58, "end_character": 993, "text": "Phonemes are considered to be the basis for alphabetic writing systems. 
In such systems the written symbols (graphemes) represent, in principle, the phonemes of the language being written. This is most obviously the case when the alphabet was invented with a particular language in mind; for example, the Latin alphabet was devised for Classical Latin, and therefore the Latin of that period enjoyed a near one-to-one correspondence between phonemes and graphemes in most cases, though the devisers of the alphabet chose not to represent the phonemic effect of vowel length. However, because changes in the spoken language are often not accompanied by changes in the established orthography (as well as other reasons, including dialect differences, the effects of morphophonology on orthography, and the use of foreign spellings for some loanwords), the correspondence between spelling and pronunciation in a given language may be highly distorted; this is the case with English, for example.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17730", "title": "Latin", "section": "Section::::Numbers.\n", "start_paragraph_id": 182, "start_character": 0, "end_paragraph_id": 182, "end_character": 281, "text": "In ancient times, numbers in Latin were written only with letters. Today, the numbers can be written with the Arabic numbers as well as with Roman numerals. The numbers 1, 2 and 3 and every whole hundred from 200 to 900 are declined as nouns and adjectives, with some differences.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20544", "title": "Morphophonology", "section": "Section::::Orthography.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 519, "text": "The principle behind alphabetic writing systems is that the letters (graphemes) represent phonemes. However, in many orthographies based on such systems the correspondences between graphemes and phonemes are not exact, and it is sometimes the case that certain spellings better represent a word's morphophonological structure rather than the purely phonological. An example of this is that the English plural morpheme is written \"-s\" regardless of whether it is pronounced as or ; we write \"cats and \"dogs, not \"dogz\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "166697", "title": "Greek numerals", "section": "Section::::Description.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 1012, "text": "This alphabetic system operates on the additive principle in which the numeric values of the letters are added together to obtain the total. For example, 241 was represented as  (200 + 40 + 1). (It was not always the case that the numbers ran from highest to lowest: a 4th-century BC inscription at Athens placed the units to the left of the tens. This practice continued in Asia Minor well into the Roman period.) In ancient and medieval manuscripts, these numerals were eventually distinguished from letters using overbars: , , , etc. In medieval manuscripts of the Book of Revelation, the number of the Beast 666 is written as  (600 + 60 + 6). (Numbers larger than 1,000 reused the same letters but included various marks to note the change.) Fractions were indicated as the denominator followed by a \"keraia\" (ʹ); γʹ indicated one third, δʹ one fourth and so on. As an exception, special symbol ∠ʹ indicated one half. 
These fractions were additive (also known as Egyptian fractions); for example indicated .\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7489", "title": "Collation", "section": "Section::::Labeling of ordered items.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 563, "text": "In some contexts, numbers and letters are used not so much as a basis for establishing an ordering, but as a means of labeling items that are already ordered. For example, pages, sections, chapters, and the like, as well as the items of lists, are frequently \"numbered\" in this way. Labeling series that may be used include ordinary Arabic numerals (1, 2, 3, ...), Roman numerals (I, II, III, ... or i, ii, iii, ...), or letters (A, B, C, ... or a, b, c, ...). (An alternative method for indicating list items, without numbering them, is to use a bulleted list.)\n", "bleu_score": null, "meta": null } ] } ]
null
24mn2e
what distinguishes a religion from mythology?
[ { "answer": "A mythology is a set of traditional stories. The Jewish mythology is the Old Testament, the Christian mythology is the Old and New Testaments, the Greek mythology is... well, Greek mythology. Mythology doesn't have anything to do with whether the stories are true or not, it's an agnostic term. It simply says \"these are stories that people tell each other.\"\n\nA religion is a set of beliefs that is accepted on faith by its followers. Religions often incorporate mythologies as part of their belief set, though it's not totally necessary. The point here is that \"religion\" and \"mythology\" are not interchangeable terms. One is a set of stories, one is a set of beliefs.\n\nIt's confusing because the colloquial usage for \"mythology\" (and \"myth\" especially) often *implies* that it's something that's untrue, so many religious people avoid calling the stories from their faith \"myths.\" They are, though, by the strictest technical definition.", "provenance": null }, { "answer": "I know it's marked as explained, since [u/corpuscle634](_URL_0_) did a good job of it, but I want to belabor a few points:\n\nMythological stories are often more or less cultural--that is, they are particular to a certain culture (for earlier cultures this often coincided with the religious sphere, but does not have to). Myths are considered to be emblematic of cultural values, even if they didn't always have a clear message. So more than simply \"stories that people tell each other,\" they are stories that *peoples* tell each other. A nuanced difference, but one I find important.\n\nReligions are also wayy more than a set of beliefs. If you want to be a bit reductive about it, which I will be for the sake of clarity, religions are a combination of beliefs, moral systems, practices/rituals, and narratives (like mythologies). Of course each of these particular things could exist on their own--especially mythologies--but when they come together as a whole we generally define them as a 'religion.'\n\nSorry about being a bit long-winded, let me know if you have more questions about it. I spend quite a lot of time studying them so I guess I'm a bit passionate about the details ;)", "provenance": null }, { "answer": "Mythology: other people's religions.", "provenance": null }, { "answer": "If a you call a goats tail a leg, how many legs does it have? If you answer that wrong you will answer the religion question wrong.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "25824", "title": "Religion and mythology", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 553, "text": "Mythology is the main component of Religion. It refers to systems of concepts that are of high importance to a certain community, making statements concerning the supernatural or sacred. Religion is the broader term, besides mythological system, it includes ritual. A given mythology is almost always associated with a certain religion such as Greek mythology with Ancient Greek religion. Disconnected from its religious system, a myth may lose its immediate relevance to the community and evolve—away from sacred importance—into a legend or folktale. 
\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5246067", "title": "Gender history", "section": "Section::::Gender in religion.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 575, "text": "All over the world, religion is formed around a divine, supernatural figure. While the idea of the divine, supernatural figure varies from religion to religion, each one is framed around different concepts of what it means to be male and female. Furthermore, the religion of a culture usually directly corresponds or is influenced by the culture's gender structure, like the family structures and/or the state. Therefore the religious structure and the gender structure work together to form and define a culture, creating the defining structures of equality and uniformity.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24400334", "title": "Credulity", "section": "Section::::Examples.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 244, "text": "A religion is a system of human thought which usually includes a set of narratives, symbols, beliefs and practices that give meaning to the practitioner's experiences of life through reference to a higher power, God or gods, or ultimate truth.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7482", "title": "Christian mythology", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 557, "text": "Christian mythology is the body of myths associated with Christianity. The term encompasses a broad variety of legends and stories, especially those considered sacred narratives. Mythological themes and elements occur throughout Christian literature, including recurring myths such as ascending to a mountain, the \"axis mundi\", myths of combat, descent into the Underworld, accounts of a dying-and-rising god, flood stories, stories about the founding of a tribe or city, and myths about great heroes (or saints) of the past, paradises, and self-sacrifice.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25824", "title": "Religion and mythology", "section": "Section::::Introduction.:Religion.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 547, "text": "\"Religion\" is a belief concerning the supernatural, sacred, or divine, and the moral codes, practices, values, and institutions associated with such belief, although some scholars, such as Durkheim, would argue that the supernatural and the divine are not aspects of all religions. Religious beliefs and practices may include the following: a deity or higher being, eschatology, practices of worship, practices of ethics and politics. Some religions do not include all these features. For instance, belief in a deity is not essential to Buddhism.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4392610", "title": "Life stance", "section": "Section::::Spectrum.:Religious life stances.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 501, "text": "A \"religion\" is a set of beliefs and practices, often centered upon specific supernatural and/or moral claims about reality, the cosmos, and human nature, and often codified as prayer, ritual, and law. Religion also encompasses ancestral or cultural traditions, writings, history, and mythology, as well as personal faith and mystic experience. 
The term \"religion\" refers to both the personal practices related to communal faith and to group rituals and communication stemming from shared conviction.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8999824", "title": "Hierophany", "section": "Section::::In Mircea Eliade's writings.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 284, "text": "Eliade argues that religion is based on a sharp distinction between the sacred (God, gods, mythical ancestors, etc.) and the profane. According to Eliade, for traditional man, myths describe \"breakthroughs of the sacred (or the 'supernatural') into the World\"—that is, hierophanies. \n", "bleu_score": null, "meta": null } ] } ]
null
3tubs2
how did they predict the existence of subatomic particles, black holes, and multiple universes using maths and equations?
[ { "answer": " > subatomic particles\n\nThey didn't predict those with math. They were largely experimentally discovered. Thomson's cathode ray experiment found that there were negatively charged subatomic particles we now call electrons, for example. \n\n > black holes \n\nThese were first thought of from Newtonian mechanics by Laplace (and others?), but it was really just an idea. After Einstein produced the general relativity theory, Schwarzschild predicted the limit at which a black hole could form. \n\n > multiple universes\n\nThose aren't mathematically predicted either, they're really outside the realm of science. ", "provenance": null }, { "answer": "You find an equation that explains what you can see, then you look at what that equation predicts about what you can't see. After that you make experiments to do tests, and when you find results you can't explain you repeat the process over again.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "41555255", "title": "History of subatomic physics", "section": "Section::::Revelations of quantum mechanics.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 289, "text": "Improved understanding of the world of particles prompted physicists to make bold predictions, such as Dirac's positron in 1928 (founded on the Dirac Sea model) and Pauli's neutrino in 1930 (founded on conservation of energy and angular momentum in beta decay). Both were later confirmed.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11009033", "title": "Type II supernova", "section": "Section::::Theoretical models.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 1222, "text": "The Standard Model of particle physics is a theory which describes three of the four known fundamental interactions between the elementary particles that make up all matter. This theory allows predictions to be made about how particles will interact under many conditions. The energy per particle in a supernova is typically 1–150 picojoules (tens to hundreds of MeV). The per-particle energy involved in a supernova is small enough that the predictions gained from the Standard Model of particle physics are likely to be basically correct. But the high densities may require corrections to the Standard Model. In particular, Earth-based particle accelerators can produce particle interactions which are of much higher energy than are found in supernovae, but these experiments involve individual particles interacting with individual particles, and it is likely that the high densities within the supernova will produce novel effects. The interactions between neutrinos and the other particles in the supernova take place with the weak nuclear force, which is believed to be well understood. However, the interactions between the protons and neutrons involve the strong nuclear force, which is much less well understood.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5021087", "title": "Avi Loeb", "section": "Section::::Career.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 1230, "text": "Several of Loeb's early predictions were confirmed in recent years. In 1992, he suggested with Andy Gould that exoplanets could be detected through gravitational microlensing, a technique that is routinely used these days. In 1993, he proposed the use of the C+ fine-structure line to discover galaxies at high redshifts, as done routinely now. 
In 2005, he predicted in a series of papers with his postdoc at the time, Avery Broderick, how a hot spot in orbit around a black hole would appear; their predictions were confirmed in 2018 by the GRAVITY instrument on the VLT which observed a circular motion of the centroid of light of the black hole at the center of the Milky Way, SgrA*. In 2009, Broderick and Loeb predicted the shadow of the black hole in the giant elliptical galaxy M87, which was imaged in 2019 by the Event Horizon Telescope. In 2013, a report was published on the discovery of the \"Einstein Planet\" Kepler 76b, the first Jupiter size exoplanet identified through the detection of relativistic beaming of its parent star, based on a technique proposed by Loeb and Gaudi in 2003. In addition, a pulsar was discovered around the supermassive black hole, SgrA*, following a prediction by Pfahl and Loeb in 2004.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31883", "title": "Uncertainty principle", "section": "Section::::Critical reactions.:EPR paradox for entangled particles.\n", "start_paragraph_id": 114, "start_character": 0, "end_paragraph_id": 114, "end_character": 951, "text": "While it is possible to assume that quantum mechanical predictions are due to nonlocal, hidden variables, and in fact David Bohm invented such a formulation, this resolution is not satisfactory to the vast majority of physicists. The question of whether a random outcome is predetermined by a nonlocal theory can be philosophical, and it can be potentially intractable. If the hidden variables are not constrained, they could just be a list of random digits that are used to produce the measurement outcomes. To make it sensible, the assumption of nonlocal hidden variables is sometimes augmented by a second assumption—that the size of the observable universe puts a limit on the computations that these variables can do. A nonlocal theory of this sort predicts that a quantum computer would encounter fundamental obstacles when attempting to factor numbers of approximately 10,000 digits or more; a potentially achievable task in quantum mechanics.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1810098", "title": "Structure formation", "section": "Section::::Very early universe.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 347, "text": "Other theories of the very early universe have been proposed that are claimed to make similar predictions, such as the brane gas cosmology, cyclic model, pre-big bang model and holographic universe, but they remain nascent and are not widely accepted. Some theories, such as cosmic strings, have largely been refuted by increasingly precise data.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "246070", "title": "Predictability", "section": "Section::::Predictability and causality.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 390, "text": "In experimental physics, there are always observational errors determining variables such as positions and velocities. So perfect prediction is \"practically\" impossible. Moreover, in modern quantum mechanics, Werner Heisenberg's indeterminacy principle puts limits on the accuracy with which such quantities can be known. So such perfect predictability is also \"theoretically\" impossible. 
\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26476831", "title": "History of randomness", "section": "Section::::17th–19th centuries.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 618, "text": "From the time of Newton until about 1890, it was generally believed that if one knows the initial state of a system with great accuracy, and if all the forces acting on the system can be formulated with equal accuracy, it would be possible, in principle, to make predictions of the state of the universe for an infinitely long time. The limits to such predictions in physical systems became clear as early as 1893 when Henri Poincaré showed that in the three-body problem in astronomy, small changes to the initial state could result in large changes in trajectories during the numerical integration of the equations.\n", "bleu_score": null, "meta": null } ] } ]
null
21uqj6
if your car gets stolen and your insurance covers it, what happens if the stolen car gets found after you have already gotten a new one?
[ { "answer": "If you had comprehensive coverage (which is what total theft falls under) and your insurance company paid out on the claim, you either have the option to repay back the money for the value of the car or the insurance company simply takes possession of the vehicle. People choosing to retain the vehicle and pay back the settlement is rare but does happen. We see this every once in a while, and we do also pay to have the vehicle towed back to you.\nSource: Licensed auto adjuster", "provenance": null }, { "answer": "I buy it from an insurance auction.", "provenance": null }, { "answer": "I was given a choice when my car was stolen because the insurance cut a check to my bank 8 hours before my car was found and it hadn't cleared so they could cancel it still. I told them to give me an hour to go look at it and at first glance I called them back and said its yours. that car was beat up from the feet up and I wanted nothing to do with it lol. nothing was taken besides my emergency toolbox so I got most of my possessions back at least but I don't think many people get that choice to go decide if they want the car or not afterword.", "provenance": null }, { "answer": "I handle auto theft investigations, the insurance company pays for the value of the vehicle. In a sense think of it as them buying it from you and if it is found then it belongs to the insurance company. I've had cars I investigated be recovered 2 years later in perfect condition. Any questions feel free to ask and I'll answer with as much as I am able to release. ", "provenance": null }, { "answer": "In college, my dad's car was stolen from our driveway. We lived in a gated community. It was a nice house, but nothing crazy. Still had a gate where you needed a code. \n\nI get a call the next morning. \"Hey, any chance you came by and took the car?\" \"Uhh, nope. Still at school\" (in town school)\n\nTurns out it was stollen. Insurance gave him the price of the car, as it was valued at the time (depreciation). \n\nA couple of months later that let him know the guy was busted for trying to sell the car for crack. It was nice of them to let us know, but they owned it at that point. ", "provenance": null }, { "answer": "This actually happened to me. I had full coverage insurance. The cops found the car like a week after the insurance company paid out my claim and about a couple of months after it was stolen. The insurance company owned the vehicle. I was notified and visited the tow yard to get some personal belongings. Found out later that I actually \"stole\" my belongings since I no longer owned them. I was supposed to file claims to my insurance company for my personal belongings as well, which I didn't know at the time. I actually got a carfax from the car months after that and saw the insurance company sent it to an auction. The strange thing is that they branded it a salvage title salvage but the car was in perfect condition.", "provenance": null }, { "answer": "After the car is claimed by the insurance company they will go up for sale. Ive seen 40k~ Mercedes auction for $12k cause it was stolen, no exterior damage just needed new locks.", "provenance": null }, { "answer": "Story time. Yay! This happened to me. A number of years ago, I was in a long distance relationship with a girl. I was living in Colorado and she was in a shit town in Washington state. The relationship had gone on for long enough that I decided it was time to shit or get off the pot. 
To be together I moved to Washington after I graduated from university. She had a year left in school. I had no money. Money enough to move but that was it. I did have a '94 Nissan Altima. It was cherry red with grey leather interior and \"wood\" trimmings. I loved that thing. It had some scratches but its only real flaw was that the front driver door wouldn't lock, so I never kept anything of value in there, figuring \"who the hell wants a shit box Nissan?\". Well, the fine people in sometown, WA did! I came out one morning to find my car not where I left it, or anywhere at all. So I called the police. Filed a report, figuring it's long gone, but not an hour later I got a call from the police letting me know they found the car. Turns out it was involved in a high speed pursuit running drugs from the other end of town. The Colorado plates tipped 'em off. I guess the thief sped off and was pursued. He bailed and ditched the car in some neighborhood where the police pursued him on foot with a K9 unit. They caught him. My car was exactly the way I left it! But I was required to have an insurance adjuster look at it. Because of its scratches and broken door, it was declared a total loss but it ran just fine. Insurance paid me out more than I bought the car for, at full value. They paid me something like $4000. I did have to pay a \"salvage fee\" of like $200 to keep the car. As a result of the salvage title, they wouldn't insure it beyond liability. So I just needed to make sure I didn't crash it or let it get stolen again. So I paid to have the door fixed. The bigger upside to this is that with the money I got from insurance, I got an engagement ring! I spent like $1000 on that and the rest I put in the bank. I sold that car a few years later for another $1000 to some guy, with full disclosure that it was a salvage title. \n\nTL;DR: Stolen car helps me buy an engagement ring. ", "provenance": null }, { "answer": "Insurance adjuster here. When you report your vehicle stolen we have 60 days in Rhode Island to attempt recovery. Unless of course we have obvious signs of theft (e.g. carjacking, use in a crime on video), most companies will not issue payment till then, for this very reason. However, in the unlikely event your car did turn up after we paid out your claim, we would have already bought your title from you, so we own the recovered vehicle and will most likely sell it at an insurance auto auction. ", "provenance": null }, { "answer": "Usually, it belongs to your insurance company now. They'll take full ownership of the stolen vehicle. This should be stated in your insurance policy.", "provenance": null }, { "answer": "This exact thing happened to me. The police found my car, parked and in perfect condition, a week or so after the insurance cut me a check and I bought another car with it.\n\nI just let the insurance company know and that was it. They went and took possession of the original car and put it up for auction, I assume to recoup their payout to me.\n\nThey did let me go and grab some personal items that were left in the car. \n\n", "provenance": null }, { "answer": "Yep, it goes to the insurance company, who auctions it. Here's the Insurance Auto Auction website: _URL_0_", "provenance": null }, { "answer": "If it's found in your garage, you go to jail", "provenance": null }, { "answer": "My dad's truck got stolen off the dealership lot when it was in for an oil change (story on request). 
They eventually found his truck, intact actually, insurance let him pull all the cab-over camper hardware and airbags off it but after that it was theirs even though it was intact.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "6400749", "title": "Automobile folklore", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 649, "text": "Some drivers believe that a new car is in greater danger than a used car of getting into an accident or having a collision. Some drivers will leave change under their seats. Others use one coin to scratch the car, based on the (false) belief that since the car is new and nothing has happened to it yet, the chances of something bad happening to the car is greater when compared to a used car which already has its fair share of dents and scratches. In hopes of preventing a high damaging accident, they will place a small nick or scratch on the car in an area where it will not be seen. The inside of the wheel well is one commonly scratched area.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13625540", "title": "Security deposit", "section": "Section::::In leasing.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 293, "text": "Often car rental and car leasing companies will require a deposit to protect themselves against possible damage to the car. Once the car is returned, it is checked for any possible damage, and if damage is found, funds are deducted from the deposit to cover the repairs and the loss of value.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4627760", "title": "Vehicle title branding", "section": "Section::::Objectives.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 505, "text": "BULLET::::1. A deterrent to auto theft: If a vehicle is a complete loss due to an accident, its serial number (VIN, Vehicle identification number) and registration documents could still be of potential value to persons dealing in stolen cars. The diminished sale value of a title branded vehicle reduces the profitability of switching the registration and VIN from an accident vehicle to that of a stolen vehicle of the same make and model/year, in an attempt to register it as a rebuilt car and sell it.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8054500", "title": "Total loss", "section": "Section::::Auto insurance.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 1200, "text": "In many jurisdictions a vehicle designated as a total loss is sold by insurance companies to general public, auto dealers, auto brokers, or auto wreckers. The metrics insurance companies use to make the decision include the cost of the repairs needed plus the value of the remaining parts, added to the cost of reimbursing the driver for a rental while the car in question is repaired. If this figure exceeds the value of the car after it is repaired, the vehicle is deemed a total loss. In most jurisdictions, a decision by an insurer to write off a vehicle results in vehicle title branding, marking the car as \"salvage\" or (if repaired and reinspected under subsequent ownership) \"rebuilt\". If the vehicle is not severely damaged, however, it can be restored to its original condition. After a government approved inspection, the vehicle can be put back on the road. The inspection process may not attempt to assess the quality of the repairs. 
This function will be relegated to a professional mechanic or inspector. However, if the vehicle is severely damaged as per standards set by state or provincial governments, the vehicle is dismantled by an auto wrecker and is sold as parts or scrapped.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "41108090", "title": "South African insurance law", "section": "Section::::Duty to disclose material facts.:Non-material facts.\n", "start_paragraph_id": 236, "start_character": 0, "end_paragraph_id": 236, "end_character": 713, "text": "In \"Commercial Union v Lotter\", the buyer of a luxury motor vehicle did not disclose to the insurer that the vehicle had been stolen from another country. When the vehicle was stolen again, the insurance company repudiated the claim. The court upheld the company's repudiation on the basis that material facts had not been disclosed. The insurance company argued that its right of subrogation was diminished by the fact that the vehicle in question was a stolen vehicle when the insurance policy was taken out: The insured had no title to the vehicle, so the insurance company could not sue a negligent third party, in terms of its right of subrogation, for the full costs of repairing any damage to the vehicle.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15274568", "title": "Auto auction", "section": "Section::::Dealer auto auctions.\n", "start_paragraph_id": 37, "start_character": 0, "end_paragraph_id": 37, "end_character": 344, "text": "BULLET::::- Salvage: vehicles that have been in accidents, floods, fires or recovered thefts that have been purchased by insurance companies. The insurance companies sell these vehicles to dealers or body shops who will fix them and resell them, or auto recyclers who will part out the remaining parts of the vehicle that haven't been damaged.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6981214", "title": "Salvage title", "section": "Section::::Determination of salvage status.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 685, "text": "Upon paying the claim, the insurer may offer to return the vehicle to the owner as an insurance buy-back, in which case the owner is responsible for having the repairs made and having the car inspected by a State-designated facility. Depending on the state, this inspection may remove the salvage brand from the vehicle's title. The exact percentage of value that triggers the decision to total the vehicle is guided by applicable laws and regulations. The damage estimate is calculated at retail repair rates, which may be more than the cost of wholesale repair. Vehicles that are not bought back are auctioned as salvage to an auto recycler or a rebuilder and given a salvage title.\n", "bleu_score": null, "meta": null } ] } ]
null
cfi5fu
what is the difference between normal steel and galvanized steel?
[ { "answer": "Galvanizing is a process of adding a zinc coat to the steel. It should make the nuts last longer and prevents rust.\n\nThe actual process is slightly more complicated but this is essentially it.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2299119", "title": "Decarburization", "section": "Section::::Electrical steel.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 414, "text": "Electrical steel is one material that uses decarburization in its production. To prevent the atmospheric gases from reacting with the metal itself, electrical steel is annealed in an atmosphere of nitrogen, hydrogen, and water vapor, where oxidation of the iron is specifically prevented by the proportions of hydrogen and water vapor so that the only reacting substance is carbon being made into carbon monoxide.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "27058", "title": "Steel", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 240, "text": "Steel is an alloy of iron and carbon, and sometimes other elements. Because of its high tensile strength and low cost, it is a major component used in buildings, infrastructure, tools, ships, automobiles, machines, appliances, and weapons.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15880710", "title": "Window shutter hardware", "section": "Section::::Shutter and hardware terminology.:Blacksmiths' terms about iron.\n", "start_paragraph_id": 47, "start_character": 0, "end_paragraph_id": 47, "end_character": 454, "text": "Steel - basically a mixture of iron and carbon, although other metals may be added to change the characteristic of the steel (add chrome, get stainless steel; add nickel, get armor plate). The carbon content of steel is closely controlled in its manufacture - the more carbon the \"stronger\" the steel. Steel varies from iron in that it can be hardened. Heat steel red-hot and quench it and it gets hard, heat iron red hot and quench it and it gets cold.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3737589", "title": "Electrical steel", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 275, "text": "Electrical steel (lamination steel, silicon electrical steel, silicon steel, relay steel, transformer steel) is an iron alloy tailored to produce specific magnetic properties: small hysteresis area resulting in low power loss per cycle, low core loss, and high permeability.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "464779", "title": "Building material", "section": "Section::::Man-made substances.:Metal.\n", "start_paragraph_id": 72, "start_character": 0, "end_paragraph_id": 72, "end_character": 209, "text": "BULLET::::- Steel is a metal alloy whose major component is iron, and is the usual choice for metal structural building materials. 
It is strong, flexible, and if refined well and/or treated lasts a long time.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1961488", "title": "Ferrocerium", "section": "Section::::Use.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 486, "text": "While ferrocerium-and-steels function in a similar way to natural flint-and-steel in fire starting, ferrocerium takes on the role that steel played in traditional methods: when small shavings of it are removed quickly enough the heat generated by friction is enough to ignite those shavings, converting the metal to the oxide, i.e., the sparks are tiny pieces of burning metal. The sparking is due to cerium's low ignition temperature of between . About 700 tons were produced in 2000.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22615422", "title": "Structural material", "section": "Section::::Iron.:Steel.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 303, "text": "Steel is a ductile material, which will behave elastically until it reaches yield (point 2 on the stress–strain curve), when it becomes plastic and will fail in a ductile manner (large strains, or extensions, before fracture at point 3 on the curve). Steel is equally strong in tension and compression.\n", "bleu_score": null, "meta": null } ] } ]
null
4nydrm
how are the "dark triad" traits in psychology different from each other?
[ { "answer": "Psychopathy is a lack of empathy & understanding of how people feel.\n\nNarcissism is placing your own needs & wants over others.\n\nMachiavellianism is lying & manipulating people to achieve your goals.\n\nThey're called the \"dark triad\" because they do so often overlap & feed into each other. It's easy to be narcissistic if you don't care about how people feel. It's easy to manipulate people if you feel your needs are more important than theirs.", "provenance": null }, { "answer": "While there's a lot of overlap among the three diagnoses, there are some unique factors to each diagnosis. So for example all 3 might lack empathy, have feelings of grandeur, and be controlling and manipulative, it's only psychopaths who lack any conscience. They're also easily bored and lack an ability to feel fear beyond immediate peril. They don't need people because they don't care about people. Hannibal Lecter is a psychopath. He feels no guilt for anything he's done. He's bored and likes to show he's smarter than others by doing things such as killing and eating people, or killing people and making other people eat them unwittingly. He can bite off someone's tongue and swallow it while his heart rate never changes from normal, because he doesn't feel fear or nervousness, nor emotions beyond very shallow ones of entertainment, when he does this.\n\nA narcissist does have a conscience so *may* feel guilt about bad things they've done, though many have a very weak conscience so will feel no guilt at all. They typically DO have empathy, but only for those who they recognize as useful to them, or similar to them. To the extent a person is similar to them, they may relate to that person. Otherwise, they won't. They do often need people--to admire them and reflect back their greatness. They see 3 types of people: people they love (because they're useful and/or embody traits that they want to have in themselves), people they hate (because they embody all of the traits that they abhor in themselves) and people who don't exist (everybody else, who can't be useful to them as someone to love or hate). Narcissists are prone to narcissistic rage. That is, they can easily become furious if challenged or wounded by an insult or threat to their elevated sense of self. Think of the Wizard of Oz--the great, powerful Oz. Enormous, green, with smoke and a booming voice. That's how the narcissist sees himself. All powerful, a whole city worshiping and celebrating him. But when someone starts to figure out that that's all an act, and the Wizard is really just a cowering, pathetic loser behind a curtain, then he gets *really* angry and threatening. He sends a little girl and her friends to (he thinks) die at the hands of a witch just so he can conceal his true identity as a nobody. \n\nSomeone who has machiavellianism desires to control and manipulate and be powerful. They might be narcissistic too, but they might not be. It is associated a lot with narcissism, though, because narcissists have a hollow core that they're trying to fill up through external praise and recognition. Think of Benito Mussolini. He desired power and recognition, to bend people to his will, to control. He could be cruel in exerting his control, though he wasn't cruel for fun or entertainment, as a narcissist or psychopath might be. He was cruel to assert his power, though. He did very evil things in Ethiopia to gain power and wealth. But he also seemed to form bonds of closeness with family members. 
Psychopaths wouldn't do that, and narcissists would only do it to the extent that it glorified themselves, and I don't think he did that.\n\nThese terms have recently been thrown around a lot in connection with Donald Trump. I'd say he's a narcissist, not a psychopath. And he has a certain amount of machiavellianism in him, but it's not so strong with him, at least right now--if he wins, it will come out greatly, though, because he'll have the power to do a lot and he'll want more. All 3 of these types are insatiable in their pursuit of power in the case of machiavellianism, acclaim and love in the case of narcissism, and amusement in the case of psychopaths.", "provenance": null }, { "answer": "It is also important to note that diagnostic names are not concrete fact. These change drastically over even decades as more understanding develops. Some even being equivalents of throwing darts at a board. I'd caution parsing too much.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "18280830", "title": "Dark triad", "section": "Section::::Components.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 463, "text": "There is a good deal of conceptual and empirical overlap between the dark triad traits. For example, researchers have noted that all three traits share characteristics such as a lack of empathy, interpersonal hostility, and interpersonal offensiveness. Likely due in part to this overlap, a number of measures have recently been developed that attempt to measure all three dark triad traits simultaneously, such as the Dirty Dozen and the short dark triad (SD3).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "231673", "title": "Seduction", "section": "Section::::Strategies.:Short term.:In Males.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 1075, "text": "The dark triad is made up of three personality traits, psychopathy, narcissism and Machiavellianism and was proposed by Paulhus and Williams (2002) . The three traits are exploitative in nature and are used for sexually coercive behaviours, useful in the seduction process. Typically these three traits are deemed maladaptive for the individual and society. Nevertheless, these traits have been found to be adaptive in an exploitative strategy in short term mating. Dark triad traits are adaptive for an unrestricted sociosexuality and promiscuous behaviours. The three traits are associated with impulsivity, manipulative behaviours and lack of empathy. These personality traits would be useful in seducing a partner for a short term encounter. From an evolutionary perspective, these would have been particularly beneficial to our ancestral males who wanted to increase their reproductive success, through seducing many women and therefore increasing their chance of passing on their genes. These particular traits may be used as a tactic for increasing success in mating.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "39076687", "title": "Delroy L. Paulhus", "section": "Section::::Dark personalities.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 617, "text": "Paulhus and Williams (2002) coined the term \"dark triad\" in referring to three socially aversive personalities: machiavellianism, narcissism, and psychopathy. The research showed both similarities and differences among the three constructs. 
Their distinctiveness was confirmed in studies of associations with impulsivity, aggression, body modification, mate choice, sexual deviancy, scholastic cheating, revenge, and the personality of stalkers. A fourth member, everyday sadism, was recently added to the pantheon of dark personalities. Questionnaire measures are available in a chapter by Paulhus and Jones (2015).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7753430", "title": "Psychopathy", "section": "Section::::Definition.:Personality dimensions.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 271, "text": "Psychopathy, narcissism and Machiavellianism, three personality traits that are together referred to as the dark triad, share certain characteristics, such as a callous-manipulative interpersonal style. The dark tetrad refers to these traits with the addition of sadism.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18280830", "title": "Dark triad", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 572, "text": "Research on the dark triad is used in applied psychology, especially within the fields of law enforcement, clinical psychology, and business management. People scoring high on these traits are more likely to commit crimes, cause social distress and create severe problems for an organization, especially if they are in leadership positions (for more information, see psychopathy, narcissism, and Machiavellianism in the workplace). They also tend to be less compassionate, agreeable, empathetic, satisfied with their lives, and likely to believe they and others are good.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "29790364", "title": "HEXACO model of personality structure", "section": "Section::::Research relating to the HEXACO model.:Honesty-Humility and the Dark Triad.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 1253, "text": "The dark triad of personality consists of psychopathy, Machiavellianism and narcissism. Psychopathy is identified by characteristics such as remorselessness, antisociality and selfishness. Machiavellianism consists of selfishness, in that one will focus on their own needs, primarily by manipulating others. Narcissism can also be defined as selfishness, but is different as this person would consider themselves of a higher importance than those around them. However, these constructs are said to be not fully represented in common five-factor models of personality. The Dark Triad can be conceptualized as being on the opposite pole of Honesty-Humility (Sincere, Faithful, Loyal etc.), which would mean that low levels of Honesty-Humility corresponds to higher levels of psychopathy, Machiavellianism and/or narcissism. The Dark Triad personality constructs tend to only correlate with disagreeableness on the Big Five Inventory, otherwise they are represented inconsistently on measures of the Big Five traits. 
For that reason, several researchers have used the HEXACO model to gain a more detailed understanding of the personality characteristics of individuals who exhibit traits/behaviours that would be considered along the Dark Triad dimension.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "18280830", "title": "Dark triad", "section": "Section::::Origins.:Biological.\n", "start_paragraph_id": 25, "start_character": 0, "end_paragraph_id": 25, "end_character": 473, "text": "found to have substantial genetic components. It has also been found that the observed relationships among the dark triad, and among the dark triad and the Big Five, are strongly driven by individual differences in genes. However, while psychopathy (h = 0.64) and narcissism (h = 0.59) both have a relatively large heritable component, Machiavellianism (h = 0.31) while also moderately influenced by genetics, has been found to be less heritable than the other two traits.\n", "bleu_score": null, "meta": null } ] } ]
null
2vkn4b
Why don't we launch spacecraft using magnets?
[ { "answer": "To get into low earth orbit with a rail gun would require an exit velocity so high that whatever you are launching would burn up immediately. It has been suggested as an efficient way to launch things from bodies without an atmosphere.", "provenance": null }, { "answer": "It's physical limits. If you launch with a strong rail gun then your spacecraft will just burn up like a meteor, simply because the speed required to get into orbit is just too fast to travel through the atmosphere. That's why they need a heat shield when coming back from orbit.\n\nYou can launch with a heat shield from the rail gun, but in this case it will slow down due to atmospheric drag and fall back before reaching orbit.\n\nThe only way to do this would be having a rail gun long enough to rise like 100 km or more, where atmospheric drag becomes negligible. In this case the physical limit would be the resistance of materials: the rail gun's cannon would collapse under its own weight.\n\n_URL_0_\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "25494798", "title": "Magnetorquer", "section": "Section::::Disadvantages.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 653, "text": "The main disadvantage of magnetorquers is that very high magnetic flux densities are needed if large craft have to be turned very fast. This either necessitates a very high current in the coils, or much higher ambient flux densities than are available in Earth orbit. Consequently, the torques provided are very limited and only serve to accelerate or decelerate the change in a spacecraft's attitude by minute amounts. Over time active control can produce very fast spinning even here, but for accurate attitude control and stabilization the torques provided often aren't enough. To overcome this, magnetorquer are often combined with reaction wheels.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "49023", "title": "Propulsion", "section": "Section::::Vehicular propulsion.:Space.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 757, "text": "All current spacecraft use chemical rockets (bipropellant or solid-fuel) for launch, though some (such as the Pegasus rocket and SpaceShipOne) have used air-breathing engines on their first stage. Most satellites have simple reliable chemical thrusters (often monopropellant rockets) or resistojet rockets for orbital station-keeping and some use momentum wheels for attitude control. Soviet bloc satellites have used electric propulsion for decades, and newer Western geo-orbiting spacecraft are starting to use them for north-south stationkeeping and orbit raising. Interplanetary vehicles mostly use chemical rockets as well, although a few have used ion thrusters and Hall effect thrusters (two different types of electric propulsion) to great success.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37845", "title": "Magnetic sail", "section": "Section::::Modes of operation.:Inside a planetary magnetosphere.\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 397, "text": "In theory, it is possible for a magnetic sail to launch directly from the surface of a planet near one of its magnetic poles, repelling itself from the planet's magnetic field. However, this requires the magnetic sail to be maintained in its \"unstable\" orientation. 
A launch from Earth requires superconductors with 80 times the current density of the best known high-temperature superconductors.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "28873712", "title": "Inductive discharge ignition", "section": "Section::::Magnetos.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 293, "text": "Due to their reliability, magnetos are used as ignition systems on aircraft. They are also used on machinery that do not have a separate electric supply or battery. They are also used on drag race cars because they offer a weight advantage over systems that utilize a distributor and battery.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37845", "title": "Magnetic sail", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 322, "text": "A magnetic sail or magsail is a proposed method of spacecraft propulsion which would use a static magnetic field to deflect charged particles radiated by the Sun as a plasma wind, and thus impart momentum to accelerate the spacecraft. A magnetic sail could also thrust directly against planetary and solar magnetospheres.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "29398", "title": "Single-stage-to-orbit", "section": "Section::::Approaches.:Nuclear propulsion.\n", "start_paragraph_id": 80, "start_character": 0, "end_paragraph_id": 80, "end_character": 546, "text": "Due to weight issues such as shielding, many nuclear propulsion systems are unable to lift their own weight, and hence are unsuitable for launching to orbit. However some designs such as the Orion project and some nuclear thermal designs do have a thrust to weight ratio in excess of 1, enabling them to lift off. Clearly one of the main issues with nuclear propulsion would be safety, both during a launch for the passengers, but also in case of a failure during launch. No current program is attempting nuclear propulsion from Earth's surface.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2306393", "title": "Spin-stabilisation", "section": "Section::::Use.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 266, "text": "On rockets with a solid motor upper stage, spin stabilization is used to keep the motor from drifting off course as they don't have their own thrusters. Usually small rockets are used to spin up the spacecraft and rocket then fire the rocket and send the craft off.\n", "bleu_score": null, "meta": null } ] } ]
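To put a rough number on that "exit velocity": the sketch below is a back-of-the-envelope illustration, assuming the standard circular-orbit formula v = sqrt(GM/r) and a typical low-Earth-orbit altitude of 300 km (the altitude is an assumption chosen for illustration, not a figure from the answers above).

    # Minimal sketch: circular-orbit speed v = sqrt(G * M_earth / r),
    # with r measured from the centre of the Earth.
    import math

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_EARTH = 5.972e24   # mass of the Earth, kg
    R_EARTH = 6.371e6    # mean radius of the Earth, m
    altitude = 300e3     # assumed low-Earth-orbit altitude, m

    v = math.sqrt(G * M_EARTH / (R_EARTH + altitude))
    print(f"{v / 1000:.1f} km/s")  # about 7.7 km/s, roughly Mach 23 at sea level

A ground-level magnetic launcher would have to impart essentially all of that speed inside the dense lower atmosphere, which is why the answers above point to burn-up and drag as the limiting problems.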
null
7go45y
how does winrar/7-zip just make my files smaller?
[ { "answer": "Just about any set of information contains some repeating information. My paragraph contains the word contains several times. One means to reduce the size would be to put a short unique identifier in place of each word \"contains\" and a reference that the identifier means contains. \n\nSo a very simple compression scheme for the above paragraph might look like:\n\nJust about any set of ! @ some repeating !. My paragraph @ the word @ several times. One means to reduce the size would be to put a short unique # in place of each word \"@\" and a reference that the # means @. \n\n!=information\n\n@=contains \n\n\\#=identifier\n\nThat's a quick 15% reduction in the characters (counting the replacement information we added). If we did the same with parts of words, or shorter binary strings, we could further reduce the space. It's easy to automatically replace the characters with the original words when you want the original file back.\n\nCompression schemes do something similar with the binary data that makes up all computer files. \n\nThat's why compression works better on data with lots of repeating information (like text or a bitmap image) rather than data with little repeating information (like a jpg). ", "provenance": null }, { "answer": "I'm not familiar with the exact methods they use but I know one technique is redundancy removal (this is \"lossless in that when uncompressing nothing is lost - it can be restored to its original condition every time)\n\nLet's take an image for example. It's made of pixels, each one represented by it's rgb color value in rows and columns.\n\nLet's say you have a white area in the photo, so you have say twenty white pixels right next to each other in a row. The easiest way to represent this is by saving twenty pixels with the same value, however you could instead compress it by saving it in a way that says (white pixel) x 20, so you've stored only two values - one for the pixel type and one for the quantity - instead of twenty.\n\nNow there are more complex ways to do this but it all boils down to things like that, finding ways to better represent values for size rather than convenience and easy use.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "322689", "title": "7-Zip", "section": "Section::::File manager.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 501, "text": "7-Zip comes with a file manager along with the standard archiver tools. The file manager has a toolbar with options to create an archive, extract an archive, test an archive to detect errors, copy, move, and delete files, and open a file properties menu exclusive to 7-Zip. The file manager, by default, displays hidden files because it does not follow Windows Explorer's policies. The tabs show name, modification time, original and compressed sizes, attributes, and comments (4DOS codice_3 format).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "322689", "title": "7-Zip", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 695, "text": "7-Zip is a free and open-source file archiver, a utility used to place groups of files within compressed containers known as \"archives\". It is developed by Igor Pavlov and was first released in 1999. 7-Zip uses its own 7z archive format, but can read and write several other archive formats. 
The program can be used from a command-line interface as the command p7zip, or through a graphical user interface that also features shell integration. Most of the 7-Zip source code is under the GNU LGPL license; the unRAR code, however, is under the GNU LGPL with an \"unRAR restriction\", which states that developers are not permitted to use the code to reverse-engineer the RAR compression algorithm.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22850755", "title": "Xz", "section": "Section::::History.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 249, "text": "Although the original 7-Zip program, which implements LZMA2 compression, is able to produce small files at the cost of speed, it also created its own unique archive format which was made primarily for Windows and did not support Unix functionality.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1888029", "title": "Solid compression", "section": "Section::::Rationale.:Costs.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 463, "text": "On the other hand, getting a single file out of a solid archive originally required processing all the files before it, so modifying solid archives could be slow and inconvenient. Later versions of 7-zip use a variable solid block size, so that only a limited amount of data must be processed in order to extract one file. Parameters control the maximum solid block window size, the number of files in a block, and whether blocks are separated by file extension.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "322689", "title": "7-Zip", "section": "Section::::Formats.:7z.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 324, "text": "By default, 7-Zip creates 7z-format archives with a codice_1 file extension. Each archive can contain multiple directories and files. As a \"container\" format, security or size reduction are achieved using a stacked combination of filters. These can consist of pre-processors, compression algorithms, and encryption filters.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "188488", "title": "Zip (file format)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 909, "text": "ZIP is an archive file format that supports lossless data compression. A ZIP file may contain one or more files or directories that may have been compressed. The ZIP file format permits a number of compression algorithms, though DEFLATE is the most common. This format was originally created in 1989 and released to the public domain on February 14, 1989 by Phil Katz, and was first implemented in PKWARE, Inc.'s PKZIP utility, as a replacement for the previous ARC compression format by Thom Henderson. The ZIP format was then quickly supported by many software utilities other than PKZIP. Microsoft has included built-in ZIP support (under the name \"compressed folders\") in versions of Microsoft Windows since 1998. Apple has included built-in ZIP support in Mac OS X 10.3 (via BOMArchiveHelper, now Archive Utility) and later. 
Most have built in support for ZIP in similar manners to Windows and Mac OS X.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3909369", "title": "Fragmentation (computing)", "section": "Section::::Types of fragmentation.:Data fragmentation.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 635, "text": "When writing a new file of a known size, if there are any empty holes that are larger than that file, the operating system can avoid data fragmentation by putting the file into any one of those holes. There are a variety of algorithms for selecting which of those potential holes to put the file; each of them is a heuristic approximate solution to the bin packing problem. The \"best fit\" algorithm chooses the smallest hole that is big enough. The \"worst fit\" algorithm chooses the largest hole. The \"first-fit algorithm\" chooses the first hole that is big enough. The \"next fit\" algorithm keeps track of where each file was written.\n", "bleu_score": null, "meta": null } ] } ]
null
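A minimal Python sketch of the two ideas described in the compression answers above: replacing repeated words with short tokens (dictionary substitution) and collapsing runs of identical pixel values into (value, count) pairs (run-length encoding). This is purely illustrative and is not how 7-Zip, DEFLATE, or JPEG actually work; the function names and the NUL-delimited tokens are invented for this example.

    # Two toy lossless compression ideas: dictionary substitution and run-length encoding.

    def dictionary_compress(text, words):
        """Replace each word in `words` with a short unique token; return the
        compressed text plus the lookup table needed to reverse it."""
        table = {word: "\x00" + str(i) + "\x00" for i, word in enumerate(words)}
        for word, token in table.items():
            text = text.replace(word, token)
        return text, table

    def dictionary_decompress(text, table):
        for word, token in table.items():
            text = text.replace(token, word)
        return text

    def run_length_encode(values):
        """Collapse runs of identical values into [value, count] pairs."""
        encoded = []
        for value in values:
            if encoded and encoded[-1][0] == value:
                encoded[-1][1] += 1
            else:
                encoded.append([value, 1])
        return encoded

    def run_length_decode(encoded):
        return [value for value, count in encoded for _ in range(count)]

    if __name__ == "__main__":
        paragraph = "Just about any set of information contains some repeating information."
        packed, table = dictionary_compress(paragraph, ["information", "contains"])
        assert dictionary_decompress(packed, table) == paragraph

        row = ["white"] * 20 + ["black"] * 3   # twenty white pixels, then three black ones
        assert run_length_decode(run_length_encode(row)) == row
        print(len(paragraph), "->", len(packed), "characters;",
              len(row), "->", len(run_length_encode(row)), "run-length pairs")

As the first answer notes, the lookup table (or the run-length counts) has to be stored alongside the compressed data, so the scheme only pays off when the replaced values repeat often enough.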
17l2uv
model-view-controller pattern
[ { "answer": "MVC isn't that hard. You have the view (or views). This is everything you see. It's a lot of rules about how to show the info. \n\nThen you have the model, which is the data. \n\nThen you have the controller. The controller takes the data and moves it into the view. If something changes, you tell the controller and he updates the correct places. \n\nExample: \n\nView:\n\n TextBox age: Position left \n Slider ageSlider: position right\n %%could have separated them into two views if I wanted to\n\n onSliderUpdate(\n NotifyControler(newAge)\n )\n\nModel:\n\n Person{\n int age.\n }\n\nController: \n\n OnNotify(int age)\n model.age = age\n for each view\n updateView\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "288233", "title": "Model–view–controller", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 420, "text": "Model–View–Controller (usually known as MVC) is an architectural pattern commonly used for developing user interfaces that divides an application into three interconnected parts. This is done to separate internal representations of information from the ways information is presented to and accepted from the user. The MVC design pattern decouples these major components allowing for code reuse and parallel development.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6851173", "title": "Graphical Editing Framework", "section": "Section::::GEF 3.x.:Architecture.:Design pattern usage.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 318, "text": "BULLET::::- Model-View-Controller is an architectural design pattern which divides an application into separate parts which communicate with each other in a specific way. The goal is to separate data model (model), graphical user interface (view) and business logic (controller). GEF uses the MVC pattern extensively.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1934392", "title": "Apache Wicket", "section": "Section::::Rationale.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 470, "text": "Traditional model-view-controller (MVC) frameworks work in terms of whole requests and whole pages. In each request cycle, the incoming request is mapped to a method on a \"controller\" object, which then generates the outgoing response in its entirety, usually by pulling data out of a \"model\" to populate a \"view\" written in specialized template markup. This keeps the application's flow-of-control simple and clear, but can make code reuse in the controller difficult.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13867326", "title": "ASP.NET MVC", "section": "Section::::Background.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 339, "text": "A \"model\" represents the state of a particular aspect of the application. A \"controller\" handles interactions and updates the model to reflect a change in state of the application, and then passes information to the view. 
A \"view\" accepts necessary information from the controller and renders a user interface to display that information.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9733872", "title": "Map-based controller", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 470, "text": "In the field of control engineering, a map-based controller is a controller whose outputs are based on values derived from a pre-defined lookup table. The inputs to the controller are usually values taken from one or more sensors and are used to index the output values in the lookup table. By effectively placing the transfer function as discrete entries within a lookup table, engineers free to modify smaller sections or update the whole list of entries as required.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34983319", "title": "Hierarchical model–view–controller", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 314, "text": "Hierarchical model–view–controller (HMVC) is a software architectural pattern, a variation of model–view–controller (MVC) similar to presentation–abstraction–control (PAC), that was published in 2000 in an article in JavaWorld Magazine, the authors apparently unaware of PAC, which was published 13 years earlier.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34983319", "title": "Hierarchical model–view–controller", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 208, "text": "The controller has some oversight in that it selects first the model and then the view, realizing an approval mechanism by the controller. The model prevents the view from accessing the data source directly.\n", "bleu_score": null, "meta": null } ] } ]
null
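A minimal, runnable Python version of the TextBox/Slider example sketched in the answer above. The class and method names (Person, AgeTextBox, AgeSlider, AgeController, on_notify) are invented for illustration and stand in for whatever real GUI toolkit would supply the widgets; the point is the flow: a view notifies the controller, the controller updates the model, and the controller then refreshes every view.

    class Person:                       # Model: just the data.
        def __init__(self, age=0):
            self.age = age

    class AgeTextBox:                   # View: knows how to display, not how to store.
        def render(self, person):
            print("[TextBox] age =", person.age)

    class AgeSlider:                    # Another view; it forwards user input to the controller.
        def __init__(self):
            self.controller = None
        def render(self, person):
            print("[Slider]  position =", person.age)
        def on_slider_update(self, new_age):
            # The view never touches the model directly; it notifies the controller.
            self.controller.on_notify(new_age)

    class AgeController:                # Controller: moves data between the model and the views.
        def __init__(self, model, views):
            self.model, self.views = model, views
            for view in views:
                if hasattr(view, "controller"):
                    view.controller = self
        def on_notify(self, new_age):
            self.model.age = new_age    # update the model...
            for view in self.views:     # ...then refresh every view.
                view.render(self.model)

    if __name__ == "__main__":
        person = Person(age=30)
        slider = AgeSlider()
        controller = AgeController(person, [AgeTextBox(), slider])
        slider.on_slider_update(31)     # simulate the user dragging the slider

Splitting the three roles this way is what lets one model drive several views (here a text box and a slider) and lets either view be swapped out without touching the data or the update logic.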
1w8c0c
why do people like to believe in god
[ { "answer": "1) if you've been raised in a religious household, it's hard to say that your family is wrong and taught you incorrect things\n\n2) it's comforting to think that everybody gets what's coming to them in the end, i.e. if you're good, you get rewarded, and if you're bad then you get punished\n\n3) many people are naturally fearful of death, so if someone can \"guarantee\" that it's not scary or bad, that's a relief", "provenance": null }, { "answer": "I imagine you'll get a lot of responses along the lines of \"they need easy comfort\" or \"they were brainwashed as children\".\n\nI don't think these are untrue. But they're too simplified, even for ELI5. \n\nAs far as comfort goes, it's not just comfort people are looking for. It's order. Humans love patterns and we really love authority. We're still very tribal in nature and that concept of a central authority for us isn't just appealing, it's a desire that is a base part of out nature. This is why you often see religions in which gods or a god promise a specific group of people that they are special. This god was part of a tribe. The leader of that tribe. \n\nGods are a powerful explanation and a powerful authority. As history shows, our commitment to that authority and the sense of community is so strong that many of us simply can't let go of. This is the part where the brainwashing children comes in. I wouldn't say it's intentional, at least not in any kind of malicious way (fundamentalists and extremists notwithstanding, they're a totally different story). It's more that we develop our connection to tribe/community very very strongly as children. And if your tribe follows a god as a leader, you will as well.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2527664", "title": "Ásatrúarfélagið", "section": "Section::::Beliefs and theology.\n", "start_paragraph_id": 38, "start_character": 0, "end_paragraph_id": 38, "end_character": 726, "text": "My faith is based on a constant search but I don't search frantically. It's no use to rush out into space to search for some gods there, if they want to have anything to do with me, they will come. I have often become aware of them, but I don't rush after them or shout at them. I have gotten to know them a bit in myself and also in other people. ... Primarily it is the effects of the great force felt by everyone that make me religious. ... The most remarkable thing about faith is that it gives us growth, the possibility to grow and thrive. And humility cannot be neglected. Without it we cannot live to any useful degree, though of course it has its particular place. But a man who is completely without it is a madman.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "333890", "title": "Maasai Creed", "section": "Section::::Text.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 381, "text": "We believe in the one High God of love who created the beautiful world and everything good in it. He created man and wanted man to be happy in the world. God loves the world and every nation and tribe in the world. We have known this God in darkness, and we now know God in the light. 
God promised in his book the Bible that he would save the world and all the nations and tribes.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24742", "title": "Paul Dirac", "section": "Section::::Personal life.:Religious views.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 1683, "text": "I cannot understand why we idle discussing religion. If we are honest—and scientists have to be—we must admit that religion is a jumble of false assertions, with no basis in reality. The very idea of God is a product of the human imagination. It is quite understandable why primitive people, who were so much more exposed to the overpowering forces of nature than we are today, should have personified these forces in fear and trembling. But nowadays, when we understand so many natural processes, we have no need for such solutions. I can't for the life of me see how the postulate of an Almighty God helps us in any way. What I do see is that this assumption leads to such unproductive questions as why God allows so much misery and injustice, the exploitation of the poor by the rich and all the other horrors He might have prevented. If religion is still being taught, it is by no means because its ideas still convince us, but simply because some of us want to keep the lower classes quiet. Quiet people are much easier to govern than clamorous and dissatisfied ones. They are also much easier to exploit. Religion is a kind of opium that allows a nation to lull itself into wishful dreams and so forget the injustices that are being perpetrated against the people. Hence the close alliance between those two great political forces, the State and the Church. Both need the illusion that a kindly God rewards—in heaven if not on earth—all those who have not risen up against injustice, who have done their duty quietly and uncomplainingly. That is precisely why the honest assertion that God is a mere product of the human imagination is branded as the worst of all mortal sins.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9881894", "title": "Justin L. Barrett", "section": "Section::::\"Why Would Anyone Believe in God?\".\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 460, "text": "In his book \"Why Would Anyone Believe in God?\" he suggests that \"belief in God is an almost inevitable consequence of the kind of minds we have. Most of what we believe comes from mental tools working below our conscious awareness. And what we believe consciously is in large part driven by these unconscious beliefs.\" and \"that beliefs in gods match up well with these automatic assumptions; beliefs in an all-knowing, all-powerful God match up even better.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14780759", "title": "Evolutionary psychology of religion", "section": "Section::::Mechanisms of evolution.:Religion as a by-product.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 875, "text": "Justin L. Barrett in \"Why Would Anyone Believe in God?\" (2004) suggests that belief in God is natural because it depends on mental tools possessed by all human beings. He suggests that the structure and development of human minds make belief in the existence of a supreme god (with properties such as being superknowing, superpowerful and immortal) highly attractive. 
He also compares belief in God to belief in other minds, and devotes a chapter to looking at the evolutionary psychology of atheism. He suggests that one of the fundamental mental modules in the brain is the Hyperactive Agency Detection Device (HADD), another potential system for identifying danger. This HADD may confer a survival benefit even if it is over-sensitive: it is better to avoid an imaginary predator than be killed by a real one. This would tend to encourage belief in ghosts and in spirits.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "51220703", "title": "World Youth Day 2019", "section": "Section::::Journey.:Sixth day (27 January).:Closing Mass.\n", "start_paragraph_id": 65, "start_character": 0, "end_paragraph_id": 65, "end_character": 588, "text": "\"We do not always believe that God can be so concrete in our daily lives, so close and real, and even less so that He makes himself present by acting through someone we know, such as a neighbor, a friend, a relative. It is not uncommon for us to behave like the neighbors of Nazareth, preferring a God at a distance: magnificent, good, generous but distant and not bothering. Because a close God in everyday life, friend and brother asks us to learn closeness, daily presence and, above all, fraternity. (...) God is real, because love is real; God is concrete because love is concrete.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "458277", "title": "Red Jacket", "section": "Section::::Speech to the U.S. Senate.:Civility.\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 263, "text": "We also have a religion, which was given to our forefathers, and has been handed down to us their children. We worship in that way. It teaches us to be thankful for all the favors we receive; to love each other, and to be united. We never quarrel about religion.\n", "bleu_score": null, "meta": null } ] } ]
null
dhgpfg
Why was Iran/Persia never colonized?
[ { "answer": "While the area that is modern day Iran was indeed never colonized the way that India was, the country was picked apart bit by bit in the 1800s and what was left was more or less defacto colonized by Russia and Great Britain. Although there were resources to be had, nothing was quite so valuable as to engage a Great Power in a series of costly and strategically risky colonization efforts.\n\nAs a starting point, a huge area of the middle east, centered in Persia, was controlled by the Safavid Empire through much of the early modern era. Similar to the Ottomans or the Mughals, they had a strong government, a standing army made of slaves similar to the effective Ottoman Janissary system, and in the later years a modern gunpowder based musketeer force.\n\nDuring the early to mid 1700s, Persia was on a decline. Indian silk was booming in Europe (and drawing some uncomfortable looks from colonizing powers), causing disruption to the economy. Huge swaths of Safavid territory were lost to the Ottomans and to Russia, in a series of wars launched by Peter the Great. A brief resurgence in the mid century would actually see the territory reconquered, but the country subsequently declined again after their ruler was assassinated, their economic woes continued, and the Safavid dynasty ended. By the turn of the century, as the colonizing powers strengthened their hold on India and the East, what would become modern day Iran was much diminished compared to it's past Imperial self.\n\nThe 1800s weren't all that great for Iranian empires either. More wars with the Russians would lose them most of the Caucasus, Amernia, and some northern cities. A series of wars with Great Britain would lose Herat and eastern territories which were heavily populated and filled with useful agricultural products like saffron. At this point Iran could more or less see the writing on the wall.\n\nIn response to Iran's growing losses and the obvious influence of Britain and Russia, the new Qajar Dynasty ended up more or less acting as puppets to the great powers. Colonization was costly and intensive, and intervention by any power in Iran could have drawn a response from one of the other powers. With this knowledge, and with new diplomatic ties to the west, successive Qajar monarchs sought to modernize Iran, and play western powers off each other, to varying degrees of success. Iran would remain nominally independent, but more or less entirely existing because it lay in both Russian and British spheres of influence. Neither Power willing to commit to an invasion, but both with significant influence in the region.\n\nIran's situation was precarious, but hadn't changed for generations of Qajar rulers. Just to make things interesting, as the world rapidly approached the end of \"classical\" colonization coming after World War II, oil was found in the Middle East in the early 1900s (again drawing those uncomfortable looks from hungry Western powers). Almost concurrently, several terrible famines struck the region and millions of Iranians died. The hundred years of more or less \"stable\" rule quickly crumbled in a rapid-fire series of events.\n\nIran found itself under (mostly neutral) occupation by British, Russian and Ottoman forces during WWI. The Russians suffered their own revolution, the Ottomans collapsed, and the British unsuccessfully tried to setup a protectorate. 
The resultant interwar years were filled with several internal revolutions and coups, a brief constitutional monarchy, a military dictatorship, but ultimately independent Iranian rule. After several hundred years of conflict with western powers, Iran was finally occupied in WWII by a joint invasion of the Soviet Union and Great Britain. I do find it somewhat telling that the only time an occupation like this happened was with the agreement of both British and Russian forces.\n\nHowever, this wasn't an occupation with the intent to colonize. Iran had made it just under the bar for that particular form of control. Instead, Iran got to be the very first theater in a different kind of geopolitical power game, the Cold War. The resultant Iran Crisis of 1946 over the withdrawal of the Soviet forces would end up with the eventual relinquishing of Soviet control in the area, and (with some words from the USA) the Iranians would even get to keep their oil.\n\nI would tend to frame what happened to Iran, not as \"avoiding\" colonization, but as merely getting by until colonization wasn't the preferred method for influence in a country. Iran suffered a CIA-led coup in 1953. To this day, even after the Revolution of 1979, Iran continues to walk a line between spheres of power. It's not \"free\" from them, no country is really, but it maintains independence by being costly to invade, and being willing to push and pull a little to play the major powers off each other. I don't find it hard to think of some topical examples where this continues right into modern day.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "181786", "title": "Safavid dynasty", "section": "Section::::History.:Decline of the Safavid state.\n", "start_paragraph_id": 103, "start_character": 0, "end_paragraph_id": 103, "end_character": 601, "text": "More importantly, the Dutch East India Company and later the English/British used their superior means of maritime power to control trade routes in the western Indian Ocean. As a result, Iran was cut off from overseas links to East Africa, the Arabian peninsula, and South Asia. Overland trade grew notably however, as Iran was able to further develop its overland trade with North and Central Europe during the second half of the seventeenth century. In the late seventeenth century, Iranian merchants established a permanent presence as far north as Narva on the Baltic sea, in what now is Estonia.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "46490813", "title": "Peoples of the Caucasus in Iran", "section": "Section::::Caucasian refugees.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 515, "text": "Iran and the Russian Empire fought 5 wars between the mid-17th century and 1828 (if not including the Anglo-Soviet Invasion of Iran). Iran eventually lost vast and often solidly Persian-speaking and Muslim territories spanning from Dagestan in the North Caucasus to what is today Nakhchivan, Azerbaijan and Armenia to the Russians per the Treaty of Gulistan and Treaty of Turkmenchay. 
The Russians killed many inhabitants of these Iranian ruled lands, and expelled the rest to Iran, or to a certain extent, Turkey.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "882382", "title": "Persianate society", "section": "Section::::Mongol invasion.\n", "start_paragraph_id": 59, "start_character": 0, "end_paragraph_id": 59, "end_character": 1018, "text": "The culture of the Persianate world in the 13th, 14th, and 15th centuries inadvertently benefited by the invading hordes of Asia. The Mongols under Genghis Khan (1220–58) and Timur (\"Tamerlane\", 1336–1405) stimulated the development of Persianate culture in Central and West Asia, because of the new concentrations of specialists of high culture created by the invasions. Many Iranians sought refuge in a few safe havens, particularly India, where scholars, poets, musicians, and fine artisans intermingled; because of the broad peace secured by the imperial systems established by the Ilkhanids and Timurids, scholars and artists, ideas and skills, and fine books and artifacts circulated freely over a wide area. The Ilkhanids and Timurids were patrons of Persianate high culture. Under their rule new styles of architecture based on pre-Islamic Iranian traditions were developed, Persian literature was encouraged and schools of miniature painting and book production were established at Herat, Tabriz and Esfahan.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1193437", "title": "Trans-Iranian Railway", "section": "Section::::World War II.:British and Soviet operation 1941–42.\n", "start_paragraph_id": 37, "start_character": 0, "end_paragraph_id": 37, "end_character": 691, "text": "The British and Russians initially stated their reason for invading Iran was the Iranian government's failure to rid the country of Germans, who supposedly were planning an eventual coup d'etat. Yet there were other reasons for the invasion, and the Trans-Iranian Railways key location as part of the so-called \"Persian Corridor\" was one of the primary reasons for the Anglo-Soviet invasion of Iran in World War II. Despite Reza Shah's attempts to remain neutral, the allies decided it would be most effective to remove Reza Shah from the throne, using his young son, instead to assist in their use of the Trans-Iranian Railway to transport oil to Britain, and supplies to the Soviet Union.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "877182", "title": "Shirvan", "section": "Section::::People and culture.:Iranian influence and population.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 1197, "text": "Iranian penetration started since the Achaemenid era and continued in the Parthian era. However it was during the Sassanid era that the influence really increased and Persian colonies were set up in the region. According to Vladimir Minorsky: \"The presence of Iranian settlers in Transcaucasia, and especially in the proximity of the passes, must have played an important role in absorbing and pushing back the aboriginal inhabitants. Such names as Sharvan, Layzan, Baylaqan, etc., suggest that the Iranian immigration proceeded chiefly from Gilan and other regions on the southern coast of the Caspian\". Abu al-Hasan Ali ibn al-Husayn Al-Masudi (896–956), the Arab historian states Persian presence in Aran, Bayleqan, Darband, Shabaran, Masqat and Jorjan. 
From 9th century, the urban population of Shirwan increasingly moved to Persian language, while the rural population seems to mostly have retained their old Caucasian languages. Up to the nineteenth century, there was still a large number of Tat population (who claim to be descendants of Sassanid era Persian settlers), however due to similar culture and religion with Turkic-speaking Azerbaijanis, this population was partly assimilated.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "43712", "title": "Parsis", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 398, "text": "At the time of the Muslim conquest of Persia, the dominant religion of the region (which was ruled by the Sasanian Empire) was Zoroastrianism. Iranians such as Babak Khorramdin rebelled against Muslim conquerors for almost 200 years. During this time many Iranians (who are now called Parsis since the migration to India) chose to preserve their religious identity by fleeing from Persia to India.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "817924", "title": "Ethnic minorities in Iran", "section": "Section::::Historical notes.\n", "start_paragraph_id": 44, "start_character": 0, "end_paragraph_id": 44, "end_character": 925, "text": "Iran (then called Persia) traditionally was governed over the last few centuries in a fairly decentralised way with much regional and local autonomy. In particular, weaker members of the Qajar dynasty often did not rule much beyond the capital Tehran, a fact exploited by the imperial powers Britain and Russia in the 19th century. For example, when British cartographers, diplomats, and telegraph workers traveled along Iran's southern coast in the early 19th century laden with guns and accompanied by powerful ships, some local chieftains quickly calculated that their sworn allegiance to the Shah in Tehran with its accompanying tax burden might be optional. When queried, they proclaimed their own local authority. However during Constitutional Revolution ethnic minorities including Azeris, Bakhtiaris and Armenians fought together for establishment of democracy in Iran while they had the power to become independent.\n", "bleu_score": null, "meta": null } ] } ]
null
49qgaa
why do people go to the bathroom on a pretty regular schedule?
[ { "answer": "Because generally your food/drink intake is fairly regular as well. You have coffee every day at 8am. Lunch at noon, dinner at 7 etc. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "42137175", "title": "Open defecation", "section": "Section::::Reasons.:Uncomfortable or unsafe toilet.\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 345, "text": "BULLET::::- Too many people using a toilet: This is especially true in case of shared or public toilets. If too many people want to use a toilet at the same time, then some people may go outside to defecate instead of waiting. In some cases, people might not be able to wait due to diarrhea (or result of an Irritable Bowel Syndrome emergency).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11383147", "title": "Room", "section": "Section::::Types of rooms.:Work rooms.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 287, "text": "Other rooms are meant to promote comfort and cleanliness, such as the toilet and bathroom, which may be combined or which may be in separate rooms. The public equivalent is the restroom, which usually features a toilet and handwashing facilities, but not usually a shower or a bathtub. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "234514", "title": "Bathroom", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 520, "text": "A bathroom is a room in the home or hotel for personal hygiene activities, generally containing a toilet, a sink (basin) and either a bathtub, a shower, or both. In some countries, the toilet is usually included in the bathroom, whereas other cultures consider this insanitary or impractical, and give that fixture a room of its own. The toilet may even be outside of the home in the case of pit latrines. It may also be a question of available space in the house whether the toilet is included in the bathroom or not. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1053470", "title": "Public toilet", "section": "Section::::Alternative names.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 487, "text": "In Canadian English, public facilities are frequently called \"washrooms\", although usage varies regionally. The word \"toilet\" generally denotes the fixture itself rather than the room. The word \"washroom\" is rarely used to mean \"utility room\" or \"mud room\" as it is in some parts of the United States. \"Bathroom\" is generally used to refer to the room in a person's home that includes a bathtub or shower. In public athletic or aquatic facilities, showers are available in locker rooms.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19283589", "title": "Urinal (health care)", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 213, "text": "Generally, patients who are able to are encouraged to walk to the toilet or use a bedside commode as opposed to a urinal. 
The prolonged use of a urinal has been shown to lead to constipation or trouble urinating.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "50164463", "title": "Workers’ right to access restroom", "section": "Section::::Health issues.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 388, "text": "Inability to access a bathroom when necessary has caused health issues such as urinary tract infections, kidney infections, and digestive problems which can later develop into severe health problems. Inadequate access to the use of a bathroom when required can lead to substantial problems for people who have prostate problems, going through menopause, or are menstruating for instance.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "234514", "title": "Bathroom", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 215, "text": "In North American English the word \"bathroom\" may be used to mean any room containing a toilet, even a public toilet (although in the United States this is more commonly called a restroom and in Canada a washroom).\n", "bleu_score": null, "meta": null } ] } ]
null
1q8dyd
why do prepaid visa cards ask for social security numbers?
[ { "answer": "Its a bank account. Just like a bank. You can't open an account at a brick and mortar bank without using a SSN. Same applies to a non-traditional account.", "provenance": null }, { "answer": "So they can't be used for money laundering. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "364578", "title": "Identity document", "section": "Section::::National policies.:North America.:United States.\n", "start_paragraph_id": 373, "start_character": 0, "end_paragraph_id": 373, "end_character": 557, "text": "Social Security numbers and cards are issued by the US Social Security Administration for tracking of Social Security taxes and benefits. They have become the \"de facto\" national identification number for federal and state taxation, private financial services, and identification with various companies. SSNs do not establish citizenship because they can also be issued to permanent residents as well as citizens. They typically can only be part of the establishment of a person's identity; a photo ID that verifies date of birth is also usually requested.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22255742", "title": "Social Security number", "section": "Section::::Types of Social Security cards.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 544, "text": "In 2004 Congress passed The Intelligence Reform and Terrorism Prevention Act; parts of which mandated that the Social Security Administration redesign the Social Security Number (SSN) Card to prevent forgery. From April 2006 through August 2007, Social Security Administration (SSA) and Government Printing Office (GPO) employees were assigned to redesign the Social Security Number Card to the specifications of the Interagency Task Force created by the Commissioner of Social Security in consultation with the Secretary of Homeland Security.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2049265", "title": "Identity documents in the United States", "section": "Section::::Social Security card.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 426, "text": "The Social Security number (SSN) and card are issued by the Social Security Administration. Almost all parents voluntarily apply for a Social Security number shortly after the birth of a child. In the absence of a national identity card (and concordant national identity number), the Social Security number has become the \"de facto\" national identifier for a large variety of purposes, both governmental and non-governmental.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22255742", "title": "Social Security number", "section": "Section::::Structure.:Historical structure.\n", "start_paragraph_id": 52, "start_character": 0, "end_paragraph_id": 52, "end_character": 912, "text": "Prior to the 2011 randomization process, the first three digits or area number were assigned by geographical region. Prior to 1973, cards were issued in local Social Security offices around the country and the area number represented the office code where the card was issued. This did not necessarily have to be in the area where the applicant lived, since a person could apply for their card in any Social Security office. Beginning in 1973, when the SSA began assigning SSNs and issuing cards centrally from Baltimore, the area number was assigned based on the ZIP Code in the mailing address provided on the application for the original Social Security card. 
The applicant's mailing address did not have to be the same as their place of residence. Thus, the area number did not necessarily represent the state of residence of the applicant regardless of whether the card was issued prior to, or after, 1973.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2049265", "title": "Identity documents in the United States", "section": "", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 388, "text": "Social Security cards have federal jurisdiction but cannot verify identity. They verify only the match between a given name and a Social Security Number (SSN) and were intended only for use in complying with Social Security payroll tax laws. They now are used in a wider scope of activities, such as for obtaining credit and other regulated financial services in banking and investments.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22255742", "title": "Social Security number", "section": "Section::::Identity theft.\n", "start_paragraph_id": 34, "start_character": 0, "end_paragraph_id": 34, "end_character": 370, "text": "The Social Security Administration has suggested that, if asked to provide his or her Social Security number, a citizen should ask which law requires its use. In accordance with §7213 of the 9/11 Commission Implementation Act of 2004 and , the number of replacement Social Security cards per person is generally limited to three per calendar year and ten in a lifetime.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1551616", "title": "National identification number", "section": "Section::::Asia.:Pakistan.\n", "start_paragraph_id": 119, "start_character": 0, "end_paragraph_id": 119, "end_character": 733, "text": "Every citizen has an NIC number for activities such as paying taxes, opening a bank account, getting a utility connection (phone, cell phone, gas, electricity). However, since a majority of births in the country are not registered, and a large number of Pakistanis do not conduct any of the activities described above, most do not have ID cards. Obtaining an NIC card costs 100 rupees (US$1.66 - almost the average daily income), and this reduces the number of people who can afford it. In 2006, NADRA announced that it had issued 50 million CNIC (the C standing for Computerized) numbers, which is approximately one-third of the population. In June 2008, the federal government announced it would start issuing CNIC cards for free.\n", "bleu_score": null, "meta": null } ] } ]
null
epfyhb
How often are planets found?
[ { "answer": "Planets in our Solar System, or planets in general?\n\nIn our Solar System, only two planets still considered planets have been found in recorded history, Uranus in 1781, and Neptune in 1864. If Planet Nine is real, odds are we'll find that within the next 10 years or so.\n\nOutside of our solar system, planets are found on a virtually daily basis, although usually only announced in batches of a few planets, a few dozen planets, or rarely a few hundred planets. For some planets just announced recently, a bunch of planets around nearby stars were announced only a week ago: [_URL_0_](_URL_0_)", "provenance": null }, { "answer": "Exoplanets: Roughly one to two per day as average over the last years, but they often come in large batches as people study hundreds of objects in parallel and then release the analysis. [Here is a graph](_URL_0_).\n\nGaia is a spacecraft that currently collects data, it is expected to find over 10,000 exoplanets in the next few years.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "28582968", "title": "Discoveries of exoplanets", "section": "Section::::2009.\n", "start_paragraph_id": 43, "start_character": 0, "end_paragraph_id": 43, "end_character": 376, "text": "BULLET::::- 30 planets: On October 19, it was announced that 30 new planets were discovered, all were detected by radial velocity method. It is the most planets ever announced in a single day during the exoplanet era. October 2009 now holds the most planets discovered in a month, breaking the record set in June 2002 and August 2009, during which 17 planets were discovered.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14479423", "title": "Sagittarius Window Eclipsing Extrasolar Planet Search", "section": "Section::::Planets discovered.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 449, "text": "Sixteen candidate planets were discovered with orbital periods ranging from 0.6 to 4.2 days. Planets with orbital periods less than 1.2 days have not previously been detected, and have been dubbed \"ultra-short period planets\" (USPPs) by the search team. USPPs were discovered only around low-mass stars, suggesting that larger stars destroyed any planets orbiting so closely or that planets were unable to migrate as far inward around larger stars.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "849815", "title": "Kepler space telescope", "section": "Section::::Mission results.:2015.\n", "start_paragraph_id": 121, "start_character": 0, "end_paragraph_id": 121, "end_character": 493, "text": "BULLET::::- In January 2015, the number of confirmed Kepler planets exceeded 1000. At least two (Kepler-438b and Kepler-442b) of the discovered planets announced that month were likely rocky and in the habitable zone. 
Also in January 2015, NASA reported that five confirmed sub-earth-sized rocky exoplanets, all smaller than the planet Venus, were found orbiting the 11.2 billion year old star Kepler-444, making this star system, at 80% of the age of the universe, the oldest yet discovered.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "48510", "title": "Terrestrial planet", "section": "Section::::Extrasolar terrestrial planets.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 382, "text": "In 2005, the first planets around main-sequence stars that may be terrestrial were found: Gliese 876 d, has a mass 7 to 9 times that of Earth and an orbital period of just two Earth days. It orbits the red dwarf Gliese 876, 15 light years from Earth. OGLE-2005-BLG-390Lb, about 5.5 times the mass of Earth, orbits a star about 21,000 light years away in the constellation Scorpius.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1197737", "title": "47 Ursae Majoris", "section": "Section::::Planetary system.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 618, "text": "In 2010, the discovery of a third planet (47 UMa d) was made by using the Bayesian Kepler Periodogram. Using this model of this planetary system it was determined that it is 100,000 times more likely to have three planets than two planets. This discovery was announced by Debra Fischer and P.C. Gregory. This planet has an orbital period of 14,002 days or 38.33 years and a semi-major axis of 11.6 AU with a moderate eccentricity of 0.16. It would be the longest-period planet discovered by the radial velocity method, although longer-period planets had previously been discovered by direct imaging and pulsar timing.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "414048", "title": "40 Eridani", "section": "Section::::Planetary system.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 475, "text": "In 2018, a planet was discovered orbiting 40 Eridani A with a minimum mass of Earth masses. The planet has an orbit of 42 days, and lies considerably interior to the habitable zone, receiving 9 times more stellar flux than Earth, which is an even greater stellar flux amount than Mercury, the innermost planet in our solar system, on average receives from our Sun. It is one of the closest Super-Earths known, the closest discovered to date () within a multiple star system.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "849815", "title": "Kepler space telescope", "section": "Section::::Mission results.:2011.\n", "start_paragraph_id": 99, "start_character": 0, "end_paragraph_id": 99, "end_character": 855, "text": "Based on Kepler's findings, astronomer Seth Shostak estimated in 2011 that \"within a thousand light-years of Earth\", there are \"at least 30,000\" habitable planets. Also based on the findings, the Kepler team has estimated that there are \"at least 50 billion planets in the Milky Way\", of which \"at least 500 million\" are in the habitable zone. In March 2011, astronomers at NASA's Jet Propulsion Laboratory (JPL) reported that about \"1.4 to 2.7 percent\" of all Sun-like stars are expected to have Earth-size planets \"within the habitable zones of their stars\". This means there are \"two billion\" of these \"Earth analogs\" in the Milky Way alone. 
The JPL astronomers also noted that there are \"50 billion other galaxies\", potentially yielding more than one sextillion \"Earth analog\" planets if all galaxies have similar numbers of planets to the Milky Way.\n", "bleu_score": null, "meta": null } ] } ]
null
8p0w80
whats happening when a sneeze ‘gets stuck’ then just burns your nose and makes your eyes water.
[ { "answer": "Sneezes are a protective response to alert you to less than ideal breathing conditions and remove irritants/allergens from your nose. They’re triggered by the presence of irritants, but only a certain concentration, which is mediated by multiple nerve endings that generate “spikes” when they’re irritated. Once the number of spikes passes a certain threshold, you sneeze. \n\nSometimes your nose will be irritated to the point of feeling like you have to sneeze, but there isn’t quite enough to push you over the threshold. So you “get stuck.”", "provenance": null }, { "answer": "According to Wikipedia: \n\n\"Sneezing typically occurs when foreign particles or sufficient external stimulants pass through the nasal hairs to reach the nasal mucosa. This triggers the release of histamines, which irritate the nerve cells in the nose, resulting in signals being sent to the brain to initiate the sneeze through the trigeminal nerve network. The brain then relates this initial signal, activates the pharyngeal and tracheal muscles and creates a large opening of the nasal and oral cavities, resulting in a powerful release of air and bioparticles. The powerful nature of a sneeze is attributed to its involvement of numerous organs of the upper body – it is a reflexive response involving the face, throat, and chest muscles. Sneezing is also triggered by sinus nerve stimulation caused by nasal congestion and allergies.\"\n\nSo your \"half-sneeze\", or whatever it's official name is, is what likely happens when something in your nose triggers the urge to sneeze, but not the complex mechanics involved in the actual process.", "provenance": null }, { "answer": "When the inside of your nose gets a tickle, a message is sent to a special part of your brain called the sneeze center. The guy manning the sneeze center then sends a message to all the muscles that have to work together to create the amazingly complicated process that we call the sneeze.\n\nSome of the muscles involved are the abdominal (belly) muscles, the chest muscles, the diaphragm (the large muscle beneath your lungs that makes you breathe), the muscles that control your vocal cords, and muscles in the back of your throat.\n\nDon't forget the eyelid muscles! Did you know that you always close your eyes when you sneeze?\n\nIt is the job of the sneeze center to make all these muscles work together, in just the right order, to send that irritation flying out of your nose. \n\nSo what happens when a sneeze get's stuck? The guy at the sneeze center is on a coffee break.", "provenance": null }, { "answer": "For all my life, I've suffered from allergies and would frequently have really horrible sneezing fits. I envied people who could just sneeze once or twice and be done with it. That was so foreign to me. If I allowed myself to sneeze once, I was committing myself to at least 15 or 20 intense sneezes, one right on top of the other.\n\nIt was exactly like this: _URL_0_\n\nSometimes it was so intense, I couldn't catch my breath. The worst part was that when sneezing like this, you desperately need to swallow because you're salivating. But swallowing would only intensify the sneeze reflex. So I'd often feel like I was choking. It was so bad I'd hardly ever allow myself to sneeze in public. I just got used to biting my tongue and making weird faces. I'd fend it off at all costs. I often had them while sleeping, since I couldn't prevent them.\n\nIt got worse in my 20s, because I'd broken my nose in high school and suffered a deviated septum. 
And over time, all my right-side sinuses became severely blocked up. By the time I was in my late 30s, I was miserable.\n\nAnyway, a couple years ago I finally got my nose straightened and my sinuses cleaned out (FESS), and immediately after began getting allergy shots.\n\nI now know what it feels like to only sneeze once. Only now, after that first sneeze, I get \"stuck\" in that almost-sneezing zone for about 10 minutes and it's incredibly frustrating and unsatisfying. I just keep throwing my head back like I'm going to sneeze and it never comes. I look even more ridiculous. I almost wish I could go back.\n\nBodies are weird, man.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "232411", "title": "Sneeze", "section": "Section::::Description.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 715, "text": "Sneezing typically occurs when foreign particles or sufficient external stimulants pass through the nasal hairs to reach the nasal mucosa. This triggers the release of histamines, which irritate the nerve cells in the nose, resulting in signals being sent to the brain to initiate the sneeze through the trigeminal nerve network. The brain then relates this initial signal, activates the pharyngeal and tracheal muscles and creates a large opening of the nasal and oral cavities, resulting in a powerful release of air and bioparticles. The powerful nature of a sneeze is attributed to its involvement of numerous organs of the upper body – it is a reflexive response involving the face, throat, and chest muscles.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4176408", "title": "Photic sneeze reflex", "section": "Section::::Pathophysiology.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 1062, "text": "There is much debate about the true cause and mechanism of the sneezing fits brought about by the photic sneeze reflex. Sneezing occurs in response to irritation in the nasal cavity, which results in an afferent nerve fiber signal propagating through the ophthalmic and maxillary branches of the trigeminal nerve to the trigeminal nerve nuclei in the brainstem. The signal is interpreted in the trigeminal nerve nuclei, and an efferent nerve fiber signal goes to different parts of the body, such as mucous glands and the thoracic diaphragm, thus producing a sneeze. The most obvious difference between a normal sneeze and a photic sneeze is the stimulus: normal sneezes occur due to irritation in the nasal cavity, while the photic sneeze can result from a wide variety of stimuli. Some theories are below. There is also a genetic factor that increases the probability of photic sneeze reflex. The C allele on the rs10427255 SNP is particularly implicated in this although the mechanism is unknown by which this gene increases the probability of this response.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "232411", "title": "Sneeze", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 622, "text": "A sneeze, or sternutation, is a semi-autonomous, convulsive expulsion of air from the lungs through the nose and mouth, usually caused by foreign particles irritating the nasal mucosa. A sneeze expels air forcibly from the mouth and nose in an explosive, spasmodic involuntary action resulting chiefly from irritation of the nasal mucous membrane. This action allows for mucus to escape through the nasal cavity. 
Sneezing is possibly linked to sudden exposure to bright light, sudden change (fall) in temperature, breeze of cold air, a particularly full stomach, or viral infection, and can lead to the spread of disease.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "150116", "title": "Black pepper", "section": "Section::::Phytochemicals, folk medicine and research.\n", "start_paragraph_id": 56, "start_character": 0, "end_paragraph_id": 56, "end_character": 228, "text": "Pepper is known to cause sneezing. Some sources say that piperine, a substance present in black pepper, irritates the nostrils, causing the sneezing. Few, if any, controlled studies have been carried out to answer the question.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "232411", "title": "Sneeze", "section": "Section::::Description.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 956, "text": "The sneeze reflex involves contraction of a number of different muscles and muscle groups throughout the body, typically including the eyelids. The common suggestion that it is impossible to sneeze with one's eyes open is, however, inaccurate. Other than irritating foreign particles, allergies or possible illness, another stimulus is sudden exposure to bright light – a condition known as photic sneeze reflex (PSR). Walking out of a dark building into sunshine may trigger PSR, or the ACHOO (autosomal dominant compulsive helio-ophthalmic outbursts of sneezing) syndrome as it's also called. The tendency to sneeze upon exposure to bright light is an autosomal dominant trait and affects 18-35% of the human population. A rarer trigger, observed in some individuals, is the fullness of the stomach immediately after a large meal. This is known as snatiation and is regarded as a medical disorder passed along genetically as an autosomal dominant trait.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "232411", "title": "Sneeze", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 520, "text": "The function of sneezing is to expel mucus containing foreign particles or irritants and cleanse the nasal cavity. During a sneeze, the soft palate and palatine uvula depress while the back of the tongue elevates to partially close the passage to the mouth so that air ejected from the lungs may be expelled through the nose. Because the closing of the mouth is partial, a considerable amount of this air is usually also expelled from the mouth. The force and extent of the expulsion of the air through the nose varies.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4176408", "title": "Photic sneeze reflex", "section": "Section::::Risks.:Medical procedures.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 412, "text": "Uncontrollable fits of sneezing are common in patients under propofol sedation who undergo periocular or retrobulbar injection. A sneeze by a sedated patient often occurs upon insertion of a needle into or around their eye. The violent and uncontrollable movement of the head during a reflexive sneeze has potential to cause damage within the patient's eye if the needle is not removed before the sneeze occurs.\n", "bleu_score": null, "meta": null } ] } ]
null
4531yj
What was Moscow's relationship with Ceausescu before the Romanian revolution? What role, if any, did they play in the revolution?
[ { "answer": "Ceausescu was something of a maverick in the Eastern Bloc, which is why your friend probably thinks that. Relations between Brezhnev/Andropov and Ceausescu were cool at best and outright frigid at worst, although they never reached the point of de facto breaking off relations, unlike in Beijing. He denounced the 1968 Soviet invasion of Czechoslovakia. He kept relations with both Israel and the PLO and helped push for peace between Israel and Egypt. He kept up cordial relations with the Chinese, much to the extreme irritation of the Kremlin-Ceausescu personally modeled his personality cult off of Mao Zedong and Kim Il Sung. He openly recognized West Germany and was the first Warsaw Pact country to independently invite a US President to visit. (Nixon would later use Ceaucescu as a conduit for backdoor negotiations with the Vietnamese and while in Bucharest, consulted with him on his desire to open relations with China.) He refused to endorse the invasion of Afghanistan, and participated in the 1984 Olympics in Los Angeles, which the Soviet Union boycotted. \n\nWith that being said, however, I don't find it quite plausible. To be sure, if there is one intelligence service in the world which **nothing** should be put beyond, it's Russia's, no matter who rules in the Kremlin. Westerners often have a hard time grasping just how tactically skilled and utterly amoral they are. But Moscow wasn't able to react in places with a much larger KGB presence and more strategic value, like East Germany, in 1989, as none other than Vladimir Putin points out quite vividly. (\"Moscow was silent.\") I don't think they could have orchestrated a coup in a place like Romania where the security service in practice watched out for the KGB as much as anybody else. Moreover, the 1989 Romanian Revolution wasn't a reaction against just Ceausescu, in contrast to previous coup attempts such as that in 1984(I think), but against Communism as a whole. No matter how many Communists joined the opposition, the ideology was strongly discredited all over the Warsaw Pact by 1989. Gorbachev thought he could keep the genie in the bottle. But the Romanian Communists who joined the Revolution didn't, as seen by their subsequent economic and social liberalization policies. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2300939", "title": "Helen of Greece and Denmark", "section": "Section::::Queen Mother of Romania.:Imposition of a communist regime.\n", "start_paragraph_id": 75, "start_character": 0, "end_paragraph_id": 75, "end_character": 689, "text": "Satisfied with this appointment, the Soviet authorities were more conciliatory with Romania. On 13 March 1945 Moscow transferred the administration of Transylvania to Bucharest. A few months later, on 19 July 1945, Michael I was decorated with the Order of Victory, one of the most prestigious Soviet military orders. Still, the Sovietization of the kingdom was accelerated. The purge of \"fascist\" personalities continued while censorship was strengthened. A land reform was also implemented, causing a drop in production which ruined agricultural exports. 
The king, however, managed to temporarily prevent the establishment of People's Tribunals and the restoration of the death penalty.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1816958", "title": "Moldavian Autonomous Soviet Socialist Republic", "section": "Section::::Creation.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 487, "text": "Establishing the republic became a matter of dispute. Despite the objections of Soviet commissar of foreign relations Chicherin who argued that the new establishment would only strengthen the position of Romanians towards Bessarabia and able to activate \"expansionist claims of Romanian chauvinism\", Kremlin launched a campaign to create the autonomy attracting to it Bessarabian refugees and Romanian political emigrants who lived in Moscow and the Ukrainian Socialist Soviet Republic.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "55156334", "title": "Alexandru Lapedatu", "section": "Section::::International missions.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 403, "text": "In 1917, the Triple Alliance armies occupied Bucharest and were advancing towards Iași. The Romanian Government decided to move the State Treasury to Russia in two transports, delegating Alexandru I. Lapedatu to accompany the second one that included cultural goods. He left Iași for Moscow on 28 July 1917, and stayed there until 19 December 1917, experiencing the arrival of the Bolshevik revolution.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "38872", "title": "Bessarabia", "section": "Section::::History.:Part of the Soviet Union.\n", "start_paragraph_id": 79, "start_character": 0, "end_paragraph_id": 79, "end_character": 462, "text": "The Soviet Union regained the region in 1944, and the Red Army occupied Romania. By 1947, the Soviets had imposed a communist government in Bucharest, which was friendly and obedient towards Moscow. The Soviet occupation of Romania lasted until 1958. The Romanian communist regime did not openly raise the matter of Bessarabia or Northern Bukovina in its diplomatic relations with the Soviet Union. At least 100,000 people died in a post-war famine in Moldavia.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "61262565", "title": "Russia involvement in regime change", "section": "Section::::World War II.:1944: Romania.\n", "start_paragraph_id": 38, "start_character": 0, "end_paragraph_id": 38, "end_character": 462, "text": "The Soviet Union regained the region in 1944, and the Red Army occupied Romania. By 1947, the Soviets had imposed a communist government in Bucharest, which was friendly and obedient towards Moscow. The Soviet occupation of Romania lasted until 1958. The Romanian communist regime did not openly raise the matter of Bessarabia or Northern Bukovina in its diplomatic relations with the Soviet Union. At least 100,000 people died in a post-war famine in Moldavia.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7925349", "title": "Tatarbunary uprising", "section": "Section::::Background.:Soviet–Romanian relations.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 690, "text": "After World War I relations between Romania and Soviet Russia were tense. Since 1918 there were numerous bilateral meetings in Copenhagen, Warsaw, Genoa, and other locations but no consensus could be reached. 
The Soviets saw Bessarabia as an annexed province and considered the decision of union with Romania as imposed by the occupying Romanian Army. Moreover, historians from both countries intensely debated the treaty with the Soviet Rumcherod in 1918 that required withdrawal of the Romanian Army from Bessarabia but which both countries failed to respect. The legitimacy of the Sfatul Țării was also brought into question, although the only contested decision is the unification act.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "33638793", "title": "Romanian Volunteer Corps in Russia", "section": "Section::::Darnytsia Corps.:October Revolution and Romanian truce.\n", "start_paragraph_id": 35, "start_character": 0, "end_paragraph_id": 35, "end_character": 824, "text": "The October Revolution shook Russia and placed most of it under a Bolshevik government which had no intention of continuing with war against the Central Powers. Although the Romanian presence in Kiev was set back by the November Uprising and the January Rebellion, then dispersed by the anti-Entente Skoropadsky regime, Constantin Gh. Pietraru and a small force remained behind in the new Ukrainian People's Republic (UNR), where they signed up the last group of Romanian volunteers. Some of these efforts were hampered by a diplomatic tensions between the UNR and Romania. Ukrainian officials refused to either rally with the Entente or negotiate border treaties with Romania, but tacitly permitted Deleu, Bocu, Ghibu and other Transylvanian Romanians activists who worked against Austria-Hungary to work on UNR territory.\n", "bleu_score": null, "meta": null } ] } ]
null
2l7mzv
why do roosters "cock-a-doodle-doo" in the morning?
[ { "answer": "They actually crow all day. Not just mornings.", "provenance": null }, { "answer": "And can triple confirm.\n\nSpend some time in the third world and in some places it seems like everybody has *at least* one.\n\nThey crow 24/7 and if several are in a reasonably close vicinity they will occasionally even have crow-offs. A rooster that's gone hoarse, presumably from excessive crowing, is.. amusing. Except when you're trying to sleep and they decide 3am is *roosta time*!", "provenance": null }, { "answer": "Because you're from America. They say other things in different countries. ", "provenance": null }, { "answer": "Pretty sure its their mating call, which they use at the quietest time of day(dawn). And, as other people have pointed out, other times(assuming it's quiet then as well).", "provenance": null }, { "answer": " they cock-a-doodle-goddamn-doo all fucking day and night.not just in the morning.", "provenance": null }, { "answer": "Crowing is a form of communication. My roosters crow whenever the hell they want, but because a lot of people think its just in the morning, and also because roosters are usually most active in the morning, that's usually when people actually notice the crowing, but it happens all day. ", "provenance": null }, { "answer": "Roosters crow all the fucking time. The cartoon image of crowing at dawn comes from you hearing it as soon as you wake up. It's a joke, son.\n\nSource: Lived next door to roosters for 4 years.\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "37402", "title": "Chicken", "section": "Section::::General biology and habitat.:Behavior.:Social behaviour.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 383, "text": "A rooster's crowing is a loud and sometimes shrill call and sends a territorial signal to other roosters. However, roosters may also crow in response to sudden disturbances within their surroundings. Hens cluck loudly after laying an egg, and also to call their chicks. Chickens also give different warning calls when they sense a predator approaching from the air or on the ground.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26574", "title": "Rooster", "section": "Section::::Crowing.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 539, "text": "The rooster is often portrayed as crowing at the break of dawn (\"cock-a-doodle-doo\"). However, while many roosters crow shortly after waking up, this idea is not exactly true. A rooster can and will crow at any time of the day. Some roosters are especially vociferous, crowing almost constantly, while others only crow a few times a day. These differences are dependent both upon the rooster's breed and individual personality. A rooster can often be seen sitting on fence posts or other objects, where he crows to proclaim his territory.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7036252", "title": "Stereotypes of animals", "section": "Section::::Common Western animal stereotypes.:Birds in general.:Chickens.\n", "start_paragraph_id": 484, "start_character": 0, "end_paragraph_id": 484, "end_character": 550, "text": "BULLET::::- Roosters can be heard crowing as it begins to get lighter. In past centuries people believed the rooster controlled the rise of daylight and thus only crowed at this occasion. 
While roosters do indeed crow at dawn and therefore were often used as a prototypical alarm clock in past centuries, they can and will crow at any time of the day, not just in the morning. The idea that the rooster scares the darkness away led to its worship in various religious belief systems. In English the word \"cock-crow\" is a synonym for \"early morning\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26574", "title": "Rooster", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 527, "text": "\"Roosting\" is the action of perching aloft to sleep at day, which is done by both sexes. The rooster is polygamous, but cannot guard several nests of eggs at once. He guards the general area where his hens are nesting, and attacks other roosters that enter his territory. During the daytime, a rooster often sits on a high perch, usually off the ground, to serve as a lookout for his group (hence the term \"rooster\"). He sounds a distinctive alarm call if predators are nearby and will frequently crow to assert his territory.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7036252", "title": "Stereotypes of animals", "section": "Section::::Common Western animal stereotypes.:Birds in general.:Chickens.\n", "start_paragraph_id": 480, "start_character": 0, "end_paragraph_id": 480, "end_character": 389, "text": "BULLET::::- Roosters usually sit on high perches, looking out for their group. When it spots danger it will crow loudly. This led people to portray roosters as people who crave attention and suffer from delusions of grandeur. The image of the high perched rooster is also prevalent in Christian traditions, where statues of cocks are often put on top of church steeples as a weather vane.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26574", "title": "Rooster", "section": "Section::::Crowing.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 218, "text": "Roosters have several other calls as well, and can cluck, similar to the hen. Roosters occasionally make a patterned series of clucks to attract hens to a source of food, the same way a mother hen does for her chicks.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26574", "title": "Rooster", "section": "Section::::Crowing.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 208, "text": "Roosters almost always start crowing before four months of age. Although it is possible for a hen to crow as well, crowing (together with hackles development) is one of the clearest signs of being a rooster.\n", "bleu_score": null, "meta": null } ] } ]
null
z2fs9
How Accurate/Biased was "Century of the Self"?
[ { "answer": "I eagerly await the more informed responses here, but a key aspect to this history is the role Bernays played in the cult around his own personality. What I mean by this is that as a master of PR, Bernays was very willing and adept in talking up his own influence. I'd take the depictions of Bernays with a grain of salt, but it's a fantastic documentary. ", "provenance": null }, { "answer": "Not really one to comment on this, but if you haven't seen his (Adam Curtis') other more recent works The Trap and The Power of Nightmares you absolutely should. Excellent stuff. ", "provenance": null }, { "answer": "Have you read Propaganda? It's pretty damning. There's a link to a free PDF copy if you follow the bibliography/works cited section at the bottom of the Wikipedia page. ", "provenance": null }, { "answer": "Not commenting directly on \"Century of the Self\", but Adam Curtis:\n\nMost all of his documentaries start with \"This is a story about...\"\n\nKeep that in mind. It's a story. It's a particular account of events too complex to explain otherwise. It may be compelling, even undeniable. It may be well researched, he may have universal consensus backing his thesis or smoking gun evidence to back it up.\n\nBut it's just a story.\n\nI like Adam Curtis, I think he illuminates interesting aspects of our culture and recent history. I agree with him on a lot of his views. But I'd still endorse some healthy scepticism when watching his work. There's often a lot he overlooks or neglects (because if he included it, his \"story\" wouldn't run).\n\n(I really wanna make a documentary about Adam Curtis that starts with \"This is a story about a documentary maker who believed the chaos of historical events could be explained in a simple way, and the key to doing so lay in understanding the ideologies of those involved...\")", "provenance": null }, { "answer": "Why don't you tell us you own opinion in more detail? Just because you aren't a \"pro\" doesn't mean you can't think. Also, please consider posting a review to /r/HistoryResources", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "60752925", "title": "Present bias", "section": "Section::::History.:Present bias and economics.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 757, "text": "The term of present bias was coined in the second half of the 20th century. In the 1930’s economic research started investigating time preferences. The findings led to the model of exponential discounting, thus time consistent discounting. However, later research led to the conclusion that time preferences were indeed not consistent, but inconsistent. In other words, people were found to prefer immediate advantages to future advantages in that their discount over a short period of time falls rapidly, while falling less the more the rewards are in the future. Therefore, people are biased towards the present. As a result, Phelps and Pollak introduced the quasi-hyperbolic model in 1968. 
In economics, present bias is therefore a model of discounting.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20791604", "title": "The Century of Self", "section": "Section::::Critical reaction.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 210, "text": "On Metacritic, \"The Century of Self\" has been given a score of 68 out of 100 based on \"generally favorable reviews.\" Clashmusic.com, the online arm of \"Clash\" magazine, gave it a positive review and commented:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1592486", "title": "The Century of the Self", "section": "Section::::Overview.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 256, "text": "Along these lines, \"The Century of the Self\" asks deeper questions about the roots and methods of consumerism and commodification and their implications. It also questions the modern way people see themselves, the attitudes to fashion, and superficiality.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "49333", "title": "Cultural bias", "section": "Section::::Cultural bias by discipline.:History.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 673, "text": "Cultural bias may also arise in historical scholarship, when the standards, assumptions and conventions of the historian's own era are anachronistically used to report and assess events of the past. This tendency is sometimes known as presentism, and is regarded by many historians as a fault to be avoided. Arthur Marwick has argued that \"a grasp of the fact that past societies are very different from our own, and ... very difficult to get to know\" is an essential and fundamental skill of the professional historian; and that \"anachronism is still one of the most obvious faults when the unqualified (those expert in other disciplines, perhaps) attempt to do history\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1592486", "title": "The Century of the Self", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 369, "text": "The Century of the Self is a 2002 British television documentary series by filmmaker Adam Curtis. It focuses on the work of psychoanalysts Sigmund Freud and Anna Freud, and PR consultant Edward Bernays. In episode one, Curtis says, \"This series is about how those in power have used Freud's theories to try and control the dangerous crowd in an age of mass democracy.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2019798", "title": "Normalcy bias", "section": "Section::::Examples.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 604, "text": "People who promote conspiracy theories or apocalyptic future scenarios have cited the normalcy bias as a prime reason why others scoff at their pronouncements. For example, survivalists who fear that the U.S. will soon descend into totalitarianism cite normalcy bias as the reason why most Americans do not share their worries. Similarly, fundamentalist Christians use the normalcy bias to explain why others scoff at their beliefs about the \"End Time\". 
One fundamentalist website writes: \"May we not get blinded by the 'normalcy bias' but rather live with the knowledge that the Lord’s coming is near.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9005414", "title": "Sensemaking", "section": "Section::::Weick's approach to sensemaking.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 412, "text": "BULLET::::7. People favour \"plausibility over accuracy\" in accounts of events and contexts (Currie & Brown, 2003; Brown, 2005; Abolafia, 2010): \"in an equivocal, postmodern world, infused with the politics of interpretation and conflicting interests and inhabited by people with multiple shifting identities, an obsession with accuracy seems fruitless, and not of much practical help, either\" (Weick, 1995: 61).\n", "bleu_score": null, "meta": null } ] } ]
null
cl5gwi
how does my (i)phone know which sounds to let through in a phone call / face time?
[ { "answer": "There are several microphones. 1 intended to pick up your voice, the other(s) to pick up all the rest of the noise around you (lets call them noise microphones). The cell phone subtracts the sounds from the noise microphones from the sounds picked up by the voice microphone. The sounds that are left are just the voice.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "406703", "title": "Telephone call", "section": "Section::::Placing a call.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 765, "text": "A typical phone call using a traditional phone is placed by picking the phone handset up off the base and holding the handset so that the hearing end is next to the user's ear and the speaking end is within range of the mouth. The caller then rotary dials or presses buttons for the phone number needed to complete the call, and the call is routed to the phone which has that number. The second phone makes a ringing noise to alert its owner, while the user of the first phone hears a ringing noise in its earpiece. If the second phone is picked up, then the operators of the two units are able to talk to one another through them. If the phone is not picked up, the operator of the first phone continues to hear a ringing noise until they hang up their own phone.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "16232612", "title": "Disconnect tone", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 473, "text": "Typically, the disconnect tone is a few cycles of the reorder or busy tone (e.g. in US), or between five and fifteen seconds of the Number Unobtainable tone (e.g. in UK). On some telephone exchanges in the UK, the following audio message is looped for fifteen seconds, interspersed with special information tones (SIT), to advise the remote party has hung up: \"(SIT) The other person has hung up\". On iPhones, the tone is 3 bursts of 425 Hz tones each lasting 0.2 seconds.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34627763", "title": "United States v. Kramer", "section": "Section::::Court findings.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 351, "text": "BULLET::::2. \"The phone keeps track of the 'Network connection time,' which is 'the elapsed time from the moment [the user] connect[s] to [the] service provider's network to the moment [the user] end[s] the call by pressing [the end key].'\" The court used this as evidence that the phone performs logical and arithmetic operations when placing calls.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "162228", "title": "Dial tone", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 542, "text": "A dial tone is a telephony signal sent by a telephone exchange or private branch exchange (PBX) to a terminating device, such as a telephone, when an off-hook condition is detected. It indicates that the exchange is working and is ready to initiate a telephone call. The tone stops when the first dialed digit is recognized. 
If no digits are forthcoming, the permanent signal procedure is invoked, often eliciting a special information tone and an intercept message, followed by the off-hook tone, requiring the caller to hang up and redial.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "615209", "title": "Speaking clock", "section": "Section::::Australia.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 544, "text": "In Australia, the number 1194 gives the speaking clock in all areas and from all providers. It is always the current time from where the call originates. A male voice, often known by Australians as George, says \"At the third stroke, it will be (hours) (minutes) and (seconds) seconds/precisely. (three beeps)\" e.g. \"At the third stroke, it will be three thirty three and forty seconds ... beep beep beep\". These are done in 10 second increments and the beep is 1 kHz. Originally there was only one stroke eg:”At the stroke, it will be...” etc.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1199886", "title": "ToneLoc", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 537, "text": "ToneLoc took advantage of the extended return codes available on US Robotics modems (e.g., ATX6) to detect dial tones upon dialing a number and to detect when a human answered the phone in addition to scanning for other modems. Detection of voice numbers sped up the scanning process by disconnecting upon detecting a human instead of timing out waiting for a modem carrier signal. The detection of a dial tone after dialing a number allowed for users to search for poorly secured extenders which could be used to divert calls through. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1056585", "title": "Improved Mobile Telephone Service", "section": "Section::::Technical Information.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 801, "text": "Mobiles would originate calls by sending a burst of connect tone, to which the base station responded with a burst of seize tone. The mobile would then respond with its identification, consisting of its area code and last four digits of the phone number sent at 20 pulses per second, just as in inward dialing but with the addition of rudimentary parity checking. Digits are formed with a pulsetrain of alternating tones, either connect and silence (for odd digits) or connect and guard (for even digits). When the base station received the calling party's identification, it would send dialtone to the mobile. The user would then use the rotary dial, which would send the dialed digits as an alternating 10 pps pulse train (originally, directly formed by the rotary dial) of connect and guard tones.\n", "bleu_score": null, "meta": null } ] } ]
null
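The multi-microphone answer above describes subtracting what the noise microphone hears from what the voice microphone hears. Below is a minimal sketch of that idea as a simple spectral subtraction, not Apple's actual algorithm; NumPy, the sample rate, and the test tones are all assumptions made purely for illustration.

```python
import numpy as np

def suppress_noise(voice_mic: np.ndarray, noise_mic: np.ndarray) -> np.ndarray:
    """Toy spectral subtraction: estimate the ambient noise from the
    reference (noise) microphone and remove that much energy from the
    voice microphone, keeping the voice channel's phase."""
    V = np.fft.rfft(voice_mic)
    N = np.fft.rfft(noise_mic)
    cleaned_mag = np.maximum(np.abs(V) - np.abs(N), 0.0)  # never go below zero
    cleaned = cleaned_mag * np.exp(1j * np.angle(V))
    return np.fft.irfft(cleaned, n=len(voice_mic))

# Synthetic demo: a 440 Hz "voice" tone buried in 60 Hz hum.
sr = 8000
t = np.arange(sr) / sr
hum = 0.5 * np.sin(2 * np.pi * 60 * t)      # what the noise microphone hears
voice = 0.8 * np.sin(2 * np.pi * 440 * t)   # what we want to keep
cleaned = suppress_noise(voice + hum, hum)
print(cleaned.shape)  # (8000,)
```

Real handsets do this adaptively, frame by frame and per frequency band, but the core move is the one the answer describes: use the extra microphone to estimate the background and take that much out of the voice channel.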
5wkcqd
In the post red giant stage of a star, why do the outer layers drift into space and not collapse onto the white dwarf?
[ { "answer": "Two reasons. One is that as the star expands the surface gravity decreases and so it's easier for those outer layers to escape. Two is that it's not actually a \"drifting\" away in many cases but is in fact forceful; [stellar winds](_URL_0_) are generated which are blowing surface material away. We see this observationally but the physical processes that are all intertwined are quite complex.", "provenance": null }, { "answer": "It's worth remembering, too, how thin the outer layers of a red giant star can be. The sun isn't going to gain any mass when it becomes a red giant, but it will expand all the way out to 1 AU or so. It's almost a cloud as much as a star, if a hot, glowing cloud.\n\nThe eventual white dwarf is very, very small. So given the volume here, you don't need much momentum to stay in orbit, rather than fall back in. So those cooling, outer layers of gas can swirl on their own, with the force that led them to be expelled out in the first place keeping them in motion. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "37749068", "title": "Extreme mass ratio inspiral", "section": "Section::::Formation.:Alternatives.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 382, "text": "A third option is that a giant star passes close enough to the central massive black hole for the outer layers to be stripped away by tidal forces, after which the remaining core may become an EMRI. However, it is uncertain if the coupling between the core and outer layers of giant stars is strong enough for stripping to have a significant enough effect on the orbit of the core.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25101402", "title": "Astrophysical X-ray source", "section": "Section::::X-ray emission from stars.:White dwarfs.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 757, "text": "When the core of a medium mass star contracts, it causes a release of energy that makes the envelope of the star expand. This continues until the star finally blows its outer layers off. The core of the star remains intact and becomes a white dwarf. The white dwarf is surrounded by an expanding shell of gas in an object known as a planetary nebula. Planetary nebula seem to mark the transition of a medium mass star from red giant to white dwarf. X-ray images reveal clouds of multimillion degree gas that have been compressed and heated by the fast stellar wind. Eventually the central star collapses to form a white dwarf. For a billion or so years after a star collapses to form a white dwarf, it is \"white\" hot with surface temperatures of ~20,000 K.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11215402", "title": "Z Andromedae", "section": "Section::::Binary system.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 308, "text": "The evolved red giant star is losing mass, since radiation pressure overcomes the low gravity on the surface. The outflow of matter is captured by the gravitational field of the white dwarf and falls on its surface in the end. 
At least during the active phase an accretion disk forms around the white dwarf.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6813", "title": "Chandrasekhar limit", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 601, "text": "White dwarfs resist gravitational collapse primarily through electron degeneracy pressure (compare main sequence stars, which resist collapse through thermal pressure). The Chandrasekhar limit is the mass above which electron degeneracy pressure in the star's core is insufficient to balance the star's own gravitational self-attraction. Consequently, a white dwarf with a mass greater than the limit is subject to further gravitational collapse, evolving into a different type of stellar remnant, such as a neutron star or black hole. Those with masses under the limit remain stable as white dwarfs.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "51326627", "title": "WD 1145+017 b", "section": "Section::::Vaporization.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 988, "text": "WD 1145+047 b is currently being vaporized by its star because of its extreme proximity to it. White dwarfs are usually the size of the Earth, and have half as much mass as they did during the main sequence. Due to this and the searing hot temperature of the stellar remnant, rocky minerals are being vaporized off the surface of this object, into orbit around the star, which is responsible for a hot dusty disk that was observed around its host star. It is likely that WD 1145+017 b is bound to disintegrate in the future (around 100–200 million years from now) due to further vaporization and ablation. The minor planet is likely being pelted by several smaller objects of up to , as it is likely not just a single object orbiting the white dwarf star, but likely several planetesimals, which is probably responsible for some of the variations in the light curve data. The smaller objects can also throw debris into orbit upon impact, which may also be responsible for the variations.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "27980", "title": "Stellar evolution", "section": "Section::::Stellar remnants.:White and black dwarfs.\n", "start_paragraph_id": 57, "start_character": 0, "end_paragraph_id": 57, "end_character": 1263, "text": "If the white dwarf's mass increases above the Chandrasekhar limit, which is for a white dwarf composed chiefly of carbon, oxygen, neon, and/or magnesium, then electron degeneracy pressure fails due to electron capture and the star collapses. Depending upon the chemical composition and pre-collapse temperature in the center, this will lead either to collapse into a neutron star or runaway ignition of carbon and oxygen. Heavier elements favor continued core collapse, because they require a higher temperature to ignite, because electron capture onto these elements and their fusion products is easier; higher core temperatures favor runaway nuclear reaction, which halts core collapse and leads to a Type Ia supernova. These supernovae may be many times brighter than the Type II supernova marking the death of a massive star, even though the latter has the greater total energy release. This instability to collapse means that no white dwarf more massive than approximately can exist (with a possible minor exception for very rapidly spinning white dwarfs, whose centrifugal force due to rotation partially counteracts the weight of their matter). 
Mass transfer in a binary system may cause an initially stable white dwarf to surpass the Chandrasekhar limit.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26808", "title": "Star", "section": "Section::::Formation and evolution.:Post–main sequence.:Collapse.\n", "start_paragraph_id": 66, "start_character": 0, "end_paragraph_id": 66, "end_character": 678, "text": "As a star's core shrinks, the intensity of radiation from that surface increases, creating such radiation pressure on the outer shell of gas that it will push those layers away, forming a planetary nebula. If what remains after the outer atmosphere has been shed is less than 1.4 , it shrinks to a relatively tiny object about the size of Earth, known as a white dwarf. White dwarfs lack the mass for further gravitational compression to take place. The electron-degenerate matter inside a white dwarf is no longer a plasma, even though stars are generally referred to as being spheres of plasma. Eventually, white dwarfs fade into black dwarfs over a very long period of time.\n", "bleu_score": null, "meta": null } ] } ]
null
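The first answer's point that surface gravity falls sharply as the star expands can be checked with a quick back-of-the-envelope calculation. The sketch below assumes Newtonian gravity with rounded textbook values; the roughly 1 AU red-giant radius is only an illustrative figure.

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # kg
R_sun = 6.957e8      # m, present-day solar radius
R_giant = 1.496e11   # m, ~1 AU, a rough red-giant radius used for illustration

def surface_gravity(mass: float, radius: float) -> float:
    """Newtonian surface gravity g = G * M / R^2."""
    return G * mass / radius**2

g_now = surface_gravity(M_sun, R_sun)       # about 274 m/s^2
g_giant = surface_gravity(M_sun, R_giant)   # about 6e-3 m/s^2
print(f"now: {g_now:.0f} m/s^2, as red giant: {g_giant:.2e} m/s^2 "
      f"({g_now / g_giant:.0f}x weaker)")
```

With roughly the same mass spread over a radius hundreds of times larger, the surface gravity drops by a factor of tens of thousands, which is why modest stellar winds are enough to carry the outer layers away.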
6y9t3u
why can't facebook, twitter and instagram just shut down bot accounts?
[ { "answer": "The problem is how do you actually figure out that a twitter user is a bot. You can use Machine Learning with certain features (like tweet sentiment) to analyze the data, but there's the possibility of false positives. Also, it's an arms race. Once they figure out what you're looking for all the bot creator needs to do is to change the bot themselves. You can take a look at [this pdf](_URL_0_) if you're interested in the details of figuring out whether a twitter user is a bot or a human. My ML professor participated in the DARPA Twitter Bot challenge and he said there were a lot of arguments on his team because of it, and at times he felt that it was going to rip the team apart.", "provenance": null }, { "answer": "2 problems:\n\n1: It's hard to get computers to recognize bot behavior in some cases. Computers are terrible at patterns, which is weird for humans to comprehend, because humans are amazing at noticing patterns. This we're getting better at constantly with machine learning (complex equations that adjust themselves to better fit patterns). Additionally, you have to continually train the machine to combat newer kinds of bots- you can't beat a pattern before the pattern exists. And the more behaviors you flag as \"bot-like\", the more risk you run against the next problem.\n\n2: If they screw up and shut down normal user accounts they'd have a media frenzy on their hands- anyone whose account got banned because Facebook though they were a bot would be furious, and it wouldn't be hard for them to rally support either.\n\nAnother consideration is that they might not always want to shut out bots. Some could very well be interesting twitter bots that provide useful or novel services, like a neat bot that would tweet Twitter's stock price every hour, or would tweet a link to the highest Reddit post of the last day every day at midnight. Those aren't particularly taxing on their systems, but could be a neat thing that brings more traffic to Twitter, which means it's good for business. Also, depending on how the bot is set up, it may get served ads and generate revenue that way (this heavily depends on the people behind the bot).", "provenance": null }, { "answer": "If I can throw something else into this discussion, these Services all make money based on advertisements. What do you think would happen to Facebook if the advertisers found out that say, half of the users were fake? ( between my girlfriend and I I think we've created seven or eight ourselves) What about if the Twitter users that always get posted were discovered to be just 20% fake? If you were an Advertiser wouldn't you feel like you deserve to be reimbursed or that contracts for advertisement should be renegotiated based on actual users? There is a profit motive - a very serious profit motive - to keeping these Bots accounts active in spite of how problematic they can be. That's why I don't believe that there's ever going to be a real push from the service providers to get rid of Bot accounts. It will only be when the advertisers realize that they are wasting money for inflated user counts.", "provenance": null }, { "answer": "Well, just banning the accounts in the first place is a bit of a problem in and of itself from a structure standpoint because there are a lot of loose ends to tie up.\n\n* What do you do with the usernames? Are they still taken even after the account is removed? This could be thousands of usernames. 
Do we differentiate between \"bot dukeofdummies the 1st\" and \"human dukeofdummies the II\"? Do you show it visibly or hide it in the back end? \n* Do you show the history of the previous account? Do you delete all of \"Dukeofdummies the 1st\" or does it still show up? Records are nice but it does skew with all the marketing data these sites make money off of.\n* What happens when another bot account *still wants to talk to dukeofdummies*? I take this account not knowing its prior history and every day I get 30 friend requests from random bots I've never met. Talk about a bad customer experience.\n\nEven if we get all the technical jargon figured out to remove accounts (it's annoying but it's doable) How do you even figure out if someone is a bot? You could look for bot-like behavior... but that gets tricky. \n\n* 8000 people on one IP address? Kinda sketchy. 50 people on one IP address? Could be legit.\n* 8000 people copy paste the same post all around 9:25? Could be a bot net, but it could also be a bunch of humans posting the latest Rush Limbaugh rant that started at 9 in a race to be the first to post on Facebook.\n* What about an account that doesn't do anything but copy yesterday's most popular posts in these subreddits and then tries to post them to the front page! You could easily build a bot to do this [but on the other hand...](_URL_0_)\n* Flat up asking is an interesting idea, but that doesn't work. Because humans can ignore your question just as easily as a bot. Some people can go six months without checking Facebook. People can also easily say \"yes. sincerely, Dukeofdummies\". How is that proof that you're human?\n\nThe really, really difficult part about all of this though is that these websites purposely make it easy to sign up. If the initial barrier to enter to one of these sites is too high... then people don't sign up. However that means that even if you remove 700 bot accounts accurately... they can simply build them right back up. You need to remove them from circulation *faster* than they can repopulate, which means jumping to conclusions faster and making false positives, costing you customer satisfaction and having people leave your user base and cost you money which hits on the biggest issue in all of this.\n\nWith all of the effort it's going to take to fix this, with all of the ambiguity of the question \"are you human?\", with all the potential costs of accidentally removing customers, A website owner has to ask itself, \"is anyone *really* going to care if bots are manipulating some things?\" Is the user base going to leave over this? Comparing costs/benefits does this really hurt us? Can we make do with a light purge every once in a while of the most obvious offenders to make it look like we're doing something and go on with our lives?\n\nTL:DR: It's kinda hard to do, really hard to do it well and in a timely fashion, and they really don't feel the urge to do it in the first place.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "52601346", "title": "Social bot", "section": "", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 411, "text": "Using social bots is against the terms of service of many platforms, especially Twitter and Instagram. However, a certain degree of automation is of course intended by making social media APIs available. Many users, especially businesses still automate their Instagram activity in order to gain real followers rather than buying fake ones. 
This is commonly done through third-party social automation companies.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "805274", "title": "BugMeNot", "section": "Section::::Use of the service.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 430, "text": "BugMeNot allows users of their service to add new accounts for sites with free registration. It also encourages users to use disposable email address services to create such accounts. However, it does not allow them to add accounts for pay websites, as this could potentially lead to credit card fraud. BugMeNot also claims to remove accounts for any website requesting that they do not provide accounts for non-registered users.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31437888", "title": "Microblogging in China", "section": "Section::::Chinese microbloggers on Twitter.\n", "start_paragraph_id": 34, "start_character": 0, "end_paragraph_id": 34, "end_character": 402, "text": "Due to the strict Internet censorship policy on microblogging enacted by the CPC government, a number of Chinese microbloggers choose to make posts that contain \"sensitive contents\" on Twitter. Although Twitter has been blocked in China since 2009, most Twitter users who reside in China can access the Twitter website using a proxy. More information can be found on List of websites blocked in China.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "35992662", "title": "Twitter bot", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 633, "text": "A Twitter bot is a type of bot software that controls a Twitter account via the Twitter API. The bot software may autonomously perform actions such as tweeting, re-tweeting, liking, following, unfollowing, or direct messaging other accounts. The automation of Twitter accounts is governed by a set of automation rules that outline proper and improper uses of automation. Proper usage includes broadcasting helpful information, automatically generating interesting or creative content, and automatically replying to users via direct message. Improper usage includes circumventing API rate limits, violating user privacy, or spamming.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30642008", "title": "Censorship of Twitter", "section": "Section::::Censorship by Twitter.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 361, "text": "Under the Terms of Service that Twitter requires its users to agree to, Twitter retains the right to temporarily or permanently suspend user accounts based on a violation of the agreement. One such example took place on December 18, 2017, when it banned the accounts belonging to Paul Golding, Jayda Fransen, Britain First, and the Traditionalist Worker Party.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9988187", "title": "Twitter", "section": "Section::::Society.:Impact.:Twitterbot effect.\n", "start_paragraph_id": 209, "start_character": 0, "end_paragraph_id": 209, "end_character": 1312, "text": "In addition to content-generating bots, users can purchase followers, favorites, retweets and comments on various websites that cater to expanding a user's image through accumulation of followers. With more followers, users' profiles gain more attention, thus increasing their popularity. Generating Web traffic is a valuable commodity for both individuals and businesses because it indicates notability. 
With Twitterbots, users are able to create the illusion of \"buzz\" on their site by obtaining followers from services such as Swenzy and underground suppliers who operate bot farms or click farms. The companies that facilitate this service create fake Twitter accounts that follow a number of people, some of these Twitter accounts may even post fake tweets to make it seem like they are real. This practice of obtaining mass amounts of twitterbots as followers is not permitted on Twitter. The emphasis on followers and likes as a measure of social capital has urged people to extend their circle to weak and latent ties to promote the idea of popularity for celebrities, politicians, musicians, public figures, and companies alike. According to \"The New York Times\", bots amass significant influence and have been noted to sway elections, influence the stock market, public appeal, and attack governments.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20819040", "title": "Hashtag", "section": "Section::::Style.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 283, "text": "As well as frustrating other users, the misuse of hashtags can lead to account suspensions. Twitter warns that adding hashtags to unrelated tweets, or repeated use of the same hashtag without adding to a conversation, could cause an account to be filtered from search, or suspended.\n", "bleu_score": null, "meta": null } ] } ]
null
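The first answer mentions using machine learning over account features (such as tweet sentiment) to flag bots, with false positives as the main danger. Below is a toy sketch of that kind of feature-based classifier; it assumes scikit-learn, and every feature name, number, and label is invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented per-account features for illustration only:
# [tweets_per_day, fraction_of_retweets, mean_seconds_between_posts]
X = np.array([
    [400, 0.95,    9],   # bot-like: huge volume, nearly all retweets
    [350, 0.90,   12],
    [  6, 0.20, 5400],   # human-like
    [ 10, 0.35, 3600],
])
y = np.array([1, 1, 0, 0])  # 1 = labelled bot, 0 = labelled human

clf = LogisticRegression(max_iter=1000).fit(X, y)

# The classifier returns a probability, not a verdict; where the platform
# sets the decision threshold is exactly the false-positive trade-off
# the answer describes.
new_account = np.array([[300, 0.85, 20]])
print(clf.predict_proba(new_account)[0, 1])  # estimated probability of "bot"
```

This also makes the arms-race point concrete: once bot operators learn which features the model weights, they can shift their posting behaviour until it falls on the "human" side of the threshold.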
4xm5iy
why do higher impedance speakers yield better sound quality?
[ { "answer": "It will be hard to keep this simple, but here we go. A larger voice coil SLIGHTLY improves quality. The larger the voice coil, the higher the impedance. The higher the impedance, the lower the volume. To maintain higher volume, you need higher voltage. Many phones and other portable music devices won't put out higher voltages. It would shorten your battery life. ", "provenance": null }, { "answer": "For the most part, impedance is unrelated to sound quality. That said, lower impedance tends to make amplifiers work harder, which can lead to poorer performance. ", "provenance": null }, { "answer": "The amplifier has an output impedance and the speakers have an input impedance. The signal itself is a time-varying voltage which is divided between the two impedances according to their ratio. To maximize the amount of power in the speakers, you want as low an output impedance on the amplifier and as high an input impedance on the speakers as possible.\n\nNote that this is the same for *any* sort of amplification process. You want high impedance on the load because that's where you want most of the signal to be dissipated. If you have a low impedance on the load, most of the signal will be dissipated (uselessly) elsewhere.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "246802", "title": "Audio power", "section": "Section::::Power and loudness in the real world.\n", "start_paragraph_id": 49, "start_character": 0, "end_paragraph_id": 49, "end_character": 734, "text": "Speaker sensitivity is measured and rated on the assumption of a fixed amplifier output voltage because audio amplifiers tend to behave like voltage sources. Sensitivity can be a misleading metric due to differences in speaker impedance between differently designed speakers. A speaker with a higher impedance may have lower measured sensitivity and thus appear to be less efficient than a speaker with a lower impedance even though their efficiencies are actually similar. Speaker efficiency is a metric that only measures the actual percentage of electrical power that the speaker converts to acoustic power and is sometimes a more appropriate metric to use when investigating ways to achieve a given acoustic power from a speaker.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "44731820", "title": "AudioQuest", "section": "Section::::Effects of cable quality on analog audio quality.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 257, "text": "Using an under-rated, long, skinny, oxidized high impedance speaker cable will drastically reduce the damping properties of the entire audio system, which is why in many high-end audio systems, the amplifier is located as close to the speakers as possible.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "638310", "title": "Damping factor", "section": "Section::::In practice.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 794, "text": "Large amounts of damping of the loudspeaker is not necessarily better, for example a mere 0.35 dB difference in real-life results between a high (100) and medium (20) Damping Factor. Some engineers, including Nelson Pass claim loudspeakers can sound \"better\" with lower electrical damping. 
A lower damping factor helps to enhance the bass response of the loudspeaker by several decibels (where the impedance of the speaker would be at its maximum), which is useful if only a single speaker is used for the entire audio range. Therefore, some amplifiers, in particular vintage amplifiers from the 1950s, '60s and '70s, feature controls for varying the damping factor. While such bass \"enhancement\" may be pleasing to some enthusiasts, it nonetheless represents a distortion of the input signal.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19663420", "title": "The Cizek Model One", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 252, "text": "This product was important because it was the first loudspeaker system which, due to its particular crossover, showed a flat impedance curve (except for the resonance peak) with a consequent easier work for the amplifier and linear frequency response.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "41004476", "title": "Naim Audio amplification", "section": "Section::::History.:Design principles.:Output protection.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 1004, "text": "Vereker also believed that a well-designed amplifier must be stable at all times when driving real-life loads, which are different from those achieved in lab conditions because loudspeakers' impedances vary with frequency. The inherent compromise between the pursuits for stability and sound quality means that Naim's power amplifiers are designed to work optimally with its own moderately priced speaker cable, and its predecessor . Product manuals warn users against using \"high-definition wire or any other special cable between amplifier and loudspeaker\". Whilst other manufacturers habitually employ Zobel networks (or an output filter which enhances amplifiers' stability) to protect against use with speakers and or cables with very high-capacitance, Naim amplifiers routinely omit these filters because of their adverse effect on sound quality. The design decision was made to use a suitable length of speaker cable (a minimum of 3.5m, with 5m being optimal) to provide the effective inductance.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23714384", "title": "Tube sound", "section": "Section::::Harmonic content and distortion.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 408, "text": "The design of speaker crossover networks and other electro-mechanical properties may result in a speaker with a very uneven impedance curve, for a nominal 8 Ω speaker, being as low as 6 Ω at some places and as high as 30–50 Ω elsewhere in the curve. An amplifier with little or no negative feedback will always perform poorly when faced with a speaker where little attention was paid to the impedance curve.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7645050", "title": "Plasma speaker", "section": "Section::::Comparison to conventional loudspeakers.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 364, "text": "Thus conventional speaker output, or the fidelity of the device, is distorted by physical limitations inherent in its design. These distortions have long been the limiting factor in commercial reproduction of strong high frequencies. 
To a lesser extent square wave characteristics are also problematic; the reproduction of square waves most stress a speaker cone.\n", "bleu_score": null, "meta": null } ] } ]
null
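The third answer frames the amplifier's output impedance and the speaker's input impedance as a divider, with the useful power being whatever ends up in the speaker. A tiny worked example of that split for purely resistive impedances follows; the 0.1-ohm source and the 4-ohm and 8-ohm speaker figures are illustrative assumptions, not measurements.

```python
def load_power_fraction(z_out: float, z_load: float) -> float:
    """For two impedances in series carrying the same current, the power
    splits in proportion to the impedance, so the load's share is
    Z_load / (Z_out + Z_load)."""
    return z_load / (z_out + z_load)

# Hypothetical figures: a solid-state amplifier with 0.1-ohm output
# impedance driving first a 4-ohm, then an 8-ohm speaker.
z_out = 0.1
for z_speaker in (4.0, 8.0):
    frac = load_power_fraction(z_out, z_speaker)
    damping_factor = z_speaker / z_out  # the ratio the damping-factor passage above discusses
    print(f"{z_speaker:.0f} ohm speaker: {frac:.1%} of the dissipated power "
          f"reaches the load (damping factor {damping_factor:.0f})")
```

Either way almost all of the power reaches the speaker, which is consistent with the point made above that impedance by itself says little about sound quality; the practical differences show up in how hard the amplifier has to work and in the effective damping.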
548hbi
why did teachers always tell us to remove hats/caps when we enter a building? what does this signify?
[ { "answer": "It goes back to olden times when a knight would remove his face gear. It's simply a sign of respect to remove head pieces when in someone else's \"home\", and as well as during the Pledge of Allegiance.", "provenance": null }, { "answer": "What Saul/Paul of Tarsus wrote in 1 Corinthians 11:7 likely plays a role too. This is why it has been traditional to take hats off in church, at least.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "8084104", "title": "Etiquette in Europe", "section": "Section::::Hats and coats.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 438, "text": "Wearing coats, boots or other outer garments inside someone’s home is often frowned upon as well. Sitting down to eat at table wearing a hat or coat etc. is even worse. Also one should remove one's hat when showing deference. Removing one's hat is also a form of respectful greeting: the origin of this is that knights were expected to remove their helmets when meeting their king; not doing so would be a sign of mistrust and hostility.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "59730982", "title": "Lobsters (website)", "section": "Section::::Notable Features.:Hats.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 415, "text": "Hats are a feature where users who belong to an company, project, or organization may choose to wear a \"hat\", indicating that they are speaking on behalf of the organization. This allows users to fluidly move between talking in an official capacity to talking as themselves without changing accounts. Specially colored red hats are worn by members of the Lobsters community that upkeep the site, marked as \"Sysop\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1032688", "title": "Fear of a Black Hat", "section": "Section::::Plot.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 487, "text": "The members wear outrageous headwear during their performances. This is explained as an act of rebellion, remembering their slave ancestors, who had to work bare-headed in the sun. According to N.W.H., hats are a symbol for resistance and revolution since their hatless ancestors were too tired from working all day in the sun to revolt. This is a typical example of the bizarre logic the group uses to explain the deeper meanings behind their otherwise crude and base music and images.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "337977", "title": "Academic dress of the University of Oxford", "section": "Section::::Student dress.:Undergraduates.:Undergraduates and mortarboards.\n", "start_paragraph_id": 75, "start_character": 0, "end_paragraph_id": 75, "end_character": 475, "text": "It is often claimed that undergraduates by custom do not wear their caps (or even that they can be fined for doing so). This is incorrect. Outdoors, caps may be worn, but it is customary to touch or raise one's cap as a salute to senior university or college officers. 
Like all other male members of the university (including graduates) other than the Chancellor, Vice-Chancellor and Proctors, male undergraduates must remove their caps during university ceremonies indoors.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "732101", "title": "Academic dress of the University of Cambridge", "section": "Section::::Components of Cambridge academic dress.:Headdresses.\n", "start_paragraph_id": 35, "start_character": 0, "end_paragraph_id": 35, "end_character": 689, "text": "A form of a black hat known as a square cap (also mortarboard) is worn or carried. Properly, it is worn outdoors and carried indoors, except by people acting in an official capacity who customarily continue to wear it indoors. Although in practice few people wear (or even carry) a cap nowadays, they are nominally still required for graduates at the university; caps ceased to be compulsory for undergraduates in 1943 due to a shortage during the Second World War, and, after bringing them back for degree ceremonies in the Senate House only, were finally made entirely optional for undergraduates in 1953, though they are still not permitted to wear any other head covering with a gown.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "56406788", "title": "Granny dress", "section": "Section::::Controversy.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 1098, "text": "Students wearing granny dresses to school were suspended or sent home. In Oakland in 1965, girls were sent home for wearing granny dresses. In Kansas City, Missouri, a mother wore her own granny dress to school in an attempt to convince the principal to allow her daughter to wear one. In Trumansburg, New York, in 1966, three sisters were suspended from school for wearing the dress. The school's attorney claimed that both safety and possible class disruption were the reasons the dress was banned. The principal of the school felt that there was a danger of tripping on stairs because of the length of the dresses. Laura M. Lorraine, dean of the Analy Union High School also thought the length of the dresses made them difficult for walking up stairs. The school attorney felt that granny dresses were \"extreme\" and may encourage students to adopt other extreme forms of dressing. In some cases, school authorities just stated that it wasn't \"suitable school attire.\" In 1966, a \"Dear Abby\" column featured a letter from a girl who was sent to the principal's office for wearing a granny dress.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "337977", "title": "Academic dress of the University of Oxford", "section": "Section::::Components of Oxford academic dress.:Academic caps.\n", "start_paragraph_id": 45, "start_character": 0, "end_paragraph_id": 45, "end_character": 428, "text": "Men wear a mortarboard (also known as a \"square\" or trencher cap) [h1], which is not worn indoors, except by the Chancellor, Vice-Chancellor and Proctors. When meeting the Vice-Chancellor, Proctors, or other senior official of the university in the street, it is traditional for a man to touch or raise his cap. In practice few people wear their caps nowadays, and instead carry their caps on occasions where caps are required.\n", "bleu_score": null, "meta": null } ] } ]
null
3vy6nk
Would having a more efficient/faster brain affect our perception of time?
[ { "answer": "I am by no means an expert, but I know (both firsthand and from [documented sources](_URL_0_) ) that your perception of time can be affected by life-threatening situations. These situations *probably (and I'm guessing here)* do something like speeding up your neuronal activity and cause your neural \"clock speed\" to increase. This causes things around you to move slowly from your point of view. \n\nPerhaps someone in this field can expand further on this, but it's an interesting subject and I wish I knew more about it to properly answer your question. ", "provenance": null }, { "answer": "If you simulated a brain at twice the speed, it would alter the person's perception of time. If it took them ten seconds to count to ten normally, then it would take five seconds to simulate them counting to ten.\n\nIf you made a human brain more efficient, then you're redesigning it. You could alter the perception of time, but you could also leave it the same. It depends on what you do to it.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1305044", "title": "Neuroscience and intelligence", "section": "Section::::Humans.:Neural efficiency.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 626, "text": "The neural efficiency hypothesis postulates that more intelligent individuals display less activation in the brain during cognitive tasks, as measured by Glucose metabolism. A small sample of participants (N=8) displayed negative correlations between intelligence and absolute regional metabolic rates ranging from -0.48 to -0.84, as measured by PET scans, indicating that brighter individuals were more effective processors of information, as they use less energy. According to an extensive review by Neubauer & Fink a large number of studies (N=27) have confirmed this finding using methods such as PET scans, EEG and fMRI.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10226486", "title": "Fergus I. M. Craik", "section": "Section::::Research.:Age-Related Memory Changes.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 975, "text": "In this case, age is looked at as a factor that alters and degrades memory efficiency and abilities over time. Age-related memory problems become more persistent in the elderly years, and one's ability to recall previously encoded stimuli without cues or context is no longer optimal. However, verbal or visual stimuli can be recognized at the same level of efficiency over the course of a lifetime. Craik and his colleagues found physiological evidence for this cognitive degradation through their research into the brains of elderly participants. Specifically, they discovered that there is a reduction in frontal activity. Still, there is an increased level of activity in the left prefrontal cortex when older adults undergo some nonverbal tasks of retrieval when compared to younger individuals. Moreover, the presence of increased left prefrontal cortex activity is only found in tasks revolving retrieval but there is still a reduction when performing encoding tasks.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4402098", "title": "Memory and aging", "section": "Section::::Causes.:Theories.\n", "start_paragraph_id": 25, "start_character": 0, "end_paragraph_id": 25, "end_character": 682, "text": "Speed of processing is another theory that has been raised to explain working memory deficits. 
As a result of various studies he has completed examining this topic, Salthouse argues that as we age our speed of processing information decreases significantly. It is this decrease in processing speed that is then responsible for our inability to use working memory efficiently as we age. The younger persons brain is able to obtain and process information at a quicker rate which allows for subsequent integration and manipulation needed to complete the cognitive task at hand. As this processing slows, cognitive tasks that rely on quick processing speed then become more difficult.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1305044", "title": "Neuroscience and intelligence", "section": "Section::::Humans.:Neural efficiency.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 837, "text": "fMRI and EEG studies have revealed that task difficulty is an important factor affecting neural efficiency. More intelligent individuals display neural efficiency only when faced with tasks of subjectively easy to moderate difficulty, while no neural efficiency can be found during difficult tasks. In fact, more able individuals appear to invest more cortical resources in tasks of high difficulty. This appears to be especially true for the Prefrontal Cortex, as individuals with higher intelligence displayed increased activation of this area during difficult tasks compared to individuals with lower intelligence. It has been proposed that the main reason for the neural efficiency phenomenon could be that individuals with high intelligence are better at blocking out interfering information than individuals with low intelligence.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "35129609", "title": "Time-based prospective memory", "section": "Section::::Factors affecting prospective memory.:Cognitive load.\n", "start_paragraph_id": 46, "start_character": 0, "end_paragraph_id": 46, "end_character": 838, "text": "A study by Khan et al. (2008) examined the influence of cognitive load (low vs. high) on time-based prospective memory. The findings implied that time-based prospective memory is severely affected when cognitive load is high. The study attributed the poor performance on time-based tasks as a result of dividing attentional resources into actively monitoring time, self-initiating the response at the appropriate time and the ongoing task. Humans have limited attentional capacity, and therefore high cognitive load affects monitoring of time and consequently time-based prospective memory performance negatively. Numerous aspects of daily life depend on time-based prospective memory, ranging from daily activities such as remembering what time to meet a friend, to more important tasks such as remembering what time to take medication.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6069126", "title": "Time perception", "section": "Section::::Types of temporal illusions.:Changes with age.\n", "start_paragraph_id": 53, "start_character": 0, "end_paragraph_id": 53, "end_character": 858, "text": "Psychologists have found that the subjective perception of the passing of time tends to speed up with increasing age in humans. This often causes people to increasingly underestimate a given interval of time as they age. This fact can likely be attributed to a variety of age-related changes in the aging brain, such as the lowering in dopaminergic levels with older age; however, the details are still being debated. 
In an experimental study involving a group of subjects aged between 19 and 24 and a group between 60 and 80, the participants' abilities to estimate 3 minutes of time were compared. The study found that an average of 3 minutes and 3 seconds passed when participants in the younger group estimated that 3 minutes had passed, whereas the older group's estimate for when 3 minutes had passed came after an average of 3 minutes and 40 seconds.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "47357235", "title": "Neural efficiency hypothesis", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 292, "text": "The neural efficiency hypothesis is the phenomenon where smarter individuals show lower (more efficient) brain activation than less bright individuals on cognitive tests of low to moderate difficulty. For tasks of higher difficulty, however, smarter individuals show higher brain activation.\n", "bleu_score": null, "meta": null } ] } ]
null
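A minimal sketch (mine, not from the answers above) of the "clock speed" idea in this record: if subjective time is counted in internal ticks, doubling the tick rate doubles how many ticks fit into the same external interval, so outside events seem to unfold more slowly. The tick rates and the 10-second interval are arbitrary illustrative numbers, not measured values.

```python
# Illustration only: treat "subjective duration" as the number of internal
# ticks that elapse during a fixed external (wall-clock) interval.

def subjective_ticks(external_seconds, ticks_per_second):
    """How many internal ticks fit into an external interval."""
    return external_seconds * ticks_per_second

baseline_rate = 40.0   # arbitrary internal tick rate (ticks per second)
doubled_rate = 80.0    # the "simulate the brain at twice the speed" case
interval = 10.0        # external interval in seconds

print(subjective_ticks(interval, baseline_rate))  # 400.0 ticks
print(subjective_ticks(interval, doubled_rate))   # 800.0 ticks
# Twice as many ticks fit into the same 10 s, so the same external event
# occupies twice as much "subjective" time and appears to pass more slowly.
```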
2cmp6e
Are you born allergic to things or do you get it later on?
[ { "answer": "Both. There are several different types of allergies. Type I hypersensitivity is caused when your body identifies something as an antigen and over-responds to it (for example, peanut allergies fall into this category). However, there is another type of hypersensitivity reaction, a type IV hypersensitivity reaction, that occurs after you have been \"sensitized\" to something. This is what occurs when you develop a reaction to something like poison oak/ivy (whatever they have in your area) or latex. They are mediated by different mechanisms: Type I is caused by mast cells releasing histamine and other pro-inflammatory cytokines, while Type IV is mediated by macrophages that congregate in the \"infected\" area.", "provenance": null }, { "answer": "No. You are not born allergic to anything. You need to be sensitized to the allergen first, and you aren't really fully capable of that until several months after birth.\n\nYou can, however, be born with a predisposition to becoming allergic to things (not specific things, just \"things\" in general). ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "58859", "title": "Allergen", "section": "Section::::Types of allergens.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 675, "text": "An allergic reaction can be caused by any form of direct contact with the allergen—consuming food or drink one is sensitive to (ingestion), breathing in pollen, perfume or pet dander (inhalation), or brushing a body part against an allergy-causing plant (direct contact). Other common causes of serious allergy are wasp, fire ant and bee stings, penicillin, and latex. An extremely serious form of an allergic reaction is called anaphylaxis. One form of treatment is the administration of sterile epinephrine to the person experiencing anaphylaxis, which suppresses the body's overreaction to the allergen, and allows for the patient to be transported to a medical facility.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3485964", "title": "Allergic contact dermatitis", "section": "Section::::Signs and symptoms.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 306, "text": "The symptoms of allergic contact may persist for as long as one month before resolving completely. Once an individual has developed a skin reaction to a certain substance it is most likely that they will have it for the rest of their life, and the symptoms will reappear when in contact with the allergen.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "55313", "title": "Allergy", "section": "Section::::Cause.:Foods.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 388, "text": "A wide variety of foods can cause allergic reactions, but 90% of allergic responses to foods are caused by cow's milk, soy, eggs, wheat, peanuts, tree nuts, fish, and shellfish. Other food allergies, affecting less than 1 person per 10,000 population, may be considered \"rare\". The use of hydrolysed milk baby formula versus standard milk baby formula does not appear to change the risk.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24431455", "title": "Suillus americanus", "section": "Section::::Allergenicity.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 331, "text": "Some susceptible individuals have experienced an allergic reaction after touching \"Suillus americanus\". 
The symptoms of allergic contact dermatitis generally develop one to two days after initial contact, persist for roughly a week, then disappear without treatment. Cooking the fruit bodies inactivates the responsible allergens.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "341640", "title": "Coriander", "section": "Section::::Allergy.\n", "start_paragraph_id": 37, "start_character": 0, "end_paragraph_id": 37, "end_character": 345, "text": "Some people are allergic to coriander leaves or seeds, having symptoms similar to those of other food allergies. In one study, 32% of pin-prick tests in children and 23% in adults were positive for coriander and other members of the family Apiaceae, including caraway, fennel, and celery. The allergic symptoms may be minor or life-threatening.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14400598", "title": "Allergic response", "section": "Section::::Mechanism.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 437, "text": "Many substances can trigger an allergic reaction. Common triggers of a reaction include foods, likes nuts, eggs, milk, gluten, fruit and vegetables; insect bites from bees or wasps (often a severe response occurs); environmental factors such as pollen, dust, mold, plants like grass or trees, animal dander; medications or chemicals. Some people experience an allergic response to cold or hot temperatures outside, jewelry or sunlight. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2834063", "title": "Peanut allergy", "section": "Section::::Cause.:Routes of exposure.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 402, "text": "While the most obvious route for an allergic exposure is unintentional ingestion, some reactions are possible through external exposure. Peanut allergies are much more common in infants who had oozing and crusted skin rashes as infants. Sensitive children may react via ingestion, inhalation, or skin contact to peanut allergens which have persistence in the environment, possibly lasting over months.\n", "bleu_score": null, "meta": null } ] } ]
null
2mj51d
Will the other side of the moon ever be facing earth?
[ { "answer": "The Moon is [tidally locked](_URL_0_), meaning it's orbital period equals its rotational period. Thus, the answer to your question is that the same side of the Moon will *always* face the Earth. Viewing the Moon from Earth, we will never see the other side. Something very dramatic would have to happen to change that.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "602678", "title": "Extraterrestrial skies", "section": "Section::::The Moon.:The Earth from the Moon.\n", "start_paragraph_id": 38, "start_character": 0, "end_paragraph_id": 38, "end_character": 326, "text": "As a result of the Moon's synchronous rotation, one side of the Moon (the \"near side\") is permanently turned towards Earth, and the other side, the \"far side\", mostly cannot be seen from Earth. This means, conversely, that Earth can be seen only from the near side of the Moon and would always be invisible from the far side.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "61182504", "title": "Earth phase", "section": "Section::::Overview.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 326, "text": "As a result of the Moon's synchronous rotation, one side of the Moon (the \"near side\") is permanently turned towards Earth, and the other side, the \"far side\", mostly cannot be seen from Earth. This means, conversely, that Earth can be seen only from the near side of the Moon and would always be invisible from the far side.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5473867", "title": "Near side of the Moon", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 331, "text": "The near side of the Moon is the lunar hemisphere that is permanently turned towards Earth, whereas the opposite side is the far side. Only one side of the Moon is visible from Earth because the Moon rotates on its axis at the same rate that the Moon orbits the Earth – a situation known as synchronous rotation, or tidal locking.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "187916", "title": "Man in the Moon", "section": "Section::::Scientific explanation.\n", "start_paragraph_id": 43, "start_character": 0, "end_paragraph_id": 43, "end_character": 429, "text": "The near side of the Moon, containing these maria that make up the man, is always facing Earth. This is due to a tidal locking or synchronous orbit. Thought to have occurred because of the gravitational forces partially caused by the Moon's oblong shape, its rotation has slowed to the point where it rotates exactly once on each trip around the Earth. 
This causes the near side of the Moon to always turn its face toward Earth.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "53559013", "title": "Solar eclipse of January 23, 1860", "section": "Section::::Description.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 461, "text": "On the other side as the Moon from the Earth headed towards the left at New Zealand, as the umbral path was outside the South Pole and over the Prime Meridian to the Peninsula, the Moon from the Earth was seen as it was going on bottom, then on the right and on top in the peninsular portion though the Earth rotates to the east as it was north of the South Pole at the Prime Meridian, the rest of the world saw the Moon from the Earth headed towards the left.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "314494", "title": "Libration", "section": "Section::::Lunar libration.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 448, "text": "The Moon keeps one hemisphere of itself facing the Earth, due to tidal locking. Therefore, the first view of the far side of the Moon was not possible until the Soviet probe Luna 3 reached the Moon on October 7th, 1959 and further lunar exploration by the U.S. and the Soviet Union. However, this simple picture is only approximately true: over time, slightly \"more\" than half (about 59%) of the Moon's surface is seen from Earth due to libration.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19331", "title": "Moon", "section": "Section::::Earth-Moon system.:Appearance from Earth.\n", "start_paragraph_id": 67, "start_character": 0, "end_paragraph_id": 67, "end_character": 595, "text": "The Moon is in synchronous rotation as it orbits Earth; it rotates about its axis in about the same time it takes to orbit Earth. This results in it always keeping nearly the same face turned towards Earth. However, because of the effect of libration, about 59% of the Moon's surface can actually be seen from Earth. The side of the Moon that faces Earth is called the near side, and the opposite the far side. The far side is often inaccurately called the \"dark side\", but it is in fact illuminated as often as the near side: once every 29.5 Earth days. During new moon, the near side is dark.\n", "bleu_score": null, "meta": null } ] } ]
null
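A small numerical illustration (not part of the record above) of what tidal locking means: when the rotation period equals the orbital period, the offset between how far the Moon has orbited and how far it has spun stays zero, so the same hemisphere always points at Earth. The 27.3-day figure is the approximate sidereal month; the script only checks the geometry.

```python
import math

orbital_period_days = 27.3    # sidereal month (approximate)
rotation_period_days = 27.3   # tidally locked: equal to the orbital period

for day in range(0, 28, 7):
    orbital_angle = 2 * math.pi * day / orbital_period_days    # position along the orbit
    rotation_angle = 2 * math.pi * day / rotation_period_days  # how far the Moon has spun
    # The difference between the two angles is the lunar longitude currently facing Earth.
    facing_offset = (rotation_angle - orbital_angle) % (2 * math.pi)
    print(f"day {day:2d}: facing offset = {facing_offset:.6f} rad")
# The offset never changes, i.e. the same hemisphere keeps facing Earth.
```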
1ktrmb
Question about image scaling
[ { "answer": "It's a bit more complicated than that. [Here's a very good explanation of how it's done.](_URL_0_)", "provenance": null }, { "answer": "When deciding how large to render an image, you have to arrive at an integer size at some point. This typically happens internal to the rendering API being used - you provide it an NxM image, and given the current transform (in your example, scaling down), it will arrive at needing to draw the image at some other size XxY. It's up to the implementation to round up or down to handle fractional sizes, or even leave the fractional sizes in and perform antialiasing on the edges, as you suggested).\n\nTo actual draw the image at the smaller size, some form of interpolation is used (such as nearest neighbor, linear, or cubic). This process has some similarities with [texture mapping](_URL_0_) for 3D rendering. For each destination pixel, the nearest source pixels will be factored in and weighed according to their distance to the destination pixel.\n\nFor nearest neighbor, the nearest pixel in the source image is used. For linear, the nearest 4 pixels are used and weighed (a 2x2 grid).", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2937077", "title": "Image scaling", "section": "Section::::Mathematical.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 413, "text": "Image scaling can be interpreted as a form of image resampling or image reconstruction from the view of the Nyquist sampling theorem. According to the theorem, downsampling to a smaller image from a higher-resolution original can only be carried out after applying a suitable 2D anti-aliasing filter to prevent aliasing artifacts. The image is reduced to the information that can be carried by the smaller image.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "695241", "title": "Scale invariance", "section": "Section::::Other examples of scale invariance.:Computer vision.\n", "start_paragraph_id": 123, "start_character": 0, "end_paragraph_id": 123, "end_character": 368, "text": "In computer vision and biological vision, scaling transformations arise because of the perspective image mapping and because of objects having different physical size in the world. In these areas, scale invariance refers to local image descriptors or visual representations of the image data that remain invariant when the local scale in the image domain is changed. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2937077", "title": "Image scaling", "section": "Section::::Applications.:General.\n", "start_paragraph_id": 33, "start_character": 0, "end_paragraph_id": 33, "end_character": 239, "text": "Image scaling is used in, among other applications, web browsers, image editors, image and file viewers, software magnifiers, digital zoom, the process of generating thumbnail images and when outputting images through screens or printers.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2937077", "title": "Image scaling", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 207, "text": "In computer graphics and digital imaging, image scaling refers to the resizing of a digital image. 
In video technology, the magnification of digital material is known as upscaling or resolution enhancement.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "47805418", "title": "Directional Cubic Convolution Interpolation", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 207, "text": "By taking into account the edges in an image, this scaling algorithm reduces artifacts common to other image scaling algorithms. For example, staircase artifacts on diagonal lines and curves are eliminated.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "35248", "title": "2D computer graphics", "section": "Section::::Scaling.\n", "start_paragraph_id": 43, "start_character": 0, "end_paragraph_id": 43, "end_character": 604, "text": "A scaling in the most general sense is any affine transformation with a diagonalizable matrix. It includes the case that the three directions of scaling are not perpendicular. It includes also the case that one or more scale factors are equal to zero (projection), and the case of one or more negative scale factors. The latter corresponds to a combination of scaling proper and a kind of reflection: along lines in a particular direction we take the reflection in the point of intersection with a plane that need not be perpendicular; therefore it is more general than ordinary reflection in the plane.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2937077", "title": "Image scaling", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 621, "text": "When scaling a vector graphic image, the graphic primitives that make up the image can be scaled using geometric transformations, with no loss of image quality. When scaling a raster graphics image, a new image with a higher or lower number of pixels must be generated. In the case of decreasing the pixel number (scaling down) this usually results in a visible quality loss. From the standpoint of digital signal processing, the scaling of raster graphics is a two-dimensional example of sample-rate conversion, the conversion of a discrete signal from a sampling rate (in this case the local sampling rate) to another.\n", "bleu_score": null, "meta": null } ] } ]
null
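A short, self-contained sketch (mine, not from the thread above) of the interpolation described in the second answer: for every destination pixel, map back to a fractional source position, then either take the nearest source pixel or blend the surrounding 2x2 block (bilinear). The image is a plain list of rows, so no imaging library is required; real scalers also pre-filter when downscaling to avoid aliasing.

```python
def resize(img, new_w, new_h, mode="bilinear"):
    """Resample a grayscale image (list of rows) to new_w x new_h."""
    old_h, old_w = len(img), len(img[0])
    out = [[0.0] * new_w for _ in range(new_h)]
    for y in range(new_h):
        for x in range(new_w):
            # Map the destination pixel back to fractional source coordinates.
            sx = x * (old_w - 1) / max(new_w - 1, 1)
            sy = y * (old_h - 1) / max(new_h - 1, 1)
            if mode == "nearest":
                out[y][x] = img[round(sy)][round(sx)]
            else:  # bilinear: weigh the four nearest source pixels by distance
                x0, y0 = int(sx), int(sy)
                x1, y1 = min(x0 + 1, old_w - 1), min(y0 + 1, old_h - 1)
                fx, fy = sx - x0, sy - y0
                top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
                bottom = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
                out[y][x] = top * (1 - fy) + bottom * fy
    return out

src = [[0, 64, 128, 255]] * 4          # a 4x4 horizontal gradient
print(resize(src, 2, 2, mode="nearest"))
print(resize(src, 2, 2, mode="bilinear"))
```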
28xos6
How exactly do organisms evolve and adapt to their environment?
[ { "answer": "They don't.\n\nThere's a bit of variation between individuals of a generation, and traits that prove beneficial have a better chance of passing on than traits that don't.\n\n\nIf 95% of frogs of a species are green and 5% are red, so long as green is the preferred camouflage color, the green frogs will reproduce more on account of the red ones getting eaten.\n\nIf, say, an environmental event occurs that causes red to be a better form of camouflage, you'll see more and more red frogs surviving to breed every generation, as the green frogs try to hide, fail, and get eaten.\n\nIt might take generations upon generations, but eventually you might see a day where most of the frogs are red.", "provenance": null }, { "answer": "When an organism has offspring it passes on the genetic code (DNA in this example). During this passing down mutations occur, this causes the offspring organism to be slightly different. This difference may help the organism survive or may hurt its chance at survival, its random mutation. The body doesn't learn what the environment is and design mutations off of that, it is random mutations.\n\nThe environment selects which mutations are beneficial and which ones are not. Imagine some rabbits in a white, snow field. If all the rabbits are brown and one was to gain a mutation that made it white, it would be able to camouflage more effectively and hide from predators more effectively than the other rabbits. Of course if the rabbit had gained a mutation that had made it darker it would stand out and have less chance of survival. At the end of this scenario the white rabbit is much more likely to contribute more offspring to the next generation, then each of them will have a higher success rate than their brown challengers. \n\nThis process is called natural selection. It is the environment choosing the best suited to survive to pass on more genes to the following generation. Eventually the population will be flooded with these 'beneficial' genes.\n\nEdit: I will be very happy to answer any follow up questions you may have whether it's about something I wrote or something else you may be wondering about evolution/natural selection.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "942048", "title": "Adaptation", "section": "Section::::Types.:Genetic change.\n", "start_paragraph_id": 31, "start_character": 0, "end_paragraph_id": 31, "end_character": 868, "text": "Habitats and biota do frequently change. Therefore, it follows that the process of adaptation is never finally complete. Over time, it may happen that the environment changes little, and the species comes to fit its surroundings better and better. On the other hand, it may happen that changes in the environment occur relatively rapidly, and then the species becomes less and less well adapted. Seen like this, adaptation is a genetic \"tracking process\", which goes on all the time to some extent, but especially when the population cannot or does not move to another, less hostile area. Given enough genetic change, as well as specific demographic conditions, an adaptation may be enough to bring a population back from the brink of extinction in a process called evolutionary rescue. 
Adaptation does affect, to some extent, every species in a particular ecosystem.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22052921", "title": "Adaptive behavior (ecology)", "section": "Section::::Non-heritable.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 697, "text": "Populations change through the process of evolution. Each individual in a population has a unique role in their particular environment. This role, commonly known as an ecological niche, is simply how an organism lives in an environment in relation to others. Over successive generations, the organism must adapt to their surrounding conditions in order to develop their niche. An organism's niche will evolve as changes in the external environment occur. The most successful species in nature are those that are able to use adaptive behaviors to build on prior knowledge, thereby increasing their overall knowledge bank. In turn, this will improve their overall survival and reproductive success.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9236", "title": "Evolution", "section": "Section::::Outcomes.\n", "start_paragraph_id": 71, "start_character": 0, "end_paragraph_id": 71, "end_character": 612, "text": "Evolution influences every aspect of the form and behaviour of organisms. Most prominent are the specific behavioural and physical adaptations that are the outcome of natural selection. These adaptations increase fitness by aiding activities such as finding food, avoiding predators or attracting mates. Organisms can also respond to selection by cooperating with each other, usually by aiding their relatives or engaging in mutually beneficial symbiosis. In the longer term, evolution produces new species through splitting ancestral populations of organisms into new groups that cannot or will not interbreed.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12266", "title": "Genetics", "section": "Section::::Genetic change.:Natural selection and evolution.\n", "start_paragraph_id": 72, "start_character": 0, "end_paragraph_id": 72, "end_character": 416, "text": "Over many generations, the genomes of organisms can change significantly, resulting in evolution. In the process called adaptation, selection for beneficial mutations can cause a species to evolve into forms better able to survive in their environment. New species are formed through the process of speciation, often caused by geographical separations that prevent populations from exchanging genes with each other.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1962246", "title": "Geobiology", "section": "Section::::Important concepts.:Co-evolution of life and Earth.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 368, "text": "Along with standard biological evolution, life and planet co-evolve. 
Since the best adaptations are those that suit the ecological niche that the organism lives in, the physical and chemical characteristics of the environment drive the evolution of life by natural selection, but the opposite can also be true: with every advent of evolution, the environment changes.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "56284235", "title": "Constructive development (biology)", "section": "Section::::Key themes of constructive development.:Developmental environments are constructed.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 266, "text": "In the course of development, organisms help shape their internal and external environment, and in this way, influence their own development. Organisms also construct developmental environments for their offspring through various forms of extra-genetic inheritance.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13571938", "title": "Plant evolutionary developmental biology", "section": "Section::::Mechanisms and players in evolution.\n", "start_paragraph_id": 67, "start_character": 0, "end_paragraph_id": 67, "end_character": 480, "text": "While environmental factors are significantly responsible for evolutionary change, they act merely as agents for natural selection. Some of the changes develop through interactions with pathogens. Change is inherently brought about via phenomena at the genetic level – mutations, chromosomal rearrangements and epigenetic changes. While the general types of mutations hold true across the living world, in plants, some other mechanisms have been implicated as highly significant.\n", "bleu_score": null, "meta": null } ] } ]
null
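A toy simulation (an illustration, not from the answers above) of the green/red frog example in the first answer: each generation, individuals survive with a probability set by how well their colour matches the environment, and the survivors reproduce with a small chance of a colour-flipping mutation. The survival probabilities and the 1% mutation rate are made-up numbers chosen only to show the population shifting after the environment changes.

```python
import random

random.seed(1)

# Made-up survival odds: how likely each colour is to escape predators.
SURVIVAL = {"green_world": {"green": 0.9, "red": 0.4},
            "red_world":   {"green": 0.4, "red": 0.9}}
MUTATION_RATE = 0.01  # chance an offspring flips colour

def next_generation(pop, environment, size=1000):
    survivors = [c for c in pop if random.random() < SURVIVAL[environment][c]]
    offspring = []
    for _ in range(size):
        child = random.choice(survivors)
        if random.random() < MUTATION_RATE:
            child = "red" if child == "green" else "green"
        offspring.append(child)
    return offspring

pop = ["green"] * 950 + ["red"] * 50      # start: 95% green, 5% red
for gen in range(40):
    env = "green_world" if gen < 20 else "red_world"   # environment flips at generation 20
    pop = next_generation(pop, env)
    if gen % 10 == 9:
        print(gen + 1, env, f"{pop.count('red') / len(pop):.0%} red")
```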
7upuj5
Why are galaxies the colour they are?
[ { "answer": "The colors gives insight in the composition of the star population of the galaxy.\n\nBlue regions are composed mostly of young hot stars, while red regions are older, cooler stars.\n\nYou will often also see smaller, pink spots. These are huge clouds of hydrogen, the color stems from the characteristic emission of hydrogen gas, which you can see in the [balmer series](_URL_0_). The HII line is the bright red emission on the right side of the spectrum.\n\nWhen you look at galaxies in other wavelengths than the visible light, you can get a lot mor information about the galaxies composition, like radio or gamma emission. But those do not contribute to the visible appearance of the galaxy.\n\nIt should also be noted, that most galaxies are not visible to the naked eye at all. They are so far away, that the expansion of the universe stretched their light into and beyond the infrared spectrum, and can only be seen by specialized equipment. Images of galaxies are then \"converted\" to what a human might see if he were close enough. But it's always an artists impression.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "964475", "title": "Dwarf galaxy", "section": "Section::::Blue compact dwarf galaxies.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 497, "text": "In astronomy, a blue compact dwarf galaxy (BCD galaxy) is a small galaxy which contains large clusters of young, hot, massive stars. These stars, the brightest of which are blue, cause the galaxy itself to appear blue in colour. Most BCD galaxies are also classified as dwarf irregular galaxies or as dwarf lenticular galaxies. Because they are composed of star clusters, BCD galaxies lack a uniform shape. They consume gas intensely, which causes their stars to become very violent when forming.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12212921", "title": "Galaxy Zoo", "section": "Section::::Galactic bars and bulges.\n", "start_paragraph_id": 49, "start_character": 0, "end_paragraph_id": 49, "end_character": 1041, "text": "Some spiral galaxies have central bar-shaped structures composed of stars. These galaxies are called 'barred spirals' and have been investigated by Galaxy Zoo in several studies. It is unclear why some spiral galaxies have bars and some do not. Galaxy Zoo research has shown that red spirals are about twice as likely to host bars as blue spirals. These colours are significant. Blue galaxies get their hue from the hot young stars they contain, implying that they are forming stars in large numbers. In red galaxies, this star formation has stopped, leaving behind the cooler, long-lived stars that give them their red colour. Karen Masters, a scientist involved in the studies, stated: \"For some time data have hinted that spirals with more old stars are more likely to have bars, but with such a large number of bar classifications we're much more confident about our results. 
It's not yet clear whether the bars are some side effect of an external process that turns spiral galaxies red, or if they alone can cause this transformation.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12212921", "title": "Galaxy Zoo", "section": "Section::::Blue ellipticals and red spirals.\n", "start_paragraph_id": 41, "start_character": 0, "end_paragraph_id": 41, "end_character": 843, "text": "Mainstream astronomical theory before Galaxy Zoo held that elliptical (or 'early type') galaxies were red in color and spiral (or 'late type') galaxies were blue in color: several papers published as a result of Galaxy Zoo have proved otherwise. A population of blue ellipticals was found. These are galaxies which have changed their shape from spiral to oval, but still have young stars in them. Indeed, Galaxy Zoo came about through Schawinski's searching for blue elliptical galaxies, as near the end of 2006, he had spent most of his waking hours trying to find these rare galaxies. Blueness in galaxies means that new stars are forming. However ellipticals are almost always red, indicating that they are full of old and dead stars. Thus, blue ellipticals are paradoxical, but give clues to star-formation in different types of galaxies.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3192980", "title": "Mice Galaxies", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 439, "text": "The colors of the galaxy are peculiar. In NGC 4676A a core with some dark markings is surrounded by a bluish white remnant of spiral arms. The tail is unusual, starting out blue and terminating in a more yellowish color, despite the fact that the beginning of each arm in virtually every spiral galaxy starts yellow and terminates in a bluish color. NGC 4676B has a yellowish core and two arcs; arm remnants underneath are bluish as well.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12212921", "title": "Galaxy Zoo", "section": "Section::::Blue ellipticals and red spirals.\n", "start_paragraph_id": 43, "start_character": 0, "end_paragraph_id": 43, "end_character": 792, "text": "It is thought that Red Spirals are galaxies in the process of transition from young to old. They are more massive than blue spirals and are found on the outskirts of large clusters of galaxies. Chris Lintott stated: \"We think what we’re seeing is galaxies that have been gently strangled, so to speak, where somehow the gas supply for star formation has been cut off, but that they’ve been strangled so gently that the arms are still there.\" The cause might be the Red Spiral's gentle interaction with a galaxy cluster. He further explained: \"The kind of thing we’re imagining [is that] as the galaxy moves into a denser environment, there’s lot of gas in clusters as well as galaxies, and it’s possible the gas from the galaxy just gets stripped off by the denser medium it’s plowing into.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1338741", "title": "Peculiar galaxy", "section": "Section::::Formation.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 532, "text": "Scientists hypothesize that many peculiar galaxies are formed by the collision of two or more galaxies. As such, peculiar galaxies tend to host more active galactic nuclei than normal galaxies, indicating that they contain supermassive black holes. 
Many peculiar galaxies experience starbursts, or episodes of rapid star formation, due to the galaxies merging. The periods of elevated star formation and the luminosity resulting from active galactic nuclei cause peculiar galaxies to be slightly bluer in color than other galaxies.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37419951", "title": "NGC 3738", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 213, "text": "Blue compact dwarf galaxies are blue in appearance because of the large cluster of hot massive stars. The galaxies are relatively dim and appear to be irregular in shape. They are typically chaotic in appearance.\n", "bleu_score": null, "meta": null } ] } ]
null
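Two quick calculations (illustrative, not from the record above, using standard physical constants) behind the colours mentioned in the answer: the Rydberg formula gives the hydrogen Balmer lines, including the red H-alpha line near 656 nm that tints HII regions pink, and Wien's displacement law shows why hot young stars peak toward the blue while cool old stars peak toward the red.

```python
RYDBERG = 1.0973731568e7   # Rydberg constant, m^-1
WIEN_B = 2.897771955e-3    # Wien's displacement constant, m*K

def balmer_wavelength_nm(n_upper):
    """Wavelength of the hydrogen transition n_upper -> 2 (Balmer series)."""
    inv_wavelength = RYDBERG * (1 / 2**2 - 1 / n_upper**2)
    return 1e9 / inv_wavelength

def wien_peak_nm(temperature_k):
    """Peak wavelength of a blackbody at the given temperature."""
    return WIEN_B / temperature_k * 1e9

for n in (3, 4, 5):
    print(f"Balmer n={n} -> 2: {balmer_wavelength_nm(n):.0f} nm")
# n=3 gives ~656 nm (H-alpha, red), n=4 ~486 nm, n=5 ~434 nm.

print(f"hot young star (10,000 K) peaks near {wien_peak_nm(10_000):.0f} nm (blue/ultraviolet)")
print(f"cool old star (3,500 K) peaks near {wien_peak_nm(3_500):.0f} nm (red/near-infrared)")
```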
jql28
Why wouldn't it be possible to set up a solar-heated steam pipeline that also removes salt, to get water to rural parts of Africa?
[ { "answer": "I guess because they can't afford it. ", "provenance": null }, { "answer": "In a lot of places it isn't the salt, it's microbes and general filth that makes the water bad. They have filters to get rid of this stuff, and a number of charities are handing them out currently. Just google water filter africa and you'll get piles of results.", "provenance": null }, { "answer": "Desalination is extremely expensive, even for wealthy countries.", "provenance": null }, { "answer": "I guess because they can't afford it. ", "provenance": null }, { "answer": "In a lot of places it isn't the salt, it's microbes and general filth that makes the water bad. They have filters to get rid of this stuff, and a number of charities are handing them out currently. Just google water filter africa and you'll get piles of results.", "provenance": null }, { "answer": "Desalination is extremely expensive, even for wealthy countries.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2714658", "title": "Compass Minerals", "section": "Section::::Production methods and facilities.:Solar evaporation.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 558, "text": "Solar evaporation is the oldest and most energy-efficient method of mineral production. At the Great Salt Lake near Ogden, Utah, Compass Minerals draws naturally occurring brine out of the lake into shallow ponds and allows solar evaporation to produce salt, sulfate of potash (SOP) and magnesium chloride. Its SOP plant at the Great Salt Lake is the largest in North America and one of only three SOP brine solar evaporation operations in the world. Annual capacity is 350,000 tons of SOP, 1.5 million tons of salt, and 750,000 tons of magnesium chloride. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25398021", "title": "Renewable energy in Morocco", "section": "Section::::Industry.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 416, "text": "Despite huge wind and solar potential, it is too early to say when Morocco could begin exporting renewable electricity to Europe from projects such as the $400 billion Desertec initiative. It is unclear whether the Desertec consortium's planned investment in solar thermal energy across North Africa could go into Morocco or how much power could eventually be exported to Europe. Desertec's plans are likely to need\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11492908", "title": "Renewable energy in Africa", "section": "Section::::Renewable energy use.:Solar power.\n", "start_paragraph_id": 38, "start_character": 0, "end_paragraph_id": 38, "end_character": 513, "text": "Some plans exist to build solar farms in the deserts of North Africa to supply power for Europe. The Desertec project, backed by several European energy companies and banks, planned to generate renewable electricity in the Sahara desert and distribute it through a high-voltage grid for export to Europe and local consumption in North-Africa. Ambitions seek to provide continental Europe with up to 15% of its electricity. 
The TuNur project would supply 2GW of solar generated electricity from Tunisia to the UK.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11492908", "title": "Renewable energy in Africa", "section": "Section::::Renewable energy use.:Solar water pumping.\n", "start_paragraph_id": 43, "start_character": 0, "end_paragraph_id": 43, "end_character": 589, "text": "With a minimum of training in operation and maintenance, solar powered water pumping and purification systems have the potential to help rural Africans fulfill one of their most basic needs for survival. Further field test are in progress by organizations like KARI and the many corporations that manufacture the products needed, and these small-scale applications of solar technology are promising. Combined with sustainable agricultural practices and conservation of natural resources, solar power is a prime candidate to bring the benefits of technology to the parched lands of Africa.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "651372", "title": "Evaporative cooler", "section": "Section::::Applications.:Other examples.\n", "start_paragraph_id": 25, "start_character": 0, "end_paragraph_id": 25, "end_character": 707, "text": "Simple evaporative cooling devices such as evaporative cooling chambers (ECCs) and clay pot coolers, or pot-in-pot refrigerators,  are simple and inexpensive ways to keep vegetables fresh without the use of electricity. Several hot and dry regions throughout the world could potentially benefit from evaporative cooling, including North Africa, the Sahel region of Africa, the Horn of Africa, southern Africa, the Middle East, arid regions of South Asia, and Australia. Benefits of evaporative cooling chambers for many rural communities in these regions include reduced post-harvest loss, less time spent traveling to the market, monetary savings, and increased availability of vegetables for consumption.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "156787", "title": "Desalination", "section": "Section::::Methods.:Reverse osmosis.\n", "start_paragraph_id": 25, "start_character": 0, "end_paragraph_id": 25, "end_character": 397, "text": "Off-grid solar-powered desalination units use solar energy to fill a buffer tank on a hill with seawater. The reverse osmosis process receives its pressurized seawater feed in non-sunlight hours by gravity, resulting in sustainable drinking water production without the need for fossil fuels, an electricity grid or batteries.Nano-tubes are also used for the same function (i.e, Reverse Osmosis).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "198725", "title": "Drinking water", "section": "Section::::Water quality.:Water treatment.:Point of use methods.\n", "start_paragraph_id": 79, "start_character": 0, "end_paragraph_id": 79, "end_character": 205, "text": "Solar water disinfection is a low-cost method of purifying water that can often be implemented with locally available materials. Unlike methods that rely on firewood, it has low impact on the environment.\n", "bleu_score": null, "meta": null } ] } ]
null
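A rough back-of-the-envelope comparison (mine, not from the thread above; the reverse-osmosis figure is a commonly cited approximation, not a measured value) of why simple thermal distillation is so costly: boiling off seawater takes the latent heat of vaporization, about 2,260 kJ per kilogram, so distilling one cubic metre with no heat recovery needs on the order of 600 kWh of heat, versus roughly 3-5 kWh of electricity per cubic metre for a modern reverse-osmosis plant.

```python
LATENT_HEAT_KJ_PER_KG = 2260      # latent heat of vaporization of water near 100 C
KG_PER_M3 = 1000                  # mass of one cubic metre of water

heat_kj = LATENT_HEAT_KJ_PER_KG * KG_PER_M3   # energy to boil off 1 m^3 with no heat recovery
heat_kwh = heat_kj / 3600                     # 1 kWh = 3600 kJ

reverse_osmosis_kwh = 4                       # commonly cited ~3-5 kWh per m^3 for modern plants

print(f"naive solar distillation: ~{heat_kwh:.0f} kWh of heat per m^3")       # ~628 kWh
print(f"reverse osmosis:          ~{reverse_osmosis_kwh} kWh of electricity per m^3")
# Real solar stills recover some heat, but the gap illustrates why desalination
# plus long-distance pipelines is a major engineering and funding problem.
```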
3tey0h
How do freshwater fish populations spread from one river network to another without going through (presumably deadly) seawater?
[ { "answer": "I wondered that about fresh water mussels. So I asked. They have a free living form which basically hitches a ride in the gills of fish. This means the same species can be in one river system even though they are mussels.\n\nSimilar events can occur for other species. Egrets and other fish eating birds will fly from one river system to another. Over geological time species can be transferred. All watersheds border each other so it is not travel over hundreds of miles but perhaps ten miles. Larva can remain in a bird's mouth during such a flight.", "provenance": null }, { "answer": "Fresh water fish have a membrane transporter on the apical side of the cell (facing the water) that transports Na+ from outside their body into the cell. Some fish when they encounter salt water, have the ability to switch that protein transporter to the basal side, thus pumping Na+ from inside their body, to their epithelial/skin layer so that it can be easily dispersed into the surrounding salt water. ", "provenance": null }, { "answer": "The most likely mechanism is traveling between watersheds during floods. This is especially prevalent in low-elevation areas like the southeastern US. There are numerous watersheds that, at some point, are divided by small elevational differences (inches). For example, for a fish to get from the delta of the Mobile River in Alabama to the delta of the Pascagoula River in the Mississippi, all you would need is one large hurricane that dumps 10-20 inches of rain. Everything would be underwater, and the rivers would be flooding so large that the ocean around the area would be quite diluted. Think of it this way - the Amazon river dumps so much fresh water (which floats for a long time before mixing) into the Atlantic that you can drink it before you can even *see* land. Evolution and migration occur over long time scales, so the probability of eventually having a freshwater corridor between nearby southeastern rivers because of a hurricane is 100%.\n\nIt gets a bit trickier in areas with more geographical relief, like the western USA. As someone else mentioned, ice dams and glaciers can have crazy but temporary effects on the direction that a stream flows, but this only would facilitate movement of coldwater fish, which are not that diverse. The most likely mechanism in steep locations is \"stream capture\". It is similar to how a meandering river eventually cuts through an oxbow. However, in some instances (again, over geologic time), a river can cut through a valley wall until it actually flows completely into another basin. The Colorado River once did this into the San Andreas Fault during the early 1900's and created the Salton Sea. This can provide a conduit for fish exchange. There are even some locations ([Two Ocean Pass](_URL_0_)) where a stream flows in BOTH directions (To the Atlantic and Pacific Oceans) at the same time!\n\nHowever, the probability of this happening is much lower than big floods, so there are many steep places in the world that are fishless. 
Other places it has likely happened only once or twice, so you can have a very unique community of fish evolve (endemic species) because there is so little gene exchange.\n\nBut by definition, you can assume that where you see the same species in adjacent watersheds, there is some natural gene exchange through time, or they would by default become different species just because of different environmental pressures and genetic drift.\n", "provenance": null }, { "answer": "Another reason is that a lot of those fish have been implanted by the department of fish and wildlife. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "31356771", "title": "Megalocytivirus", "section": "Section::::Transmission and epizoology.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 891, "text": "A second potential mechanism for accidental movement of infected fish is the international trade in ornamental or aquarium fishes, which includes the global trade of approximately 5000 freshwater and 1450 saltwater fishes. Each year over 1 billion individual fish are shipped among more than 100 nations, creating a serious concern for the spread of megalocytiviruses as well as other important fish pathogens. There is already substantial evidence of this problem: megalocytiviruses which are genetically identical or extremely similar to ISKNV have been isolated from ornamental fishes (gouramis) that were being traded internationally. Furthermore, an Australian outbreak of megalocytivirus among farmed Murray cod (\"Maccullochella peelii\") was linked to imported gouramis in pet shops. In addition, a 2008 study reported 10 aquarium fish species that tested positive for ISKNV in Korea.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "35640555", "title": "Diseases and parasites in salmon", "section": "Section::::Interaction with humans.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 512, "text": "The European Commission (2002) concluded “The reduction of wild salmonid abundance is also linked to other factors but there is more and more scientific evidence establishing a direct link between the number of lice-infested wild fish and the presence of cages in the same estuary.” It is reported that wild salmon on the west coast of Canada are being driven to extinction by sea lice from nearby salmon farms. Antibiotics and pesticides are often used to control the diseases and parasites, as well as lasers.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "34970786", "title": "Moses-Saunders Power Dam", "section": "Section::::Background.:Negative impacts.:Environmental Impacts.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 889, "text": "Flooding and pollution have affected fish populations on the river and in Lake St. Lawrence. Northern Pike, Walleye, Muskellunge, Lake Sturgeon and American eel have been affected. The loss of spawning grounds is also believed to have contributed to drops in their populations. Recent efforts have stabilized or increased much of the populations. R.H. Saunders Generating Station has a ladder made in a decommissioned ice sluce designed for juvenile American Eels to head upriver, across the generating station. At long and high, it was the only one in North America and the tallest in the world at the time. In recent years, it has been upgraded and extended in length. 
OPG maintains a trap and transport program with local commercial fisherman for downstream migration. From 2006 to 2011, approximately four million young eels crossed into the upper St. Lawrence River and Lake Ontario.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7954867", "title": "Quebec", "section": "Section::::Geography.:Wildlife.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 305, "text": "Inland waters are populated by small to large fresh water fish, such as the largemouth bass, the American pickerel, the walleye, the \"Acipenser oxyrinchus\", the muskellunge, the Atlantic cod, the Arctic char, the brook trout, the \"Microgadus tomcod\" (tomcod), the Atlantic salmon, the rainbow trout, etc.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "55295780", "title": "Flamicell", "section": "Section::::Fish.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 338, "text": "The main fish that live in this river is the common trout. It has been confirmed that the 95% of this species, opposite from the salmon, do not reach the sea. Also the 95% of the common trout that live in this river have an average movement of fifty meters. Which means that only a 5% of them travel for distances longer than 500 meters.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3267286", "title": "Lahontan cutthroat trout", "section": "Section::::Human history.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 339, "text": "Upstream populations have been isolated and decimated by poorly managed grazing and excessive water withdrawals for irrigation, as well as by hybridization, competition, and predation by non-native salmonids. This is important, as although Lahontan cutthroat trout can inhabit either lakes or streams, they are obligatory stream spawners.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "41566466", "title": "Fish species of the Neretva basin", "section": "Section::::Allochthonous fishes.:Invasive salmonids.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 630, "text": "Like in many rivers around Europe, there are some introduced salmonid fish species in the Neretva. Of these only grayling (\"Thymallus thymallus\") established stable population so far, while the more harmful rainbow trout (\"Oncorhynchus mykiss\") had lower survival rate and accordingly low population growth and small size. Brook trout (\"Salvelinus fontinalis\") and lake trout (\"Salvelinus namaycush\") have also recently been introduced to almost all of the Neretva basin reservoirs, but had only moderate to low success in establishing stable populations. At least for now populations of these invasive salmonids are rather weak.\n", "bleu_score": null, "meta": null } ] } ]
null
81r9m2
Do we experience atmospheric tides? Do the molecules in the air get dragged by the Moon's gravity, causing “deeper” periods of time?
[ { "answer": "I have recently encountered something called \"Density altitude\" Which has to do with the change in atmospheric density due to temperature extremes. To address the moon question directly, googling atmospheric tide yielded a [Wikipedia page](_URL_0_) that may be a good place to start.", "provenance": null }, { "answer": "The moon does indeed effect the atmosphere, but orders of magnitude less than it does the ocean. Tidal effects form pressure waves of about 100 microbars, or about 0.01% the atmospheric pressure at sea level. That's only perceptible to scientific instruments, and practically background noise compared to the regular variations in atmospheric pressure that occur due to weather and the effects of the sun.\n\n_URL_0_\n\n_URL_1_", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "35707126", "title": "Ionospheric dynamo region", "section": "Section::::Atmospheric Tides.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 716, "text": "Atmospheric tides are global-scale waves excited by regular solar differential heating (thermal tides) or by the gravitational tidal force of the moon (gravitational tides). The atmosphere behaves like a huge waveguide closed at the bottom (the Earth's surface) and open to space at the top. In such a waveguide an infinite number of atmospheric wave modes can be excited. Because the waveguide is imperfect, however, only modes of lowest degree with large horizontal and vertical scales can develop sufficiently well so that they can be filtered out from the meteorological noise. They are solutions of the Laplace equation and are called Hough functions. These can be approximated by a sum of spherical harmonics.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10151726", "title": "Atmospheric tide", "section": "Section::::General characteristics.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 393, "text": "At ground level, atmospheric tides can be detected as regular but small oscillations in surface pressure with periods of 24 and 12 hours. However, at greater heights, the amplitudes of the tides can become very large. In the mesosphere (heights of ~ 50–100 km) atmospheric tides can reach amplitudes of more than 50 m/s and are often the most significant part of the motion of the atmosphere.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10151726", "title": "Atmospheric tide", "section": "Section::::Lunar atmospheric tides.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 325, "text": "Atmospheric tides are also produced through the gravitational effects of the Moon. \"Lunar (gravitational) tides\" are much weaker than \"solar (thermal) tides\" and are generated by the motion of the Earth's oceans (caused by the Moon) and to a lesser extent the effect of the Moon's gravitational attraction on the atmosphere.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "801420", "title": "Atmospheric physics", "section": "Section::::Atmospheric tide.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 424, "text": "i) Atmospheric tides are primarily excited by the Sun's heating of the atmosphere whereas ocean tides are primarily excited by the Moon's gravitational field. 
This means that most atmospheric tides have periods of oscillation related to the 24-hour length of the solar day whereas ocean tides have longer periods of oscillation related to the lunar day (time between successive lunar transits) of about 24 hours 51 minutes.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30718", "title": "Tide", "section": "Section::::Other tides.:Atmospheric tides.\n", "start_paragraph_id": 156, "start_character": 0, "end_paragraph_id": 156, "end_character": 305, "text": "Atmospheric tides are negligible at ground level and aviation altitudes, masked by weather's much more important effects. Atmospheric tides are both gravitational and thermal in origin and are the dominant dynamics from about , above which the molecular density becomes too low to support fluid behavior.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10151726", "title": "Atmospheric tide", "section": "Section::::General characteristics.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 500, "text": "BULLET::::1. Atmospheric tides are primarily excited by the Sun's heating of the atmosphere whereas ocean tides are excited by the Moon's gravitational pull and to a lesser extent by the Sun's gravity. This means that most atmospheric tides have periods of oscillation related to the 24-hour length of the solar day whereas ocean tides have periods of oscillation related both to the solar day as well as to the longer lunar day (time between successive lunar transits) of about 24 hours 51 minutes.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2138223", "title": "Aeronomy", "section": "Section::::Atmospheric tides.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 551, "text": "Atmospheric tides are global-scale periodic oscillations of the atmosphere. In many ways they are analogous to ocean tides. Atmospheric tides form an important mechanism for transporting energy input into the lower atmosphere from the upper atmosphere, while dominating the dynamics of the mesosphere and lower thermosphere. Therefore, learning about atmospheric tides is essential in understanding the atmosphere as a whole. Modeling and observations of atmospheric tides are needed in order to monitor and predict changes in the Earth's atmosphere.\n", "bleu_score": null, "meta": null } ] } ]
null
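The second answer in the atmospheric-tides entry above quotes a lunar tidal pressure amplitude of about 100 microbars and calls it roughly 0.01% of sea-level pressure. A minimal sanity check of that figure, assuming a standard sea-level pressure of 1013.25 hPa (the answer does not state which reference pressure it used):

```python
# Rough sanity check of the "100 microbars is ~0.01% of sea-level pressure" claim
# quoted above. The 1013.25 hPa reference value is an assumption; the original
# answer does not say which sea-level pressure it used.

LUNAR_TIDE_AMPLITUDE_UBAR = 100.0   # quoted tidal pressure amplitude, microbars
SEA_LEVEL_PRESSURE_HPA = 1013.25    # standard atmosphere, hPa (assumed)

sea_level_pressure_ubar = SEA_LEVEL_PRESSURE_HPA * 1000.0  # 1 hPa = 1000 microbar
relative_amplitude = LUNAR_TIDE_AMPLITUDE_UBAR / sea_level_pressure_ubar

print(f"Relative amplitude: {relative_amplitude:.6f} ({relative_amplitude * 100:.4f}%)")
# -> roughly 0.0001, i.e. about 0.01% of sea-level pressure, matching the answer.
```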
8v8fmp
what is the difference between losing weight because I am not eating any food vs losing weight while maintaining a healthy diet and working out to burn more calories?
[ { "answer": "Losing weight is easy by not eating. But you will feel weak from lack of nutrition and you may still not have a good body composition. Working out and eating well you will feel energised and strong and will end up with a healthy body composition. \n\nBody composition is how much fat and muscle you have in your body. You may maintain the same weight your whole life but if you never workout your body composition will change as your muscle atrophy with age. Working out will help maintain your body, your mind and strength so you will still be able to tie your own shoes when you're 80.", "provenance": null }, { "answer": "I was 134 kg and 62 years old. I tried the eat healthy and exercise option. I had a personal trainer to push me. After a year no significant progress except I was a bit fitter. I'd read and had tonnes of advice. But it felt all so much like blah blah because when I followed it nothing much happened. So I just stopped all that and went to eating 1 meal a day in the evening. No carbos, no sugars, no salt, no alcohol, nothing in the fridge to tempt me, no substitutes like vitamin pills. Just vegetables, meat or fish. My dietitian told me I was being irresponsible insisting I follow her recommended diet. I said yes but did no. She was happier that way. She kept on measuring me every 3 months and felt good with the results. I didn't feel lethargic. Test results showed no issues with nutrition. I felt more than sufficiently energized and able to do everything. I don't go to any gym. I am not into fitness. I stand on the weigh scale every day. My blood pressure fell. My diabetes disappeared. My heart irregularities fell away. I lost 30 kg in 6 months. I have held the weight loss and extended it for 24 months without any difficulty. I got down to 101 kg. My weight can oscillate up and down 3 or 4 kgs. When it goes up and I feel I am slipping I simply cut back on food intake. My body has adapted naturally to less food. I rarely feel hungry. I often feel tempted especially in supermarkets and restaurants with all the things on offer and what others are loading up on. That for me is the difference between the 2 options.", "provenance": null }, { "answer": "If your only goal is weight loss there is no difference. But if the goal is also to have a nice, healthy looking physique, and to feel better, have more energy, a mix of good diet and exercise is best.", "provenance": null }, { "answer": "Fundamentally there is no difference.\n\nYou lose weight in the kitchen. Exercise is only loosely related.\n\nIf in your two cases you are running a calorie deficit, ie eating fewer calories than you burn you will lose weight. \n\nWhere starvation diets harm you is a lack of adequate nutrition which can lead to vitamin deficiencies.\n\nStarvation diets tend to encourage unhealthy eating patterns, fasting followed by binge eating. These patterns tend to continue after the diet finishes, leading to weight gain.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "400199", "title": "Weight loss", "section": "Section::::Myths.\n", "start_paragraph_id": 56, "start_character": 0, "end_paragraph_id": 56, "end_character": 740, "text": "Some popular beliefs attached to weight loss have been shown to either have less effect on weight loss than commonly believed or are actively unhealthy. 
According to Harvard Health, the idea of metabolism being the \"key to weight\" is \"part truth and part myth\" as while metabolism does affect weight loss, external forces such as diet and exercise have an equal effect. They also commented that the idea of changing one's rate of metabolism is under debate. Diet plans in fitness magazines are also often believed to be effective, but may actually be harmful by limiting the daily intake of important calories and nutrients which can be detrimental depending on the person and are even capable of driving individuals away from weight loss.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "48173169", "title": "Global Energy Balance Network", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 467, "text": "GEBN's view of weight and metabolic health promoted the idea that weight loss can be achieved by taking more exercise while maintaining the same level of consumption - this view \"crosses a line by advancing a view that falls outside the scientific consensus\", and presents an overly simplistic view of the energy balance equation, with experts noting that \"evidence for eating less as a weight-loss strategy is much, much stronger than the evidence for moving more\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "400199", "title": "Weight loss", "section": "Section::::Intentional.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 246, "text": "Weight loss is achieved by adopting a lifestyle in which fewer calories are consumed than are expended. According to the UK National Health Service this is best achieved by monitoring calories eaten and supplementing this with physical exercise.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8460", "title": "Dieting", "section": "Section::::Nutrition.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 558, "text": "One of the most important things to take into consideration when either trying to lose or put on weight is output versus input. It is important to know the amount of energy your body is using every day, so that your intake fits the needs of one's personal weight goal. Someone wanting to lose weight would want a smaller energy intake than what they put out. There is increasing research-based evidence that low-fat vegetarian diets consistently lead to healthy weight loss and management, a decrease in diabetic symptoms as well as improved cardiac health.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8460", "title": "Dieting", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 474, "text": "A study published in \"American Psychologist\" found that short-term dieting involving \"severe restriction of calorie intake\" does not lead to \"sustained improvements in weight and health for the majority of individuals\". Other studies have found that the average individual maintains some weight loss after dieting. 
Weight loss by dieting, while of benefit to those classified as unhealthy, may slightly increase the mortality rate for individuals who are otherwise healthy.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1149933", "title": "Weight gain", "section": "Section::::Description.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 409, "text": "A commonly asserted \"rule\" for weight gain or loss is based on the assumption that one pound of human fat tissue contains about 3,500 kilocalories (often simply called \"calories\" in the field of nutrition). Thus, eating 500 fewer calories than one needs per day should result in a loss of about a pound per week. Similarly, for every 3500 calories consumed above the amount one needs, a pound will be gained.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2230412", "title": "Empty calories", "section": "Section::::Allowable intake without impacting health.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 569, "text": "Food energy intake must be balanced with activity to maintain a proper body weight. Sedentary individuals and those eating less to lose weight may suffer malnutrition if they eat food supplying empty calories but not enough nutrients. In contrast, people who engage in heavy physical activity need more food energy as fuel, and so can have a larger amount of calorie-rich, essential nutrient-poor foods. Dietitians and other healthcare professionals prevent malnutrition by designing eating programs and recommending dietary modifications according to patient's needs.\n", "bleu_score": null, "meta": null } ] } ]
null
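The "Weight gain" provenance passage in the entry above quotes the commonly asserted rule that a pound of fat corresponds to about 3,500 kcal, so a 500 kcal/day deficit gives roughly a pound of loss per week. A minimal sketch of that arithmetic, with the same caveat the passage gives: it is a rule of thumb, not a precise physiological model.

```python
# Back-of-the-envelope weekly weight change from a constant daily calorie
# deficit, using the "3,500 kcal per pound of fat" rule quoted in the
# provenance passage above. Real weight change is not this linear.

KCAL_PER_POUND_FAT = 3500.0   # commonly asserted value from the passage

def estimated_weekly_loss_lbs(daily_deficit_kcal: float) -> float:
    """Estimate pounds lost per week for a given daily calorie deficit."""
    return daily_deficit_kcal * 7.0 / KCAL_PER_POUND_FAT

print(estimated_weekly_loss_lbs(500))   # -> 1.0 pound/week, as in the passage
print(estimated_weekly_loss_lbs(1000))  # -> 2.0 pounds/week
```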
4e3108
why are blood stains on fabric so difficult to remove compared to other types of stains?
[ { "answer": "Mostly because it's a complex substance, containing liquid, suspended solids, cells, fats, oils, and a plethora of other compounds. Most other stains are caused by small portions of these types of compounds. \n\nWine is mostly aqueous (water based), grease is lipid (fat) based, etc. Most solvents are good at cleaning up one type of stain because they designed to pick up one type. \n\nBlood, being a mixture of most or all of these types, means many solvents won't work on all of blood's components. ", "provenance": null }, { "answer": "Try using hydrogen peroxide. I'm amazed at how easily it got out a blood stain on the carpet, when my dog had a tumor burst.", "provenance": null }, { "answer": "Blood shouldn't be washed out in hot water, only cold. Hot water cooks the proteins in it and makes it stick.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "40413995", "title": "Mallory's trichrome stain", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 455, "text": "For tissues that are not directly acidic or basic, it can be difficult to use only one stain to reveal the necessary structures of interest. A combination of the three different stains in precise amounts applied in the correct order reveals the details selectively. This is the result of more than just electrostatic interactions of stain with the tissue and the stain not being washed out after each step. Collectively the stains compliment one another.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1750229", "title": "Wood stain", "section": "", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 429, "text": "Gel stains are a late 20th century innovation in stain manufacturing, in that they are high-viscosity liquids and do not 'flow'. This property allows more control during application, particularly when the wood is in a vertical position, which can often cause traditional liquid stains to run, drip, or pool. Gel stains often have limited penetrating ability, as they are thixotropic (a liquid that nevertheless does not 'flow').\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "16831007", "title": "Stain removal", "section": "Section::::Stain removal.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 402, "text": "Another factor in stain removal is the fact that stains can sometimes comprise two separate staining agents, which require separate forms of removal. A machine oil stain could also contain traces of metal, for example.Also of concern is the color of the material that is stained. Some stain removal agents will not only dissolve the stain, but will dissolve the dye that is used to color the material.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4891283", "title": "Cloth menstrual pad", "section": "Section::::Current use.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 455, "text": "Stains sometimes occur. Some women prefer darker colored fabrics which do not show stains as much as light colored fabrics do. Causes of staining do not include allowing the blood to dry, but using hot water when washing the pad will, as hot water sets protein stains (blood). Often, soaking pads for at least 4-6 hours (or overnight) in cold water with an oxygen bleach can assist in stain removal. 
Drying cloth pads in sunlight can help to fade stains.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "56212", "title": "Linen", "section": "Section::::Flax fiber.:Properties.\n", "start_paragraph_id": 41, "start_character": 0, "end_paragraph_id": 41, "end_character": 315, "text": "Linen is a very durable, strong fabric, and one of the few that are stronger wet than dry. The fibers do not stretch, and are resistant to damage from abrasion. However, because linen fibers have a very low elasticity, the fabric eventually breaks if it is folded and ironed at the same place repeatedly over time.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1750229", "title": "Wood stain", "section": "Section::::Composition.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 656, "text": "Stain is composed of the same three primary ingredients as paint (pigment, solvent (or vehicle), and binder) but is predominantly vehicle, then pigment and/or dye, and lastly a small amount of binder. Much like the dyeing or staining of fabric, wood stain is designed to add color to the substrate (wood and other materials) while leaving some of the substrate still visible. Transparent varnishes or surface films are applied afterwards. In principle, stains do not provide a durable surface coating or film. However, because the binders are from the same class of film-forming binders that are used in paints and varnishes, some build-up of film occurs.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3255718", "title": "H&E stain", "section": "Section::::Uses.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 210, "text": "H&E staining does not always provide enough contrast to differentiate all tissues, cellular structures, or the distribution of chemical substances, and in these cases more specific stains and methods are used.\n", "bleu_score": null, "meta": null } ] } ]
null
2eq7ys
How long does it take to boot up a supercomputer?
[ { "answer": "The only 'supercomputer' I have access to is a network of 128 computers, each with some xeon processors and lots of ram. These can be booted simultaneously, and boot into a minimally configured Linux environment, which is very quick. The master node then needs to start the cluster management software and register all the compute nodes and you're more or less good to go.", "provenance": null }, { "answer": "The cluster (SuperComputer) I work in has just over 2500 nodes (computers) with a little over 30,000 computer cores. It doesn't take very long to reboot it. All the nodes boot up simultaneously. So only a few minutes. What takes the longest is to verify that all services are running and all the high availability systems are functioning properly. Beyond that you need to get it into a state where you can start running jobs which involves getting the job scheduler and resource managers running. On my system we run about 80,000 jobs per 24 hour period and have 300-600 jobs running at any given moment utilizing about 95% of the system 24/7. So if we ever need to restart the job scheduler it can take as long as 30-40 minutes to start back up because there are so many jobs in the queue and data that needs to be loaded into the scheduler. Now if you are taking a system that had been running(powered up) for a year and need to completely power it down to do facility electrical work........ that is when it can become a nightmare to get back into production. When you let a large system cool down to room temp and everything stops running you will see a number (sometimes small sometimes large) of component failures. Hard drives will fail, processors will fail, power supplies will fail. The last time we powered our system down it took 2 x 16 hour days to get it back into 100% working order.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "6008500", "title": "CPU time", "section": "Section::::Unix commands for CPU time.:Unix command \"time\".\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 213, "text": "This process took a total of 0.337 seconds of CPU time, out of which 0.327 seconds was spent in user space, and the final 0.010 seconds in kernel mode on behalf of the process. Elapsed real time was 1.15 seconds.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "497565", "title": "MD5CRK", "section": "Section::::Complexity.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 287, "text": "To give some perspective to this, using Virginia Tech's System X with a maximum performance of 12.25 Teraflops, it would take approximately formula_8 seconds or about 3 weeks. Or for commodity processors at 2 gigaflops it would take 6,000 machines approximately the same amount of time.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4243252", "title": "Gustafson's law", "section": "Section::::Applications.:Application in everyday computer systems.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 503, "text": "Amdahl's Law reveals a limitation in, for example, the ability of multiple cores to reduce the time it takes for a computer to boot to its operating system and be ready for use. Assuming the boot process was mostly parallel, quadrupling computing power on a system that took one minute to load might reduce the boot time to just over fifteen seconds. 
But greater and greater parallelization would eventually fail to make bootup go any faster, if any part of the boot process were inherently sequential.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3463130", "title": "Features new to Windows XP", "section": "Section::::Performance and kernel improvements.:Faster boot and application launch.\n", "start_paragraph_id": 95, "start_character": 0, "end_paragraph_id": 95, "end_character": 1278, "text": "The ability to boot in 30 seconds was a design goal for Windows XP, and Microsoft's developers made efforts to streamline the system as much as possible; The Logical Prefetcher is a significant part of this; it monitors what files are loaded during boot, optimizes the locations of these files on disk so that less time is spent waiting for the hard drive's heads to move and issues large asynchronous I/O requests that can be overlapped with device detection and initialization that occurs during boot. The prefetcher works by tracing frequently accessed paged data which is then used by the \"Task Scheduler\" to create a prefetch-instructions file at %WinDir%\\Prefetch. Once the system boots or an application is started, any data and code specified in the trace that is not already in memory is prefetched from the disk. The previous prefetching results determine which scenario benefited more and what should be prefetched at the next boot or launch. The prefetcher also uses the same algorithms to reduce application startup times. To reduce disk seeking even further, the \"Disk Defragmenter\" is called in at idle time to optimize the layout of these specific files and metadata in a contiguous area. Boot and resume operations can be traced and analyzed using Bootvis.exe.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "363188", "title": "NetWare", "section": "Section::::Performance.:Aggressive caching.\n", "start_paragraph_id": 131, "start_character": 0, "end_paragraph_id": 131, "end_character": 689, "text": "The default dirty cache delay time was fixed at 2.2 seconds in NetWare 286 versions 2.x. Starting with NetWare 386 3.x, the dirty disk cache delay time and dirty directory cache delay time settings controlled the amount of time the server would cache changed (\"dirty\") data before saving (flushing) the data to a hard drive. The default setting of 3.3 seconds could be decreased to 0.5 seconds but not reduced to zero, while the maximum delay was 10 seconds. The option to increase the cache delay to 10 seconds provided a significant performance boost. Windows 2000 and 2003 server do not allow adjustment to the cache delay time. Instead, they use an algorithm that adjusts cache delay.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "314383", "title": "Supertask", "section": "Section::::Prominent supertasks.:Davies' super-machine.\n", "start_paragraph_id": 44, "start_character": 0, "end_paragraph_id": 44, "end_character": 787, "text": "Proposed by E. B. Davies, this is a machine that can, in the space of half an hour, create an exact replica of itself that is half its size and capable of twice its replication speed. This replica will in turn create an even faster version of itself with the same specifications, resulting in a supertask that finishes after an hour. If, additionally, the machines create a communication link between parent and child machine that yields successively faster bandwidth and the machines are capable of simple arithmetic, the machines can be used to perform brute-force proofs of unknown conjectures. 
However, Davies also points out that due to fundamental properties of the real universe such as quantum mechanics, thermal noise and information theory his machine can't actually be built.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "211058", "title": "ALGOL W", "section": "Section::::Implementation.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 355, "text": "In an OS environment on a 360/67 with spooled input and output files, the compiler will recompile itself in about 25 seconds. The compiler is approximately 2700 card images. Thus, when the OS scheduler time is subtracted from the execution time given above, it is seen that the compiler runs at a speed in excess of 100 cards per second (for dense code).\n", "bleu_score": null, "meta": null } ] } ]
null
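The Gustafson's-law passage in the supercomputer entry above walks through an Amdahl's-law example: a mostly parallel one-minute boot dropping to "just over fifteen seconds" when computing power is quadrupled, with diminishing returns afterwards. A small sketch of that calculation, assuming an illustrative 2% inherently serial fraction (the passage only says the process is "mostly parallel"):

```python
# Amdahl's-law view of the boot-time example quoted above: a 60-second, mostly
# parallel boot sped up by adding cores. The 2% serial fraction is an assumed
# illustrative value, not a number given by the passage.

BOOT_TIME_S = 60.0      # original boot time from the passage
SERIAL_FRACTION = 0.02  # assumed inherently sequential share of the work

def boot_time(n_cores: int) -> float:
    """Boot time under Amdahl's law with n_cores running the parallel portion."""
    serial = BOOT_TIME_S * SERIAL_FRACTION
    parallel = BOOT_TIME_S * (1.0 - SERIAL_FRACTION)
    return serial + parallel / n_cores

for n in (1, 4, 16, 1024):
    print(f"{n:>5} cores: {boot_time(n):6.2f} s")
# 4 cores already give ~15.9 s ("just over fifteen seconds"); beyond that the
# 1.2 s serial part dominates and extra cores barely help.
```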
rn6f5
Does potential energy "count" as energy?
[ { "answer": "I somehow miss your sub-question, and I gave an answer. I will keep some of what I wrote there, maybe you will like it.\n\nMaybe you didn't realize it, but gravitational energy is also consider/named potential energy. Why? because it has the potential to generate a force (gravitational force in this case) if you put an object in a gravitational field, it will be attracted to the effective force generated from all the gravitational potential in there. \n\nWhy is it so intriguing?, well it has point out that if you count this potential energy into the evolution of the universe, we could reach a total amount of 0 energy at the beginning. And that is outstanding. Because it means that the Universe was created from nothing.\n\nEdit: Read DocSmile answer", "provenance": null }, { "answer": "Somewhat related to your question: Did you know that a compressed spring becomes heavier by e=mc^2 ", "provenance": null }, { "answer": "If you combine an electron and positron at point A, you create a pair of photons of energy 511 KeV. If one of the photons travels up to point B, it gets red-shifted and ends up at point B with a lower energy. \n\nIf you combine an electron and positron at point B, and one of the created photons travels down to point A, it will end up there blue-shifted, with a higher energy than 511 KeV.\n\nSo, while locally (at the point of annihilation) you see the same amount of energy released when matter and anti-matter, from any fixed point you will see different amounts of energy.\n", "provenance": null }, { "answer": "Potential energy is really a relative thing that measures the *difference* in energy between two states. If I blow up a stick of TNT in midair, the debris will have more energy than it would if the explosion took place on the ground, but only because of gravitational potential energy (debris falling to the ground), not because the effects of the explosion were somehow enhanced.", "provenance": null }, { "answer": " > To clarify, if I were to lift say, a few kilograms of matter a kilometer up, and then combine it with ani-matter\n\nWhoa, hold on. Antimatter isn't anti-mass, it's ordinary mass. There is no distinction between matter and antimatter as to mass.\n\nAnd yes, potential energy counts as energy. For example, a very elliptical orbit (like a comet) is a constant exchange between kinetic and potential energy as the object orbits the parent body -- sometimes more potential energy, sometimes more kinetic, but the two types of energy sum to a constant (to agree with the principle of energy conservation).\n\nBecause all mass is the same type of mass, combining matter with antimatter at any altitude will have the same effect, unrelated to the issue of kinetic and potential energy.\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "23703", "title": "Potential energy", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 404, "text": "Common types of potential energy include the gravitational potential energy of an object that depends on its mass and its distance from the center of mass of another object, the elastic potential energy of an extended spring, and the electric potential energy of an electric charge in an electric field. 
The unit for energy in the International System of Units (SI) is the joule, which has the symbol J.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "859234", "title": "Mechanical energy", "section": "Section::::General.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 440, "text": "The potential energy, \"U\", depends on the position of an object subjected to a conservative force. It is defined as the object's ability to do work and is increased as the object is moved in the opposite direction of the direction of the force. If \"F\" represents the conservative force and \"x\" the position, the potential energy of the force between the two positions \"x\" and \"x\" is defined as the negative integral of \"F\" from \"x\" to \"x\":\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "28113360", "title": "Classical central-force problem", "section": "Section::::Basics.:Potential energy.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 288, "text": "Thus, the total energy of the particle—the sum of its kinetic energy and its potential energy \"U\"—is a constant; energy is said to be conserved. To show this, it suffices that the work \"W\" done by the force depends only on initial and final positions, not on the path taken between them.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23703", "title": "Potential energy", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 373, "text": "Potential energy is associated with forces that act on a body in a way that the total work done by these forces on the body depends only on the initial and final positions of the body in space. These forces, that are called \"conservative forces\", can be represented at every point in space by vectors expressed as gradients of a certain scalar function called \"potential\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1931736", "title": "Self-energy", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 930, "text": "In most theoretical physics such as quantum field theory, the energy that a particle has as a result of changes that it itself causes in its environment defines self-energy formula_1, and represents the contribution to the particle's energy, or effective mass, due to interactions between the particle and its system. In electrostatics, the energy required to assemble the charge distribution takes the form of self-energy by bringing in the constituent charges from infinity, where the electric force goes to zero. In a condensed matter context relevant to electrons moving in a material, the self-energy represents the potential felt by the electron due to the surrounding medium's interactions with it. Since electrons repel each other the moving electron polarizes, or causes to displace, the electrons in its vicinity and then changes the potential of the moving electron fields. These and other effects entail self-energy. 
\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19048", "title": "Mass", "section": "Section::::In quantum physics.:Tachyonic particles and imaginary (complex) mass.\n", "start_paragraph_id": 97, "start_character": 0, "end_paragraph_id": 97, "end_character": 201, "text": "This equation shows that the total energy of a particle (bradyon or tachyon) contains a contribution from its rest mass (the \"rest mass–energy\") and a contribution from its motion, the kinetic energy.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31296", "title": "Tachyon", "section": "Section::::Tachyons in relativity theory.:Mass.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 201, "text": "This equation shows that the total energy of a particle (bradyon or tachyon) contains a contribution from its rest mass (the \"rest mass–energy\") and a contribution from its motion, the kinetic energy.\n", "bleu_score": null, "meta": null } ] } ]
null
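The potential-energy entry above compares lifting a few kilograms by a kilometer with annihilating that mass against antimatter. A worked comparison of the two energy scales, using illustrative values of 3 kg and 1 km (these numbers are assumptions, not figures from the thread):

```python
# Compare the gravitational potential energy gained by lifting a mass 1 km with
# the rest-mass energy released if that mass annihilates with an equal amount of
# antimatter. The 3 kg / 1 km figures are illustrative assumptions.

G_ACCEL = 9.81          # m/s^2, near Earth's surface
C = 299_792_458.0       # speed of light, m/s

mass_kg = 3.0
height_m = 1000.0

potential_energy = mass_kg * G_ACCEL * height_m      # E = m*g*h
annihilation_energy = 2 * mass_kg * C**2             # matter + equal antimatter mass

print(f"Potential energy gained : {potential_energy:.3e} J")     # ~2.9e4 J
print(f"Annihilation energy     : {annihilation_energy:.3e} J")  # ~5.4e17 J
print(f"Ratio                   : {annihilation_energy / potential_energy:.3e}")
# The potential energy is real energy (it shows up as the blue/red shift of the
# photons described in the answers above), but it is roughly 13 orders of
# magnitude smaller than the rest-mass energy released in the annihilation.
```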
1gest2
PRISM and the NSA scandal
[ { "answer": "A low level contractor working for the NSA named Snowden leaked a document describing a classified program that collects data from the largest internet communities in the world, including microsoft, facebook, yahoo, google, etc. The internet companies themselves claim to be unaware of this.\n\nIt's a big deal because the government is throwing a wide net and capturing *all* data rather than targeting who, what or when. The part of the law that deals with this would normally require a warrant (of sorts) before digging into your information. Instead they decided they'd search you first then ask for permission. Especially troubling is that the program searches citizens as well, which is illegal and not sactioned under FISA.\n\nThe government, under pressure, eventually admitted to the program existing. So it isn't a question or conspiracy, it's very real.\n\nHere is a great start / quick reading to explain : _URL_0_", "provenance": null }, { "answer": " > Search before submitting! If it's been asked before, indicate that the previous answers didn't help. Otherwise your question may be removed.\n\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "39601333", "title": "PRISM (surveillance program)", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 453, "text": "Documents indicate that PRISM is \"the number one source of raw intelligence used for NSA analytic reports\", and it accounts for 91% of the NSA's Internet traffic acquired under FISA section 702 authority.\" The leaked information came to light one day after the revelation that the FISA Court had been ordering a subsidiary of telecommunications company Verizon Communications to turn over to the NSA logs tracking all of its customers' telephone calls.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "39601333", "title": "PRISM (surveillance program)", "section": "Section::::The program.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 574, "text": "PRISM is a program from the Special Source Operations (SSO) division of the NSA, which in the tradition of NSA's intelligence alliances, cooperates with as many as 100 trusted U.S. companies since the 1970s. A prior program, the Terrorist Surveillance Program, was implemented in the wake of the September 11 attacks under the George W. Bush Administration but was widely criticized and challenged as illegal, because it did not include warrants obtained from the Foreign Intelligence Surveillance Court. PRISM was authorized by the Foreign Intelligence Surveillance Court.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "39601333", "title": "PRISM (surveillance program)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 707, "text": "PRISM is a code name for a program under which the United States National Security Agency (NSA) collects Internet communications from various US Internet companies. The program is also known by the SIGAD . PRISM collects stored Internet communications based on demands made to Internet companies such as Google LLC under Section 702 of the FISA Amendments Act of 2008 to turn over any data that match court-approved search terms. 
The NSA can use these PRISM requests to target communications that were encrypted when they traveled across the Internet backbone, to focus on stored data that telecommunication filtering systems discarded earlier, and to get data that is easier to handle, among other things.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "39756995", "title": "Codefellas", "section": "Section::::Background.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 292, "text": "On June 6, 2013, former NSA contractor Edward Snowden leaked the existence of PRISM, an electronic surveillance program intended to monitor e-mail and phone call activity in the United States to identify possible terrorist threats, to the newspapers \"The Guardian\" and \"The Washington Post\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1636889", "title": "Booz Allen Hamilton", "section": "Section::::Controversies and leaks.:PRISM media leak.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 949, "text": "In June 2013, Edward Snowden—at the time a Booz Allen employee contracted to projects of the National Security Agency (NSA)—publicly disclosed details of classified mass surveillance and data collection programs, including PRISM. The alleged leaks are said to rank among the most significant breaches in the history of the NSA and led to considerable concern worldwide. Booz Allen condemned Snowden's leak of the existence of PRISM as \"shocking\" and \"a grave violation of the code of conduct and core values of our firm\". The company fired Snowden \"in absentia\" shortly after and stated he had been an employee for less than three months at the time. Market analysts considered the incident \"embarrassing\" but unlikely to cause enduring commercial damage. Booz Allen stated that it would work with authorities and clients to investigate the leak. Charles Riley of \"CNN\"/\"Money\" said that Booz Allen was \"scrambling to distance itself from Snowden\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "39601333", "title": "PRISM (surveillance program)", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 788, "text": "PRISM began in 2007 in the wake of the passage of the Protect America Act under the Bush Administration. The program is operated under the supervision of the U.S. Foreign Intelligence Surveillance Court (FISA Court, or FISC) pursuant to the Foreign Intelligence Surveillance Act (FISA). Its existence was leaked six years later by NSA contractor Edward Snowden, who warned that the extent of mass data collection was far greater than the public knew and included what he characterized as \"dangerous\" and \"criminal\" activities. The disclosures were published by \"The Guardian\" and \"The Washington Post\" on June 6, 2013. 
Subsequent documents have demonstrated a financial arrangement between the NSA's Special Source Operations division (SSO) and PRISM partners in the millions of dollars.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21939", "title": "National Security Agency", "section": "Section::::Organizational structure.:Employees.:Personnel security.\n", "start_paragraph_id": 197, "start_character": 0, "end_paragraph_id": 197, "end_character": 262, "text": "Edward Snowden's leaking of the existence of PRISM in 2013 caused the NSA to institute a \"two-man rule\", where two system administrators are required to be present when one accesses certain sensitive information. Snowden claims he suggested such a rule in 2009.\n", "bleu_score": null, "meta": null } ] } ]
null
5w0930
why do rubbing alcohol, vinegar, etc. have expiry dates? how can they go bad?
[ { "answer": "The FDA requires expiry dates on nearly everything aside from alcohol and cosmetics. The manufacturers have to put something, so they generally choose a date a few years out, that they are willing to guarantee the product will meet or exceed that date.\n\n_URL_0_", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "11701763", "title": "Cellulose acetate film", "section": "Section::::Preservation and storage.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 1108, "text": "Currently there is no practical way of halting or reversing the course of degradation. Many film collectors use camphor tablets but it is not known what the long term effects on the film would be. While there has been significant research regarding various methods of slowing degradation, such as storage in molecular sieves, temperature and moisture are the two key factors affecting the rate of deterioration. According to the Image Permanence Institute, fresh acetate film stored at a temperature of 70 °F (21 °C) and 40% relative humidity will last approximately 50 years before the onset of vinegar syndrome. Reducing the temperature by 15° while maintaining the same level of humidity brings a dramatic improvement: at a temperature of 55 °F (13 °C) and 40% relative humidity, the estimated time until onset of vinegar syndrome is 150 years. A combination of low temperature and low relative humidity represents the optimum storage condition for cellulose acetate base films, with the caveat that relative humidity should not be lowered below 20%, or the film will dry out too much and become brittle.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3829190", "title": "Hand sanitizer", "section": "Section::::Uses.:Health care.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 257, "text": "Alcohol rub sanitizers kill most bacteria, and fungi, and stop some viruses. Alcohol rub sanitizers containing at least 70% alcohol (mainly ethyl alcohol) kill 99.9% of the bacteria on hands 30 seconds after application and 99.99% to 99.999% in one minute.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19916594", "title": "Acetic acid", "section": "Section::::Production.:Oxidative fermentation.\n", "start_paragraph_id": 44, "start_character": 0, "end_paragraph_id": 44, "end_character": 249, "text": "A dilute alcohol solution inoculated with \"Acetobacter\" and kept in a warm, airy place will become vinegar over the course of a few months. Industrial vinegar-making methods accelerate this process by improving the supply of oxygen to the bacteria.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "44367216", "title": "Tower brewery", "section": "Section::::Brewing process.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 215, "text": "The spent hops are removed from the bitter liquor by decanting or 'casting' it into the hop back, a vessel on the ground floor beneath the coppers. 
The spent hops settle out and the liquor is strained through them.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "847398", "title": "Balsamic vinegar", "section": "Section::::Classifications.:\"Traditional Balsamic Vinegar of Modena DOP and Traditional Balsamic Vinegar of Reggio Emilia DOP\".\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 344, "text": "Reggio Emilia designates the different ages of their balsamic vinegar (\"Aceto Balsamico Tradizionale di Reggio Emilia\") by label colour. A red label means the vinegar has been aged for at least 12 years, a silver label that the vinegar has aged for at least 18 years, and a gold label designates that the vinegar has aged for 25 years or more.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "267787", "title": "Rubbing alcohol", "section": "Section::::Warnings.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 379, "text": "Product labels for rubbing alcohol include a number of warnings about the chemical, including the flammability hazards and its intended use only as a topical antiseptic and not for internal wounds or consumption. It should be used in a well-ventilated area due to inhalation hazards. Poisoning can occur from ingestion, inhalation, absorption, or consumption of rubbing alcohol.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "782", "title": "Mouthwash", "section": "Section::::Ingredients.:Alcohol.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 768, "text": "Alcohol is added to mouthwash not to destroy bacteria but to act as a carrier agent for essential active ingredients such as menthol, eucalyptol and thymol which help to penetrate plaque. Sometimes a significant amount of alcohol (up to 27% vol) is added, as a carrier for the flavor, to provide \"bite\". Because of the alcohol content, it is possible to fail a breathalyzer test after rinsing although breath alcohol levels return to normal after 10 minutes. In addition, alcohol is a drying agent, which encourages bacterial activity in the mouth, releasing more malodorous volatile sulfur compounds. Therefore, alcohol-containing mouthwash may temporarily worsen halitosis in those who already have it, or indeed be the sole cause of halitosis in other individuals.\n", "bleu_score": null, "meta": null } ] } ]
null
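The cellulose-acetate passage in the expiry-date entry above gives two onset figures for "vinegar syndrome": about 50 years at 70 °F and about 150 years at 55 °F, both at 40% relative humidity. A small sketch that turns those two points into a rough extrapolation; the "lifetime triples per 15 °F of cooling" model is an assumption made here for illustration, not a claim from the source passage.

```python
# Rough extrapolation of the two vinegar-syndrome onset figures quoted above:
# ~50 years at 70 F and ~150 years at 55 F (both at 40% RH). Treating that as
# "lifetime triples for every 15 F of cooling" is an assumed model.

BASE_TEMP_F = 70.0
BASE_LIFETIME_YEARS = 50.0
TRIPLING_STEP_F = 15.0   # derived from 150 / 50 = 3 between 70 F and 55 F

def estimated_onset_years(storage_temp_f: float) -> float:
    steps = (BASE_TEMP_F - storage_temp_f) / TRIPLING_STEP_F
    return BASE_LIFETIME_YEARS * 3.0 ** steps

for temp in (70, 55, 40):
    print(f"{temp} F: ~{estimated_onset_years(temp):.0f} years")
# 70 F -> 50 years and 55 F -> 150 years reproduce the quoted figures;
# the 40 F value (~450 years) is pure extrapolation.
```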
1aep2u
In the 60s, was Japan's economy expected to overtake the USA's by now?
[ { "answer": "In the 1980's, this was certainly a thing, but not in the 1960's. See for example [this book](_URL_0_) (from 1993, when the Japanese bubble had already burst.) The subtitle, The Myth of the Invincible Japanese, indicates that fear of Japanese power was a theme at the time. You don't write a book to counter a myth when there is no myth.\n\n", "provenance": null }, { "answer": "When developing countries are rapidly industrializing and coming from far behind, they can enjoy from rapid growth rates and their productivity can grow in huge leaps. When they get close to the level of western economies, their growth and competitive advantage will slow down. \n\nJapan reached the western level of economic development but it has has smaller population than US and it's shrinking. Only way they could have overtaken US would be if their competitiveness and productivity would have kept increasing much faster rate than US once they reached US level of sophistication. That did not happen. There was widespread fear of Japanese economy back in the 70's and 80's when they seemed to out-manufacture US in all important sectors. They were first to use robots in large scale and they had huge government projects like [Fifth Generation Computer Systems project (FGCS)](_URL_0_) that were supposed to revolutionize computing. That fear was overblown.\n\nChina is different story. They have bigger population than US. Once their economy develops even close to the level of western economies, they will be bigger than US. China will be bigger than US unless something bad happens. \n\nps. European Union has been the the biggest economy in the world since 2007. ", "provenance": null }, { "answer": "No, it was not in GDP, but definitely in GDP Per Capita, Japan's population isn't and has never been big enough (In the past 100 years) to take over the US in total GDP, it's population just isn't big enough.", "provenance": null }, { "answer": "One minor note: Realistically, China will probably become the world's largest economy much sooner than 2050. The OECD thinks it could be [within the next 4-5 years](_URL_0_), although that's almost certainly jumping the gun. The U.S. National Intelligence Council [thinks it'll be around 2030](_URL_1_) if present trends hold. \n\nThey're by no means guaranteed to -- there are a lot of problems in China's economy that are currently being hidden by their growth rate, and the U.S. is likely entering a natural resources boom -- but either way, 2050 is probably unrealistic. All other things being equal, China and India *should* be the largest economies on the planet if for no reason other than the size of their internal markets. I feel like India gets lost in this debate a lot.\n\nAnd all of this might turn out to be yet another prediction that never comes true. Macroeconomic tea-reading has never been humanity's strong suit.\n\nEchoing the others here, [predictions about Japan's overtaking the U.S.](_URL_2_) were more a feature of the late 1970s and 1980s. The Harvard professor Ezra Vogel's [*Japan as Number One: Lessons for America*](_URL_3_) was published in 1979, for example, and is a somewhat entertaining read in hindsight. Not because Vogel got everything wrong -- a lot of his analysis is spot-on -- but because it's a decent warning against academic overconfidence. ", "provenance": null }, { "answer": "Besides what people before me have already said, an interesting side bit: the USSR was expected to overtake the US because they had rapid industrializatrion. 
In the standard economic textbook of the time, by Samuelson, it was predicted that the US and USSRwould achieve equal GDP by between 1977 and 1995 depending on assumption. Subsequent edition kept pushing the date further and further", "provenance": null }, { "answer": "As LaoBa said, this was an expectation of the 1980s, not so much the 60s and 70s. It peaked roughly from 1982-1992, cycling through an initial long phase of \"The Japanese are infiltrating our political and economic system and/or buying it wholesale,\" hit several notes of \"We beat them, this is a betrayal of the rightful outcomes of conquerors,\" and finally was heading into \"Japan and the United States will inevitably go to war with one another over domination of the pacific/world/etc.\" in current events pseudo-academic literature (journalists, mid-grade military officers, etc as authors) and news articles until the Nikkei crashed.\n\nSources: I put \"Japan\" into the Washington Post, LA Times, and New York Times archives and read most of what was published from 1980 to 1992 for my Historical Research Methods thesis, \"US Perceptions of Japan in Newspapers, 1980-1992\", and then expanded on it for my East Asian Studies thesis \"Comparative Jingoism: Pseudo-Academic Literature on the threats of Japan 1980-1992 vs China 1997-2009,\" both undergraduate works (my BA is in Polisci and East Asian Studies with a minor in History). Also, Amakudari by Colignon & Usui; MITI and the Japanese Miracle by Chalmers Johnson; and Between MITI and the Market: Japanese Industrial Policy for High Technology by Daniel Okimoto from my highly Nihoncentric East Asian Politics & Society course taught by an eccentric former salaryman that used to joke about what we were drinking according to time of the month.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "261248", "title": "Post-occupation Japan", "section": "Section::::Economy.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 309, "text": "The high economic growth and political tranquillity of the mid-to-late 1960s were tempered by the quadrupling of oil prices by the Organization of the Petroleum Exporting Countries (OPEC) in 1973. Almost completely dependent on imports for petroleum, Japan experienced its first recession since World War II.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24854552", "title": "Post–World War II economic expansion", "section": "Section::::Specific countries.:Japan.\n", "start_paragraph_id": 73, "start_character": 0, "end_paragraph_id": 73, "end_character": 1068, "text": "After 1950 Japan's economy recovered from the war damage and began to boom, with the fastest growth rates in the world. Given a boost by the Korean War, in which it acted as a major supplier to the UN force, Japan's economy embarked on a prolonged period of extremely rapid growth, led by the manufacturing sectors. Japan emerged as a significant power in many economic spheres, including steel working, car manufacturing and the manufacturing of electronics. Japan rapidly caught up with the West in foreign trade, GNP, and general quality of life. The high economic growth and political tranquility of the mid to late 1960s were slowed by the quadrupling of oil prices in 1973. Almost completely dependent on imports for petroleum, Japan experienced its first recession since World War II. Another serious problem was Japan's growing trade surplus, which reached record heights. 
The United States pressured Japan to remedy the imbalance, demanding that Tokyo raise the value of the yen and open its markets further to facilitate more imports from the United States.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "495103", "title": "Economic history of Japan", "section": "Section::::Early 20th century.:Militarism.\n", "start_paragraph_id": 33, "start_character": 0, "end_paragraph_id": 33, "end_character": 373, "text": "In the 1930s, the Japanese economy suffered less from the Great Depression than most industrialized nations, its GDP expanding at the rapid rate of 5% per year. Manufacturing and mining came to account for more than 30% of GDP, more than twice the value for the agricultural sector. Most industrial growth, however, was geared toward expanding the nation's military power.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "183897", "title": "Empire of Japan", "section": "Section::::Early Shōwa (1926–1930).:Economic factors.\n", "start_paragraph_id": 97, "start_character": 0, "end_paragraph_id": 97, "end_character": 383, "text": "The Great Depression, just as in many other countries, hindered Japan's economic growth. The Japanese Empire's main problem lay in that rapid industrial expansion had turned the country into a major manufacturing and industrial power that required raw materials; however, these had to be obtained from overseas, as there was a critical lack of natural resources on the home islands.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37699", "title": "History of East Asia", "section": "Section::::20th century.:Postwar.\n", "start_paragraph_id": 91, "start_character": 0, "end_paragraph_id": 91, "end_character": 386, "text": "The Japanese growth in the postwar period was often called a \"miracle\". It was led by manufacturing; starting with textiles and clothing and moving to high-technology, especially automobiles, electronics and computers. The economy experienced a major slowdown starting in the 1990s following three decades of unprecedented growth, but Japan still remains a major global economic power.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3433405", "title": "Japanese economic miracle", "section": "Section::::Governmental contributions.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 905, "text": "The Japanese financial recovery continued even after SCAP departed and the economic boom propelled by the Korean War abated. The Japanese economy survived from the deep recession caused by a loss of the U.S. payments for military procurement and continued to make gains. By the late 1960s, Japan had risen from the ashes of World War II to achieve an astoundingly rapid and complete economic recovery. 
According to Knox College Professor Mikiso Hane, the period leading up to the late 1960s saw \"the greatest years of prosperity Japan had seen since the Sun Goddess shut herself up behind a stone door to protest her brother Susano-o's misbehavior.\" The Japanese government contributed to the post-war Japanese economic miracle by stimulating private sector growth, first by instituting regulations and protectionism that effectively managed economic crises and later by concentrating on trade expansion.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3398337", "title": "East Asian cultural sphere", "section": "Section::::East Asian Culture.:Economy and trade.:Post WW2 (Tiger economies).\n", "start_paragraph_id": 105, "start_character": 0, "end_paragraph_id": 105, "end_character": 265, "text": "Following Japanese defeat, economic collapse after the war, and US military occupation, Japan's economy recovered in the 1950s with the post-war economic miracle in which rapid growth propelled the country to become the world's second largest economy by the 1980s.\n", "bleu_score": null, "meta": null } ] } ]
null
3rxomv
if one were to strike a billiard ball with the cue ball, and we disregard the deceleration as the target ball hits each bumper as well as other balls on the table, would the ball always eventually sink into a pocket, regardless of where on the table it went?
[ { "answer": "No, it won't always sink. It's easy to think of a situation in which it won't happen.\n\nIf you hit the ball parallel to any side of the table, it would bounce back and forth forever, never entering any of the pockets (unless it was right in front of the pocket already.", "provenance": null }, { "answer": "No, it would come to a stop. You didn't account for friction between the ball and table.\n\nIf you disregard that as well, the answer is still no. \\/u/Jeffffffff's answer is correct.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "8808840", "title": "Five-pin billiards", "section": "Section::::Rules.:Fouls.\n", "start_paragraph_id": 34, "start_character": 0, "end_paragraph_id": 34, "end_character": 259, "text": "BULLET::::- Knocking any ball off the table; opponent receives ball-in-hand plus 2 points (the ball is spotted in its starting position, or as close to this position as possible, unless it was the now-incoming opponent's cue ball, which as noted is in-hand).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2227569", "title": "Crud (game)", "section": "Section::::Shuck.\n", "start_paragraph_id": 52, "start_character": 0, "end_paragraph_id": 52, "end_character": 575, "text": "Making the ball: You score a point by making the object ball in one of the two corner pockets on your opponent's (i.e., the opposite) side of the table or by banking the ball into a side pocket or back into one of your own pockets. If, upon striking the object ball, it goes either directly into a side pocket or the pockets on your own side of the table then you lose the point (note: toilet paper to block the side pockets is an acceptable variation). If the cue ball goes into ANY pocket on your throw, you lose the point—with one exception, the \"shuck\", explained below.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8808840", "title": "Five-pin billiards", "section": "Section::::Rules.:Scoring.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 234, "text": "BULLET::::- Knocking over pins with the object ball without hitting the opponent's cue ball first, or with one's own cue ball, does not earn the shooter any points, and in the latter case is a foul that awards points to the opponent.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8808840", "title": "Five-pin billiards", "section": "Section::::Rules.:Fouls.\n", "start_paragraph_id": 32, "start_character": 0, "end_paragraph_id": 32, "end_character": 233, "text": "BULLET::::- Hitting the pins directly with the shooter's cue ball before any contact with the opponent's cue ball; opponent receives ball-in-hand plus 2 points (the erstwhile value of the knocked-over pins is not calculated at all).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6150292", "title": "Cue sports techniques", "section": "Section::::Throw.\n", "start_paragraph_id": 32, "start_character": 0, "end_paragraph_id": 32, "end_character": 269, "text": "When a ball with (sidespin) on it hits an object ball with a degree of fullness, the object ball will be \"thrown\" in the opposite direction of the side of the cue ball the was applied. Thus, a cue ball with left hand on it will \"throw\" a hit object ball to the right. 
\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "274682", "title": "English billiards", "section": "Section::::Rules.:Scoring.\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 266, "text": "BULLET::::- ( in snooker terms) – striking one's cue ball so that it hits another ball and then enters a pocket: 3 points if the red ball was hit first; 2 points if the other cue ball was hit first; 2 points if the red and the other cue ball are hit simultaneously.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2227569", "title": "Crud (game)", "section": "Section::::Shuck.\n", "start_paragraph_id": 51, "start_character": 0, "end_paragraph_id": 51, "end_character": 359, "text": "Scoring: Once the object ball is moving it is like a \"time-bomb\" in the sense that when it stops moving the location of the cue ball on the table will (may) determine the winner of the point, i.e. if the cue ball is on your side of table when the object ball stops moving, you lose the point. The only variation to this is the \"Gentleman's Rule\" (see below).\n", "bleu_score": null, "meta": null } ] } ]
null
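The counterexample in the first answer (a shot sent exactly parallel to a rail never finds a corner pocket, while a generic shot eventually does) can be checked with a tiny simulation. This sketch is not part of the original thread: it treats the ball as an idealized frictionless point, puts pockets only at the four corners, and the table size, pocket radius and time step are made-up illustrative values. The `fold` helper is the usual "unfolding" trick: specular reflection in a box is just straight-line motion folded back into the box.

```python
def fold(p, length):
    """Map an unfolded coordinate back onto [0, length] with reflections."""
    p %= 2 * length
    return 2 * length - p if p > length else p

def first_pocket_time(x0, y0, vx, vy, width=2.0, height=1.0,
                      pocket_radius=0.05, t_max=2000.0, dt=0.01):
    """First sampled time the point ball lies inside a corner pocket,
    or None if that never happens before t_max."""
    corners = [(0.0, 0.0), (width, 0.0), (0.0, height), (width, height)]
    steps = int(t_max / dt)
    for i in range(steps + 1):
        t = i * dt
        x, y = fold(x0 + vx * t, width), fold(y0 + vy * t, height)
        if any((x - cx) ** 2 + (y - cy) ** 2 <= pocket_radius ** 2
               for cx, cy in corners):
            return t
    return None

# Parallel to the long rail: y never changes, so no corner pocket is reachable.
print(first_pocket_time(0.7, 0.5, 1.0, 0.0))        # -> None

# A generic (irrational-slope) shot does eventually pass over a pocket.
print(first_pocket_time(0.7, 0.5, 1.0, 2 ** 0.5))   # -> some finite time
```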
2hyg1l
Germany just made all universities free; how do other countries do this?
[ { "answer": "What do you mean by \"just\"? I can't find anything in the news.\n\nIf you're asking why they're so cheap:\n\n##THIS IS EXTREMELY SIMPLIFIED. GERMAN UNIVERSITIES ARE CONFUSING\n\nalso this is only about \"Universitäten\" and not tons of other places you can study. yup, confusing.\n\nThey aren't actually free but rather cost about 300 euros (depending on what federal state you're in) of a Semesterbeitrag (literally semester comtribution) per semester. These (depending on what federal state you're in) might pay for stuff like free public transportation, but (depending on what federal state you're in) you might also have to pay for.\n\n~~Some places, not all (you guessed right: depending on what federal state you're in!) also have a Studiengebühr (fee for studies) up to around 500 euros, but of course, exceptions apply.~~ apparently this changed although wikipedia disagrees\n\nThere's also no such thing as dorms, so you sleep off-campus—which you have to pay for.\n\ntl;dr: studying in germany is not free but super cheap. If you speak german and would like to study in germany but you don't live in germany, check out the [DAAD](_URL_0_)\n\nif you have further questions, ask them.\n\nedit: damn it, this was already answered. well, I guess this can be helpful if you know what you, the student, have to pay.", "provenance": null }, { "answer": "Other countries could easily accomplish this and other social programs by simply raising [income tax to 42%](_URL_0_) on any income above $60,000 per year, which is the level in Germany.", "provenance": null }, { "answer": "Economist here. There is no such thing as free. They are just making college payments compulsory for everyone regardless of whether they use it or not.\n\nThe system used by the US was fine until universities realized the government would subsidize them raising tuition to ridiculous rates by lending to students.", "provenance": null }, { "answer": "Norway has had free university since we built our first one. Or, not free... I pay $30 per semester. The state pays the salaries of th professors and their research. With also some private money as well", "provenance": null }, { "answer": "In France, University after high school are free (maybe ~400€/yr for insurance that a lot of student don't pay thanks to the scolarship)\nPrivate Business/Engineer school are expensive though (4000-20000€/yr) but public one are good and free and they have a national recognition (that some private don't have)\nSo you can become a doctor only paying for your food/rent etc... But studies are very hard (that's how they don't end up overwhelmed)\n\nAnd all of this are available thanks to ... ... ... TAX(s) (education is the first source of expense) (46 billions € + 26 for superior(?) studies)\n\n_URL_0_\n\nSorry if I'm not clear enough (I'm a bit in a rush)\n", "provenance": null }, { "answer": "They know that the [top tax rate has no bearing on economic growth](_URL_0_) and set responsible tax rates.", "provenance": null }, { "answer": "In Spain public universities cost a minimum of 750€/year (more or less). The price goes up if you have to repeat a course. For example, the first time you study Maths II the price is ~70€, the secont time its ~140€, the third ~350€. It caps at the forth time (I dont know at what price).\n\nYou can aply for grants though, if your income is low or your marks are high, or a mixture of both.", "provenance": null }, { "answer": "Swedish universities are free for citizens. 
Loans are available with good rates for living expenses. ", "provenance": null }, { "answer": "Taxes. All of these social programs still cost money. That money comes from increased taxes. Whether this is a good thing or not depends on your perspective.", "provenance": null }, { "answer": "This thread is full of radical misinformation on the nature of college funding in America and other countries and already people are talking about more general government funding like the military.\n\n\nIn theory college is available quite cheaply and in a given \"Western Country X\" a single-payer (free, tax based) system of college would probably not reach beyond 5% of GDP (and that's huge! nearly a trillion in the US, 250 billion in Deutschland). Hence why most Western Countries are able to do that. \n\nUS college costs come from:\n > At the better universities, research facilities, sports, and high profile professor acquisitions all present costs exponentially higher than routine costs\n\n > I read a stat (for American unis) recently about how the ratio of academic (professor) employee to support employee (janitor/secretary) has exploded, i.e. what used to be 1:1 is now 1:4 so like, your given hour of college learning now has an overhead with a huge upside bias. \n\n > > How does Europe avoid this? Well for instance, I get health insurance from my university, of all fucking places, this being America. I'm sure in total at least 50 people get a paycheck at this 15,000 student uni to handle health insurance. In Europe this is you know, being handled by the actual government at all times. \n\n > Haven't read on this, but I would assume Euros don't do all this \"online learning\" or other alternative approach nonsense. These efforts have a \"long tail\". You have to employ a lot of people to get new infrastructure up and running.\n\nFinally, college education is at an all time high in the US. Colleges are in low supply and loans are not. As a corollary to this, colleges are paying for construction costs to expand, often a very big cost. \n\nEveryone in this thread is pathetically ignorant. ", "provenance": null }, { "answer": "Have a generally strong economy and high taxes. Also have a well structured education system: Germany doesn't try and send everyone to universities, instead it also has very strong technical/vocational training starting in high school to create highly skilled workers who wouldn't necessarily thrive in a university. In comparison, in the US everyone is told to go to college and take out huge government subsidized loans. Many universities as a result have bloated budgets with way too many administrators. My guess is many US universities also have a lot more \"amenities\" like nicely manicured campuses, gyms, student activities, sports, etc. ", "provenance": null }, { "answer": "\"Universities in the US just raised their tuitions by 5%\"", "provenance": null }, { "answer": "Ain't nothing free - taxpayers are paying for it.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "471603", "title": "State school", "section": "Section::::By country and region.:Germany.\n", "start_paragraph_id": 52, "start_character": 0, "end_paragraph_id": 52, "end_character": 611, "text": "In Germany, most institutions of higher education are subsidised by German states and are therefore also referred to as \"staatliche Hochschulen.\" (public universities) In most German states, admission to public universities is still cheap, about two hundred Euro per semester. 
In 2005, many states introduced additional fees of 500 Euro per semester to achieve a better teaching-quality; however, all of these states (except Lower Saxony, which will follow in 2014/15) have abolished tuition fees as of autumn 2013. Nevertheless, additional fees for guest or graduate students are charged by many universities.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19725260", "title": "University", "section": "Section::::Classification.\n", "start_paragraph_id": 54, "start_character": 0, "end_paragraph_id": 54, "end_character": 1424, "text": "In Germany, universities are institutions of higher education which have the power to confer bachelor, master and PhD degrees. They are explicitly recognised as such by law and cannot be founded without government approval. The term Universitaet (i.e. the German term for university) is protected by law and any use without official approval is a criminal offense. Most of them are public institutions, though a few private universities exist. Such universities are always research universities. Apart from these universities, Germany has other institutions of higher education (Hochschule, Fachhochschule). Fachhochschule means a higher education institution which is similar to the former polytechnics in the British education system, the English term used for these German institutions is usually 'university of applied sciences'. They can confer master degrees but no PhDs. They are similar to the model of teaching universities with less research and the research undertaken being highly practical. Hochschule can refer to various kinds of institutions, often specialised in a certain field (e.g. music, fine arts, business). They might or might not have the power to award PhD degrees, depending on the respective government legislation. If they award PhD degrees, their rank is considered equivalent to that of universities proper (Universitaet), if not, their rank is equivalent to universities of applied sciences.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "249320", "title": "Tuition payments", "section": "Section::::By location.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 575, "text": "In the German Education system almost all universities and most universities of applied sciences are funded by the state and do not charge tuition fees. In exceptional cases universities may offer courses for professionals (e.g. executive MBA programs), which may require tuition payment. Some local governments have recently decided that students from non-EU countries can be charged, although ERASMUS students, students from developing countries and other special groups are exempt. In addition, some private institutions of higher education run on a tuition-based model. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "375698", "title": "Education in Germany", "section": "Section::::Tertiary education.\n", "start_paragraph_id": 141, "start_character": 0, "end_paragraph_id": 141, "end_character": 551, "text": "Most of the German universities are public institutions, charging fees of only around €60-200 per semester for each student, usually to cover expenses associated with the university cafeterias and (usually mandatory) public transport tickets. Thus, academic education is open to most citizens and studying is very common in Germany. The dual education system combines both practical and theoretical education but does not lead to academic degrees. 
It is more popular in Germany than anywhere else in the world and is a role model for other countries.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11867", "title": "Germany", "section": "Section::::Demographics.:Education.\n", "start_paragraph_id": 139, "start_character": 0, "end_paragraph_id": 139, "end_character": 357, "text": "Most of the German universities are public institutions, and students traditionally study without fee payment. The general requirement for university is the \"Abitur\". However, there are a number of exceptions, depending on the state, the college and the subject. Tuition free academic education is open to international students and is increasingly common.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20595788", "title": "Student debt", "section": "Section::::Statistics.:Germany.\n", "start_paragraph_id": 31, "start_character": 0, "end_paragraph_id": 31, "end_character": 1424, "text": "Germany has both private and public universities with the majority being public universities, which is part of the reason their graduates do not have as much debt. For undergraduate studies, public universities are free but have an enrollment fee of no more than €250 per year, which is roughly US$305. Their private universities cost an average of €10,000 a semester, which is about US$12,000. Private universities account for 7.1% enrollment with the rest attending the public universities. The private universities have a smaller teacher to student ratio and tend to offer more specialized programs, which is why Germany is experiencing a boom in private universities enrollment in recent years for majors like law and medicine. However, most students still prefer public universities due to the drastic difference in tuition cost. The only expense students take out loans for in public universities is the living cost, which ranges from €3600 to €8,200 a year depending on the university location. However, the repayment of this loan is interest free and no borrower pays more than €10,000 regardless of the borrowed amount. The average debt at graduation is €5,600 which is US$6,680. The chance to gain a bachelor's degree through well respected universities at a reasonable price without interest packed loans attracts many foreign students, as seen through increased enrollment of students from all around the world.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "180763", "title": "University of Göttingen", "section": "Section::::Academics.:Exchange programs.\n", "start_paragraph_id": 119, "start_character": 0, "end_paragraph_id": 119, "end_character": 447, "text": "As Germany is a member of the European Union, university students have the opportunity to participate in the Erasmus Programme. The university also has exchange programs and partnerships with reputable universities outside Europe such as University of Technology, Sydney in Australia, Tsinghua University, Peking University and Fudan University in China, Tokyo University in Japan and the University of California, Berkeley, in the United States.\n", "bleu_score": null, "meta": null } ] } ]
null
10kawg
Why Apple released iOS 6 Maps when it obviously wasn't ready for release.
[ { "answer": "The official story is that Apple wanted voice navigated maps and Google could not deliver. However, Apple and Google have a relationship that is growing more and more hostile. Ever since Apple sued Samsung, and Google stepped in to protect Samsung, and sued Apple. ", "provenance": null }, { "answer": "Sorry for not summing it up, but for me this blog post clarified why. Sometimes businesses have to do business centric decisions that aren't user centric _URL_0_", "provenance": null }, { "answer": "Because google refused to license turn by turn, which was a HUGE feature. Apple had been trying to get turn by turn built in for years but they could never come to an agreement. Also, google changed it's pricing from wholesale licensing to pay-per-use, which would make the maps more expensive. On top of that, google started adding ads to their search results.\n\nApple was also getting nervous that google was collecting user info, so they made their own.\n\nBasically they had 2 options:\n\n1. Go another year without turn by turn, potentially losing users to android. Pay more for maps in the meantime as google advertised to their users.\n\n2. Buy a bunch of mapping services, develop their own maps using third party data like tomtom or osm. Wait for the kinks to work themselves out through user reports. \n\nThey chose option 2. It's really not that hard to grasp.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "36109560", "title": "IOS 6", "section": "Section::::Problems.:Maps app launch.\n", "start_paragraph_id": 74, "start_character": 0, "end_paragraph_id": 74, "end_character": 300, "text": "In iOS 6, Apple replaced Google Maps with its own Apple Maps as the default mapping service for the operating system, and immediately faced criticism for inaccurate or incomplete data, including a museum in a river, missing towns, satellite images obscured by clouds, missing local places, and more.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37093440", "title": "Apple Maps", "section": "Section::::Early inaccuracy.:Apple's response.\n", "start_paragraph_id": 55, "start_character": 0, "end_paragraph_id": 55, "end_character": 757, "text": "In June 2016, Eddy Cue said in an interview with \"Fast Company\" that Apple \"had completely underestimated the product, the complexity of it.\" He also said the problems with Apple Maps led to \"significant changes to all of our development processes.\" After the launch of Maps, Apple started offering public betas of new versions of iOS and OS X. Furthermore, Cue commented that before Maps was launched Apple's executive team long discussed if Apple should have its own mapping service. One month later, Tim Cook looked back to the launch of Apple Maps in an interview with \"The Washington Post\" and said \"Maps was a mistake.\" He added that the company admitted its mistake and that Maps is something the company is now proud of because of the improvements.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37093440", "title": "Apple Maps", "section": "Section::::History.:Initial release.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 996, "text": "Before Apple Maps launched as the default mapping application in iOS, Google Maps held this position since the first generation iPhone release in 2007. 
In late 2009, tensions between Google and Apple started to grow when the Android version of Google Maps featured turn-by-turn navigation, a feature which the iOS version lacked. At the time, Apple argued that Google collected too much user data. When Apple made iOS 6 available, Google Maps could only be accessed by iOS 6 users via the web. Although Google did not immediately launch a mapping application of its own, shortly after the announcement of Apple Maps, Google did add an equivalent of Apple Maps' Flyover feature to its virtual globe application Google Earth. Three months later, in December 2012, Google Maps was released in the App Store. This version of Google Maps, unlike the previous version, featured turn-by-turn navigation. Shortly after it was launched, Google Maps was the most popular free application in the App Store.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37093440", "title": "Apple Maps", "section": "Section::::History.:Initial release.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 748, "text": "On June 11, 2012, during the Apple Worldwide Developers Conference (WWDC), Apple announced the initial release of Apple Maps and revealed that the application would replace Google Maps as the default web mapping service in iOS 6 and beyond. Apple also announced that the application would include turn-by-turn navigation, 3D maps, Flyovers, and the virtual assistant Siri. Furthermore, Apple stated that iPhone users would be able to navigate Apple Maps while in the locked screen. The mapping service was released on September 19, 2012. Following the launch, Apple Maps was heavily criticized, which resulted in a public apology by Apple CEO Tim Cook in late September and the departure of two key employees of Apple. (See also §Early inaccuracy)\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "33304644", "title": "IPhone 5", "section": "Section::::Reception.:Critical reception.:Criticism.\n", "start_paragraph_id": 51, "start_character": 0, "end_paragraph_id": 51, "end_character": 412, "text": "Reviewers and commentators were critical of the new Maps app that replaced Google Maps in iOS 6. It had been reported to contain errors such as misplacement of landmark tags, directing users to incorrect locations and poor satellite images. Nine days after Maps' release, Apple issued a statement apologizing for the frustration it had caused customers and recommending that they try alternate mapping services.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "33304644", "title": "IPhone 5", "section": "Section::::Features.:Operating system and software.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 854, "text": "iOS 6 features several new and/or updated apps, which includes Apple Maps and Passbook. Apple's built-in Maps app, which replaced the former Maps app powered by Google Maps, had been universally derided and lacked many features present in competing maps apps. It uses Apple's new vector-based engine that eliminates lag, making for smoother zooming. New to Maps is turn-by-turn navigation spoken directions, 3D views in some major cities and real-time traffic. iOS 6 is able to retrieve documents such as boarding passes, admission tickets, coupons and loyalty cards through its new Passbook app. An iOS device with Passbook can be scanned under a reader to process a mobile payment at locations that have compatible hardware. 
The app has context-aware features such as notifications for relevant coupons when in the immediate vicinity of a given store.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37093440", "title": "Apple Maps", "section": "Section::::History.:2012–2015.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 742, "text": "During WWDC in June 2013, Apple announced the new version of Apple Maps in iOS 7. This new version had a new look and icon. A number of new functions were also implemented, including full-screen mode, night mode, real-time traffic information, navigation for pedestrians, and the Frequent Locations feature. The latter feature, which can be switched on and off, was introduced to record the most frequently visited destinations by users in order to improve Apple Maps. In addition, new satellite imagery was added once again. On September 18, 2013, Apple released iOS 7. At that time, the new iPhone 5S included a new motion coprocessor, the M7, which can identify whether a user is walking or driving in order to adjust the navigation mode.\n", "bleu_score": null, "meta": null } ] } ]
null
2slyen
How is light that is as old as the universe travelling to us from 13 billion light years away?
[ { "answer": "The universe was opaque until ~300,000 years after its formation. So the light from the CMB is around that age. \n\nNow, imagine an infinitely long ruler. Pick any marking on that ruler and place yourself there. When the universe became transparent, a photon from 10cm from you gets sent in your direction, but due to the universe expanding, the space between markings keep increasing. \n\nYou and the light source are both still at the original markings, but the expansion has caused that 10cm to become billions of light years. When the photon eventually reaches you, it must appear to originate from that marking which is now far away. \n\nP.S. There is no center of the universe. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "5985207", "title": "Expansion of the universe", "section": "Section::::Understanding the expansion of the universe.:Measuring distances in expanding space.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 688, "text": "The light took much longer than 4 billion years to reach us though it was emitted from only 4 billion light years away, and, in fact, the light emitted towards the Earth was actually moving \"away\" from the Earth when it was first emitted, in the sense that the metric distance to the Earth increased with cosmological time for the first few billion years of its travel time, and also indicating that the expansion of space between the Earth and the quasar at the early time was faster than the speed of light. None of this surprising behavior originates from a special property of metric expansion, but simply from local principles of special relativity integrated over a curved surface.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "40874497", "title": "Z8 GND 5296", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 252, "text": "The light reaching Earth from z8_GND_5296 shows its position over 13 billion years ago, having traveled a distance of more than 13 billion light-years. Due to the expansion of the universe, this position is now at about (comoving distance) from Earth.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "28736", "title": "Speed of light", "section": "Section::::Practical effects of finiteness.:Spaceflights and astronomy.\n", "start_paragraph_id": 51, "start_character": 0, "end_paragraph_id": 51, "end_character": 583, "text": "Receiving light and other signals from distant astronomical sources can even take much longer. For example, it has taken 13 billion (13) years for light to travel to Earth from the faraway galaxies viewed in the Hubble Ultra Deep Field images. Those photographs, taken today, capture images of the galaxies as they appeared 13 billion years ago, when the universe was less than a billion years old. 
The fact that more distant objects appear to be younger, due to the finite speed of light, allows astronomers to infer the evolution of stars, of galaxies, and of the universe itself.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11439", "title": "Faster-than-light", "section": "Section::::Superluminal travel of non-information.:Possible distance away from Earth.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 1631, "text": "Since one might not travel faster than light, one might conclude that a human can never travel further from the Earth than 40 light-years if the traveler is active between the age of 20 and 60. A traveler would then never be able to reach more than the very few star systems which exist within the limit of 20–40 light-years from the Earth. This is a mistaken conclusion: because of time dilation, the traveler can travel thousands of light-years during their 40 active years. If the spaceship accelerates at a constant 1 g (in its own changing frame of reference), it will, after 354 days, reach speeds a little under the speed of light (for an observer on Earth), and time dilation will increase their lifespan to thousands of Earth years, seen from the reference system of the Solar System, but the traveler's subjective lifespan will not thereby change. If the traveler returns to the Earth, they will land thousands of years into the Earth's future. Their speed will not be seen as higher than the speed of light by observers on Earth, and the traveler will not measure their speed as being higher than the speed of light, but will see a length contraction of the universe in their direction of travel. And as the traveler turns around to return, the Earth will seem to experience much more time than the traveler does. So, while their (ordinary) coordinate speed cannot exceed \"c\", their proper speed (distance as seen by Earth divided by their proper time) can be much greater than \"c\". This is seen in statistical studies of muons traveling much further than \"c\" times their half-life (at rest), if traveling close to \"c\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2108547", "title": "Chi Cygni", "section": "Section::::Distance.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 206, "text": "Older studies generally derived smaller distances such as 345, 370, or 430 lights years. The original parallax calculated from Hipparcos measurements was 9.43 mas, indicating a distance of 346 light years.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "28736", "title": "Speed of light", "section": "Section::::Practical effects of finiteness.:Spaceflights and astronomy.\n", "start_paragraph_id": 52, "start_character": 0, "end_paragraph_id": 52, "end_character": 427, "text": "Astronomical distances are sometimes expressed in light-years, especially in popular science publications and media. A light-year is the distance light travels in one year, around 9461 billion kilometres, 5879 billion miles, or 0.3066 parsecs. In round figures, a light year is nearly 10 trillion kilometres or nearly 6 trillion miles. 
Proxima Centauri, the closest star to Earth after the Sun, is around 4.2 light-years away.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8065677", "title": "Distance measures (cosmology)", "section": "Section::::Details.:Light-travel distance.\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 385, "text": "This distance is the time (in years) that it took light to reach the observer from the object multiplied by the speed of light. For instance, the radius of the observable universe in this distance measure becomes the age of the universe multiplied by the speed of light (1 light year/year) i.e. 13.8 billion light years. Also see misconceptions about the size of the visible universe.\n", "bleu_score": null, "meta": null } ] } ]
null
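The "ruler" analogy in the answer above can be made concrete with a short numerical integration: once expansion is included, the light-travel time and the present-day (comoving) distance of a source are two very different numbers. This sketch is not from the original thread; it assumes a flat Lambda-CDM model with H0 = 70 km/s/Mpc, Omega_m = 0.3 and Omega_Lambda = 0.7 purely for illustration (SciPy is required), and the redshifts used (about 8.7 for z8_GND_5296, about 1100 for the CMB) are rounded.

```python
from math import sqrt
from scipy.integrate import quad

H0 = 70.0                 # Hubble constant, km/s/Mpc (assumed for illustration)
C = 299792.458            # speed of light, km/s
MPC_PER_GLY = 306.6       # megaparsecs per billion light-years
HUBBLE_TIME_GYR = 13.97   # 1/H0 in Gyr for H0 = 70
OMEGA_M, OMEGA_L = 0.3, 0.7

def E(z):
    """Dimensionless Hubble rate H(z)/H0 in a flat matter + Lambda universe."""
    return sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def lookback_time_gyr(z):
    """How long the light from redshift z has been travelling, in Gyr."""
    integral, _ = quad(lambda zp: 1.0 / ((1.0 + zp) * E(zp)), 0.0, z, limit=200)
    return HUBBLE_TIME_GYR * integral

def comoving_distance_gly(z):
    """How far away the source is *today*, in billions of light-years."""
    integral, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z, limit=200)
    return (C / H0) * integral / MPC_PER_GLY

for z in (8.7, 1100.0):   # roughly z8_GND_5296 and the CMB
    print(f"z = {z}: light travelled ~{lookback_time_gyr(z):.1f} Gyr, "
          f"source is now ~{comoving_distance_gly(z):.0f} Gly away")
```

With these assumed but conventional parameters, the z ≈ 8.7 source comes out at roughly 13 Gyr of light-travel time yet about 30 Gly of present-day distance, and the CMB at roughly 13.5 Gyr and about 45 Gly, consistent with the figures quoted in the provenance excerpts.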
75fdqo
Why is the minimum age for ‘adult’ medicines usually 12, not 18 or 21?
[ { "answer": "Because a lot of pharmaceuticals are dosed by patient weight. At 12 years old you are nearly full size.", "provenance": null }, { "answer": "It's just a rule of thumb to avoid overdosing, even if it's exact same active ingredient nobody is going to tell you to cut exact part of the pill to create a proper dose of the medicine for the child's weight.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "21354286", "title": "Acute generalized exanthematous pustulosis", "section": "Section::::Cause.:Medicines.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 282, "text": "The most frequently reported drugs that have been associated with the development of AGEP include penicillin, aminopenicillins, macrolides, quinolones, sulfonamides, hydroxychloroquine, terbinafine, and diltiazem. A more complete list of drugs sorted by their intended actions are:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "102959", "title": "Substance abuse", "section": "Section::::Epidemiology.\n", "start_paragraph_id": 41, "start_character": 0, "end_paragraph_id": 41, "end_character": 804, "text": "The initiation of drug and alcohol use is most likely to occur during adolescence, and some experimentation with substances by older adolescents is common. For example, results from 2010 Monitoring the Future survey, a nationwide study on rates of substance use in the United States, show that 48.2% of 12th graders report having used an illicit drug at some point in their lives. In the 30 days prior to the survey, 41.2% of 12th graders had consumed alcohol and 19.2% of 12th graders had smoked tobacco cigarettes. In 2009 in the United States about 21% of high school students have taken prescription drugs without a prescription. And earlier in 2002, the World Health Organization estimated that around 140 million people were alcohol dependent and another 400 million with alcohol-related problems.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "38272", "title": "Nicotine", "section": "Section::::History, society, and culture.:Legal status.\n", "start_paragraph_id": 91, "start_character": 0, "end_paragraph_id": 91, "end_character": 337, "text": "In the United States, nicotine products and Nicotine Replacement Therapy products like Nicotrol are only available to persons 18 and above; proof of age is required; not for sale in vending machine or from any source where proof of age cannot be verified. In some states, these products are only available to persons over the age of 21.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "43097", "title": "Gender dysphoria", "section": "Section::::History.\n", "start_paragraph_id": 47, "start_character": 0, "end_paragraph_id": 47, "end_character": 644, "text": "In April 2011, the UK National Research Ethics Service approved prescribing monthly injection of puberty-blocking drugs to youngsters from 12 years old, in order to enable them to get older before deciding on formal sex change. The Tavistock and Portman NHS Foundation Trust (T&P) in North London has treated such children. Clinic director Dr. Polly Carmichael said, \"Certainly, of the children between 12 and 14, there's a number who are keen to take part. 
I know what's been very hard for their families is knowing that there's something available but it's not available here.\" The clinic received 127 referrals for gender dysphoria in 2010.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1099396", "title": "Drug interaction", "section": "Section::::Underlying factors.\n", "start_paragraph_id": 77, "start_character": 0, "end_paragraph_id": 77, "end_character": 372, "text": "BULLET::::- Old age: factors relating to how human physiology changes with age may affect the interaction of drugs. For example, liver metabolism, kidney function, nerve transmission or the functioning of bone marrow all decrease with age. In addition, in old age there is a sensory decrease that increases the chances of errors being made in the administration of drugs.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13852290", "title": "History of nationality in Cyprus", "section": "Section::::Current state of affairs.:Required Documents.:Issue of Passport to underage children.\n", "start_paragraph_id": 178, "start_character": 0, "end_paragraph_id": 178, "end_character": 212, "text": "Children under the age of 18 are regarded as underage children. Underage children up to the age of 12 may be included in the Passport of their parents or acquire their own Passport provided both parents consent.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1099396", "title": "Drug interaction", "section": "Section::::Epidemiology.\n", "start_paragraph_id": 87, "start_character": 0, "end_paragraph_id": 87, "end_character": 306, "text": "Among US adults older than 55, 4% are taking medication and or supplements that put them at risk of a major drug interaction. Potential drug-drug interactions have increased over time and are more common in the low educated elderly even after controlling for age, sex, place of residence, and comorbidity.\n", "bleu_score": null, "meta": null } ] } ]
null
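The weight-based dosing point in the first answer can be illustrated with a toy calculation. This sketch is not part of the original thread, and every number in it (the 10 mg/kg rate, the 600 mg adult cap, the example body weights) is made up purely for illustration; it is not dosing guidance for any real medicine.

```python
MG_PER_KG = 10      # hypothetical weight-based rate, mg per kg of body weight
ADULT_CAP_MG = 600  # hypothetical maximum single dose, mg

def dose_mg(weight_kg):
    """Weight-based dose, capped at the adult maximum."""
    return min(MG_PER_KG * weight_kg, ADULT_CAP_MG)

# Rough typical body weights: a 4-year-old is a small fraction of adult size,
# but an average 12-year-old is already most of the way there.
for label, kg in [("4-year-old", 16), ("12-year-old", 45), ("adult", 70)]:
    print(f"{label:>12}: {dose_mg(kg):.0f} mg")
```

The 12-year-old's computed dose lands close to the capped adult dose, which is the intuition behind treating 12 as the usual cutoff for "adult" dosing.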
1297ym
How long after sperm enters the egg does it take to create the DNA for the baby?
[ { "answer": "The term you're looking for is the \"zygote\"; this is the first cell that properly has all 46 chromosomes. In general, this takes about 12 hours to form after the sperm cell has entered the ovum.\n\nAs to whether it's the \"start of a life\", I guess you'd have to define what life is. Both the sperm and oocyte prior to fertilization are \"alive\", and the zygote that's made is also \"alive\". Perhaps you mean \"human life\"? Well then you'd need to define a human.", "provenance": null }, { "answer": "Let's get some terminology down here. \"Baby\" isn't the correct term. It takes 9 months in humans, more or less, to form the baby. A fertilized egg (ovum) becomes a zygote, or if you prefer, an embryo. \n\nIn terms of timing, there are a number of steps. There is sperm fusion with the ovum and then a cortical reaction to prevent further sperm fusion. In humans, only after this happens does the ovum actually undergo its last (second) [meiotic division](_URL_1_) to generate the second polar body and the haploid pronucleus. The sperm pronucleus and the ovum pronucleus move close, their nuclear membranes dissolve, the genetic material groups, and a nucleus forms. This forms the zygote. The zygote starts to divide shortly afterwards.\n\nThe zygote then undergoes many cell divisions (to form a blastocyst) over the next few days before it implants in the uterine wall around day 9. Only successful implantation leads to a pregnancy. \n\nIn terms of timing of this, [this paper](_URL_0_) looks at embryos generated by a standard technique of *in vitro* fertilization called intracytoplasmic sperm injection (ICSI). Since the sperm are injected directly into the ovum, they can observe events with precise timing. They looked at 93 oocytes that formed two polar bodies (and thus went through the second meiotic step) every 2 hrs for 20 hrs and then several times over the next few days.\n\n > Out of the 93 normally fertilized oocytes, 21 extruded the second polar body at 2 h after micro-injection (23%) and 63 oocytes at 4 h (68%). Pronuclei appeared as early as 6 h after ICSI in 16 normally fertilized oocytes (17%). At 8 h, 75 (80%) oocytes had two visible pronuclei, at 16 h 92 (99%), at 18 h 76 (82%) and at 20 h 63 (68%).\n\nThe second meiosis event happens as soon as 2 hrs, peaking at around 6 hrs according to their figures. Pronuclei are seen by 8 hrs and by 16 hrs almost all oocytes had 2 pronuclei. These start to merge shortly afterwards. By 20 hrs, they observed 11% already at the 2 cell stage. On the next day, all except 3 were at the 2 cell stage or beyond, with most at the 3-4 cell stage. \n\nSo the tl;dr is that it takes a bit longer than 16 hrs after fertilization for the pronuclei to start to merge.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "8610048", "title": "Paternal age effect", "section": "Section::::Mechanisms.:DNA point mutations.\n", "start_paragraph_id": 47, "start_character": 0, "end_paragraph_id": 47, "end_character": 453, "text": "In contrast to oogenesis, the production of sperm cells is a lifelong process. Each year after puberty, spermatogonia (precursors of the spermatozoa) divide meiotically about 23 times. By the age of 40, the spermatogonia will have undergone about 660 such divisions, compared to 200 at age 20. 
Copying errors might sometimes happen during the DNA replication preceding these cell divisions, which may lead to new (\"de novo\") mutations in the sperm DNA.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "44290", "title": "DNA profiling", "section": "Section::::Profiling processes.:DNA family relationship analysis.\n", "start_paragraph_id": 31, "start_character": 0, "end_paragraph_id": 31, "end_character": 374, "text": "During conception, the father's sperm cell and the mother's egg cell, each containing half the amount of DNA found in other body cells, meet and fuse to form a fertilized egg, called a zygote. The zygote contains a complete set of DNA molecules, a unique combination of DNA from both parents. This zygote divides and multiplies into an embryo and later, a full human being.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1771587", "title": "Pregnancy", "section": "Section::::Physiology.:Development of embryo and fetus.\n", "start_paragraph_id": 45, "start_character": 0, "end_paragraph_id": 45, "end_character": 546, "text": "The sperm and the egg cell, which has been released from one of the female's two ovaries, unite in one of the two fallopian tubes. The fertilized egg, known as a zygote, then moves toward the uterus, a journey that can take up to a week to complete. Cell division begins approximately 24 to 36 hours after the female and male cells unite. Cell division continues at a rapid rate and the cells then develop into what is known as a blastocyst. The blastocyst arrives at the uterus and attaches to the uterine wall, a process known as implantation.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5579717", "title": "Bird anatomy", "section": "Section::::Reproductive and urogenital systems.\n", "start_paragraph_id": 90, "start_character": 0, "end_paragraph_id": 90, "end_character": 361, "text": "The sperm is stored in the female's sperm storage tubules for a period varying from a week to more than 100 days, depending on the species. Then, eggs will be fertilized individually as they leave the ovaries, before the shell is calcified in the oviduct. After the egg is laid by the female, the embryo continues to develop in the egg outside the female body.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "30675928", "title": "Oocyte activation", "section": "Section::::DNA synthesis.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 341, "text": "4 hours after fusion of sperm and ovum, DNA synthesis begins. Male and female pronuclei move to the centre of the egg and membranes break down. Male protamines are replaced with histones and the male DNA is demethylated. Chromosomes then orientate on the metaphase spindle for mitosis. This combination of the two genomes is called syngamy.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "415293", "title": "Cryptorchidism", "section": "Section::::Mechanism.:Normal development.\n", "start_paragraph_id": 41, "start_character": 0, "end_paragraph_id": 41, "end_character": 355, "text": "Spermatogenesis continues after birth. In the third to fifth months of life, some of the fetal spermatogonia residing along the basement membrane become type A spermatogonia. More gradually, other fetal spermatogonia become type B spermatogonia and primary spermatocytes by the fifth year after birth. 
Spermatogenesis arrests at this stage until puberty.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14082", "title": "Horse breeding", "section": "Section::::Breeding and gestation.\n", "start_paragraph_id": 36, "start_character": 0, "end_paragraph_id": 36, "end_character": 1166, "text": "The gestation period lasts for about eleven months, or about 340 days (normal average range 320–370 days). During the early days of pregnancy, the conceptus is mobile, moving about in the uterus until about day 16 when \"fixation\" occurs. Shortly after fixation, the embryo proper (so called up to about 35 days) will become visible on trans-rectal ultrasound (about day 21) and a heartbeat should be visible by about day 23. After the formation of the endometrial cups and early placentation is initiated (35–40 days of gestation) the terminology changes, and the embryo is referred to as a fetus. True implantation – invasion into the endometrium of any sort – does not occur until about day 35 of pregnancy with the formation of the endometrial cups, and true placentation (formation of the placenta) is not initiated until about day 40-45 and not completed until about 140 days of pregnancy. The fetus's sex can be determined by day 70 of the gestation using ultrasound. Halfway through gestation the fetus is the size of between a rabbit and a beagle. The most dramatic fetal development occurs in the last 3 months of pregnancy when 60% of fetal growth occurs.\n", "bleu_score": null, "meta": null } ] } ]
null
rhj0a
What connection is there between the esophageal sphincter and your ears?
[ { "answer": "Are you referring to the upper esophageal sphincter or the lower?", "provenance": null }, { "answer": "Alright here we go.\n\nThe esophageal sphincter is innervated by the [vagus nerve.](_URL_2_)\n\nThis nerve does also have a branch called [Alderman's nerve](_URL_0_) which goes to the exact region of the ear you're referring to.\n\nThe vagus nerve is well known when speaking of referred pain, and would be the link in this case.\n\nReferred pain is not well understood but the [wiki](_URL_1_) page has some information about theories, and some explanations for it.\n\nThe best way to explain the feeling you're asking about is likely to compare it to brain freeze. \n\nDoes this answer your questions?", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "10923250", "title": "Southern marsupial mole", "section": "Section::::Morphology.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 591, "text": "The external ear openings are covered with fur and do not have a pinnae. The nostrils are small vertical slits right below the shield-like rostrum. Although the brain has been regarded as very primitive and represents the \"lowliest marsupial brain\", the olfactory bulbs and the rubercula olfactoria are very well developed. This seems to suggest that the olfactory sense plays an important role in the marsupial moles' life, as it would be expected for a creature living in an environment lacking visual stimuli. The middle ear seems to be adapted for the reception of low-frequency sounds.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "534130", "title": "Nasal concha", "section": "Section::::Function.:Smell.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 568, "text": "The superior conchae completely cover and protect the nerve axons piercing through the cribriform plate (a porous bone plate that separates the nose from the brain) into the nose. Some areas of the middle conchae are also innervated by the olfactory bulb. All three pairs of conchae are innervated by pain and temperature receptors, via the trigeminal nerve (or, the fifth cranial nerve). Research has shown that there is a strong connection between these nerve endings and activation of the olfactory receptors, but science has yet to fully explain this interaction.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "454837", "title": "Auriculotherapy", "section": "Section::::Distribution of Auricular Points.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 264, "text": "Points located on the ear lobe are related to the head and facial region, those on the scapha are related to the upper limbs, while those on the antihelix and anihelix crura to the trunk and lower limbs, and those in the concha are related to the internal organs.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "462393", "title": "Olfactory bulb", "section": "Section::::Structure.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 502, "text": "In most vertebrates, the olfactory bulb is the most rostral (forward) part of the brain, as seen in rats. In humans, however, the olfactory bulb is on the inferior (bottom) side of the brain. The olfactory bulb is supported and protected by the cribriform plate of the ethmoid bone, which in mammals separates it from the olfactory epithelium, and which is perforated by olfactory nerve axons. 
The bulb is divided into two distinct structures: the main olfactory bulb and the accessory olfactory bulb.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "46348501", "title": "Evolution of the cochlea", "section": "Section::::The ear.:Evolutionary perspective.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 447, "text": "The cochlea is the tri-chambered auditory detection portion of the ear, consisting of the scala media, the scala tympani, and the scala vestibuli. Regarding mammals, placental and marsupial cochleae have similar cochlear responses to auditory stimulation as well as DC resting potentials. This leads to the investigation of the relationship between these therian mammals and researching their ancestral species to trace the origin of the cochlea.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "21567917", "title": "Vestibular evoked myogenic potential", "section": "Section::::The vestibular system.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 739, "text": "There are six receptor organs located in the inner ear: cochlea, utricle, saccule, and the lateral, anterior, and posterior semicircular canals. The cochlea is a sensory organ with the primary purpose to aid in hearing. The otolith organs (utricle and saccule) are sensors for detecting linear acceleration in their respective planes (utrical=horizontal plane (forward/backward; up/down); saccule=sagital plane (up/down)), and the three semicircular canals (anterior/superior, posterior, and horizontal) detect head rotation or angular acceleration in their respective planes of orientation (anterior/superior=pitch (nodding head), posterior=roll (moving head from one shoulder to other), and horizontal=yaw (shaking head left to right). \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37811476", "title": "Protruding ear", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 827, "text": "Prominent ear, otapostasis or bat ear is an abnormally protruding human ear. It may be unilateral or bilateral. The concha is large with poorly developed antihelix and scapha. It is the result of malformation of cartilage during primitive ear development in intrauterine life. The deformity can be corrected anytime after 6 years. The surgery is preferably done at the earliest in order to avoid psychological distress. Correction by otoplasty involves changing the shape of the ear cartilage so that the ear is brought closer to the side of the head. The skin is not removed, but the shape of the cartilage is altered. The surgery does not affect hearing. It is done for cosmetic purposes only. The complications of the surgery, though rare, are keloid formation, hematoma formation, infection and asymmetry between the ears.\n", "bleu_score": null, "meta": null } ] } ]
null
rgmpb
Farting and its relation to poop.
[ { "answer": " > what purpose does farting serve?\n\nFarting serves the purpose of releasing excess gas in your digestive system. These gases are generally produced by (beneficial to you) bacteria that live within your digestive system\n\n > Why do they smell identical to the shit that I would imminently blast out?\n\nA human perceives smell by directly having relevant particles enter the nose. When you smell poop, you're detecting poop-particles entering your nose. When you smell farts, you're detecting those same poop-particles entering your nose.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "4107544", "title": "Mandarin Chinese profanity", "section": "Section::::Insults.:Excrement.\n", "start_paragraph_id": 309, "start_character": 0, "end_paragraph_id": 309, "end_character": 373, "text": "Originally, the various Mandarin Chinese words for \"excrement\" were less commonly used as expletives, but that is changing. Perhaps because farting results in something that is useless even for fertilizer: \"fàng pì\" (; lit. \"to fart\") is an expletive in Mandarin. The word \"pì\" (; lit. \"fart\") or the phrase is commonly used as an expletive in Mandarin (i.e. \"bullshit!\").\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6136724", "title": "The Gas We Pass", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 289, "text": "The Gas We Pass: The Story of Farts (おなら \"Onara\") is a children's book written by Shinta Chō (). It was first published in Japan in 1978; the first American edition was in 1994. The book tells children about flatulence (also known as farting), and that it is completely natural to do so. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2200874", "title": "Bloating", "section": "Section::::Causes.:Bowel gas.\n", "start_paragraph_id": 46, "start_character": 0, "end_paragraph_id": 46, "end_character": 362, "text": "Flatulence or farting works much like burping, but helps the body pass gas through the anus, not the mouth. Bacteria present in the intestinal tract cause gas to be expelled from the anus. They produce the gas as food is digested and moved from the small intestine. This gas builds up and causes swelling or bloating in the abdominal area before it is released.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "38221", "title": "Shit", "section": "Section::::Usage in campaigns.:Sanitation promotion.\n", "start_paragraph_id": 102, "start_character": 0, "end_paragraph_id": 102, "end_character": 293, "text": "Using the term \"shit\" (or other locally used crude words) – rather than feces or excreta – during campaigns and triggering events is a deliberate aspect of the community-led total sanitation approach which aims to stop open defecation, a massive public health problem in developing countries.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4512409", "title": "Flatulence humor", "section": "Section::::Inculpatory pronouncements.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 411, "text": "The sourcing of a fart involves a ritual of assignment that sometimes takes the form of a rhyming game. These are frequently used to discourage others from mentioning the fart or to turn the embarrassment of farting into a pleasurable subject matter. 
The trick is to pin the blame on someone else, often by means of deception, or using a back and forth rhyming game that includes phrases such as the following:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "38221", "title": "Shit", "section": "Section::::Usage.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 620, "text": "In the word's literal sense, it has a rather small range of common usages. An unspecified or collective occurrence of feces is generally \"shit\" or \"some shit\"; a single deposit of feces is sometimes \"a shit\" or \"a piece of shit\"; and to defecate is \"to shit\" or \"to take a shit\". While it is common to speak of shit as existing in \"a pile\", \"a load\", \"a hunk\", and other quantities and configurations, such expressions flourish most strongly in the figurative. For practical purposes, when actual defecation and excreta are spoken of, it is either through creative euphemism or with a vague and fairly rigid literalism.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11236", "title": "Fart (word)", "section": "Section::::Vulgarity and offensiveness.:Modern usage.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 569, "text": "According to \"The Alphabet of Manliness\", the assigning of blame for farting is part of a ritual of behaviour. This may involve deception and a back and forth rhyming game, for example, \"He who smelt it, dealt it\" and \"He who denied it, supplied it\". Derived terms include \"fanny fart\" (queef), \"brain fart\" (slang for a special kind of abnormal brain activity which results in human error while performing a repetitive task, or more generally denoting a degree of mental laxity or any task-related forgetfulness, such as forgetting how to hold a fork) and \"old fart\".\n", "bleu_score": null, "meta": null } ] } ]
null
3iybpu
Why did none of the southern states have ballots for Lincoln in the 1860 election?
[ { "answer": "Something to keep in mind is that printed, government-supplied ballots happened relatively late in our political process. In 1860, one would vote by writing out a ballot for a candidate or slate of candidates, or alternatively by using a ballot that was printed in a newspaper or similar publication or handed out by a candidate's supporters at the polls. So Lincoln not appearing on ballots in the South is fairly normal; he was so unpopular there that few people would have dared to write him in or publish a ballot for him. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "40519", "title": "1860 United States presidential election", "section": "Section::::An election for disunion.\n", "start_paragraph_id": 71, "start_character": 0, "end_paragraph_id": 71, "end_character": 683, "text": "Among the slave states, the three states with the highest voter turnouts voted the most one-sided. Texas, with five percent of the total wartime South's population, voted 75 percent Breckinridge. Kentucky and Missouri, with one-fourth the total population, voted 73 percent pro-union Bell, Douglas and Lincoln. In comparison, the six states of the Deep South making up one-fourth the Confederate voting population, split 57 percent Breckinridge versus 43 percent for the two pro-union candidates. The four states that were admitted to the Confederacy after Fort Sumter held almost half its population, and voted a narrow combined majority of 53 percent for the pro-union candidates.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "27956", "title": "South Carolina", "section": "Section::::History.:Antebellum.\n", "start_paragraph_id": 92, "start_character": 0, "end_paragraph_id": 92, "end_character": 441, "text": "In the United States presidential election of 1860 voting was sharply divided, with the south voting for the Southern Democrats and the north for Abraham Lincoln's Republican Party. Lincoln was anti-slavery, did not acknowledge the right to secession, and would not yield federal property in Southern states. Southern secessionists believed Lincoln's election meant long-term doom for their slavery-based agrarian economy and social system.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "40520", "title": "1864 United States presidential election", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 431, "text": "Despite his early fears of defeat, Lincoln won strong majorities in the popular and electoral vote, partly as a result of the recent Union victory at the Battle of Atlanta. As the Civil War was still raging, no electoral votes were counted from any of the eleven southern states that had joined the Confederate States of America. Lincoln's re-election ensured that he would preside over the successful conclusion of the Civil War.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "59578070", "title": "List of United States Senate elections (1788–1913)", "section": "Section::::1864 and 1865 elections.\n", "start_paragraph_id": 32, "start_character": 0, "end_paragraph_id": 32, "end_character": 207, "text": "These elections, corresponding with Abraham Lincoln's re-election as president, saw the Republicans gain two seats. 
As these elections occurred during the Civil War, most of the Southern states were absent.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3088255", "title": "Virginia Conventions", "section": "Section::::Secession Convention of 1861.\n", "start_paragraph_id": 33, "start_character": 0, "end_paragraph_id": 33, "end_character": 1001, "text": "Abraham Lincoln's constitutional election reflected the nation's sectional divide, though 82 percent of the electorate had split among the Unionists, Lincoln, Stephen A. Douglas and John Bell. Even before Lincoln's inauguration, the Deep South states that had cast Electoral College votes for John C. Breckinridge resolved to secede from the United States and form the Confederate States of America. The Virginia Assembly called a special convention for the sole purpose of considering secession from the United States. Virginia was deeply divided, returning a convention of delegates amounting to about one-third for secession and two thirds Unionist. But the Unionists would prove to be further divided between those who would be labelled Conditional Unionists who would favor Virginia in the Union only if Lincoln made no move at \"coercion\", and those who would later be called Unconditional Unionists who would be unwavering in their loyalty to the constitutional government of the United States.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "40520", "title": "1864 United States presidential election", "section": "Section::::General election.:Results.\n", "start_paragraph_id": 41, "start_character": 0, "end_paragraph_id": 41, "end_character": 633, "text": "Because eleven Southern states had declared secession from the Union and formed the Confederate States of America, only twenty-five states participated in the election. Three new states participated for the first time: Kansas, West Virginia, and Nevada. The reconstructed portions of Louisiana and Tennessee chose presidential electors - Congress did not count their votes, which could not have changed the result and in any case had been cast for Lincoln. Despite Kentucky's state government never seceding from the Union, the Commonwealth had an election participation rate decrease of almost 40% compared to the election of 1860.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "555910", "title": "Jefferson Territory", "section": "Section::::Establishment.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 422, "text": "On November 7, 1860, the U.S. presidential election produced a victory for Abraham Lincoln and precipitated the secession of seven slave states and the formation of the Confederate States of America. These events eliminated any chance for federal endorsement of the Territory of Jefferson and any role in government for Governor Steele, a staunch pro-Union Democrat and vocal opponent of Lincoln and the Republican Party.\n", "bleu_score": null, "meta": null } ] } ]
null
414mp0
Can somebody explain possible reasons why "Super Luminous Supernovae" differ from the garden-variety supernova?
[ { "answer": "I mean... the best source is probably [the paper](_URL_0_) itself.\n\nBut, **TL;DR** we don't really know yet. It seems too energetic for most models, so the theorists will need some time to adjust.\n\n > Only within the past two decades has the most luminous class of supernovae (super-luminous supernovae, SLSNe) been identified. Compared with the most commonly discovered SNe (Type Ia), SLSNe are more luminous by over two magnitudes at peak and rarer by at least 3 orders of magnitude.\n\n > The power source for ASASSN-15lh is unknown. Traditional mechanisms invoked for normal SNe likely cannot explain SLSNe-I. The lack of hydrogen or helium suggests that shock interactions with hydrogen-rich circumstellar material, invoked to interpret some SLSNe, cannot explain SLSNe-I or ASASSN-15lh.\n\n > Another possibility is that the spindown of a rapidly rotating, highly magnetic neutron star (a magnetar) powers the extraordinary emission. The total observed energy radiated so far (1.1 ± 0.2 × 10^(52) ergs) strains a magnetar interpretation because, for P ≲ 1 ms, gravitational wave radiation should limit the total rotational energy available", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "27680", "title": "Supernova", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 787, "text": "Theoretical studies indicate that most supernovae are triggered by one of two basic mechanisms: the sudden re-ignition of nuclear fusion in a degenerate star or the sudden gravitational collapse of a massive star's core. In the first class of events, the object's temperature is raised enough to trigger runaway nuclear fusion, completely disrupting it. Possible causes are accumulation of sufficient material from a binary companion through accretion, or a merger. In the massive star case, the core of a massive star may undergo sudden collapse, releasing gravitational potential energy as a supernova. While some observed supernovae are more complex than these two simplified theories, the astrophysical mechanics have been established and accepted by most astronomers for some time.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "27680", "title": "Supernova", "section": "", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 688, "text": "Supernovae can expel several solar masses of material at speeds up to several percent of the speed of light. This drives an expanding and fast-moving shock wave into the surrounding interstellar medium, sweeping up an expanding shell of gas and dust observed as a supernova remnant. Supernovae are a major source of elements in the interstellar medium from oxygen through to rubidium. The expanding shock waves of supernovae can trigger the formation of new stars. Supernova remnants might be a major source of cosmic rays. Supernovae might produce strong gravitational waves, though, thus far, the gravitational waves detected have been from the merger of black holes and neutron stars.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "63025", "title": "Variable star", "section": "Section::::Intrinsic variable stars.:Cataclysmic or explosive variable stars.:Supernovae.\n", "start_paragraph_id": 126, "start_character": 0, "end_paragraph_id": 126, "end_character": 992, "text": "Supernovae are the most dramatic type of cataclysmic variable, being some of the most energetic events in the universe. 
A supernova can briefly emit as much energy as an entire galaxy, brightening by more than 20 magnitudes (over one hundred million times brighter). The supernova explosion is caused by a white dwarf or a star core reaching a certain mass/density limit, the Chandrasekhar limit, causing the object to collapse in a fraction of a second. This collapse \"bounces\" and causes the star to explode and emit this enormous energy quantity. The outer layers of these stars are blown away at speeds of many thousands of kilometers an hour. The expelled matter may form nebulae called \"supernova remnants\". A well-known example of such a nebula is the Crab Nebula, left over from a supernova that was observed in China and North America in 1054. The core of the star or the white dwarf may either become a neutron star (generally a pulsar) or disintegrate completely in the explosion.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "118570", "title": "Magnetar", "section": "Section::::Bright supernovae.\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 396, "text": "Unusually bright supernovae are thought to result from the death of very large stars as pair-instability supernovae (or pulsational pair-instability supernovae). However, recent research by astronomers has postulated that energy released from newly formed magnetars into the surrounding supernova remnants may be responsible for some of the brightest supernovae, such as SN 2005ap and SN 2008es.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "54957", "title": "Younger Dryas", "section": "Section::::Causes.:Vela Supernova hypothesis.\n", "start_paragraph_id": 69, "start_character": 0, "end_paragraph_id": 69, "end_character": 812, "text": "Another hypothesis discussed is that effects of a supernova could have been a factor in the Younger Dryas. Effects of a supernova have been suggested before, but without confirming evidence. Potential evidence that these effects could have been caused by a celestial event, a supernova are observations of Gamma-ray bursts and X-ray flashes have been compared to nebular records to test this as well as supernovae flash models, comparable to the records of in-galaxy supernovae, to study the effects of such an event on Earth. These effects include depletion in the ozone layer, increased UV exposure, global cooling, and nitrogen changes in the Earth's surface and troposphere. As Brakenridge states, the only supernova possible at that time was the Vela Supernova, or classified as the Vela Supernova Remnant.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "63025", "title": "Variable star", "section": "Section::::Intrinsic variable stars.:Cataclysmic or explosive variable stars.:Supernovae.\n", "start_paragraph_id": 128, "start_character": 0, "end_paragraph_id": 128, "end_character": 426, "text": "A supernova may also result from mass transfer onto a white dwarf from a star companion in a double star system. The Chandrasekhar limit is surpassed from the infalling matter. The absolute luminosity of this latter type is related to properties of its light curve, so that these supernovae can be used to establish the distance to other galaxies. 
One of the most studied supernovae is SN 1987A in the Large Magellanic Cloud.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9075104", "title": "Dark Energy Survey", "section": "Section::::Overview.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 1364, "text": "Type Ia supernovae are believed to be thermonuclear explosions that occur when white dwarf stars in binary systems accrete mass from their companion stars. These events are important for the study of cosmology because they are very bright, which allows astronomers to detect them at very large distance. The expansion of the universe can be constrained based on observations of the luminosity distance and redshift of distant type IA supernova. The other three techniques (BAO, galaxy clusters, and weak lensing) used by the Dark Energy Survey allow scientists to understand simultaneously the expansion of the universe and the evolution of the dark matter density field perturbations. These perturbations were intrinsically tied to the formation of galaxies and galaxy clusters. The standard model of cosmology assumes that quantum fluctuations of the density field of the various components that were present when our universe was very young were enhanced through a very rapid expansion called inflation. Gravitational collapse enhances these initial fluctuation as baryons fall into the gravitational potential field of more dense regions of space to form galaxies. Nevertheless, the growth rate of these dark matter halos is sensitive to the dynamics of the expansion of the Universe and DES will use this connection to probe the properties of that expansion.\n", "bleu_score": null, "meta": null } ] } ]
null
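A quick back-of-the-envelope check of the energy budget discussed in the super-luminous supernova entry above. This is an illustrative sketch, not part of the original thread: the neutron-star moment of inertia (~1e45 g cm^2) and the ~1 ms minimum spin period are generic textbook values for a millisecond magnetar; only the 1.1 × 10^52 erg radiated-energy figure comes from the quoted answer.

```python
# Compare the observed radiated energy of ASASSN-15lh to the maximum
# rotational energy a millisecond magnetar could supply.
import math

I_NS = 1e45          # neutron-star moment of inertia [g cm^2], generic textbook value
P_MIN = 1e-3         # fastest plausible spin period [s] (~1 ms)
omega = 2 * math.pi / P_MIN           # angular frequency [rad/s]

E_rot_max = 0.5 * I_NS * omega**2     # maximum rotational energy reservoir [erg]
E_radiated = 1.1e52                   # energy radiated so far [erg], from the quoted answer

print(f"Max magnetar rotational energy: {E_rot_max:.1e} erg")
print(f"Observed radiated energy:       {E_radiated:.1e} erg")
print(f"Fraction of reservoir required: {E_radiated / E_rot_max:.0%}")
```

With these assumed inputs the radiated energy is already more than half of the entire rotational reservoir, before accounting for any losses to gravitational waves, which is the sense in which the quoted answer says the magnetar interpretation is "strained".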
3uxhup
if earth's orbit is elliptical, how is it that summer occurs when earth is furthest from the sun, and spring and fall, when earth is closest to the sun, are cooler than summer?
[ { "answer": "The difference in temperatures between seasons isn't due to being closer or farther from the sun, but to the fact that you are tilted away from the sun in the winter, so you get less direct sunlight. And the amount that we are closer to the sun during the (northern) winter because of the elliptical orbit only results in the earth getting a bit more energy from the sun (less than 10% more) because of how big the orbit is.", "provenance": null }, { "answer": "The change in orbit is more or less negligible. You're talking 3 million miles out of about 91 million miles, about a 3% change. \n\nThe tilt of the Earth is 23.4 degrees. Now think about shining a flashlight directly at the wall. It makes a spot, right? If you tilt the angle around 30 degrees what happens? The spot stretches out, it covers twice as much ground as it did when you were pointing straight. \n\nBut the flashlight didn't change its power level, the 'spot' is still delivering x energy to the wall, over twice as much area. So any individual 'unit' of that spot is receiving half the energy it was before. \n\nWhile not the only factor, that's a big one. Any given spot on the surface in winter is receiving far less energy than it does in summer. ", "provenance": null }, { "answer": "The Earth's orbit is not a perfect circle, but it's *really* close. Its aphelion (furthest distance of an orbit) is 152 million kilometers, while its perihelion (closest distance of an orbit) is 147 million kilometers. While 5 million kilometers sounds like a lot, in astronomical terms that is very little. Mars for example has a difference of 43 million kilometers.\n\nWhen the Earth tilts, the hemisphere tilted toward the sun receives more sunlight for the same surface area, and the light has to travel through less atmosphere before hitting the ground. That has a far greater effect than our relatively small variation in orbit distances.", "provenance": null }, { "answer": "This is one of those misconception things that gets printed in textbooks and then circulated around. \n\nThe elliptical nature of earth's orbit does not coincide with the seasons. If it did, wouldn't the whole world experience the same season at the same time? \n\nWhat actually causes the seasons is the axial tilt of the earth as you mentioned, but for a slightly different reason. What happens is that as the earth tilts your hemisphere towards the sun, it receives a greater than average amount of energy causing a summer for that hemisphere. Now, at the same time the other hemisphere is tilted away from the sun and receives less than an average amount of sunlight causing a winter.\n\nAs you move from the equator to the poles of the earth, the amount of sunlight becomes more extreme, which helps cause the large swing in temperature between seasons. \n\nIf there were no tilt, there would be no seasons because the earth would always receive an average amount of sunlight everywhere. \n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "373812", "title": "Joseph Adhémar", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 603, "text": "The Earth's orbit is elliptical, with the Sun at one focus; lines drawn through the summer and winter solstice; and the spring and autumn equinox; intersect with the sun at right angles. The Earth is closest to the Sun (perihelion) near the northern hemisphere winter solstice. The earth moves faster through its orbit when closer to the sun. 
Hence, the period from the northern hemisphere's autumn equinox to winter and spring is shorter by around seven days than the period from spring to summer to autumn; the reverse is true in the southern hemisphere. Hence, northern hemisphere winter is shorter.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "321956", "title": "List of common misconceptions", "section": "Section::::Science and technology.:Astronomy.\n", "start_paragraph_id": 118, "start_character": 0, "end_paragraph_id": 118, "end_character": 385, "text": "BULLET::::- Seasons are not caused by the Earth being closer to the Sun in the summer than in the winter, but by the Earth's 23.4-degree axial tilt. Each Hemisphere is tilted towards the Sun in its respective summer (July in the Northern Hemisphere and January in the Southern Hemisphere), resulting in longer days and more direct sunlight, with the opposite being true in the winter.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "91173", "title": "Axial tilt", "section": "Section::::Earth.:Seasons.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 635, "text": "Earth's axis remains tilted in the same direction with reference to the background stars throughout a year (regardless of where it is in its orbit). This means that one pole (and the associated hemisphere of Earth) will be directed away from the Sun at one side of the orbit, and half an orbit later (half a year later) this pole will be directed towards the Sun. This is the cause of Earth's seasons. Summer occurs in the Northern hemisphere when the north pole is directed toward the Sun. Variations in Earth's axial tilt can influence the seasons and is likely a factor in long-term climate change \"(also see Milankovitch cycles)\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "88213", "title": "Apsis", "section": "Section::::Perihelion and aphelion.:Earth perihelion and aphelion.\n", "start_paragraph_id": 32, "start_character": 0, "end_paragraph_id": 32, "end_character": 1381, "text": "Because of the increased distance at aphelion, only 93.55% of the solar radiation from the Sun falls on a given area of land as does at perihelion. However, this fluctuation does not account for the seasons, as it is summer in the northern hemisphere when it is winter in the southern hemisphere and \"vice versa.\" Instead, seasons result from the tilt of Earth's axis, which is 23.4 degrees away from perpendicular to the plane of Earth's orbit around the sun. Winter falls on the hemisphere where sunlight strikes least directly, and summer falls where sunlight strikes most directly, regardless of the Earth's distance from the Sun. In the northern hemisphere, summer occurs at the same time as aphelion. Despite this, there are larger land masses in the northern hemisphere, which are easier to heat than the seas. Consequently, summers are warmer in the northern hemisphere than in the southern hemisphere under similar conditions. Astronomers commonly express the timing of perihelion relative to the vernal equinox not in terms of days and hours, but rather as an angle of orbital displacement, the so-called longitude of the periapsis (also called longitude of the pericenter). 
For the orbit of the Earth, this is called the \"longitude of perihelion\", and in 2000 it was about 282.895°; by the year 2010, this had advanced by a small fraction of a degree to about 283.067°.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "24873453", "title": "Season", "section": "Section::::Four-season calendar reckoning.:Astronomical.:Change over time.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 543, "text": "Over thousands of years, the Earth's axial tilt and orbital eccentricity vary (see Milankovitch cycles). The equinoxes and solstices move westward relative to the stars while the perihelion and aphelion move eastward. Thus, ten thousand years from now Earth's northern winter will occur at aphelion and northern summer at perihelion. The severity of seasonal change — the average temperature difference between summer and winter in location — will also change over time because the Earth's axial tilt fluctuates between 22.1 and 24.5 degrees.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "35553026", "title": "Position of the Sun", "section": "Section::::Declination of the Sun as seen from Earth.:Overview.\n", "start_paragraph_id": 33, "start_character": 0, "end_paragraph_id": 33, "end_character": 388, "text": "The Sun appears to move northward during the northern spring, contacting the celestial equator on the March equinox. Its declination reaches a maximum equal to the angle of Earth's axial tilt (23.44°) on the June solstice, then decreases until reaching its minimum (−23.44°) on the December solstice, when its value is the negative of the axial tilt. This variation produces the seasons.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2594258", "title": "Interglacial", "section": "Section::::Interglacials during the Pleistocene.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 372, "text": "Warm summers in the Southern hemisphere occur when that hemisphere is tilted toward the sun and the Earth is nearest the sun in its elliptical orbit. Cool summers occur when the Earth is farthest from the sun during that season. These effects are more pronounced when the eccentricity of the orbit is large. When the obliquity is large, seasonal changes are more extreme.\n", "bleu_score": null, "meta": null } ] } ]
null
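The two competing effects weighed in the seasons entry above, the roughly 3% change in Earth–Sun distance versus the 23.4° axial tilt, can be compared numerically. This is a hedged sketch rather than anything from the original answers: the perihelion/aphelion distances and the tilt are the standard values already quoted, and the 45° latitude is an arbitrary mid-latitude chosen only for illustration.

```python
# Compare the orbital-distance effect with the axial-tilt effect on insolation.
import math

PERIHELION_KM = 147.1e6   # closest Earth-Sun distance
APHELION_KM = 152.1e6     # farthest Earth-Sun distance
TILT_DEG = 23.4           # axial tilt
LATITUDE_DEG = 45.0       # arbitrary mid-latitude used for illustration

# Inverse-square law: how much the total flux changes over the orbit.
distance_effect = (APHELION_KM / PERIHELION_KM) ** 2
print(f"Flux at perihelion vs. aphelion: {distance_effect:.3f}x (about a 7% swing)")

# Tilt: noon solar elevation at the solstices, then the flashlight-style
# projection factor sin(elevation) for flat ground.
summer_elev = 90 - LATITUDE_DEG + TILT_DEG    # 68.4 degrees
winter_elev = 90 - LATITUDE_DEG - TILT_DEG    # 21.6 degrees
tilt_effect = math.sin(math.radians(summer_elev)) / math.sin(math.radians(winter_elev))
print(f"Noon flux, summer vs. winter solstice at 45 deg latitude: {tilt_effect:.1f}x")
```

Even before counting the longer summer days, the tilt changes noon insolation at a mid-latitude by roughly a factor of 2.5, while the distance change moves the total flux by only about 7 percent, which is why the tilt dominates.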
7wtxhx
why is it that if you get distracted, your muscles become weaker?
[ { "answer": "Grip strength is a voluntary muscle movement, therefore by distracting them you are taking their mind off that action and so the signal to it will be decreased. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2648712", "title": "Eye strain", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 305, "text": "When concentrating on a visually intense task, such as continuously focusing on a book or computer monitor, the ciliary muscle tightens. This can cause the eyes to get irritated and uncomfortable. Giving the eyes a chance to focus on a distant object at least once an hour usually alleviates the problem.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6731253", "title": "Hysterical strength", "section": "Section::::Research.\n", "start_paragraph_id": 16, "start_character": 0, "end_paragraph_id": 16, "end_character": 508, "text": "Early experiments showed that adrenaline increases twitch, but not tetanic force and rate of force development in muscles. It is questionable, however, as to whether adrenaline, released from the adrenal medulla into the venous circulation, can reach the muscle quickly enough in order to be able to cause such an effect in the midst of a crisis. It may be that noradrenaline released from sympathetic nerve terminals directly innervating skeletal muscle has more of an effect over the timescale of seconds.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17476149", "title": "Emotional self-regulation", "section": "Section::::Strategies.:Attentional deployment.:Distraction.\n", "start_paragraph_id": 30, "start_character": 0, "end_paragraph_id": 30, "end_character": 710, "text": "Distraction, an example of attentional deployment, is an early selection strategy, which involves diverting one's attention away from an emotional stimulus and towards other content. Distraction has been shown to reduce the intensity of painful and emotional experiences, to decrease facial responding and neural activation in the amygdala associated with emotion, as well as to alleviate emotional distress. As opposed to reappraisal, individuals show a relative preference to engage in distraction when facing stimuli of high negative emotional intensity. This is because distraction easily filters out high-intensity emotional content, which would otherwise be relatively difficult to appraise and process.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "841074", "title": "Muscle fatigue", "section": "Section::::Nervous fatigue.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 561, "text": "In novice strength trainers, the muscle's ability to generate force is most strongly limited by nerve’s ability to sustain a high-frequency signal. After a period of maximum contraction, the nerve’s signal reduces in frequency and the force generated by the contraction diminishes. There is no sensation of pain or discomfort, the muscle appears to simply ‘stop listening’ and gradually cease to move, often going backwards. 
As there is insufficient stress on the muscles and tendons, there will often be no delayed onset muscle soreness following the workout.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3266190", "title": "Muscle weakness", "section": "Section::::Types.:Neuromuscular fatigue.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 1318, "text": "For extremely powerful contractions that are close to the upper limit of a muscle's ability to generate force, neuromuscular fatigue can become a limiting factor in untrained individuals. In novice strength trainers, the muscle's ability to generate force is most strongly limited by nerve’s ability to sustain a high-frequency signal. After an extended period of maximum contraction, the nerve’s signal reduces in frequency and the force generated by the contraction diminishes. There is no sensation of pain or discomfort, the muscle appears to simply ‘stop listening’ and gradually cease to move, often lengthening. As there is insufficient stress on the muscles and tendons, there will often be no delayed onset muscle soreness following the workout. Part of the process of strength training is increasing the nerve's ability to generate sustained, high frequency signals which allow a muscle to contract with their greatest force. It is this \"neural training\" that causes several weeks worth of rapid gains in strength, which level off once the nerve is generating maximum contractions and the muscle reaches its physiological limit. Past this point, training effects increase muscular strength through myofibrillar or sarcoplasmic hypertrophy and metabolic fatigue becomes the factor limiting contractile force.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "424433", "title": "Weakness", "section": "Section::::Differential diagnosis.:Types.:Neuromuscular fatigue.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 1318, "text": "For extremely powerful contractions that are close to the upper limit of a muscle's ability to generate force, neuromuscular fatigue can become a limiting factor in untrained individuals. In novice strength trainers, the muscle's ability to generate force is most strongly limited by nerve’s ability to sustain a high-frequency signal. After an extended period of maximum contraction, the nerve’s signal reduces in frequency and the force generated by the contraction diminishes. There is no sensation of pain or discomfort, the muscle appears to simply ‘stop listening’ and gradually cease to move, often lengthening. As there is insufficient stress on the muscles and tendons, there will often be no delayed onset muscle soreness following the workout. Part of the process of strength training is increasing the nerve's ability to generate sustained, high frequency signals which allow a muscle to contract with their greatest force. It is this \"neural training\" that causes several weeks worth of rapid gains in strength, which level off once the nerve is generating maximum contractions and the muscle reaches its physiological limit. 
Past this point, training effects increase muscular strength through myofibrillar or sarcoplasmic hypertrophy and metabolic fatigue becomes the factor limiting contractile force.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "805912", "title": "Delayed onset muscle soreness", "section": "Section::::Mechanism.:Repeated-bout effect.\n", "start_paragraph_id": 21, "start_character": 0, "end_paragraph_id": 21, "end_character": 369, "text": "As a result of this effect, not only is the soreness reduced, but other indicators of muscle damage, such as swelling, reduced strength and reduced range of motion, are also more quickly recovered from. The effect is mostly, but not wholly, specific to the exercised muscle: experiments have shown that some of the protective effect is also conferred on other muscles.\n", "bleu_score": null, "meta": null } ] } ]
null
295ex1
What unified the Austro-Hungarian Empire - Religion, Culture, etc.?
[ { "answer": "It was conquest! Always conquest.\n\nThe Austrian Empire, or the Austro-Hungarian Empire, had its roots in the Habsburgs and the Duchy of Austria. Austria and Vienna lay on trade routes coming east out of Italy, north out of the Balkans, and south and east out of the Baltics/Ukrainian steppe. Throughout the Renaissance and Early Modern period, this made Austria wealthy and powerful. The Habsburgs, Austria's ruling family, also gained tremendous political power across most of Germany; they became the Emperors of all of Germany. Austria used this power and prestige to spread their rule across southern and eastern Germany. They joined the thrones of Bohemia and Hungary to the Austrian one, giving Austria de facto rule over the area, while northern Italy was dragged into Austria through a series of wars, and an advantageous marriage in Spain for a time unified the Austrian and Spanish thrones. \n\nBut all this power, wealth, and prestige also made Austria a target. It waged several notable campaigns against the Ottoman Turks, who after toppling the Byzantines continued to push north through the Balkans. Austria continued to strengthen its southern border, as well as Hungary's southern border, to prevent [serious Turkish invasions](_URL_0_). Thus Austria began to see itself as the traditional defender of Christianity in the Balkans, as well as the defender of Europe against the Islamic Turks. \n\nSo really, Austria, and later Austro-Hungary, was a polyglot empire because Austria expanded in such a haphazard and seemingly random way. The most important connecting thread was always \"what is best for the Habsburgs?\" who sat at the top of the whole pyramid. In the Renaissance, it made Austria a dynamic political and military power, which hung a curtain across southern Germany. But following the Thirty Years' War, and especially after the Napoleonic Wars, the multitude of nationalities, religions, political and social ideas, and ethnic alignments all made Austria a strange and unhealthy nation. The Habsburgs continued to rule as if the Austrian Empire were still the \"Habsburg Lands\", which you sometimes see printed on older maps of the Renaissance. This led to the breakdown of Austria, culminating in the July Crisis of 1914, which directly caused the conflict that would destroy the Austrian (at that point Austro-Hungarian) Empire in 1918. ", "provenance": null }, { "answer": "Are you talking about the 16th century event of \"unification\" under a Habsburg monarch, or the event of the 19th century organisation of the Empire into Austria-Hungary?\nOr are you maybe asking in general what the reason was why so many different nations stayed together for so long?\n\n\n(I just want to mention here that I can't speak for the Bohemian part of the Empire, but mostly for the Croatian and, I suspect, the Hungarian part.)\n\nWell anyway, generally speaking one of the main reasons is the threat of the Ottoman Empire. After the Battle of Mohacs in 1526, the Turks conquered a large part of the then independent Kingdom of Hungary, Bohemia and Croatia, and the last Hungarian king died without an heir. With the prospect of the Ottoman (Muslim) conquest, the nobles in the western parts of the Hungarian Kingdom elected Ferdinand Habsburg of neighbouring Austria (later Holy Roman Emperor) as the king to protect them. (Not to say that all Hungarians chose the Habsburgs over the Ottomans. 
Quite the opposite, many supported the Ottoman-backed Hungarian king in a kind of civil war.)\n\nWhat followed were several centuries of fierce continuous wars, skirmishes and border conflicts between the Habsburgs and the Ottomans for the area. For the nations on the borders, the Empire's protection was desperately needed despite all the bad things it brought with it, and as such, few attempts at splitting up were ever considered.\n\nIt lasted until around the beginning of the 18th century (we can take the 1699 Treaty of Karlowitz as a marking point, for example), when the Ottoman threat was almost gone and most of the areas of the old kingdom were back in Habsburg hands, and by then we already had a well-established state that the strict feudal system of law and inheritance made the de jure right of the Habsburgs, which few had the power, legality or will to attempt to dissolve.\n\nSince then, fast forward a couple of decades, and with the Enlightenment, Absolutism, the rise of the citizenry and later nationalism, and Napoleon (and with the outside threat of the Ottoman Empire gone), the Empire entered a series of years of political turmoil, which eventually resulted in the formation of the Austro-Hungarian dual empire. Still, then and there, as you noticed, there were not so many common factors between the nations in the Empire, and I would say the long tradition of the Empire was the main reason why no major political party thought about dissolution of the empire (together with the fear of being fiercely prosecuted for the mention of it, of course). It took a world war to finally break it.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2983", "title": "Austria-Hungary", "section": "Section::::Politics.:Ethnic relations.\n", "start_paragraph_id": 86, "start_character": 0, "end_paragraph_id": 86, "end_character": 554, "text": "The \"Austro-Hungarian Compromise of 1867\" created the personal union of the independent states of Hungary and Austria, linked under a common monarch also having joint institutions. The Hungarian majority asserted more of their identity within the Kingdom of Hungary, and it came to conflict with some of her own minorities. The imperial power of German speakers who controlled the Austrian half was resented by others. In addition, the emergence of nationalism in the newly independent Romania and Serbia also contributed to ethnic issues in the empire.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13275", "title": "Hungary", "section": "Section::::History.:From the 18th century to World War I.\n", "start_paragraph_id": 40, "start_character": 0, "end_paragraph_id": 40, "end_character": 1137, "text": "Because of external and internal problems, reforms seemed inevitable and major military defeats of Austria forced the Habsburgs to negotiate the Austro-Hungarian Compromise of 1867, by which the dual Monarchy of Austria–Hungary was formed. This Empire had the second largest area in Europe (after the Russian Empire), and it was the third most populous (after Russia and the German Empire). The two realms were governed separately by two parliaments from two capital cities, with a common monarch and common external and military policies. Economically, the empire was a customs union. The old Hungarian Constitution was restored, and Franz Joseph I was crowned as King of Hungary. The era witnessed impressive economic development. The formerly backward Hungarian economy became relatively modern and industrialized by the turn of the 20th century, although agriculture remained dominant until 1890. 
In 1873, the old capital Buda and Óbuda were officially united with Pest, thus creating the new metropolis of Budapest. Many of the state institutions and the modern administrative system of Hungary were established during this period.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "41003457", "title": "Economic history of World War I", "section": "Section::::Central Powers.:Austria-Hungary.\n", "start_paragraph_id": 78, "start_character": 0, "end_paragraph_id": 78, "end_character": 977, "text": "The Austro-Hungarian monarchical personal union of the two countries was a result of the Compromise of 1867. Kingdom of Hungary lost its former status after the Hungarian Revolution of 1848. However following the 1867 reforms, the Austrian and the Hungarian states became co-equal within the Empire. Austria-Hungary was geographically the second-largest country in Europe after the Russian Empire, at , and the third-most populous (after Russia and the German Empire). In comparison with Germany and Britain, the Austro-Hungarian economy lagged behind considerably, as sustained modernization had begun much later in Austria-Hungary. The Empire built up the fourth-largest machine building industry of the world, after the United States, Germany, and Britain. Austria-Hungary was also the world's third largest manufacturer and exporter of electric home appliances, electric industrial appliances and facilities for power plants, after the United States and the German Empire.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2983", "title": "Austria-Hungary", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 404, "text": "Austria-Hungary, often referred to as the Austro-Hungarian Empire or the Dual Monarchy, was a constitutional monarchy in Central and Eastern Europe between 1867 and 1918. It was formed when the Austrian Empire adopted a new constitution; as a result Austria (Cisleithania) and Hungary (Transleithania) were placed on equal footing. It dissolved into several new states at the end of the First World War.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "27335750", "title": "List of Major League Baseball players from Europe", "section": "Section::::Austria.:Austria-Hungary.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 494, "text": "Austria-Hungary was a monarchic union between the crowns of the Austrian Empire and the Kingdom of Hungary in Central Europe. The union was a result of the \"Ausgleich\" or Compromise of 1867, under which the Austrian House of Habsburg agreed to share power with the separate Hungarian government, dividing the territory of the former Austrian Empire between them. The Dual Monarchy had existed for 51 years when it dissolved on October 31, 1918 following military defeat in the First World War.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "39937022", "title": "International relations of the Great Powers (1814–1919)", "section": "Section::::The Eastern Question.:Long-term goals.:Austro-Hungarian Empire.\n", "start_paragraph_id": 158, "start_character": 0, "end_paragraph_id": 158, "end_character": 1305, "text": "The Austro-Hungarian Empire, headquartered at Vienna, was a largely rural, poor, multicultural state. It was operated by and for the Habsburg family, who demanded loyalty to the throne, but not to the nation. Nationalistic movements were growing rapidly. 
The most powerful were the Hungarians, who preserved their separate status within the Habsburg Monarchy and with the Austro-Hungarian Compromise of 1867, the creation of the Dual Monarchy they were getting practical equality. Other minorities, were highly frustrated, although some – especially the Jews – felt protected by the Empire. German nationalists, especially in the Sudetenland (part of Bohemia) however, looked to Berlin in the new German Empire. There was a small German-speaking Austrian element located around Vienna, but it did not display much sense of Austrian nationalism. That is it did not demand an independent state, rather it flourished by holding most of the high military and diplomatic offices in the Empire. Russia was the main enemy, As well as Slavic and nationalist groups inside the Empire (especially in Bosnia-Herzegovina) and in nearby Serbia. Although Austria, Germany, and Italy had a defensive military alliance – the Triple Alliance – Italy was dissatisfied and wanted a slice of territory controlled by Vienna. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4305070", "title": "History of Western civilization", "section": "Section::::Continental Europe: 1815–1870.\n", "start_paragraph_id": 135, "start_character": 0, "end_paragraph_id": 135, "end_character": 389, "text": "After years of dealing with Hungarian revolutionist, whose kingdom Austria had conquered centuries earlier, the Austrian emperor, Franz Joseph agreed to divide the empire into two parts: Austria and Hungary, and rule as both Emperor of Austria and king of Hungary. The new Austro-Hungarian Empire was created in 1867. The two peoples were united in loyalty to the monarch and Catholicism.\n", "bleu_score": null, "meta": null } ] } ]
null
6io37x
why is it so much easier to spend money than to earn it?
[ { "answer": "Because most employment will only pay you a set amount for an hour's work, limiting you in how much money you can make in a span of time, but you can never exhaust the human race's output of purchasable goods. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "54301172", "title": "Well-being contributing factors", "section": "Section::::Personal factors.:Personal Finance.\n", "start_paragraph_id": 177, "start_character": 0, "end_paragraph_id": 177, "end_character": 1014, "text": "It has been argued that money cannot effectively \"buy\" much happiness unless it is used in certain ways, and that \"Beyond the point at which people have enough to comfortably feed, clothe, and house themselves, having more money – even a lot more money – makes them only a little bit happier.\" In his book \"Stumbling on Happiness\", psychologist Daniel Gilbert described research suggesting money makes a significant difference to the poor (where basic needs are not yet met), but has a greatly diminished effect once one reaches middle class (i.e. the Easterlin paradox). Every dollar earned is just as valuable to happiness up to a $75,000 annual income, thereafter, the value of each additional dollar earns a diminishing amount of happiness. According to the latest systematic review of the economic literature on life satisfaction, one's perception of their financial circumstances fully mediates the effects of objective circumstances on one's well-being. People overestimate the influence of wealth by 100%.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "10203857", "title": "Contentment", "section": "Section::::General.:Contentment and positive psychology.:Money.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 496, "text": "Indeed, when one has met his basic needs and have more to spare, it is time to spend or give some to experience happiness. This is because happiness is really a state of in-and-out flow of one's energy. Using or giving money is an expression of out-flowing of one's life-state. Attempt to just hoard more and more in the belief that it brings more happiness can lead to the opposite result if only because the means – that is the pursuit of money for happiness – has unwittingly become the ends.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "54301172", "title": "Well-being contributing factors", "section": "Section::::Personal factors.:Personal Finance.\n", "start_paragraph_id": 185, "start_character": 0, "end_paragraph_id": 185, "end_character": 291, "text": "Some studies suggest, however, that people are happier after spending money on experiences, rather than physical things, and after spending money on others, rather than themselves. However, purchases that buy ‘time’, for instance, cleaners or cooks typically increase individual well-being.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5416", "title": "Capitalism", "section": "Section::::Capital accumulation.:Rate of accumulation.\n", "start_paragraph_id": 181, "start_character": 0, "end_paragraph_id": 181, "end_character": 429, "text": "Other things being equal, the greater the amount of profit-income that is disbursed as personal earnings and used for consumptive purposes, the lower the savings rate and the lower the rate of accumulation is likely to be. However, earnings spent on consumption can also stimulate market demand and higher investment. 
This is the cause of endless controversies in economic theory about \"how much to spend, and how much to save\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "43545389", "title": "Secular stagnation", "section": "Section::::Post-2009.\n", "start_paragraph_id": 19, "start_character": 0, "end_paragraph_id": 19, "end_character": 309, "text": "A third is that there is a \"persistent and disturbing reluctance of businesses to invest and consumers to spend\", perhaps in part because so much of the recent gains have gone to the people at the top, and they tend to save more of their money than people—ordinary working people who can't afford to do that.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5660960", "title": "Economic stagnation", "section": "Section::::Stagnation in the United States.:Post-2008 period.\n", "start_paragraph_id": 47, "start_character": 0, "end_paragraph_id": 47, "end_character": 309, "text": "A third is that there is a \"persistent and disturbing reluctance of businesses to invest and consumers to spend\", perhaps in part because so much of the recent gains have gone to the people at the top, and they tend to save more of their money than people—ordinary working people who can't afford to do that.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "206509", "title": "Simple living", "section": "Section::::Practices.:Reducing consumption, work time, and possessions.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 632, "text": "Some people practice simple living by reducing consumption. By lowering expenditure on goods or services, the time spent earning money can be reduced. The time saved may be used to pursue other interests, or help others through volunteering. Some may use the extra free time to improve their quality of life, for example pursuing creative activities such as art and crafts. Developing a detachment from money has led some individuals, such as Suelo and Mark Boyle, to live with no money at all. Reducing expenses may also lead to increasing savings, which can lead to financial independence and the possibility of early retirement.\n", "bleu_score": null, "meta": null } ] } ]
null
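The earning-versus-spending asymmetry described in the entry above is essentially a rate argument: income accrues at a bounded hourly rate, while spending is effectively instantaneous. A tiny illustration, with a hypothetical wage and purchase price chosen purely for the example:

```python
# Illustration of the earn/spend asymmetry: earning is rate-limited, spending is not.
HOURLY_WAGE = 20.0        # dollars per hour -- hypothetical example value
PURCHASE_PRICE = 1200.0   # cost of one purchase -- hypothetical example value
SECONDS_TO_SPEND = 5      # roughly how long it takes to tap "buy"

hours_to_earn = PURCHASE_PRICE / HOURLY_WAGE
speed_ratio = hours_to_earn * 3600 / SECONDS_TO_SPEND

print(f"Hours of work to earn ${PURCHASE_PRICE:.0f}: {hours_to_earn:.0f}")
print(f"Spending it takes about {SECONDS_TO_SPEND} seconds")
print(f"Spending is roughly {speed_ratio:,.0f}x faster than earning here")
```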
8lf3k2
What is the process for a new atom or element to form, specifically from the beginning when there was only hydrogen and helium?
[ { "answer": " > If all matter began from hydrogen and helium, how did we end up with 120+ elements?\n\nThere are 118 known elements, and some of them don't occur in significant amounts in nature.\n\n > Is it possible to create a specific element by mashing x amount of protons, neutrons, and electrons together?\n\nYes, although there are typically easier ways of producing a given element. We have many different kinds of nuclear reactions in our arsenal, and many stable (or nearly-stable) nuclides that we can use as a starting point.\n\n > Obviously I know this is not how it works AT ALL but how could other elements form from just 2 elements?\n\n[Here](_URL_0_) is a chart of most of the currently-known elements, with the primary production mechanisms shown. In these astrophysical sites, like neutron star mergers and supernovae, there are complicated networks of many nuclear reactions and decays happening. They produce many different isotopes of many different elements.", "provenance": null }, { "answer": "Stars fuse hydrogen into helium until they run out of hydrogen. Then the star begins to fuse helium until it runs out of that. The heavier the product of a fusion reaction is, the less energy is released by that reaction. Eventually, the star produces iron, a reaction that yields no energy at all. The energy produced by fusion pushes outward on a star’s outer layers, preventing it from collapsing. Once iron is produced, the energy propping up the outer layers dwindles and those outer layers are drawn toward the center of the star by gravity. The outer layers rush inwards fast enough that for about a second, there are temperatures 50% higher than the star’s normal temperature. This increased temperature allows fusion reactions that can’t take place ordinarily. This produces a wide range of elements heavier than iron. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "281260", "title": "Protogalaxy", "section": "Section::::Properties.:Composition.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 327, "text": "Since there had been no previous star formation to create other elements, protogalaxies would have been made up almost entirely of hydrogen and helium. The hydrogen would bond to form H molecules, with some exceptions. This would change as star formation began and produced more elements through the process of nuclear fusion.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19167840", "title": "Chronology of the universe", "section": "Section::::Early universe.:First molecules.\n", "start_paragraph_id": 78, "start_character": 0, "end_paragraph_id": 78, "end_character": 326, "text": "At around 100,000 years, the universe has cooled enough for helium hydride, the first molecule, to form. In April 2019, this molecule was first announced to have been discovered in interstellar space. (Much later, atomic hydrogen reacts with helium hydride to create molecular hydrogen, the fuel required for star formation.)\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "19167840", "title": "Chronology of the universe", "section": "Section::::Early universe.:Recombination, photon decoupling, and the cosmic microwave background (CMB).\n", "start_paragraph_id": 82, "start_character": 0, "end_paragraph_id": 82, "end_character": 410, "text": "At around 377,000 years, the universe has cooled to a point where free electrons can combine with the hydrogen and helium nuclei to form neutral atoms. 
This process is relatively fast (and faster for the helium than for the hydrogen), and is known as recombination. The name is slightly inaccurate and is given for historical reasons: in fact the electrons and atomic nuclei were combining for the first time.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "902", "title": "Atom", "section": "Section::::Origin and current state.:Formation.\n", "start_paragraph_id": 111, "start_character": 0, "end_paragraph_id": 111, "end_character": 310, "text": "Since the Big Bang, which produced no carbon or heavier elements, atomic nuclei have been combined in stars through the process of nuclear fusion to produce more of the element helium, and (via the triple alpha process) the sequence of elements from carbon up to iron; see stellar nucleosynthesis for details.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4651", "title": "Beta decay", "section": "Section::::Description.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 391, "text": "In this form of decay, the original element becomes a new chemical element in a process known as nuclear transmutation. This new element has an unchanged mass number , but an atomic number that is increased by one. As in all nuclear decays, the decaying element (in this case ) is known as the \"parent nuclide\" while the resulting element (in this case ) is known as the \"daughter nuclide\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2324648", "title": "Oddo–Harkins rule", "section": "Section::::Definition.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 382, "text": "All atoms bigger than hydrogen are formed in stars or supernovae through nucleosynthesis, when gravity, temperature and pressure reach levels high enough to fuse protons and neutrons together. Protons and neutrons form the atomic nucleus, which accumulates electrons to form atoms. The number of protons in the nucleus, called atomic number, uniquely identifies a chemical element.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17052696", "title": "History of Solar System formation and evolution hypotheses", "section": "Section::::Solar evolution hypotheses.\n", "start_paragraph_id": 75, "start_character": 0, "end_paragraph_id": 75, "end_character": 1075, "text": "Albert Einstein's development of the theory of relativity in 1905 led to the understanding that nuclear reactions could create new elements from smaller precursors, with the loss of energy. In his treatise \"Stars and Atoms\", Arthur Eddington suggested that pressures and temperatures within stars were great enough for hydrogen nuclei to fuse into helium; a process which could produce the massive amounts of energy required to power the Sun. In 1935, Eddington went further and suggested that other elements might also form within stars. Spectral evidence collected after 1945 showed that the distribution of the commonest chemical elements, carbon, hydrogen, oxygen, nitrogen, neon, iron etc., was fairly uniform across the galaxy. This suggested that these elements had a common origin. A number of anomalies in the proportions hinted at an underlying mechanism for creation. Lead has a higher atomic weight than gold, but is far more common. Hydrogen and helium (elements 1 and 2) are virtually ubiquitous yet lithium and beryllium (elements 3 and 4) are extremely rare.\n", "bleu_score": null, "meta": null } ] } ]
null
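The energy bookkeeping behind the fusion chain described in the nucleosynthesis entry above can be made concrete with the mass defect of its first stage, hydrogen to helium. This worked example is an editorial addition, not part of the original answers; the atomic masses are standard reference values and 931.5 MeV per atomic mass unit is the E = mc² conversion.

```python
# Worked example: energy released when four hydrogen nuclei end up as one helium-4.
M_H1 = 1.007825     # hydrogen-1 atomic mass [u] (includes its electron)
M_HE4 = 4.002602    # helium-4 atomic mass [u]
U_TO_MEV = 931.494  # energy equivalent of 1 u, from E = m c^2 [MeV]

mass_in = 4 * M_H1
mass_defect = mass_in - M_HE4           # mass that disappears in the process
energy_mev = mass_defect * U_TO_MEV     # carried away by photons, positrons, neutrinos

print(f"Mass defect: {mass_defect:.4f} u ({mass_defect / mass_in:.2%} of the input mass)")
print(f"Energy released per helium-4 nucleus formed: {energy_mev:.1f} MeV")
```

The same bookkeeping explains the iron cutoff mentioned in the second answer: binding energy per nucleon peaks near iron-56, so making anything heavier requires an energy input or neutron capture in the violent environments (supernovae, neutron-star mergers) named in the first answer.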
2ial7v
why do we try to keep people in vegetative states alive, who are not going to recover?
[ { "answer": "There are several reasons. It's the medical team's ethical and professional duty to maintain the person's life; they have a duty of care to the patient. It could be considered medically negligent or even murder/manslaughter if they ended the person's life. Also, they can't say for sure whether the person will recover or not.", "provenance": null }, { "answer": "* doctors can be wrong about whether a patient is in a vegetative state\n* family members are often in denial about the prospects of the patient to recover\n* it is conceivable that some future medical breakthrough could help some people in vegetative states", "provenance": null }, { "answer": "Usually it's because there is some hope of possible recovery. People hear stories about patients coming out of a coma after many years or suddenly springing to life and being able to talk [after being given certain medications](_URL_0_) or listening to certain music, for example. It really comes down to the Next of Kin not wanting to let go and forever lose any possibility of recovery.", "provenance": null }, { "answer": "Honestly, I think it's because we don't deal with death well. I've worked in healthcare for about 8 years and have seen a lot of very dead but still living people that are only still alive because their family won't let them go.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "27070098", "title": "Surrogate decision-maker", "section": "Section::::Laws related to surrogacy in the US.:Current status of the law.\n", "start_paragraph_id": 48, "start_character": 0, "end_paragraph_id": 48, "end_character": 381, "text": "3. Most people want to be kept \"alive\" by machines. Most people don't want to drain their family's funds to keep them alive especially when they are in a persistent vegetative state with no possible chance for recovery. In these cases, it is often weighing the risks and benefits of keeping the patient breathing, when they are clearly not living their life to its full potential.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "49453403", "title": "Revival (comics)", "section": "Section::::Plot summary.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 666, "text": "The revivers are immortal and heal from all wounds. Some of them begin to take physical and non-physical risks because they do not fear physical or emotional harm to themselves or others. When they experience strong negative emotions, they cry blood and become violent. As a result, Dana and Ramin investigate several murders in the weeks following Revival Day. Meanwhile, some people outside the quarantine area believe the government is covering up a religious miracle. Others believe they can absorb the revivers' immortality by ingesting their flesh, leading to an active smuggling business that moves body parts of revivers and other recently dead individuals.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2483516", "title": "Resocialization", "section": "Section::::Resocialization institutions.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 496, "text": "No two people respond to resocialization programs in the same manner. While some residents are found to be \"rehabilitated\", others might become bitter and hostile. As well, over a long period of time, a strictly controlled environment can destroy a person's ability to make decisions and live independently. 
This is known as institutionalisation, a negative outcome of total institution that prevents an individual from ever functioning effectively in the outside world again. (Sproule, 154-155)\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13061359", "title": "International healthcare accreditation", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 418, "text": "Apart from using hospitals and healthcare services to regain their health if it has become impaired, or to prevent ill health occurring in the first place, people the world over may also use them for a wide variety of other services, for example “improving upon nature” (e.g. cosmetic surgery, gender reassignment surgery or acquiring help to overcome difficulties with becoming a parent (e.g. infertility treatment).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17618410", "title": "Humanitarian Accountability Partnership International", "section": "Section::::Humanitarian accountability.:Importance of accountability.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 206, "text": "People who have survived conflict or a natural disaster often have acute needs. Frequently, they have been displaced from their homes and lack their usual economic, social or psychological support systems.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5369993", "title": "Dead Souls (Rankin novel)", "section": "Section::::Plot summary.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 207, "text": "Those left alive must continue to cope with their problems. Knowing some answers does not really resolve the divisions and imperfections in society which it is the job of Rebus and his colleagues to police.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2548579", "title": "Voluntary euthanasia", "section": "Section::::Arguments for and against.:For.\n", "start_paragraph_id": 41, "start_character": 0, "end_paragraph_id": 41, "end_character": 445, "text": "Today in many countries there is a shortage of hospital space. Medical personnel and hospital beds could be used for people whose lives could be saved instead of continuing the lives of those who want to die, thus increasing the general quality of care and shortening hospital waiting lists. It is a burden to keep people alive past the point they can contribute to society, especially if the resources used could be spent on a curable ailment.\n", "bleu_score": null, "meta": null } ] } ]
null
fahc23
why do producers list "ingredient a and/or b" in the ingredients section? do they know what they are putting in the food?
[ { "answer": "It's usually because they may have switched ingredients, or different factories use different products for the same purpose. So instead of different wrappers this covers then all", "provenance": null }, { "answer": "They either have multiple facilities with different suppliers or they alternate as the market price fluctuates.\n\nThey do know internally what went into what batch for traceability purposes, but they don't want to print different labels.", "provenance": null }, { "answer": "Most business will have alternates available to continue producing their product in the event that:\n\nA) The first option becomes unavailable.\nB) The price of one option becomes too high.\nC) The availability is regionally specific.\n\nBecause they know they will change the ingredients sometimes, they print both versions on the label to avoid having to make label changes when they change the ingredients.", "provenance": null }, { "answer": "It's cheaper and easier to print a million labels that all say the same thing than it is to print half a million labels with ingredient A and half a million with ingredient B.\n\nTypically when you see alternative ingredients on a label, it either means that different production centers use slightly different ingredients, they have products coming from different suppliers, or they've changed ingredients and they're using up their old ingredient while they transition to the new one. It could also be that they change up ingredients based on market price - substituting one veggie for another similar relative when the price goes up too high, for example.\n\nSince the consumer doesn't know which batch they have, it can be helpful (for allergies, for example) to list both, to be extra safe.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "2679622", "title": "Ingredient", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 424, "text": "An ingredient is a substance that forms part of a mixture (in a general sense). For example, in cooking, recipes specify which ingredients are used to prepare a specific dish. Many commercial products contain secret ingredients that are purported to make them better than competing products. In the pharmaceutical industry, an active ingredient is that part of a formulation that yields the effect expected by the customer.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9686210", "title": "Center for Food Safety and Applied Nutrition", "section": "Section::::Area of regulation.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 320, "text": "FDA maintains a list of additives that are used in food in the United States as well as a list of additives Generally Recognized as Safe (GRAS, pronounced grass). Products that contain ingredients that are not GRAS are usually dietary supplements (for example, many energy drinks contain stimulants which are not GRAS).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7888444", "title": "Food chemistry", "section": "Section::::Food additives.\n", "start_paragraph_id": 32, "start_character": 0, "end_paragraph_id": 32, "end_character": 373, "text": "Food additives are substances added to food for preserving flavours, or improving taste or appearance. The processes are as old as adding vinegar for pickling or as an emulsifier for emulsion mixtures like mayonnaise. 
These are generally listed by \"E number\" in the European Union or GRAS (\"generally recognized as safe\") by the United States Food and Drug Administration.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1517937", "title": "Yusheng", "section": "Section::::Serving.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 505, "text": "The base ingredients are first served. The leader amongst the diners or the restaurant server proceeds to add ingredients such as the fish, the crackers and the sauces while saying \"auspicious wishes\" ( ) as each ingredient is added, typically related to the specific ingredient being added. For example, phrases such as (; \"may there be abundance year after year\") are uttered as the fish is added, as the Chinese word for \"surplus\" or \"abundance\" ( ) sounds the same as the Chinese word for \"fish\" ( ).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37707816", "title": "Ingredient branding", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 606, "text": "Ingredient-branding takes a special position in marketing, as it cannot be clearly allocated to either industrial or consumer goods marketing. On the one hand, the consumer is the end-user of the ingredient, but at the same time is not part of the buying decision for the component, as this is up to the producer of the end product. On the other hand the producer will only decide on the usage of the ingredient - or at least take it into account in the communication policy - if the image of this ingredient will have an effect on the consumer, meaning a positive influence on his or her buying decision.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "58391294", "title": "Food labelling and advertising law (Chile)", "section": "Section::::Legislation and regulations.:Law 20.606.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 434, "text": "BULLET::::- A duty of manufacturers, producers, distributors and importers of food to inform, on their containers or on their labels, as to the ingredients contained, including all additives and their nutritional information, to follow food health regulations in which the characteristics and content of said food will determine the labelling, and especially to ensure that the information is sufficiently visible and comprehensible.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7391076", "title": "Barbecue sauce", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 294, "text": "The ingredients vary widely even within individual countries, but most include some variation on vinegar, tomato paste, or mayonnaise (or a combination thereof) as a base, as well as liquid smoke, onion powder, spices such as mustard and black pepper, and sweeteners such as sugar or molasses.\n", "bleu_score": null, "meta": null } ] } ]
null
a1xnvz
how having a good lawyer can get you out of serious charges.
[ { "answer": "It's sort of like asking 'why hire a plumber, when any idiot with a wrench can do a pipe fitting?' Maybe so, but a professional is probably gonna get you better results.\n\nSo it is with lawyers; there are many particularities in the legal system that a good lawyer will know how to navigate/exploit, that a bad/overworked lawyer (i.e. public defender) will not.\n\n(that said the OJ acquittal was much more about incompetent prosecution than anything else)", "provenance": null }, { "answer": "Evidence is only as strong as the arguments you can make about it. A good lawyer can come up with some crafty arguments that a poor lawyer cannot; they can also probably argue about it more effectively. If you pay well, they may be able to spend more time on your case which will help them discover new evidence. ", "provenance": null }, { "answer": "The system has a lot of red tape that requires a lot of hands to do things. Someone forgot to sign off on a custody chain document, a tried police offer mistypes a report or something is left out when it should have been locked. The more money you have the more resources you have. \n\nHigh price lawyers have investigators, (typically ex-L.E.O.s) and teams of people that review every detail of the process and go the extra length. All this is billable hours and as long as the client can afford it they will throw man power till something comes up. Most people who are guilty get off on these types of technicalities because a public defender or middle to low class citizen can not afford to go to these lengths. \n\n", "provenance": null }, { "answer": "Evidence is available, but it's almost never the perfect video of the actual crime, with close-ups of their face and their photo ID while they commit the crime. Because nobody is that stupid (well, only a few are).\n\nSo usually it's a matter of convincing a judge and / or jury that the couple straws that you have as evidence are proof, beyond any doubt, that the crime was committed by the person. For example you have some deformed bullets of a certain caliber that may be from a gun of the same type that the person has purchased earlier, maybe some hair, or blood that was recovered, maybe a dented vehicle, etc.\n\nThe lawyers still have to convince the judge, and they have to do it while the opposing lawyers argue against every single thing that's said, and discredit any single piece of evidence with examples from the past of how vehicles got dented or hair got into places in otherwise innocent situations, etc.\n\nThe better the lawyers, and the more money is spent on their fees, the longer time they can spend digging through past cases, gathering examples of (similar) evidence where the person was actually innocent, and pulling arguments from previous lawsuits in the past that can discredit whatever story the opposing lawyers are trying to \"convince\" the judge / jury of.", "provenance": null }, { "answer": "A quality legal team can do a number of things that affect the outcome of a trial. Each piece of evidence can be challenged. Did the police gather it legally and did they maintain a proper chain of custody? If you can get evidence thrown out or just appear to be tainted in the eyes of the jury, you weaken the prosecution’s case. \n\nHigh priced legal teams will also bring in their own experts to argue the validity of evidence and testimony. \n\nTop legal teams also include jury selection experts to try ensure the friendliest possible jury. 
\n\nFinally, comprehensive legal teams will investigate the crime and provide alternate explanations for the crime. \n\n", "provenance": null }, { "answer": "In addition to the other answers, much of the law depends on precedent. If a higher judge ruled a particular way in a past case, that sets a precedent. \n\nE.g. Roe v. Wade or Arizona v. Miranda are Supreme Court case precedents that essentially tell lower judges how to rule. \n\nBut most precedent isn’t quite as well known. And there are, literally, millions of them! If your attorney can find a precedent that makes an argument in your favor or dismisses a particular piece of evidence, etc., then they can make a big difference. \n\nBut finding the right precedent to make such an argument is like looking for a needle in a haystack. But a lawyer with more experience and more experienced staff and more staff is more likely to find that argument while the public defender with less staff and resources will likely never find such an argument. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "353859", "title": "Confidentiality", "section": "Section::::Legal confidentiality.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 317, "text": "However, most jurisdictions have exceptions for situations where the lawyer has reason to believe that the client may kill or seriously injure someone, may cause substantial injury to the financial interest or property of another, or is using (or seeking to use) the lawyer's services to perpetrate a crime or fraud.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15605490", "title": "Law of Ukraine", "section": "Section::::Lawyers and law firms.\n", "start_paragraph_id": 94, "start_character": 0, "end_paragraph_id": 94, "end_character": 219, "text": "In order to defend a person charged with criminal offense, an attorney must have a certificate entitling him to practice law (issued by Regional Qualification-Disciplinary Bar Commission) or a Power of Attorney letter.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14175314", "title": "Hyde Amendment (1997)", "section": "Section::::Need for restraint.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 558, "text": "Most prosecutors are elected officials, but that is not true of federal prosecutors, whose conduct is subject to the Hyde Amendment. The decision to file charges can be affected by public opinion or politically-powerful groups. If prosecutors do not carefully screen the cases chosen to pursue, individuals may be charged even when there is insufficient evidence. The high public profile of the suspect or the sensational nature of the crime increasingly has more bearing on the decision to charge than the weight of the evidence or the nature of the crime.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "440936", "title": "Prosecutor", "section": "Section::::Common law jurisdictions.:United States.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 296, "text": "Prosecutors in some jurisdictions have the discretion to not pursue criminal charges, even when there is probable cause, if they determine that there is no reasonable likelihood of conviction. Prosecutors may dismiss charges in this situation by seeking a voluntary dismissal or nolle prosequi. 
\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "499423", "title": "Legal aid", "section": "Section::::By country.:United States.\n", "start_paragraph_id": 83, "start_character": 0, "end_paragraph_id": 83, "end_character": 311, "text": "Defendants under criminal prosecution who cannot afford to hire an attorney are not only guaranteed legal aid related to the charges, but they are guaranteed legal representation, either in the form of public defenders, or in absence of provisions for such or due to case overloads, a court-appointed attorney.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3008265", "title": "Right to counsel", "section": "Section::::In the United States.:Appointment of counsel.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 366, "text": "A criminal defendant unable to retain counsel has the right to appointed counsel at the government's expense. While the Supreme Court recognized this right gradually, it currently applies in all federal and state criminal proceedings where the defendant faces authorized imprisonment greater than one year (a \"felony\") or where the defendant is actually imprisoned.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14141359", "title": "Criminal defense lawyer", "section": "Section::::United States.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 485, "text": "The accused may hire a criminal defense lawyer to help with counsel and representation dealing with police or other investigators, perform his or her own investigation, and at times present exculpatory evidence that negates potential charges by the prosecutor. Criminal defense lawyers in the United States who are employed by governmental entities such as counties, state governments, and the federal government are often referred to as public defenders or court-appointed attorneys.\n", "bleu_score": null, "meta": null } ] } ]
null
drpogv
the interstellar medium is hot? 54,000 degrees f? how is this possible?
[ { "answer": "Key phrase: **near**-vacuum. The interstellar medium is so thin that it can be treated as a vacuum for all practical purposes, but it’s not a perfect vacuum. It contains the occasional particle, and such particles can be measured to derive a temperature.", "provenance": null }, { "answer": "To think of temperature as average kinetic energy of molecules even it is not exactly the thermodynamic definition of it.\n\nThe important part is average kinetic energy of molecules and not the total kinetic energy per unit of volume or something similar. So if each of the very few particles you have out there have a higher kinetic energy you have high temperature.\n\nCompare warm water and air at the same temperature. A hair dryer might have air at 140 F (60C) and it feel nice and warm on your skin but if you would submerge you hand in water at the same temperatur you will get burn damage in a few second.\n\nSo it is quite clear that energy and temperature is not directly but depend on the medium. The same way hot air contain less energy per unit of volume then water, the few particle in vacuum contain less energy then air at the same temperature.\n\n & #x200B;\n\nWater contain 4.12 J/(cm\\^3\\*K) compare to air that is at 0.0012J/(cm\\^3\\*K) and a perfect vacuum would be 0J/(cm\\^3\\*K) . The number is energy need to increase the same volume a degree, the exact unit is not important just that for water you need 3400 times more energy then for air. So the energy difference per unit of volume and degree is a lot less between air and and a near perfect vacuum then between water and air.\n\nSo we are used to huge energy difference at the same temperature down here on earth. For near perfect vacuum it is just even lower the air.", "provenance": null }, { "answer": "To add to this F is a measure of temperature not heat. Heat is measured in Joules not F. The interstellar medium has high temperature but low heat - like the way a spark has a higher temperature but less heat than a bath", "provenance": null }, { "answer": "The notion of temperature starts to break down in near-vacuum conditions. \n\nTemperature is often expressed as an average of molecular speed when you have a bunch of molecules bumping into one another, they transfer momentum back and forth and that keeps most of them close to that average. But in a near-vacuum, molecules only collide infrequency. A cubic meter of space in the solar system might have a million atoms and molecules in it, but they aren't interacting, they are just zipping past each other. Since many were set loose by violence processes in the sun, they are going really fast and keep going really fast. We can apply the same maths we use to feature temperature in a closed, interacting system to produce some number, but it doesn't really mean the same thing.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "69453", "title": "Interstellar medium", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 1051, "text": "In all phases, the interstellar medium is extremely tenuous by terrestrial standards. In cool, dense regions of the ISM, matter is primarily in molecular form, and reaches number densities of 10 molecules per cm (1 million molecules per cm). In hot, diffuse regions of the ISM, matter is primarily ionized, and the density may be as low as 10 ions per cm. 
Compare this with a number density of roughly 10 molecules per cm for air at sea level, and 10 molecules per cm (10 billion molecules per cm) for a laboratory high-vacuum chamber. By mass, 99% of the ISM is gas in any form, and 1% is dust. Of the gas in the ISM, by number 91% of atoms are hydrogen and 8.9% are helium, with 0.1% being atoms of elements heavier than hydrogen or helium, known as \"metals\" in astronomical parlance. By mass this amounts to 70% hydrogen, 28% helium, and 1.5% heavier elements. The hydrogen and helium are primarily a result of primordial nucleosynthesis, while the heavier elements in the ISM are mostly a result of enrichment in the process of stellar evolution.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "69453", "title": "Interstellar medium", "section": "Section::::Interstellar matter.:Interaction with interplanetary medium.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 585, "text": "The interstellar medium begins where the interplanetary medium of the Solar System ends. The solar wind slows to subsonic velocities at the termination shock, 90–100 astronomical units from the Sun. In the region beyond the termination shock, called the heliosheath, interstellar matter interacts with the solar wind. Voyager 1, the farthest human-made object from the Earth (after 1998), crossed the termination shock December 16, 2004 and later entered interstellar space when it crossed the heliopause on August 25, 2012, providing the first direct probe of conditions in the ISM .\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22689597", "title": "Warm–hot intergalactic medium", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 928, "text": "The warm–hot intergalactic medium (WHIM) refers to a sparse, warm-to-hot (10 to 10 K) plasma that cosmologists believe to exist in the spaces between galaxies and to contain 40–50% of the baryons (that is, 'normal matter' which exists as plasma or as atoms and molecules, in contrast to dark matter) in the universe at the current epoch. It can be described as a web of hot, diffuse gas. Much of what is known about the warmhot intergalactic medium comes from computer simulations of the cosmos. The WHIM is expected to form a filamentary structure of tenuous, highly ionized baryons with a density of 1−10 particles per cubic meter. Within the WHIM, gas shocks are created as a result of active galactic nuclei, along with the gravitationally-driven processes of merging and accretion. 
Part of the gravitational energy supplied by these effects is converted into thermal emissions of the matter by collisionless shock heating.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "39674", "title": "Planetary nebula", "section": "Section::::Origins.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 291, "text": "The venting of atmosphere continues unabated into interstellar space, but when the outer surface of the exposed core reaches temperatures exceeding about 30,000 K, there are enough emitted ultraviolet photons to ionize the ejected atmosphere, causing the gas to shine as a planetary nebula.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25101402", "title": "Astrophysical X-ray source", "section": "Section::::X-ray interstellar medium.\n", "start_paragraph_id": 152, "start_character": 0, "end_paragraph_id": 152, "end_character": 626, "text": "The Hot Ionized Medium (HIM), sometimes consisting of Coronal gas, in the temperature range 10 – 10 K emits X-rays. Stellar winds from young clusters of stars (often with giant or supergiant HII regions surrounding them) and shock waves created by supernovae inject enormous amounts of energy into their surroundings, which leads to hypersonic turbulence. The resultant structures – of varying sizes – can be observed, such as stellar wind bubbles and superbubbles of hot gas, by X-ray satellite telescopes. The Sun is currently traveling through the Local Interstellar Cloud, a denser region in the low-density Local Bubble.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "69453", "title": "Interstellar medium", "section": "Section::::Heating and cooling.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 693, "text": "The ISM is usually far from thermodynamic equilibrium. Collisions establish a Maxwell–Boltzmann distribution of velocities, and the 'temperature' normally used to describe interstellar gas is the 'kinetic temperature', which describes the temperature at which the particles would have the observed Maxwell–Boltzmann velocity distribution in thermodynamic equilibrium. However, the interstellar radiation field is typically much weaker than a medium in thermodynamic equilibrium; it is most often roughly that of an A star (surface temperature of ~10,000 K) highly diluted. Therefore, bound levels within an atom or molecule in the ISM are rarely populated according to the Boltzmann formula .\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "558397", "title": "Rogue planet", "section": "Section::::Retention of heat in interstellar space.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 382, "text": "Interstellar planets generate little heat and are not heated by a star. In 1998, David J. Stevenson theorized that some planet-sized objects adrift in interstellar space might sustain a thick atmosphere that would not freeze out. He proposed that these atmospheres would be preserved by the pressure-induced far-infrared radiation opacity of a thick hydrogen-containing atmosphere.\n", "bleu_score": null, "meta": null } ] } ]
null
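To put numbers on the "high temperature, almost no heat" point made in the answers above, here is a minimal sketch. It assumes an ideal gas where energy per unit volume is roughly (3/2)·n·k_B·T, and the particle densities used are illustrative round numbers (the real interstellar medium spans many orders of magnitude in density depending on the phase).

```python
# Minimal sketch: translational energy per unit volume ~ (3/2) * n * k_B * T.
# The densities below are illustrative round numbers, not measured values.
K_B = 1.380649e-23  # Boltzmann constant, J/K

def thermal_energy_per_cm3(n_per_cm3, temperature_k):
    """Translational kinetic energy stored in 1 cm^3 of gas at temperature T."""
    return 1.5 * n_per_cm3 * K_B * temperature_k

air = thermal_energy_per_cm3(2.5e19, 300)   # sea-level air at room temperature
ism = thermal_energy_per_cm3(0.1, 30_000)   # ~54,000 F gas, ~0.1 particles/cm^3

print(f"air: {air:.2e} J/cm^3")   # ~1.6e-01 J
print(f"ISM: {ism:.2e} J/cm^3")   # ~6.2e-20 J
print(f"the 'hot' ISM holds ~{air / ism:.0e} times less heat per cm^3 than air")
```

Despite being a hundred times hotter than room-temperature air, the near-vacuum in this toy comparison carries roughly a billion-billion times less thermal energy per cubic centimetre, which is why a high kinetic temperature out there implies nothing you could feel as warmth.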
cymo2s
what would it take to change an atom?
[ { "answer": "Yes, bombarding with neutrons is one of the ways, but it's really slow.\n\nThe same process happens when iron is created in the star's core. It starts collapsing so quick that the elementary particles are being crammed inside the atoms, changing iron to heavier metals like gold and uranium.", "provenance": null }, { "answer": "What sort of element an atom is depends on the number of protons in its core.\n\nAny atom with 79 protons will be gold and any atom with 82 protons will be lead.\n\nAtoms are made up out of three things: Protons and neutrons in the core and lectrons in the shell orbiting around the core. Protons are positively charged and electrons negatively while neutrons are neutral. A core with a certain number of protons will sekk out to have the same number of electrons obrinting around it to even out the charge. \n\nElectrons orbiting around the atom especially those in the outermost layer determine chemistry, which is how we mostly deal with stuff around us. \n\nTo figure out what element you have to you have to look at how the electrons around it make it react and deduce from that the number of protons in the core. \n\nGold and lead differ by three protons. \n\nAtms also have neutrons in their core which are basically like protons without the charge. Having more or less neutrons does not matter for the question whether or not something is gold or another element. A different number of neutrons means that an atom is a different isotope of the same element. Different isotopes can have different degrees of stability. Instable atoms decay radioactive to turn into a different element. \n\nSo to turn lead into gold you would have to take away three protons from every atom of lead. \n\nThe easiest way to do that may be to simply add protons and neutrons until it radioactive decays by itself into gold. \n\nThis is complicated by the fact that we normally would add protons in pairs as an alpha particle. \n\nGetting from 82 to 79 by adding 2s and hoping for the best is non trivial, but possible in theory. \n\nIn practice you would waste a large amount of energy to end up with a small amount of gold that is most radioactive and won't last long. \n\nSome isotopes of mercury and platinum are better candidates for transmutation into gold by means of radioactive alchemy, but it would still not be an easy or profitable thing to do. \n\nDigging the stuff out of the ground is cheaper and less likely to give you cancer than trying your hand at creating a nuclear physics based philosophers stone.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "59444", "title": "Energy level", "section": "Section::::Energy level transitions.\n", "start_paragraph_id": 46, "start_character": 0, "end_paragraph_id": 46, "end_character": 972, "text": "Electrons can also be completely removed from a chemical species such as an atom, molecule, or ion. Complete removal of an electron from an atom can be a form of ionization, which is effectively moving the electron out to an orbital with an infinite principal quantum number, in effect so far away so as to have practically no more effect on the remaining atom (ion). For various types of atoms, there are 1st, 2nd, 3rd, etc. ionization energies for removing the 1st, then the 2nd, then the 3rd, etc. of the highest energy electrons, respectively, from the atom originally in the ground state. 
Energy in corresponding opposite quantities can also be released, sometimes in the form of photon energy, when electrons are added to positively charged ions or sometimes atoms. Molecules can also undergo transitions in their vibrational or rotational energy levels. Energy level transitions can also be nonradiative, meaning emission or absorption of a photon is not involved.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "45307", "title": "Atomic electron transition", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 333, "text": "Atomic electron transition is a change of an electron from one energy level to another within an atom or artificial atom. It appears discontinuous as the electron \"jumps\" from one energy level to another, typically in a few nanoseconds or less. It is also known as an electronic (de-)excitation or atomic transition or quantum jump.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "48579", "title": "Atomic, molecular, and optical physics", "section": "Section::::Electronic configuration.\n", "start_paragraph_id": 36, "start_character": 0, "end_paragraph_id": 36, "end_character": 384, "text": "Electrons that populate a shell are said to be in a bound state. The energy necessary to remove an electron from its shell (taking it to infinity) is called the binding energy. Any quantity of energy absorbed by the electron in excess of this amount is converted to kinetic energy according to the conservation of energy. The atom is said to have undergone the process of ionization.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "516756", "title": "Internal conversion", "section": "Section::::Mechanism.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 592, "text": "An amount of energy exceeding the atomic binding energy of the s electron must be supplied to that electron in order to eject it from the atom to result in IC; that is to say, internal conversion cannot happen if the decay energy of the nucleus is less than a certain threshold. There are a few radionuclides in which the decay energy is not sufficient to convert (eject) a 1s (K shell) electron, and these nuclides, to decay by internal conversion, must decay by ejecting electrons from the L or M or N shells (i.e., by ejecting 2s, 3s, or 4s electrons) as these binding energies are lower.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "59613", "title": "Ionization energy", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 308, "text": "where is any atom or molecule capable of ionization, is that atom or molecule with an electron removed, and is the removed electron. This is generally an endothermic process. Generally, the closer the outermost electrons are to the nucleus of the atom , the higher the atom's or element's ionization energy.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "11230975", "title": "Resonance fluorescence", "section": "Section::::General theory.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 215, "text": "So the atom decays exponentially and the atomic dipole moment shall oscillate. 
The dipole moment oscillates due to the Lamb shift, which is a shift in the energy levels of the atom due to fluctuations of the field.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9812205", "title": "Transition dipole moment", "section": "Section::::Applications.\n", "start_paragraph_id": 24, "start_character": 0, "end_paragraph_id": 24, "end_character": 341, "text": "of an electronic transition within similar atomic orbitals, such as s-s or p-p, is forbidden due to the triple integral returning an ungerade (odd) product. Such transitions only redistribute electrons within the same orbital and will return a zero product. If the triple integral returns a gerade (even) product, the transition is allowed.\n", "bleu_score": null, "meta": null } ] } ]
null
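As a bookkeeping illustration of the point in the answers above, that an atom's identity is fixed entirely by its proton count, here is a toy sketch. It only tracks the proton number Z; it is not a model of which real isotopes exist or how they actually decay, and the three-step lead-to-gold path is pure arithmetic (82 − 2 − 2 + 1 = 79), not a practical recipe.

```python
# Toy bookkeeping: the element is determined entirely by proton number Z.
# This does not model real decay chains, half-lives, or neutron counts.
ELEMENTS = {79: "gold (Au)", 80: "mercury (Hg)", 81: "thallium (Tl)", 82: "lead (Pb)"}

def element(z):
    return ELEMENTS.get(z, f"element with Z = {z}")

def alpha_emission(z):   # nucleus ejects 2 protons (and 2 neutrons): Z -> Z - 2
    return z - 2

def beta_minus(z):       # a neutron converts into a proton: Z -> Z + 1
    return z + 1

z = 82
print(element(z))                                   # lead (Pb)
z = beta_minus(alpha_emission(alpha_emission(z)))   # 82 -> 80 -> 78 -> 79
print(element(z))                                   # gold (Au)
```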
3ileqr
what's so great about kevin bacon?
[ { "answer": "I'm not so familiar with him myself, but just by the sheer volume and variety of films he has done, it's easy to pick something you like. Try typing \"bacon number [name of another actor]\" into Google, it will quickly became apparent just how well connected and involved he is.", "provenance": null }, { "answer": "He's usually part of large ensemble casts -- that's why the \"six degrees\" game is so easy with him. He's been on screen with pretty much every other big Hollywood name.\n\nHe also doesn't always play the good guy. A lot of his protagonist roles are ant-heroic. He also doesn't stick to just blockbusters -- Kevin Bacon has starred in a few controversial indie films as well.", "provenance": null }, { "answer": "You need to see \"Death Sentence.\" Directing is kind of meh, but his acting, and story is phenomenal. Also, hes in a shit tone of movies with a lot of famous actors. Hell, even his acting debut was in Animal House, there are ton's of famous actors in that movie alone: _URL_0_", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "24311404", "title": "IWantGreatCare", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 452, "text": "Bacon is a commentator on e-health and patient feedback. In June 2012, he formed part of a UK delegation invited to Washington for the Health Datapalooza, a US health data forum attended by UK health secretary Andrew Lansley, US President Barack Obama and Jon Bon Jovi. The discussion centred on how the two countries can work more closely to make health data a driver for innovation, economic growth and – most importantly – better care for patients.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "52303116", "title": "Chris Bacon (composer)", "section": "Section::::Career.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 201, "text": "Bacon has also contributed music to many film and TV productions, including \"American Hustle\", \"\", and \"Goosebumps\". Bacon's next project is the Amazon reboot of \"The Tick\", directed by Wally Pfister.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "43607", "title": "Six Degrees of Kevin Bacon", "section": "Section::::History.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 841, "text": "In a January 1994 interview with \"Premiere\" magazine Kevin Bacon mentioned while discussing the film \"The River Wild\" that \"he had worked with everybody in Hollywood or someone who's worked with them.\" Following this, a lengthy newsgroup thread which was headed \"Kevin Bacon is the Center of the Universe\" appeared. Four Albright College students, including Brian Turtle, claim to have invented the game that became known as \"Six Degrees of Kevin Bacon\" after watching two movies featuring Bacon back to back, \"Footloose\" and \"The Air Up There\". During the second they began to speculate on how many movies Bacon had been in and the number of people with whom he had worked. In the interview, Brian Turtle explained how \"it became one of our stupid party tricks I guess. 
People would throw names at us and we'd connect them to Kevin Bacon.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22122461", "title": "Bacon mania", "section": "Section::::Innovation.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 319, "text": "A 2009 \"Baltimore Sun\" story describes bacon as being \"more than bacon,\" and stated that for \"obsessive and adoring Bacon Nation it's about cheap thrills and a chance for Internet fame.\" Calling it \"like an extreme sport\", the article described the innovators and enthusiasts celebrating bacon in all its incarnations.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "16827", "title": "Kevin Bacon", "section": "Section::::Acting career.:2000s.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 580, "text": "Bacon was again acclaimed for a dark starring role playing an offending pedophile on parole in \"The Woodsman\" (2004), for which he was nominated for best actor and received the Independent Spirit Award. He appeared in the HBO Films production of \"Taking Chance\", based on an eponymous story written by Lieutenant Colonel Michael Strobl, an American Desert Storm war veteran. The film premiered on HBO on February 21, 2009. Bacon won a Golden Globe Award and a Screen Actors Guild Award for Outstanding Performance by a Male Actor in a Miniseries or Television Movie for his role.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8988845", "title": "SixDegrees.org", "section": "Section::::Business model.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 499, "text": "Kevin Bacon was influenced to start SixDegrees.org because of a game called Six Degrees of Kevin Bacon.This game was made up to link Kevin Bacon to another star that he played a movie role with him. In this game an actor is selected at random and in six connections or less this actor is linked to Kevin Bacon. Bacon realized he could do the same thing with charitable organizations. Kevin Bacon used this ideology of linking celebrities to charities to create the business model of SixDegrees.org.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "22719621", "title": "Bacon: A Love Story", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 590, "text": "Bacon: A Love Story, A Salty Survey of Everybody's Favorite Meat is a 2009 non-fiction book about bacon, written by American writer Heather Lauer. Lauer started the blog \"Bacon Unwrapped\" and a social networking site about bacon in 2005, after the idea came to her while she was out drinking with her two brothers; her online success inspired her to write the book, which describes curing and cooking bacon, gives over 20 bacon recipes, and analyzes the impact of bacon on popular culture. The text is interspersed with facts about bacon and bacon-related quips from comedian Jim Gaffigan.\n", "bleu_score": null, "meta": null } ] } ]
null
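The "Bacon number" trick mentioned in the answers above is just a shortest-path question on a co-star graph, so a breadth-first search answers it. A minimal sketch follows; the actor names and links are invented placeholder data, not real filmographies.

```python
# Minimal sketch of a Bacon-number lookup: BFS over a co-star graph.
# The graph below is made-up placeholder data.
from collections import deque

costars = {
    "Kevin Bacon": ["Actor A", "Actor B"],
    "Actor A": ["Kevin Bacon", "Actor C"],
    "Actor B": ["Kevin Bacon"],
    "Actor C": ["Actor A", "Actor D"],
    "Actor D": ["Actor C"],
}

def bacon_number(actor, graph, source="Kevin Bacon"):
    """Fewest shared-film links between `actor` and the source, or None."""
    queue, seen = deque([(source, 0)]), {source}
    while queue:
        name, hops = queue.popleft()
        if name == actor:
            return hops
        for costar in graph.get(name, []):
            if costar not in seen:
                seen.add(costar)
                queue.append((costar, hops + 1))
    return None  # no chain of shared films leads back to the source

print(bacon_number("Actor D", costars))  # 3
```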
czxrmd
why does a standing bike fall down but a moving bike does not?
[ { "answer": "The spinning wheels act like gyroscopes to stabilize the bike. The heavier the wheels and the faster they spin the more stable the bike is.", "provenance": null }, { "answer": "The moving bike has the contact point of the front wheel behind the centre of rotation for the steering, so it naturally straightens out, this tends to push a bike towards a position where it is stable.\n\nWhereas for a standing bike theres no motion, so the steering movement causes no force to stabilise the bike.", "provenance": null }, { "answer": "It's not just gyroscopes that others have mentioned. In fact, this actually plays a very small role, and it is still easy to ride a bike where there is a counter rotating wheel designed to eliminate all gyroscopic effects. \n\nIt does play a role in a normal bike, but only when you're going very fast. The main advantage the gyroscopic effect gives is making it easier to control the steering. \n\nThere are several mechanisms in play. Indeed you'll sometimes hear that scientists don't know how bicycles work, but that isn't really true. \n\nThe reality is the maths just gets quite complex. \n\n[minute physics has a great video which answers your question](_URL_0_)\n\nEssentially, bikes are designed so that the front wheel will automatically turn into the direction it is leaning, pushing the bike into an upright position again if it is moving forward.\n\nIf it is not moving forward, it will fall over.", "provenance": null }, { "answer": "The force that the wheels generate while the bike is moving is called centripetal (not centrifugal) force. Basically, what it says is that when you apply a rotational force around an axle, the force makes the axle point perpendicular to the rotation. So the bike's wheels literally force the bike to stay upright as they rotate perpendicular to the wheel's axle. I'm sure Wikipedia can explain it better, but hope this helps", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "5343488", "title": "Bicycle and motorcycle dynamics", "section": "Section::::Lateral dynamics.:Balance.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 658, "text": "Even when staying relatively motionless, a rider can balance a bike by the same principle. While performing a track stand, the rider can keep the line between the two contact patches under the combined center of mass by steering the front wheel to one side or the other and then moving forward and backward slightly to move the front contact patch from side to side as necessary. Forward motion can be generated simply by pedaling. Backwards motion can be generated the same way on a fixed-gear bicycle. Otherwise, the rider can take advantage of an opportune slope of the pavement or lurch the upper body backwards while the brakes are momentarily engaged.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1581009", "title": "Track stand", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 636, "text": "The track stand or standstill is a technique that bicycle riders can use to maintain balance while their bicycle remains stationary or moves only minimal distances. The technique originated in track cycling and is now used by other types of cyclists wishing to stop for a short time without putting a foot on the ground, such as bike commuters at stop signs. 
To perform a track stand, a cyclist holds the cranks in an approximately horizontal position with the front wheel steered to the left or right, and pedals forward, and back in the case of a fixed-gear bicycle, which the steered front wheel converts into a side-to-side motion.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1581009", "title": "Track stand", "section": "Section::::Technique.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 1143, "text": "A cyclist executing a basic track stand holds the bicycle's cranks in a horizontal position, with his or her dominant foot forward. Track stands executed on bicycles with a freewheel usually employ a small uphill section of ground. The uphill needs to be sufficient to allow the rider to create backward motion by relaxing pressure on the pedals, thus allowing the bike to roll backwards. Once the track stand is mastered, even a very tiny uphill section is sufficient: e.g. the camber of the road, a raised road marking, and so on. Where no such uphill exists, or even if the gradient is downhill, a track stand can be achieved on a freewheeling bicycle by using a brake to initiate a backwards movement. If a fixed-gear bicycle is being used, an uphill slope is not needed since the rider is able to simply back pedal to move backwards. In both cases forward motion is accomplished by pedalling forwards. The handlebars are held at approximately a 45 degree angle, converting the bike's forward and back motion into side-to-side motion beneath the rider's body. This allows the rider to keep the bike directly below their center of gravity.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "28635039", "title": "Balancing of rotating masses", "section": "Section::::Dynamic balance.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 360, "text": "This is seen when a bicycle wheel gets buckled. The wheel will not rotate itself when stationary due to gravity as it is still statically balanced, but will not rotate smoothly as the centre of mass is to the side of the centre bearing. The spokes on a bike wheel need to be tuned in order to stop this and keep the wheel operating as efficiently as possible.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8876936", "title": "Bicycle suspension", "section": "Section::::Recumbent bikes.\n", "start_paragraph_id": 111, "start_character": 0, "end_paragraph_id": 111, "end_character": 463, "text": "Many recumbent bicycles have at least a rear suspension because the rider is usually unable to lift themselves off the seat while riding. Single pivot is usually adequate when the pedaling thrust is horizontal - that is, forwards rather than downwards. This is usually the case provided the bottom bracket is higher than the seat's base height. Where the bottom bracket is significantly lower than the seat base, there may still be some pedalling-induced bounce.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1105729", "title": "Tall bike", "section": "Section::::Design considerations.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 394, "text": "One consistent issue is that the seat tends to end up in line with, or behind, the rear axle, which creates a powerful tendency to lift the front wheel of the bicycle on acceleration. 
Some bicycle builders simply accept this tendency, but others solve the problem by moving the seat post forward, lowering the handlebars, or by using a smaller wheel in front, typically a 24\" instead of a 26\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31831849", "title": "Cycling in Munich", "section": "Section::::Bicycle network.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 687, "text": "Being unnecessarily elevated above road level means that each intersection requires a ramp. These are rarely continuous; rolling down from and up onto the lowered curb lead to discomfort and possible wheel damage. The ramps between buildings and the road are constructed for motorised vehicles not for the pedestrians and cyclists for whom the path is for. This effect is very noticeable even when walking, and more so when guiding a pram or wheelchair or bicycle. At easily attainable speeds, vehicle instability results from being lifted from the saddle through the up and down motion, requiring the cyclist to travel much more slowly than he/she would on the street running parallel.\n", "bleu_score": null, "meta": null } ] } ]
null
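To give a rough sense of scale for the "gyroscopes play only a small role" point made in the answers above, here is a back-of-envelope sketch. Every number in it (wheel mass, speed, rider mass, lean angle) is an assumed typical value, and the comparison is only meant to show that a wheel's spin angular momentum is modest next to the torque gravity applies to a leaning bike; the steer-into-the-lean geometry described above does most of the balancing.

```python
# Back-of-envelope estimate with assumed typical values (not measured data).
import math

wheel_mass = 1.5        # kg, treated as a hoop (mass concentrated at the rim)
wheel_radius = 0.34     # m, roughly a 700c road wheel
speed = 5.0             # m/s, relaxed riding pace
rider_plus_bike = 80.0  # kg
com_height = 1.0        # m, rough centre-of-mass height above the ground
lean = math.radians(5)  # a gentle 5-degree lean
g = 9.81                # m/s^2

# Spin angular momentum of the front wheel: L = I * omega = (m r^2) * (v / r) = m r v
spin_L = wheel_mass * wheel_radius * speed

# Torque gravity exerts about the tyre contact line when the bike leans
tip_torque = rider_plus_bike * g * com_height * math.sin(lean)

print(f"front-wheel angular momentum: {spin_L:.2f} kg*m^2/s")   # ~2.6
print(f"gravity tipping torque      : {tip_torque:.1f} N*m")    # ~68
# For gyroscopic action alone to carry that torque, the spin axis would have to
# precess at roughly tip_torque / spin_L ~ 27 rad/s -- implausibly fast, which
# is why the steering geometry, not the gyroscope, does the real balancing work.
```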
9h1sio
Do we know why there is nothing left of the Circus Maximus in Rome when compared to many other structures from the area?
[ { "answer": "Contrary to popular belief, more of the Circus Maximus survives than one might think. The \"floor\" level of the race track is now about 10 meters below ground level, and has never been systematically excavated. The *spina* (the central median of the track) was partially dug in 1587 on the orders of Pope Sixtus V. They were looking for the great Egyptian obelisks which had once adorned it, and found two of them: the bronze age granite obelisk taken by Augustus from Heliopolis, which now stands in the Piazza del Populo; and the massive, 522-ton obelisk originally quarried under Thutmosis III (1500-1450 BCE) for the temple of Amon at Karnak, and now standing in the Piazza S. Giovanni in Laterano. In modern times, the starting gates at the west end were briefly excavated (1908) and then reburied. \n\nThere have been two major excavations of the still-extant seating on the east end (1930 and again 1979-88), which revealed a confusing series of Trajanic and later rebuildings. It was, apparently, a structure which underwent frequent renovation. From those excavations it also seems apparent that the vast majority of the highest banks of seats, which remained above ground level, were robbed out in the early Medieval period and taken elsewhere, probably for building material. Seats for 250,000 spectators' backsides is a lot of marble, after all. \n\nThe last races were held in 549 CE by the Ostrogoth Totila. Thereafter, the structure seems to have fallen quickly into disuse. In the early Medieval period, the area was used as grazing and farmland, and the seating structures were converted into a variety of industrial works. The whole area was cleared out and put in its present, relatively cleared state in the early 20th century.\n\nWhy hasn't Italy excavated more of the Circus? First, there is not much new to discover there which is easily accessible. Second, money, which Italy does not have much of these days, especially for archaeology. Third, the space is popular and frequently used (I saw Sting perform there in 2004).\n\nWhy do [some structures](_URL_0_) survive so well, while others virtually disappear? The christian church played a big role in the preservation or destruction of many ancient monuments. If it became sacred to the christians, it generally survived and was maintained; if not, then most places lacked the will or the funds to upkeep large and useless old pagan edifices. The local attitude towards preservation, for whatever reason, also played a role, as well as their access to sufficient building materials.\n\nUnfortunately, two of the best and most recent sources are not in English. One is Marcattili, *Circo Massimo : architetture, funzioni, culti, ideologia* (Roma : L'Erma di Bretschneider, 2009); the other is Polt, *Circus Maximus : das gesammelte Werk ; Geschichten, Stücke, Monologe und Dialoge* (Zürich : Kein & Aber, 2002).", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "53266260", "title": "Roman circus of Toledo", "section": "Section::::History.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 234, "text": "Given the size of the Circus, as it happened in almost all Hispanic-Roman cities, it was located on the outskirts of the walled enclosure. 
It is certain that from the city there was a causeway to the circus, which has not been found.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4142704", "title": "Roman circus of Mérida", "section": "Section::::Modern status.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 469, "text": "Mérida's circus remains very well preserved. As is true with the Circus Maximus, most circuses's structures have been destroyed over time as the area occupied by them was great and often in very flat land near their respective cities. The Mérida circus however has kept numerous structures, including the \"Porta Pompae\" (\"main entrance\"), the \"Porta Triumphalis\" (\"triumph gate\"), the \"spina\" (the longitudinal wall), the \"tribunal iudicium\" (\"tribune of the judges\").\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4142704", "title": "Roman circus of Mérida", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 351, "text": "The Roman circus of Mérida () is a ruined Roman circus in Mérida, Spain. Used for chariot racing, it was modelled on the Circus Maximus in Rome and other circus buildings throughout the empire. Measuring more than 400 m in length and 30 m of width, it is one of the best preserved examples of the Roman circus. It could house up to 30,000 spectators.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4657098", "title": "Flavian Palace", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 237, "text": "The imposing ruins which flank the southeastern side of the Palace above the Circus Maximus are a later addition built by Septimius Severus; they are the supporting piers for a large extension which completely covered the eastern slope.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "25507", "title": "Roman Empire", "section": "Section::::Daily life.:Recreation and spectacles.\n", "start_paragraph_id": 164, "start_character": 0, "end_paragraph_id": 164, "end_character": 738, "text": "Circuses were the largest structure regularly built in the Roman world, though the Greeks had their own architectural traditions for the similarly purposed hippodrome. The Flavian Amphitheatre, better known as the Colosseum, became the regular arena for blood sports in Rome after it opened in 80 AD. The circus races continued to be held more frequently. The Circus Maximus could seat around 150,000 spectators, and the Colosseum about 50,000 with standing room for about 10,000 more. Many Roman amphitheatres, circuses and theatres built in cities outside Italy are visible as ruins today. The local ruling elite were responsible for sponsoring spectacles and arena events, which both enhanced their status and drained their resources.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37328", "title": "Circus Maximus", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 525, "text": "The Circus Maximus (Latin for \"greatest\" or \"largest circus\"; Italian: Circo Massimo) is an ancient Roman chariot-racing stadium and mass entertainment venue located in Rome, Italy. Situated in the valley between the Aventine and Palatine Hills, it was the first and largest stadium in ancient Rome and its later Empire. It measured in length and in width and could accommodate over 150,000 spectators. In its fully developed form, it became the model for circuses throughout the Roman Empire. 
The site is now a public park.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37328", "title": "Circus Maximus", "section": "Section::::Topography and construction.:Regal era.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 577, "text": "The Circus Maximus was sited on the level ground of the Valley of Murcia \"(Vallis Murcia)\", between Rome's Aventine and Palatine Hills. In Rome's early days, the valley would have been rich agricultural land, prone to flooding from the river Tiber and the stream which divided the valley. The stream was probably bridged at an early date, at the two points where the track had to cross it, and the earliest races would have been held within an agricultural landscape, \"with nothing more than turning posts, banks where spectators could sit, and some shrines and sacred spots\".\n", "bleu_score": null, "meta": null } ] } ]
null
3midoi
Did Vikings prefer sword and shield or two handed in the battlefield
[ { "answer": "The favorite weapon set would be spear and shield, usually with some sort of knife or - if you could afford it - sword in case you needed to get more up-and-personal.\n\nFighting in the Viking Age relied heavily on shield walls - tight lines of men standing close together and protecting each other with their shields, much like the earlier Greek hoplite phalanx. And just like a hoplite phalanx, the most useful weapon was a spear - it gave you reach, was ideally suited for quick hard stabs, and could (depending on the size of the spearhead) be lighter and quicker than a sword (Viking Age swords tended to be about 2.5lb / 1.1kg; a spear with a moderately sized head and a light ash or hazel shaft could weigh less and be balanced more ideally for a thrust). Most importantly, spears gave each fighter the reach to support the person on either side, and to reach gaps between shields to the right and left instead of being forced to try to get around the shield immediately in front of him. Swords are much less useful in this style of fighting.\n\nThe goal of a shieldwall, however, was to push through the other side's formation and get them broken into smaller groups (and hopefully terrify them into running away), at which point swords and long knives (seaxes) might become more useful (though I've seen a fair number of early medieval wounds in the back from spears, so even then they continued to be useful).\n\nAnd of course, spears can also be thrown. Some were designed specifically to be javelins, but many more appear to have been rather versatile (so you could use it either way). And we have textual accounts of warriors starting off a battle by throwing spears, but keeping one back to use when the fighting became hand-to-hand (the [Battle of Maldon](_URL_0_) is particularly worth a look, if you want to read a great war poem with good descriptions of Viking Age fighting).\n\nSpears were so common that viking age stories like Beowulf refer to especially reliable soldiers as aeswiga ('spear warrior'), and whole people groups as the 'Spear Danes.'\n\nThere were other options besides spear and shield - we have people buried with axes that were pretty clearly designed to work better as weapons than tools (some made for use in two hands, toward the end of the Viking Age), and there's a Pictish stone carving showing a warrior with a spear in two hands instead of the more typical one. You don't really see 2-handed swords in this period, however, perhaps because the metallurgy required to make such a long and strong piece of steel was still in its infancy in western Europe, but likely also because the shield remained such an important point of a warrior's social status, enabling him to fight in the shieldwall and, in some early medieval law courts, a required proof of his good social standing.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1091379", "title": "Viking Age arms and armour", "section": "Section::::Weapons.:Sword.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 324, "text": "The Viking Age sword was for single-handed use to be combined with a shield, with a double edged blade length of up to 90 cm. Its shape was still very much based on the Roman spatha with a tight grip, long deep fuller and no pronounced cross-guard. 
It was not exclusive to the Vikings, but rather was used throughout Europe\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "44563788", "title": "Viking raid warfare and tactics", "section": "Section::::Common weapons.:Sword.\n", "start_paragraph_id": 34, "start_character": 0, "end_paragraph_id": 34, "end_character": 473, "text": "Viking Age swords were common in battles and raids. They were used as a secondary weapon when fighting had fallen out of formation or their primary weapon was damaged. While there were many variations of swords, the Vikings used double-edged swords, often with blades 90 centimeters long and 15 centimeters wide. These swords were designed for slashing and cutting, rather than thrusting, so the blade was carefully sharpened while the tip was often left relatively dull. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "32610", "title": "Vikings", "section": "Section::::Weapons and warfare.\n", "start_paragraph_id": 115, "start_character": 0, "end_paragraph_id": 115, "end_character": 1136, "text": "Knowledge about the arms and armour of the Viking age is based on archaeological finds, pictorial representation, and to some extent on the accounts in the Norse sagas and Norse laws recorded in the 13th century. According to custom, all free Norse men were required to own weapons and were permitted to carry them at all times. These arms were indicative of a Viking's social status: a wealthy Viking had a complete ensemble of a helmet, shield, mail shirt, and sword. However, swords were rarely used in battle, probably not sturdy enough for combat and most likely only used as symbolic or decorative items. A typical \"bóndi\" (freeman) was more likely to fight with a spear and shield, and most also carried a seax as a utility knife and side-arm. Bows were used in the opening stages of land battles and at sea, but they tended to be considered less \"honourable\" than melee weapons. Vikings were relatively unusual for the time in their use of axes as a main battle weapon. The Húscarls, the elite guard of King Cnut (and later of King Harold II) were armed with two-handed axes that could split shields or metal helmets with ease.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "44563788", "title": "Viking raid warfare and tactics", "section": "Section::::Common weapons.:Sword.\n", "start_paragraph_id": 35, "start_character": 0, "end_paragraph_id": 35, "end_character": 568, "text": "A sword was considered a personal object amongst Vikings. Warriors named their swords, as they felt such objects guarding their lives deserved identities. A sword, depending on the make, was often associated with prestige and value due to the importance of honor in the Viking Age. No real method has been discovered as to how the Vikings made their weapons, but it is believed that individual pieces were welded together. While the Vikings used their own swords in battle, they were interested in the Frankish battle swords because of their acclaimed craftsmanship. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "8774444", "title": "Pirates, Vikings and Knights II", "section": "Section::::Gameplay.:Classes.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 629, "text": "The Vikings can play as a berserker, huscarl or gestir. A berserker possess a two-handed bearded axe as well as a shorter axe and shortsword. 
Berserkers can use their special ability to drive themselves into a bloodlust, allowing them to move faster and attack more effectively. The huscarl is armed with a shortsword and a shield which can be used to ram enemies with their special ability, four throwing axes and a two-handed axe. The gestir is equipped with a shield and langseax, three javelins which can be thrown at enemies, and a spear which gestirs can use their special ability on to charge enemies and knock them away.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1091379", "title": "Viking Age arms and armour", "section": "Section::::Weapons.:Axe.\n", "start_paragraph_id": 29, "start_character": 0, "end_paragraph_id": 29, "end_character": 201, "text": "Vikings most commonly carried sturdy axes that could be thrown or swung with head-splitting force. The Mammen Axe is a famous example of such battle-axes, ideally suited for throwing and melee combat.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1110369", "title": "Battle axe", "section": "Section::::Overview.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 759, "text": "Battle axes are particularly associated in Western popular imagination with the Vikings. Certainly, Scandinavian foot soldiers and maritime marauders employed them as a stock weapon during their heyday, which extended from the beginning of the 8th century to the end of the 11th century. They produced several varieties, including specialized throwing axes (see francisca) and \"bearded\" axes or \"skeggox\" (so named for their trailing lower blade edge which increased cleaving power and could be used to catch the edge of an opponent's shield and pull it down, leaving the shield-bearer vulnerable to a follow-up blow). Viking axes were wielded with one hand or two, depending on the length of the plain wooden haft. (See entry for Viking Age arms and armor.)\n", "bleu_score": null, "meta": null } ] } ]
null
19itto
in the 1960s did most normal Russians still claim to believe in communism's superiority?
[ { "answer": "When talking about Russian perceptions of Communism you have to be careful who you're talking about. I think at no point besides perhaps late 1917 would peasants have positive opinions of Communists, and that was simply because they weren't the Provisional Government. The Peasants, like most of the Russian Empire, did not get along very well with the Bolsheviks during their rise to power and the 1921-1922 peasant revolts against Lenin were particularly brutal affairs. The status of workers is more controversial, they by and large believed in Communism but not Bolshevism and this brought the Workers Soviets into a series of brutal confrontations with Lenin's forces during and after the Civil War. What the former ruling elite, nobility, clergy, and conservatives thought of Communism is rather obvious.\n\nStalinism and its measures such as forced collectivization and the great terror undoubtedly contributed to mass resentment against Communists, particularly among those ethnic minorities that suffered the worst under Stalin (Ukrainians, Baltics, Belorussians). These regions became hotbeds of anti-Communist activity, so much so that they initially welcomed the Germans as liberators (though life under German occupation was soon discovered to be hardly preferable). \n\nI think the heyday of Communism among the Soviet Empires various populations would have been the Khrushchev era in any regard, particularly the early 60s. De-stalinization, economic improvement, a spirit of national unity (not achieved among all obviously) following their victory in WW2, and relaxing repression. I wouldn't say this meant the majority of the population \"believed\" in Communism, past experiences with Communism always made the Soviet people weary of Marxists. However there was certainly a spirit of \"things are getting better\", which unfortunately was undone with Brezhnev and the era of stagnation.\n\nThough it needs to be said Khrushchev was hardly a messiah or a liberal-democratic ruler. He was a Stalinist who renounced Stalinism mostly because of his own personal trauma/disdain for Stalin and never lost the thuggishly banal ruling policies of Stalins inner circle. He did however make a number of wise decisions (along with other unwise ones, such as the invasion of Hungary and Cuban Missile Crisis) which probably helped him rule during the Soviet Unions peak.\n", "provenance": null }, { "answer": "This is not exactly the answer, but looking at Russia's \"performance\" might tell us something about how the state was appreciated by Russians, and where the image of \"stagnant crappy Russia\" comes from. \n\n**1921-1960**\n\n[This graph](_URL_13_) shows the development of both the US and Russia from 1921 to 1960. In 1921, most of Russia wasn't industrialized and living conditions were much lower than conditions in the US. But in the next couple of decades, life expectancy and income per person rise dramatically. In 1960, the average Russian has a life expectancy and income per person that is similar to Italy and Israel, and close to Belgium and France. The difference between the US and Russia is way less than it was in 1960. \n\n\n**1960-1982**\n\n[The next graph](_URL_1_) shows the development of Russia, compared to Italy and Israel. Israel is the small green dot. Life expectancy keeps rising in Italy and Israel, but it stagnates in Russia. Income per person increased way more in Italy in Israel (and income per person is measured on an exponential scale in the graph!). 
\n\n**1989-1999**\n\n[Here you can see](_URL_2_) that many countries all over the world keep 'progressing', while Russia falls back, eventually resulting in living conditions similar to those in Turkey, completely incomparable with Italy and Israel, while they were once 'equals'. \n\n**Conclusion**\n\nUntil the 1960s, Russia was actually a very dynamic place that showed lots of progress, and I'm sure it impressed a lot of Russians. People became wealthier and healthier. The country left the pre-industrial paradigm where most people were farmers, and they encountered cars while they followed the space race. \n\nBut while the rest of the world 'continued', Russia was 'stuck'. The country would start stagnating in the 60s. The stagnation turned into outright regression in the 90s: while it was similar to 'developed countries' in 1960, it was incomparable to the West in 1990 and the following decade. \n\n**Source**\n\nI used [Gapminder](_URL_8_) for the data, it's very useful to take a look at the 'trajectory' of Russia. \n\n**TL;DR:**\n\nRussia in the 1910s: [A crappy pre-industrial place](_URL_0_)\n\nRussia in the 1960s: [A](_URL_3_) [relatively](_URL_15_) [modern](_URL_9_) [place](_URL_10_)\n\nRussia in the 1980s: [Hurray](_URL_4_) [for](_URL_11_) [thirty](_URL_14_) [years](_URL_5_) [of](_URL_7_) [stagnation!](_URL_6_)\n\n**Disclaimer**\n\nI know the images are exaggerated. \n\n**Justification for spending too much time and energy on this post**\n\n[Procrastination](_URL_12_)\n", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "52630", "title": "History of Russia (1991–present)", "section": "Section::::Reforms.:Obstacles to reform.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 1530, "text": "Finally, there is a human capital dimension to the failure of post-Soviet reforms in Russia. The former Soviet population was not necessarily uneducated. Literacy was nearly universal, and the educational level of the Soviet population was among the highest in the world with respect to science, engineering, and some technical disciplines, although the Soviets devoted little to what would be described as \"liberal arts\" in the West. With the move to a post-Communist system, the Russian university system collapsed. Rampant credential inflation in the Russian university system made it difficult for employers to determine who was really skilled and the problems of the higher education system more generally made it difficult to remedy other issues of human capital that came from the transition to a market-oriented system, such as upskilling and re-skilling. For example, former state enterprise managers were highly skilled at coping with the demands on them under the Soviet system of planned production targets, but discouraged the risk-and-reward centered behavior of market capitalism. These managers were responsible for a broad array of social welfare functions for their employees, their families, and the population of the towns and regions where they were located. Profitability and efficiency, however, were generally not the most prominent priorities for Soviet enterprise managers. 
Thus, almost no Soviet employees or managers had firsthand experience with decision-making in the conditions of a market economy.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7720531", "title": "Criticism of communist party rule", "section": "Section::::Areas of criticism.:Social development.\n", "start_paragraph_id": 75, "start_character": 0, "end_paragraph_id": 75, "end_character": 842, "text": "The allegation that communist rule resulted in lower standards of living sharply contrasted with communist arguments boasting of the achievements of the social and cultural programs of the Soviet Union and other communist states. For instance, Soviet leaders boasted of guaranteed employment, subsidized food and clothing, free health care, free child care and free education. Soviet leaders also touted early advances in women's equality, particularly in Islamic areas of Soviet Central Asia. Eastern European communists often touted high levels of literacy in comparison with many parts of the developing world. A phenomenon called Ostalgie, nostalgia for life under Soviet rule, has been noted amongst former members of Communist countries, now living in Western capitalist states, particularly those who lived in the former East Germany.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5371832", "title": "Propaganda in the Soviet Union", "section": "Section::::Themes.:New society.:Production.\n", "start_paragraph_id": 77, "start_character": 0, "end_paragraph_id": 77, "end_character": 250, "text": "In the 1950s, Khrushchev repeatedly boasted that the USSR would soon surpass the West in material well-being. Other communists officials agreed that it would soon show its superiority, because capitalism was like a dead herring—shining as it rotted.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "389401", "title": "Scientific management", "section": "Section::::Adoption in planned economies.:Soviet Union.\n", "start_paragraph_id": 45, "start_character": 0, "end_paragraph_id": 45, "end_character": 800, "text": "Sorensen was one of the consultants who brought American know-how to the USSR during this era, before the Cold War made such exchanges unthinkable. As the Soviet Union developed and grew in power, both sides, the Soviets and the Americans, chose to ignore or deny the contribution that American ideas and expertise had made: the Soviets because they wished to portray themselves as creators of their own destiny and not indebted to a rival, and the Americans because they did not wish to acknowledge their part in creating a powerful communist rival. Anti-communism had always enjoyed widespread popularity in America, and anti-capitalism in Russia, but after World War II, they precluded any admission by either side that technologies or ideas might be either freely shared or clandestinely stolen.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "790714", "title": "Free response", "section": "Section::::Example.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 438, "text": "\"Example thesis from 2007 AP World Student for change over time essay:\" 'Significant cultural ideologies such as Communism and Stalinism became the leading factors toward political and cultural change in the Soviet Union throughout 1914-1945. 
Ultimately, none of these changes were able to improve the standard of living for the working classes of Russia, which ironically had been the goal of the Russian Revolution in the first place.'\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13295032", "title": "Historiography in the Soviet Union", "section": "Section::::Marxist influence.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 917, "text": "Soviet historiography interpreted this theory to mean that the creation of the Soviet Union was the most important turning event in human history, since the USSR was considered to be the first socialist society. Furthermore, the Communist Party—considered to be the vanguard of the working class – was given the role of permanent leading force in society, rather than a temporary revolutionary organization. As such, it became the protagonist of history, which could not be wrong. Hence the unlimited powers of the Communist Party leaders were claimed to be as infallible and inevitable as the history itself. It also followed that a worldwide victory of communist countries is inevitable. All research had to be based on those assumptions and could not diverge in its findings. In 1956, Soviet academician Anna Pankratova said that \"the problems of Soviet historiography are the problems of our Communist ideology.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "215623", "title": "Communist state", "section": "Section::::Criticism.\n", "start_paragraph_id": 20, "start_character": 0, "end_paragraph_id": 20, "end_character": 955, "text": "Soviet advocates and socialists responded to these criticisms by highlighting the ideological differences in the concept of \"freedom\". McFarland and Ageyev noted that \"Marxist–Leninist norms disparaged \"laissez-faire\" individualism (as when housing is determined by one's ability to pay), also [condemning] wide variations in personal wealth as the West has not. Instead, Soviet ideals emphasized equality—free education and medical care, little disparity in housing or salaries, and so forth\". When asked to comment on the claim that former citizens of Communist states enjoy increased freedoms, Heinz Kessler, former East German Minister of National Defence, replied: \"Millions of people in Eastern Europe are now free from employment, free from safe streets, free from health care, free from social security\". The early economic development policies of Communist states have been criticised for focusing primarily on the development of heavy industry.\n", "bleu_score": null, "meta": null } ] } ]
null
23iui1
why does hot chocolate mix/powder stay dry even when milk or water is poured on top of it?
[ { "answer": "Water likes to stick to itself. That's why the surface of water and water droplets is smooth. \n\nPowder are full of little tiny holes. For water to go into the holes it would have to make a little spike of water. The water would rather stick to itself then make the spike.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "657067", "title": "Powdered milk", "section": "Section::::History and manufacture.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 289, "text": "Alternatively, the milk can be dried by drum drying. Milk is applied as a thin film to the surface of a heated drum, and the dried milk solids are then scraped off. However, powdered milk made this way tends to have a cooked flavour, due to caramelization caused by greater heat exposure.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15981527", "title": "Kenco Singles", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 238, "text": "A second type of capsule allows hot chocolate drinks to be made. Although chocolate powder very quickly turns to a thick impenetrable paste when wetted, the jetting technology ensures complete emptying of the capsule and thorough mixing.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5239060", "title": "Compound chocolate", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 379, "text": "Cocoa butter must be tempered to maintain gloss and coating. A chocolatier tempers chocolate by cooling the chocolate mass below its setting point, then rewarming the chocolate to between for milk chocolate, or between for semi-sweet chocolate. Compound coatings, however, do not need to be tempered. Instead, they are simply warmed to between above the coating's melting point.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "540333", "title": "Hot chocolate", "section": "Section::::Terminology.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 577, "text": "Hot chocolate can be made with dark, semisweet, or bittersweet chocolate chopped into small pieces and stirred into milk with the addition of sugar. American instant hot cocoa powder often includes powdered milk or other dairy ingredients so it can be made without using milk. In the United Kingdom, \"hot chocolate\" is a sweet chocolate drink made with hot milk or water, and powder containing chocolate, sugar, and powdered milk. \"Cocoa\" usually refers to a similar drink made with just hot milk and cocoa powder, then sweetened to taste with sugar (or not sweetened at all).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "427924", "title": "Condensed milk", "section": "Section::::Current use.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 782, "text": "In New Orleans, sweetened condensed milk is commonly used as a topping on chocolate or similarly cream-flavored snowballs. In Scotland, it is mixed with sugar and butter then boiled to form a popular sweet candy called tablet or Swiss milk tablet, this recipe being very similar to another version of the Brazilian candy brigadeiro called \"branquinho\". In some parts of the Southern United States, condensed milk is a key ingredient in lemon ice box pie, a sort of cream pie. 
In the Philippines, condensed milk is mixed with some evaporated milk and eggs, spooned into shallow metal containers over liquid caramelized sugar, and then steamed to make a stiffer and more filling version of \"crème\" caramel known as \"leche flan\", also common in Brazil under the name \"pudim de leite\".\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7089", "title": "Chocolate", "section": "Section::::Production.:Tempering.\n", "start_paragraph_id": 72, "start_character": 0, "end_paragraph_id": 72, "end_character": 497, "text": "As a solid piece of chocolate, the cocoa butter fat particles are in a crystalline rigid structure that gives the chocolate its solid appearance. Once heated, the crystals of the polymorphic cocoa butter are able to break apart from the rigid structure and allow the chocolate to obtain a more fluid consistency as the temperature increases – the melting process. When the heat is removed, the cocoa butter crystals become rigid again and come closer together, allowing the chocolate to solidify.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "33115990", "title": "Chocolate gravy", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 669, "text": "Milk is commonly used as the liquid in chocolate gravy, while some recipes use water. Some recipes devised in eastern Oklahoma use more sugar, and the fat comes from the use of butter after the gravy is complete, making it more like warm chocolate pudding served over biscuits. In a traditional gravy, a roux is made with fat and flour before the milk is added; in chocolate gravy all the dry ingredients are mixed first, milk slowly incorporated, then stirred continuously until cooked. When a thick and rich consistency is achieved, the butter and vanilla are added. Other ingredients, such as crumbled bacon, are usually added afterward near the end of preparation.\n", "bleu_score": null, "meta": null } ] } ]
null
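The surface-tension argument in the answer above can be made quantitative with the Young-Laplace capillary pressure, ΔP = 2γ·cos(θ)/r: when the contact angle θ between the liquid and the powder grains rises above 90° (as it tends to for fat-coated cocoa powders), cos(θ) goes negative and the liquid is pushed back out of the pores instead of being drawn in. Below is a minimal sketch of that sign flip; the pore radius and contact angles are illustrative assumptions, not measurements of any particular drink mix.

```python
import math

def capillary_pressure(gamma, theta_deg, pore_radius):
    """Young-Laplace capillary pressure (Pa) driving a liquid into a cylindrical pore.

    Positive -> the liquid is drawn in spontaneously (the powder wets).
    Negative -> extra pressure (stirring, whisking) is needed to force it in.
    """
    return 2.0 * gamma * math.cos(math.radians(theta_deg)) / pore_radius

GAMMA_WATER = 0.072   # N/m, surface tension of water near room temperature
PORE_RADIUS = 5e-6    # m, assumed gap size between powder grains (~5 micrometres)

for theta in (30, 80, 110):  # assumed contact angles: wetting, barely wetting, non-wetting
    dp = capillary_pressure(GAMMA_WATER, theta, PORE_RADIUS)
    print(f"contact angle {theta:3d} deg -> capillary pressure {dp / 1000:+7.1f} kPa")
```

Hotter liquid lowers the surface tension and softens the fat coating, which nudges the contact angle back below 90° — one plausible reading of why hot milk plus a spoon wets the powder so much faster than cold milk poured on top.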
2xkybe
at what point does the flow of air on an object go from cooling it down to heating it up due to friction?
[ { "answer": "Never, it isn't friction but [air compression](_URL_0_) that heats the object. \n\nAs for at what point it out-factors the effect of the air whisking heat away, it depends on the shape. Basically a 'bad' aerodynamic shape will squish a lot more air than a 'good' aerodynamic shape, which will allow the air to flow around it without getting too squished.", "provenance": null }, { "answer": "The \"cooling\" effect of flowing air occurs in two scenarios: when there is evaporation on the surface of the object (a wet towel hung in the wind is cooled down); or when the temperature of the flowing air is cooler than the object (essentially just constant heat exchange). Unless evaporative cooling comes into play, air flowing over an object cannot make the object cooler than the air itself. \n\nAir friction is constantly heating up an object. When wind blows on your face, air friction is transferring thermal energy to you, albeit a minuscule amount that is dwarfed by the other cooling effects. \n\nAn object heating up or cooling down when it moves through air is basically the sum of Air friction + heat exchange + evaporative cooling (if any). Whether the object heats up or cools down depends on a lot of variables like its shape, its original temp, its speed, temp of air, etc. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "154910", "title": "Wind chill", "section": "Section::::Explanation.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 574, "text": "A surface loses heat through conduction, evaporation, convection, and radiation. The rate of convection depends on both the difference in temperature between the surface and the fluid surrounding it and the velocity of that fluid with respect to the surface. As convection from a warm surface heats the air around it, an insulating boundary layer of warm air forms against the surface. Moving air disrupts this boundary layer, or epiclimate, allowing for cooler air to replace the warm air against the surface. The faster the wind speed, the more readily the surface cools.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1711063", "title": "Aerodynamic heating", "section": "Section::::Physics.\n", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 810, "text": "At high speeds through the air, the object's kinetic energy is converted to heat through compression and friction. At lower speed, the object will lose heat to the air through which it is passing, if the air is cooler. The combined temperature effect of heat from the air and from passage through it is called the stagnation temperature; the actual temperature is called the recovery temperature. These viscous dissipative effects to neighboring sub-layers make the boundary layer slow down via a non-isentropic process. Heat then conducts into the surface material from the higher temperature air. The result is an increase in the temperature of the material and a loss of energy from the flow. The forced convection ensures that other material replenishes the gases that have cooled to continue the process.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "621008", "title": "Air cooling", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 277, "text": "In all cases, the air has to be cooler than the object or surface from which it is expected to remove heat. 
This is due to the second law of thermodynamics, which states that heat will only move spontaneously from a hot reservoir (the heat sink) to a cold reservoir (the air).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "39124477", "title": "Combined forced and natural convection", "section": "Section::::Cases.:Two-dimensional mixed convection with aiding flow.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 434, "text": "The first case is when natural convection aids forced convection. This is seen when the buoyant motion is in the same direction as the forced motion, thus accelerating the boundary layer and enhancing the heat transfer. Transition to turbulence, however, can be delayed. An example of this would be a fan blowing upward on a hot plate. Since heat naturally rises, the air being forced upward over the plate adds to the heat transfer.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9662955", "title": "Convective heat transfer", "section": "Section::::Types.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 657, "text": "BULLET::::- Free or natural convection: when fluid motion is caused by buoyancy forces that result from the density variations due to variations of thermal ±temperature in the fluid. In the absence of an internal source, when the fluid is in contact with a hot surface, its molecules separate and scatter, causing the fluid to be less dense. As a consequence, the fluid is displaced while the cooler fluid gets denser and the fluid sinks. Thus, the hotter volume transfers heat towards the cooler volume of that fluid. Familiar examples are the upward flow of air due to a fire or hot object and the circulation of water in a pot that is heated from below.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3128803", "title": "Drying", "section": "Section::::Methods of drying.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 1022, "text": "BULLET::::- Application of hot air (convective or direct drying). Air heating increases the drying force for heat transfer and accelerates drying. It also reduces air relative humidity, further increasing the driving force for drying. In the falling rate period, as moisture content falls, the solids heat up and the higher temperatures speed up diffusion of water from the interior of the solid to the surface. However, product quality considerations limit the applicable rise to air temperature. Excessively hot air can almost completely dehydrate the solid surface, so that its pores shrink and almost close, leading to crust formation or \"case hardening\", which is usually undesirable. For instance in wood (timber) drying, air is heated (which speeds up drying) though some steam is also added to it (which hinders drying rate to a certain extent) in order to avoid excessive surface dehydration and product deformation owing to high moisture gradients across timber thickness. Spray drying belongs in this category.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "37754", "title": "Mountain", "section": "Section::::Climate.\n", "start_paragraph_id": 27, "start_character": 0, "end_paragraph_id": 27, "end_character": 698, "text": "However, when air is hot, it tends to expand, which lowers its density. Thus, hot air tends to rise and transfer heat upward. This is the process of convection. 
Convection comes to equilibrium when a parcel of air at a given altitude has the same density as its surroundings. Air is a poor conductor of heat, so a parcel of air will rise and fall without exchanging heat. This is known as an adiabatic process, which has a characteristic pressure-temperature dependence. As the pressure gets lower, the temperature decreases. The rate of decrease of temperature with elevation is known as the adiabatic lapse rate, which is approximately 9.8 °C per kilometre (or 5.4 °F per 1000 feet) of altitude.\n", "bleu_score": null, "meta": null } ] } ]
null
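A rough way to put a number on the crossover asked about above: moving air stops being able to cool an object once the temperature the air reaches when brought to rest against the surface (the stagnation, or recovery, temperature, roughly T_ambient + v²/2c_p for air) exceeds the object's own surface temperature. The sketch below assumes 20 °C ambient air, an object surface at 40 °C, perfect recovery of the kinetic energy, and no evaporation or radiation — all simplifying assumptions, not values taken from the answers.

```python
CP_AIR = 1005.0     # J/(kg*K), specific heat of air at constant pressure
T_AMBIENT = 20.0    # deg C, assumed free-stream air temperature
T_SURFACE = 40.0    # deg C, assumed temperature of the object's surface

def stagnation_temperature(v_mps, t_ambient=T_AMBIENT, cp=CP_AIR):
    """Temperature (deg C) the air reaches when brought to rest against the surface."""
    return t_ambient + v_mps ** 2 / (2.0 * cp)

for v in (50, 100, 200, 300, 400):   # airspeed in m/s
    t0 = stagnation_temperature(v)
    effect = "heats" if t0 > T_SURFACE else "cools"
    print(f"{v:4d} m/s -> stagnation temperature {t0:6.1f} C -> flow {effect} the surface")
```

With these assumed numbers the flip happens around 200 m/s, which is why a fan or highway-speed wind always ends up cooling anything warmer than the air, while aerodynamic heating only becomes noticeable for fast aircraft and re-entering spacecraft.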
bgdpcu
the cause of the geometric patterns formed in sand with a tone generator
[ { "answer": "[Standing waves](_URL_2_)\n\nIf you shake a string at right frequency \"knot\" points will form that stay stationary. \n\nThis is due to the wave created by the shaking and the wave reflected from the other end interfering with each other.\n\nVideo: Standing waves on a string _URL_1_\n\nObjects that are more complex than a string will have different kind of standing waves on them. They too will form knot points that are stationary (or move only very little).\n\nExamples for a circular surface: _URL_0_\n\nThe sand will move away from the areas that move alot and accumulate on the stationary areas.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "411512", "title": "Cymatics", "section": "Section::::Work of Hans Jenny.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 728, "text": "From the physical-mathematical standpoint, the form of the nodal patterns is predetermined by the shape of the body set in vibration or, in the case of acoustic waves in a gas, the shape of the cavity in which the gas is contained. The sound wave, therefore, does not influence at all the shape of the vibrating body or the shape of the nodal patterns. The only thing that changes due to the vibration is the arrangement of the sand. The image formed by the sand, in turn, is influenced by the frequency spectrum of the vibration only because each vibration mode is characterized by a specific frequency. Therefore, the spectrum of the signal that excites the vibration determines which patterns are actually nodally displayed.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "28998247", "title": "Pitch circularity", "section": "Section::::Research on pitch perception.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 218, "text": "Normann et al. showed that pitch circularity can be created using a bank of single tones; here the relative amplitudes of the odd and even harmonics of each tone are manipulated so as to create ambiguities of height. \n", "bleu_score": null, "meta": null }, { "wikipedia_id": "54582001", "title": "Hagal dune field", "section": "Section::::Formation.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 546, "text": "The linear dunes (dashes) were formed through the action of bidirectional winds, acting perpendicular to the line of the sand dune, causing a funneling effect directing the sand to accumulate along the linear axis of the dune. The round-shaped dunes (dots) were formed when the winds that caused the linearly-shaped accumulations were interrupted. The round dunes are classified as \"barchanoid dunes\". However, the exact mechanism of either formation is still unknown and this is the reason the area was chosen for imaging by the HiRISE mission.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "15955754", "title": "Multi-scale camouflage", "section": "Section::::History.:2000s fractal-like digital patterns.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 446, "text": "Fractal-like patterns work because the human visual system efficiently discriminates images which have different fractal dimension or other second-order statistics like Fourier spatial amplitude spectra; objects simply appear to pop out from the background. 
Timothy O'Neill helped the Marine Corps to develop first a digital pattern for vehicles, then fabric for uniforms, which had two colour schemes, one designed for woodland, one for desert.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "53741", "title": "Symmetry", "section": "Section::::In the arts.:In pottery and metal vessels.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 448, "text": "Since the earliest uses of pottery wheels to help shape clay vessels, pottery has had a strong relationship to symmetry. Pottery created using a wheel acquires full rotational symmetry in its cross-section, while allowing substantial freedom of shape in the vertical direction. Upon this inherently symmetrical starting point, potters from ancient times onwards have added patterns that modify the rotational symmetry to achieve visual objectives.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1125112", "title": "Hans Jenny (cymatics)", "section": "Section::::Life and career.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 482, "text": "Jenny made use of crystal oscillators and his so-called tonoscope to set plates and membranes vibrating. He spread quartz sand onto a black drum membrane 60 cm in diameter. The membrane was caused to vibrate by singing loudly through a cardboard pipe, and the sand produced symmetrical Chladni patterns, named after Ernst Chladni, who had discovered this phenomenon in 1787. Low tones resulted in rather simple and clear pictures, while higher tones formed more complex structures.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26610144", "title": "D-Shape", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 675, "text": "D-Shape is a large 3-dimensional printer that uses binder-jetting, a layer by layer printing process, to bind sand with an inorganic seawater and magnesium-based binder in order to create stone-like objects. Invented by Enrico Dini, founder of Monolite UK Ltd, the first model of the D-Shape printer used epoxy resin, commonly used as an adhesive in the construction of skis, cars, and airplanes, as the binder. Dini patented this model in 2006. After experiencing problems with the epoxy, Dini changed the binder to the current magnesium-based one and patented his printer again in September 2008. In the future, Dini aims to use the printer to create full-scale buildings.\n", "bleu_score": null, "meta": null } ] } ]
null
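For the idealized textbook case — a square plate or membrane with simply supported edges — the nodal lines the sand settles on can be written in closed form as the zero set of a superposition of two degenerate modes, sin(nπx)·sin(mπy) + sin(mπx)·sin(nπy). A real Chladni plate with free edges needs a more involved calculation, so treat this as a sketch of the idea rather than the exact physics; the mode numbers and grid size below are arbitrary choices.

```python
import math

def chladni_amplitude(x, y, n, m):
    """Standing-wave amplitude at (x, y) for the idealized square-plate mode pair (n, m)."""
    return (math.sin(n * math.pi * x) * math.sin(m * math.pi * y)
            + math.sin(m * math.pi * x) * math.sin(n * math.pi * y))

def print_pattern(n, m, grid=40, threshold=0.05):
    """ASCII rendering: '#' marks points where the plate barely moves (where sand would collect)."""
    for j in range(grid + 1):
        y = j / grid
        row = ""
        for i in range(grid + 1):
            x = i / grid
            row += "#" if abs(chladni_amplitude(x, y, n, m)) < threshold else "."
        print(row)

print_pattern(3, 5)   # try other (n, m) pairs to see different figures
```

Changing (n, m) — which in the real experiment means changing the driving frequency to hit a different resonance — changes which curves satisfy the zero condition, which is why each tone produces its own figure.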
esskzv
Commercial kitty litter was invented in 1947. Where did house cats pee and poop before then?
[ { "answer": "If your cat was an indoor cat, you would likely provide them some sort of absorbent material. In the 1922 *Feeding and Care of the Domestic and Long-Haired Cat*, the following advice is offered:\n\n > Each room should also contain a fair sized granite pan, partly filled with sand or sawdust. I prefer saw dust as it does not hold moisture as long as sand and is free from fleas. \n\nSimilar advice was offered in the 1921 *Your Dog and Your Cat, How to Care for Them: A Treatise on the Care of the Dog and Cat in the Home*, with the author writing that:\n\n > A pan of sawdust, sand, or torn bits of paper should be kept in some convenient place for their use in attending to their functions. They must have free access to this if they are to be clean with their habits.\n\nLikewise, the 1889 *Our Cats and All About Them* reminds readers that:\n\n > Always have a box with dry earth near the cat's sleeping place, unless there is an opening for egress near.\n\nI do find amusing how much the authors of these old manuals strive to *avoid* directly stating what these are for. It is clear enough, of course, but the language is still euphemistic, speaking of 'their functions' and 'moisture'. Vague allusions to the 'cleanliness' of the cat are common too, such as the 1895 guide which notes:\n\n > The cat is an excessively cleanly animal, and when housed should be provided with means for remaining so. A small box, or -- what is better, as it can be well washed -- a galvanized flat pan such as used for roasting meat, should be placed in some well-ventilated corner out of sight, and kept filled about an inch deep with sand, clean earth, or sawdust. Perhaps the latter is preferable, as it can be burned. The litter should be changed frequently.\n\nAlso going on to note that for kittens, a bed of peat-moss litter has the \"desired effect\" of teaching them cleanliness, when changed at least once a day.\n\nWhile hardly a scientific survey of the literature, I found in only a single book, 1887's *The Cat* , reference to actual product in noting that the creature is \"*guided by a peculiar instinct to scratch up earth for the purpose of hiding their excrements*\" and that indoors even will do their best to avoid the carpet, \"resort[ing] to cinders or coal-dust\"*. They go on to note similar ways to accomodate this as others did, writing that:\n\n > It is a good plan to have a large flower-pot saucer - the larger the better, but not less than fifteen inches in diameter - kept in some suitable corner, with a little clean garden-earth or sand in it. It need not contain much earth and it can be changed at will; but should not be allowed to become foul as to offend the cat.\n\nWriting advice for owners of \"catteries\", that is, breeders with large collections of cats, the most practical advice in 1901 *Domestic and Fancy Cats* is simply that:\n\n > Sanitary arrangements in these catteries are not so difficult, for the free access to the outside runs, if cats have been trained to habits of cleanliness, will be readily sought for and discovered by them.\n\nBut recognizing this isn't always possible, the author continues:\n\n > Still it may be desirable to provide receptacles, and I know of no better than the large stoneware pans supplied by Spratts Patent, or zinc trays can be mate whatever size and shape is desired. Opinion varies as to what these are to be filled with. 
I have from the earliest period, and down to date, been an advocate of dry earth; some however consider sawdust as far and away the best, and only a few years ago I was informed by a large breeder that if earth and sawdust be placed in separate receptacles, sawdust will be selected by the cat. Be this as it may, I am still open to conviction of its efficacy, over Nature's deodorizer. An efficient deodorizer or disinfectant should always be kept at hand, such as Izal, Sanitas, Jeyes', or Lawes', which rank above most others. \n\nGoing back further into the 19th century, there is even stronger emphasis on the *cleanliness* of the cat, with an author in the 1870s writing that:\n\n > Cats of the right sort never fail to bring their kittens up in the way they should go, and soon succeed in teaching them all they know themselves. They will bring in living mice for them, and always take more pride in the best warrior-kitten than in the others. They will also inculcate the doctrine of cleanliness in their kits, so that the carpet shall never be wet. I have often been amused at seeing my own cat bringing kitten after kitten to the sand-box, and showing it how to use it, in action explaining to them what it was there for. When a little older, she entices them out to the garden.\n\nOf course, they later go on to note that a cat will *literally die* if they get too dirty, writing:\n\n > I have known cats take ill and die from having their coats accidentally soiled beyond remedy.\n\nThis might be a bit excessive, but this emphasis on the 'instinctive cleanliness', as countless guides in the late 19th to early 20th century noted, was the \"natural virtue which renders pussy so generally a favoured intimate of the household\".\n\nSo the sum of it is that there was no one solution offered, but there was certainly a general consensus on the necessity of providing an indoor place for relief, and while the advice varied as to the specific material, be it sawdust, earth, or otherwise, it ought to be something absorbent and changed frequently.\n\n**Sources**\n\n*Feeding and Care of the Domestic and Long-Haired Cat* by Ellen V. Celty and Anna Ray\n\n*Your Dog and Your Cat, How to Care for Them: A Treatise on the Care of the Dog and Cat in the Home* by Roy Henry Spaulding\n\n*The Cat, A Guide to the Classification and Varieties of Cats and a Short Treatise Upon Their Care, Diseases, and Treatment* by Rush Shippen Huidekoper\n\n*Domestic and Fancy Cats: A Practical Treatise on Their Varieties, Breeding, Management and Diseases* by John Jennings\n\n*Cats: Their Points and Characteristics, with Curiosities of Cat Life, and a Chapter on Feline Ailments*\n\n*Our Cats and All About Them: Their Varieties, Habits, and Management, and for Show, the Standard of Excellence and Beauty* by Harrison Weir\n\n*The Cat: Its Natural History; Domestic Varieties; Management and Treatment* by Philip M. Rule\n\nThis is just a sampling of texts out there, but you can find them and more on _URL_0_, HathiTrust, and Project Gutenberg.\n\nAfterward: Looking through a lot of old books about cats and trying to find more references, I had to share one false positive hit for \"sawdust\" which ended up being about a ship's cat:\n\n > Tuesday was flogging day; and to add, if possible, to the terror of the condemned wretch, after the gratings were rigged and the man stripped and lashed thereto, sawdust was sprinkled on the deck all round, to soak up the blood. 
But at every flogging match\n\n > > “There sat auld Nick in shape o’ beast,”\n\n > at least in the shape of Tom the cat, who would not have missed the fun for all the world. There on the bulwark he would sit, his eyes gleaming with satisfaction, his mouth squared, and his beard all a-bristle. He seemed to count every dull thud of his nine-tailed namesake, and emitted short sharp mews of joy when, towards the middle of the third dozen, the blood began to trickle and get sprinkled about on sheet and shroud. Though I never disliked Tom, still, at times such as these, I really believed he was the devil himself as reputed, and would have given two months’ pay for a chance to brain him. When the flogging was over, Tom used to jump down and, purring loudly, rub his head against his master’s leg.\n\nTom seems like *kind of a dick*.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "30182396", "title": "Timeline of United States inventions (1946–1991)", "section": "Section::::Cold War (1946–1991).:Post-war and the late 1940s (1946–1949).\n", "start_paragraph_id": 37, "start_character": 0, "end_paragraph_id": 37, "end_character": 323, "text": "BULLET::::- Cat litter is one of any of a number of materials used in litter boxes to absorb moisture from cat feces and urine, which reduces foul odors such as ammonia and renders them more tolerable within the home. The first commercially available cat litter was Kitty Litter, available in 1948 and invented by Ed Lowe.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "675677", "title": "Litter box", "section": "Section::::Types of litter box filler.:Non-clumping conventional litter.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 603, "text": "The first commercially available cat litters in the United States was \"Kitty Litter\", available in 1947 and marketed by Ed Lowe. This was the first large-scale use of clay (in the form of Fuller's earth) in litter boxes; previously sand was used. Clay litter is much more absorbent than sand and is manufactured into large grains or clumps of clay to make it less likely to be tracked from the litter box. The brand name \"Kitty Litter\" has become a genericized trademark, used by many to denote any type of cat litter. Today, cat litter can be obtained quite economically at a variety of retail stores.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "675677", "title": "Litter box", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 753, "text": "A litter box, sometimes called a sandbox, litter tray, cat pan, litter pan, or catbox, is an indoor feces and urine collection box for cats, as well as rabbits, ferrets, miniature pigs, small dogs (such as Beagles and Chihuahuas), and other pets that instinctively or through training will make use of such a repository. They are provided for pets that are permitted free roam of a home but who cannot or do not always go outside to excrete their metabolic waste. Many owners of these animals prefer not to let them roam outside for fear that they might succumb to outdoor dangers, such as weather, wildlife, or traffic (indoor cats, on average, live ten years longer than outdoor cats). 
A litter box makes it possible to shelter pets from these risks.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5256121", "title": "Society for the Prevention of Cruelty to Animals (Hong Kong)", "section": "Section::::History.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 474, "text": "In the 1950s, cat boxes were introduced as places for people to leave unwanted cats and kittens. The cat box locations were expanded in 1970. In 2000, the organisation took a different approach, pioneering the Cat Colony Care Programme in Asia involving trap-neuter-return. In 2014, the society reported on its website that 5,000 cats' lives are ended annually in its care and that of the Agriculture, Fisheries and Conservation Department, a reduction from 40,000 in 1963.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13287128", "title": "Friskies", "section": "Section::::History.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 887, "text": "When Friskies cat food was introduced in the 1950s, cats were becoming more popular as pets, but dry food for cats had not been introduced yet. In the early 1950s, a series of specialty dog food products were introduced under the Friskies brand, including one for puppies and cats. According to \"The Encyclopedia of Consumer Brands\", \"it was soon discovered that cats disliked the new 'puppy food'.\" A sales manager named Henry Arnest was considered \"eccentric\" for advocating that Friskies make a pet food specifically for cats. According to Arnest, the company thought it was \"a nutty idea.\" He convinced Friskies executives to do a market trial for cat food, which was conducted on the west coast of the United States in 1956. The cat food was made of mackerel byproducts, cereals, vegetables and vitamins. The trial surprised Friskies executives when the cat food sold successfully.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "607794", "title": "Kit Kat", "section": "Section::::History.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 227, "text": "Use of the name Kit Kat or Kit Cat for a type of food goes back to the 18th century, when mutton pies known as a Kit-Kat were served at meetings of the political Kit-Cat Club in London owned by pastry chef Christopher Catling.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13287128", "title": "Friskies", "section": "Section::::History.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 794, "text": "The cat food product was named \"Little Friskies for Cats.\" A cartoon cat was used as its mascot. It was advertised on television, newspapers, and through \"Friskies Research Digest\", a publication for veterinarians and animal breeders published by Friskies. Canned \"treats\" for cats were first test-marketed in 1958. They were initially popular on the west coast of the United States, but not in the east. In 1959, Carnation executives considered withdrawing from the east coast market, because its cat food products weren't popular there, but decided to stay. A Carnation report found that consumers preferred more upscale, single-serving food products, so the treats were rebranded as single-serving, canned foods called \"Friskies Buffet\" in 1967. Buffet became one of Friskies' best-sellers.\n", "bleu_score": null, "meta": null } ] } ]
null
7eslg9
why were there tons of super giant creatures like dinosaurs a few million years ago, but there aren't now? why did everything shrink in scale? even a lot of bugs were much bigger then. did earth's gravity change?
[ { "answer": "The prevailing theory is that our atmosphere had a lot more oxygen in it at the time (right now it's just over 20%). This let animals grow larger, especially with the scale of insects. Since most of them don't have a true respiratory system, they take in oxygen through their skin/exoskeletons. By having more O2 in the atmosphere, that made it easier to sustain larger size.", "provenance": null }, { "answer": "That's an interesting question. The one thing that has not changed, however, is earth's gravity. \n\n\n\nAs for insects the case is fairly well understood. Insects don't have lungs the same way we do, their respiration is more dependent on passive diffusion. This limits their size as smaller things have a greater surface to volume. Thus they could evolve bigger forms only in periods when the air oxygen concentration was higher. \n\nAs for dinosaurs the answer is more complex. By far most dinosaurs were not very big. But we tend to notice the ones that were huge and spectacular. Nevertheless, the biggest animal ever to grace the planet is alive today: the blue whale. ", "provenance": null }, { "answer": "That's not necessarily true, there's about 5-6 different whale species that are bigger than any dinosaur to ever exist, most notably the Blue Whale which absolutely *dwarf* even the largest dinosaurs.\n\nThere are certainly plenty of big insects today as well.", "provenance": null }, { "answer": "There's a few things to consider\n\n- Not everything shrunk in scale. Elephants are large. The blue whale is still larger than any dinosaur we have yet to discover. There are still plenty of creatures around today that are comparable in size to the dinosaurs. \n\n- There were smaller animals around during the time of the dinosaurs as well. There is a preservation bias when it comes to this. Larger animals are more likely to leave a trace we can find (i.e. it's a lot easier to find a T-Rex sized fossil than it is to find a chicken sized fossil) so it \"appears\" that there were much more larger creatures back then.\n\n- Time. Dinosaurs evolved over millions and millions of years. It's kind of like a long term arms race between predator and prey. Prey grows bigger, so predators grow bigger, so prey grows bigger, and so on. Larger creatures were much more susceptible to the mass extinction events that we think wiped out a lot of the dinosaurs - and there simply hasn't been enough time since then for larger creatures to evolve again. \n\n- Like /u/GenxCub pointed out. There was more oxygen is the atmosphere allowing animals (particularly insects) to grow much larger much more easily\n\n- /u/Phage0070 also raises a big point. The spread of humans has had a major impact both of the populations of megafauna and flora that existed at the time and the ability of new species of megafauna and flora to evolve. Humans put a massive strain on the natural resources of pretty much the entire planet and there simply isn't the \"space\" any more to fit large species. \n\nThere's probably more things that I can't think of just at the moment but the simple answer is: There is no simple answer. There's multiple factors that effect how species will evolve and it's a combination of all of these that allowed dinosaurs to evolve to the sizes they did millions of years ago and it's a different combination of these factors that has prevented this from happening in the modern age. ", "provenance": null }, { "answer": "There weren't exactly \"tons of super giant creatures\" during the age of the dinosaurs. 
Dinosaurs as a group were around for about 180 million years, which is a lot of time to pick and choose from. Larger dinosaurs are more likely to leave fossils, more likely to be found, and you are more likely to have heard of them because they are cooler, giving the impression they were all giants.\n\nBut make no mistake, there were plenty of chicken-sized dinosaurs as well. Concluding that most creatures were large from a few exceptional dinosaurs is like saying Chinese people are tall because of Yao Ming.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "300664", "title": "Theropoda", "section": "Section::::Biology.:Size.\n", "start_paragraph_id": 15, "start_character": 0, "end_paragraph_id": 15, "end_character": 354, "text": "Recent theories propose that theropod body size shrank continuously over a period of 50 million years, from an average of down to , eventually evolving into modern birds. This was based on evidence that theropods were the only dinosaurs to get continuously smaller, and that their skeletons changed four times as fast as those of other dinosaur species.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4085430", "title": "Dinosaur size", "section": "Section::::Record sizes.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 358, "text": "Recent theories propose that theropod body size shrank continuously over the past 50 million years, from an average of down to , as they eventually evolved into modern birds. This is based on evidence that theropods were the only dinosaurs to get continuously smaller, and that their skeletons changed four times faster than those of other dinosaur species.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14218857", "title": "Evolution of dinosaurs", "section": "Section::::Evolutionary trends.:Body size.\n", "start_paragraph_id": 62, "start_character": 0, "end_paragraph_id": 62, "end_character": 763, "text": "Body size is important because of its correlation with metabolism, diet, life history, geographic range and extinction rate. The modal body mass of dinosaurs lies between 1 and 10 tons throughout the Mesozoic and across all major continental regions. There was a trend towards increasing body size within many dinosaur clades, including the Thyreophora, Ornithopoda, Pachycephalosauria, Ceratopsia, Sauropomorpha, and basal Theropoda. Marked decreases in body size have also occurred in some lineages, but are more sporadic. The best known example is the decrease in body size leading up to the first birds; \"Archaeopteryx\" was below 10 kg in weight, and later birds \"Confuciusornis\" and \"Sinornis\" are starling- to pigeon-sized. This occurred for easier flight.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "162780", "title": "Megafauna", "section": "Section::::Evolution of large body size.:In terrestrial mammals.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 1002, "text": "Subsequent to the Cretaceous–Paleogene extinction event that eliminated the non-avian dinosaurs about Ma (million years) ago, terrestrial mammals underwent a nearly exponential increase in body size as they diversified to occupy the ecological niches left vacant. Starting from just a few kg before the event, maximum size had reached ~50 kg a few million years later, and ~750 kg by the end of the Paleocene. 
This trend of increasing body mass appears to level off about 40 Ma ago (in the late Eocene), suggesting that physiological or ecological constraints had been reached, after an increase in body mass of over three orders of magnitude. However, when considered from the standpoint of rate of size increase per generation, the exponential increase is found to have continued until the appearance of \"Indricotherium\" 30 Ma ago. (Since generation time scales with \"body mass\", increasing generation times with increasing size cause the log mass vs. time plot to curve downward from a linear fit.)\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9813", "title": "Extinction event", "section": "Section::::Evolutionary importance.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 533, "text": "For example, mammaliformes (\"almost mammals\") and then mammals existed throughout the reign of the dinosaurs, but could not compete for the large terrestrial vertebrate niches which dinosaurs monopolized. The end-Cretaceous mass extinction removed the non-avian dinosaurs and made it possible for mammals to expand into the large terrestrial vertebrate niches. Ironically, the dinosaurs themselves had been beneficiaries of a previous mass extinction, the end-Triassic, which eliminated most of their chief rivals, the crurotarsans.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "29989", "title": "Triassic", "section": "Section::::Life.:Terrestrial and freshwater fauna.\n", "start_paragraph_id": 40, "start_character": 0, "end_paragraph_id": 40, "end_character": 279, "text": "BULLET::::- Theropods: dinosaurs that first evolved in the Triassic period but did not evolve into large sizes until the Jurassic. Most Triassic theropods, such as the \"Coelophysis\", were only around 1–2 meters long and hunted small prey in the shadow of the giant Rauisuchians.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4085430", "title": "Dinosaur size", "section": "Section::::Record sizes.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 1080, "text": "The sauropods were the largest and heaviest dinosaurs. For much of the dinosaur era, the smallest sauropods were larger than anything else in their habitat, and the largest were an order of magnitude more massive than anything else that has since walked the Earth. Giant prehistoric mammals such as \"Paraceratherium\" and \"Palaeoloxodon\" (the largest land mammals ever) were dwarfed by the giant sauropods, and only modern whales surpass them in size. There are several proposed advantages for the large size of sauropods, including protection from predation, reduction of energy use, and longevity, but it may be that the most important advantage was dietary. Large animals are more efficient at digestion than small animals, because food spends more time in their digestive systems. This also permits them to subsist on food with lower nutritive value than smaller animals. Sauropod remains are mostly found in rock formations interpreted as dry or seasonally dry, and the ability to eat large quantities of low-nutrient browse would have been advantageous in such environments.\n", "bleu_score": null, "meta": null } ] } ]
null
en497e
Are there any dinosaur hybrids (i.e., like how the mule is a hybrid of a male donkey and a female horse)?
[ { "answer": "animals in the wild hybridizing naturally is pretty rare, and a terrestrial animal becoming a fossil is also (in the scheme of things) pretty rare\n\nit's certainly possible but i've not heard of such a thing - which makes sense, as finding any evidence would be incredibly unlikely", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "1070735", "title": "Equus (genus)", "section": "Section::::Taxonomy and evolutionary history.:Hybrids.\n", "start_paragraph_id": 107, "start_character": 0, "end_paragraph_id": 107, "end_character": 500, "text": "Equine species can crossbreed with each other. The most common hybrid is the mule, a cross between a male donkey and a female horse. With rare exceptions, these hybrids are sterile and cannot reproduce. A related hybrid, a hinny, is a cross between a male horse and a female donkey. Other hybrids include the zorse, a cross between a zebra and a horse and a zonkey or zedonk, a hybrid of a zebra and a donkey. In areas where Grévy's zebras are sympatric with plains zebras, fertile hybrids do occur.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13645", "title": "Horse", "section": "Section::::Taxonomy and evolution.:Other modern equids.\n", "start_paragraph_id": 84, "start_character": 0, "end_paragraph_id": 84, "end_character": 369, "text": "Horses can crossbreed with other members of their genus. The most common hybrid is the mule, a cross between a \"jack\" (male donkey) and a mare. A related hybrid, a hinny, is a cross between a stallion and a jenny (female donkey). Other hybrids include the zorse, a cross between a zebra and a horse. With rare exceptions, most hybrids are sterile and cannot reproduce.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "14076", "title": "Horse breed", "section": "Section::::Hybrids.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 657, "text": "Horses can crossbreed with other equine species to produce hybrids. These hybrid types are not breeds, but they resemble breeds in that crosses between certain horse breeds and other equine species produce characteristic offspring. The most common hybrid is the mule, a cross between a \"jack\" (male donkey) and a mare. A related hybrid, the hinny, is a cross between a stallion and a jenny (female donkey). Most other hybrids involve the zebra (see Zebroid). With rare exceptions, most equine hybrids are sterile and cannot reproduce. A notable exception is hybrid crosses between horses and \"Equus ferus przewalskii\", commonly known as Przewalski's horse.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5146476", "title": "Reproductive isolation", "section": "Section::::Post-zygotic isolation.:Hybrid sterility.\n", "start_paragraph_id": 34, "start_character": 0, "end_paragraph_id": 34, "end_character": 784, "text": "Hinnies and mules are hybrids resulting from a cross between a horse and a donkey or between a mare and a donkey, respectively. These animals are nearly always sterile due to the difference in the number of chromosomes between the two parent species. Both horses and donkeys belong to the genus \"Equus\", but \"Equus caballus\" has 64 chromosomes, while \"Equus asinus\" only has 62. A cross will produce offspring (mule or hinny) with 63 chromosomes, that will not form pairs, which means that they do not divide in a balanced manner during meiosis. In the wild, the horses and donkeys ignore each other and do not cross. 
In order to obtain mules or hinnies it is necessary to train the progenitors to accept copulation between the species or create them through artificial insemination.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "55526", "title": "Donkey", "section": "Section::::Characteristics.:Breeding.\n", "start_paragraph_id": 18, "start_character": 0, "end_paragraph_id": 18, "end_character": 596, "text": "Donkeys can interbreed with other members of the family Equidae, and are commonly interbred with horses. The hybrid between a jack and a mare is a mule, valued as a working and riding animal in many countries. Some large donkey breeds such as the Asino di Martina Franca, the Baudet de Poitou and the Mammoth Jack are raised only for mule production. The hybrid between a stallion and a jenny is a hinny, and is less common. Like other inter-species hybrids, mules and hinnies are usually sterile. Donkeys can also breed with zebras in which the offspring is called a zonkey (among other names).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "701610", "title": "Zebroid", "section": "Section::::Genetics.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 729, "text": "Donkeys and wild equids have different numbers of chromosomes. A donkey has 62 chromosomes; the zebra has between 32 and 46 (depending on the species). In spite of this difference, viable hybrids are possible, provided the gene combination in the hybrid allows for embryonic development to birth. A hybrid has a number of chromosomes somewhere in between. The chromosome difference makes female hybrids poorly fertile and male hybrids generally sterile, due to a phenomenon called Haldane's rule. The difference in chromosome number is most likely due to horses having two longer chromosomes that contain similar gene content to four zebra chromosomes. Horses have 64 chromosomes, while most zebroids end up with 54 chromosomes.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "75180", "title": "Bos", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 399, "text": "Bos (from Latin \"bōs: cow, ox, bull\") is the genus of wild and domestic cattle. \"Bos\" can be divided into four subgenera: \"Bos\", \"Bibos\", \"Novibos\", and \"Poephagus\", but these divisions are controversial. The genus has five extant species. However, this may rise to seven if the domesticated varieties are counted as separate species, and nine if the closely related genus \"Bison\" is also included.\n", "bleu_score": null, "meta": null } ] } ]
null
s1uvj
the united states' corporate taxes, and why ours are the highest in the world.
[ { "answer": "The American tax code is like swiss cheese. There are so many loop holes that in order to actually **make** money off of it, you need to raise the overall tax rate so high that it outweighs the loop holes. ", "provenance": null }, { "answer": "The US Statutory tax rate is 35% but it is a meaningless number. The effective rate that corporations actually pay is usually far lower. Sometimes zero or even negative when you factor in subsidies. ", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "53233377", "title": "Destination-based cash flow tax", "section": "Section::::Current corporate tax system.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 1264, "text": "Some argue that the U.S. corporate tax rate at 35% is the \"highest in the industrialized world\", while others argue it isn't. The rate varies from sector to sector, and can be as low as 21% in the manufacturing industry. A high tax rate would place the U.S. at a \"competitive disadvantage in the global marketplace.\" and encourages corporations to move to countries with lower taxes. The current tax system also provides a \"tax deduction for imported goods\", providing another incentive for companies to leave. Companies that import inventory before ultimately selling their product domestically to U.S. consumers can deduct the cost of imports from their taxable income as part of cost of goods sold giving the company a sometimes sizable benefit. As explained in \"Forbes,\" the border-adjustment tax \"would move away from a direct income tax, and more toward an indirect \"cash flow\" tax\" where a \"corporation would be entitled to immediately deduct the cost of all asset purchases.\" and the \"corporate tax rate would be reduced to 20%. Although this is being compared to a value added tax (VAT), \"under a typical VAT...the corporation couldn't deduct its wages.\" but under the Brady and Ryan blueprint, wages could be deducted so this would be an \"indirect VAT.\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "55697602", "title": "Tax Cuts and Jobs Act of 2017", "section": "Section::::Plan elements.:Corporate tax.\n", "start_paragraph_id": 28, "start_character": 0, "end_paragraph_id": 28, "end_character": 1403, "text": "The corporate tax rate was lowered from 35% to 21%, while some related business deductions and credits were reduced or eliminated. The Act also changed the U.S. from a global to a territorial tax system with respect to corporate income tax. Instead of a corporation paying the U.S. tax rate (35%) for income earned in any country (less a credit for taxes paid to that country), each subsidiary pays the tax rate of the country in which it is legally established. In other words, under a territorial tax system, the corporation saves the difference between the generally higher U.S. tax rate and the lower rate of the country in which the subsidiary is legally established. \"Bloomberg\" journalist Matt Levine explained the concept, \"If we're incorporated in the U.S. [under the old global tax regime], we'll pay 35 percent taxes on our income in the U.S. and Canada and Mexico and Ireland and Bermuda and the Cayman Islands, but if we're incorporated in Canada [under a territorial tax regime, proposed by the Act], we'll pay 35 percent on our income in the U.S. 
but 15 percent in Canada and 30 percent in Mexico and 12.5 percent in Ireland and zero percent in Bermuda and zero percent in the Cayman Islands.\" In theory, the law would reduce the incentive for tax inversion, which is used today to obtain the benefits of a territorial tax system by moving U.S. corporate headquarters to other countries.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "55697602", "title": "Tax Cuts and Jobs Act of 2017", "section": "Section::::Objections.:International tax standards.\n", "start_paragraph_id": 182, "start_character": 0, "end_paragraph_id": 182, "end_character": 367, "text": "BULLET::::- Corporate taxes were 2.3% GDP in 2011, versus the OECD average of 3.0% GDP. Despite this, the US corporate tax rate was 35% prior to the passage of the Tax Cuts and Jobs Act, ten percentage points higher than the OECD average of 25%; the TCJA reduced the American corporate tax rate to 21%, four percentage points lower than the OECD average at the time.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "12255114", "title": "History of taxation in the United States", "section": "Section::::Corporate tax.\n", "start_paragraph_id": 110, "start_character": 0, "end_paragraph_id": 110, "end_character": 217, "text": "The United States' corporate tax rate was at its highest, 52.8 percent, in 1968 and 1969. The top rate was hiked last in 1993 to 35 percent. Under the \"Tax Cuts and Jobs Act\" of 2017, the rate adjusted to 21 percent.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "28688759", "title": "Unemployment in the United States", "section": "Section::::Political debates.:Tax policy.:Corporate income taxes.\n", "start_paragraph_id": 143, "start_character": 0, "end_paragraph_id": 143, "end_character": 784, "text": "U.S. corporate after-tax profits were at record levels during 2012 while corporate tax revenue was below its historical average relative to GDP. For example, U.S. corporate after-tax profits were at record levels during the third quarter of 2012, at an annualized $1.75 trillion. U.S. corporations paid approximately 1.2% GDP in taxes during 2011. This was below the 2.7% GDP level in 2007 pre-crisis and below the 1.8% historical average for the 1990–2011 period. In comparing corporate taxes, the Congressional Budget Office found in 2005 that the top statutory tax rate was the third highest among OECD countries behind Japan and Germany. However, the U.S. ranked 27th lowest of 30 OECD countries in its collection of corporate taxes relative to GDP, at 1.8% vs. the average 2.5%.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "40277338", "title": "Repatriation tax holiday", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 1027, "text": "In 2004, the United States Congress enacted such a tax holiday for U.S. multinational companies in the American Jobs Creation Act of 2004 (AJCA)) section 965, allowing them to repatriate foreign profits to the United States at a 5.25% tax rate, rather than the existing 35% corporate tax rate. Under this law, corporations brought $362 billion into the American economy, primarily for the purposes of paying dividends to investors, repurchasing shares, and purchasing other corporations. The largest multi-national companies, Apple Inc., Microsoft Corp., Alphabet Inc., Cisco Systems Inc., and Oracle Corp., recalled only 9% of their cash possessions following the 2004 act. 
In 2011, Senate Democrats, arguing against another repatriation tax holiday, issued a report asserting that the previous effort had actually cost the United States Treasury $3.3 billion, and that companies receiving the tax breaks had thereafter cut over 20,000 jobs. A second repatriation tax holiday was defeated in the United States Senate in 2009.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "33992380", "title": "Political debates about the United States federal budget", "section": "Section::::Debates about tax policy.:Can the U.S. outgrow the problem?:U.S. taxes relative to foreign countries.\n", "start_paragraph_id": 160, "start_character": 0, "end_paragraph_id": 160, "end_character": 886, "text": "In comparing corporate taxes, the Congressional Budget Office found in 2005 that the top statutory tax rate was the third highest among OECD countries behind Japan and Germany. However, the U.S. ranked 27th lowest of 30 OECD countries in its collection of corporate taxes relative to GDP, at 1.8% vs. the average 2.5%. Bruce Bartlett wrote in May 2011: \"...one almost never hears that total revenues are at their lowest level in two or three generations as a share of G.D.P. or that corporate tax revenues as a share of G.D.P. are the lowest among all major countries. One hears only that the statutory corporate tax rate in the United States is high compared with other countries, which is true but not necessarily relevant. The economic importance of statutory tax rates is blown far out of proportion by Republicans looking for ways to make taxes look high when they are quite low.\"\n", "bleu_score": null, "meta": null } ] } ]
null
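The statutory-versus-effective distinction in the answers above is easy to show with a little arithmetic. The sketch below uses invented deduction and credit figures purely for illustration; it is not drawn from any actual filing or from the cited passages.

```python
# Minimal sketch (hypothetical numbers): why a 35% statutory rate can yield a
# much lower effective rate once deductions and credits are applied.

def effective_tax_rate(pretax_profit, deductions, credits, statutory_rate=0.35):
    """Effective rate = tax actually paid / pre-tax profit."""
    taxable_income = max(pretax_profit - deductions, 0.0)
    tax_paid = max(taxable_income * statutory_rate - credits, 0.0)
    return tax_paid / pretax_profit

# Hypothetical inputs: $100M pre-tax profit, $40M of deductions, $8M of credits.
print(f"{effective_tax_rate(100e6, 40e6, 8e6):.1%}")  # 13.0%, well below the 35% statutory rate
```

With these made-up inputs, the headline 35% rate collapses to roughly 13%, which is the kind of gap the answers describe.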
2fqp0o
Thoughts on "The Forge of Christendom" by Tom Holland?
[ { "answer": "I'm working through Tom Hollands 'Persian Fire' and he is very honest in the forward about the amount of embellishment and speculation in the work.\n\nI'm reading this with a few other people, and while they find it significantly more enjoyable to read, we all feel that we're reading exactly what he said, speculation and embellishment. I don't know if that helps here at all.\n\nHopefully a historian will hop in.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "7338180", "title": "Herenaus Haid", "section": "Section::::Life and works.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 1322, "text": "His life work was the establishment of the catechism course in his church of \"Unsere liebe Frau\" (Our Lady), whereby he has merited a place in the history of catechism. The origin and growth of this foundation is described in his large work \"Die gesamte christliche Lehre in ihrem Zusammenhang\" 'the whole Christian teaching in its context' (7 volumes, Munich, 1837–45). In the preface to the seventh volume he explains the manner in which he was wont to conduct his catechizing. In his simple statements is to be found a complete theory or system of catechism. He lays special stress on the Roman catechism and the catechism of Canisius. The deep veneration in which Haid, from his earliest youth, had held the latter found expression in his later writings, when he not only edited under different forms and translated the \"Summa doctrinæ christianæ\" of Peter Canisius, but also published some of the smaller works and a comprehensive biography of their author. During the closing years of his life he was afflicted with almost total blindness, but he bore his affliction with the greatest resignation. When death claimed him he had almost reached his ninetieth year. An account of a number of Haid's smaller works, not mentioned above, is to be found in the third volume of Kayser's \"Bucherlexikon\" (Leipzig, 1835), 16.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "38387342", "title": "Affirmations (L. Ron Hubbard)", "section": "Section::::Contents of the Affirmations.:\"The Book\".\n", "start_paragraph_id": 54, "start_character": 0, "end_paragraph_id": 54, "end_character": 514, "text": "The last part of the document is titled \"The Book\", which appears to allude to his authorship in mid-1938 of a still-unpublished manuscript called \"Excalibur\", which he refers to as \"The One Commandment\" in the Affirmations. He wrote that it had \"freed you forever from the fears of the material world and gave you material control over people.\" The document lists Hubbard's personal goals, self-compliments and statements of what he believed (or wanted to believe) were his extraordinary qualities. For instance:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "31729305", "title": "The Return of the Prodigal Son (Rembrandt)", "section": "Section::::Reception.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 471, "text": "Dutch priest Henri Nouwen (1932–1996) was so taken by the painting that he eventually wrote a short book, \"The Return of the Prodigal Son: A Story of Homecoming\" (1992), using the parable and Rembrandt's painting as frameworks. He begins by describing his visit to the State Hermitage Museum in 1986, where he was able to contemplate the painting alone for hours. 
Considering the role of the father and sons in the parable in relation to Rembrandt's biography, he wrote:\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "291978", "title": "Walden", "section": "Section::::Plot.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 648, "text": "Where I Lived, and What I Lived For: Thoreau recollects thoughts of places he stayed at before selecting Walden Pond, and quotes Roman Philosopher Cato's advice \"consider buying a farm very carefully before signing the papers.\" His possibilities included a nearby Hollowell farm (where the \"wife\" unexpectedly decided she wanted to keep the farm). Thoreau takes to the woods dreaming of an existence free of obligations and full of leisure. He announces that he resides far from social relationships that mail represents (post office) and the majority of the chapter focuses on his thoughts while constructing and living in his new home at Walden.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6907462", "title": "Abraham de Revier Sr.", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 529, "text": "He was one of the ninety-six original members of the church and was the patriarch of a leading family in the Sleepy Hollow community. He has also been credited as the author of a private memorandum book that is now lost to history, which was heavily drawn upon in 1715 by Dirck Storm to compose the church's history. However, he signed his 1716 will by his mark, so it is more likely that the memoranda should be credited to his son, also named Abraham and a later elder of the church, who had predeceased his father about 1712.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4041", "title": "Bede", "section": "Section::::\"Ecclesiastical History of the English People\".:Intent.\n", "start_paragraph_id": 34, "start_character": 0, "end_paragraph_id": 34, "end_character": 266, "text": "N.J. Higham argues that Bede designed his work to promote his reform agenda to Ceolwulf, the Northumbrian king. Bede painted a highly optimistic picture of the current situation in the Church, as opposed to the more pessimistic picture found in his private letters.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "6230187", "title": "Madeleine Bunting", "section": "Section::::Life and career.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 347, "text": "\"The Plot\" was published by Granta in 2010. The book is partly a memoir of her father and partly an account of the chapel he built on an acre plot of land in North Yorkshire. \"Love Of Country: A Hebridean Journey\", published in 2016, concerns visits Bunting made to the Hebrides. It received positive reviews in \"The Scotsman\" and \"The Guardian\".\n", "bleu_score": null, "meta": null } ] } ]
null
19yy8k
what exactly is the bridge of the song?
[ { "answer": "There are two ways to define the \"bridge.\" A little music theory 101:\n\nThink of how your favorite songs are built. There are choruses (which sound the same each time) and verses (which sound a little different each time). They might be put together like this: ABABA, with A being the chorus and B being a verse.\n\nHowever, to spice things up, sometimes songs throw in another element - a \"bridge.\" This is a section different from the chorus or verse to make things a little more interesting. So if I wanted to add a little variety to my song, it might go ABACABA, with C being the bridge. \n\nLet's look at an example - Weezer - If You're Wondering If I Want You To. This is a good example of simple song structure. It goes BABACA, with A being the chorus, B is the verse, and C is the bridge.\n\nThat's how the term \"bridge\" is usually used in modern music, at least. There's another meaning for this term. It can also mean a part of a song that leads up to the chorus. Example: Michael Jackson - Billie Jean. When the verse changes noticeably and he says, \"People always told me / be careful what you do...\" that's the bridge. It's getting you ready for the chorus. \n\nAlso, with this other definition of the \"bridge,\" in the Weezer song, when he starts saying, \"When the conversation stops...\" that part is the bridge.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "8311415", "title": "Anyone Seen the Bridge?", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 489, "text": "\"Anyone Seen the Bridge?\" (abbreviated as \"ASTB\") is an instrumental by the Dave Matthews Band, usually played as segue between two songs during a concert. It is an instrumental jam played by the entire band, with scat singing by Dave Matthews. Performances of the tune today typically are heard between \"So Much to Say\" and \"Too Much,\" and last around a minute and a half. The tune has been very popular during concerts since its debut, and has currently been played live over 400 times.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "950235", "title": "The Bridge (Billy Joel album)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 500, "text": "The Bridge is the tenth studio album by American singer-songwriter Billy Joel, released on July 9, 1986. It was the last studio album produced by Phil Ramone as well as the last to feature Joel's long-time bassist Doug Stegmeyer and rhythm guitarist Russell Javors. The album yielded several successful singles, including \"A Matter of Trust\" (peaking at No. 10), \"Modern Woman\" (which also appeared on the \"Ruthless People\" soundtrack, peaking at No. 10), and \"This Is the Time\" (peaking at No. 18).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "38166", "title": "Avignon", "section": "Section::::Culture.:\"Sur le Pont d'Avignon\".\n", "start_paragraph_id": 149, "start_character": 0, "end_paragraph_id": 149, "end_character": 515, "text": "The bridge of the song is the Saint-Bénézet bridge over the Rhône of which only four arches (out of the initial 22) now remain. A bridge across the Rhone was built between 1171 and 1185, with a length of some 900 m (2950 ft), but was destroyed during the siege of Avignon by Louis VIII of France in 1226. It was rebuilt but suffered frequent collapses during floods and had to be continually repaired. 
Several arches were already missing (and spanned by wooden sections) before the remainder was abandoned in 1669.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "33843618", "title": "Warm and Beautiful", "section": "Section::::Lyrics and music.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 775, "text": "The song is in the key of C major. The verses are in two phrases. Music professor Vincent Benitez finds the melody and harmony of the song particularly expressive. The melody of the first phrase begins on the tonic, C, goes up to the subdominant F, and concludes be descending to D. The melody of the second phrase of each verse is similar, except it ends with the sequence of a diminished seventh note followed by an ascending second, i.e., A flat up to B up to C. The melody of the bridge incorporates both leaps and steps, often going in opposite directions. Elements of the melodic structure are similar to those McCartney has used throughout his career, dating back to the Beatles arrangement of \"Falling in Love Again\" that they used in their 1962 concerts in Hamburg.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "13918224", "title": "Queen of the Slipstream", "section": "Section::::Recording and composition.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 221, "text": "The song is a romantic ballad composed in the key of E major with a chord progression of E-G#m-A. The bridge uses the progression of F#m-C#m-F#m-E-F#m-C#m-F#m-C#m. It is written in 4/4 time and is played at a slow tempo.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "43262651", "title": "Bridge chord", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 376, "text": "The Bridge chord is a bitonal chord named after its use in the music of composer Frank Bridge (1879–1941). It consists of a minor chord with the major chord a whole tone above (CEG & DFA), as well as a major chord with the minor chord a semitone above (CEG & DFA), which share the same mediant (E/F). () Both form eleventh chords under inversion (DFACEG = D and DFACEG = Dm).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7093757", "title": "The Bridge (Elton John song)", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 392, "text": "\"The Bridge\" is a song from Elton John's 2006 album \"The Captain & the Kid\". It is a simple, stripped-down production focused on John and his piano, with sparse further accompaniment. This is the first song since the title track of \"Breaking Hearts\" with this arrangement. The song, which was only released as a promotional single, peaked at #19 on Billboard's Hot Adult Contemporary Tracks.\n", "bleu_score": null, "meta": null } ] } ]
null
26bxor
how does a temp agency make money?
[ { "answer": "Company pays temp agency $15/hour-head. Agency pays worker $8/hr", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "441390", "title": "Temporary work", "section": "Section::::Agencies.\n", "start_paragraph_id": 17, "start_character": 0, "end_paragraph_id": 17, "end_character": 1014, "text": "The role of a temp agency is as a third party between client employer and client employee. This third party handles remuneration, work scheduling, complaints, taxes, etc. created by the relationship between a client employer and a client employee. Client firms request the type of job that is to be done, and the skills required to do it. Client firms can also terminate an assignment and are able to file a complaint about the temp. Work schedules are determined by assignment, which is determined by the agency and can last for an indeterminate period of time, extended to any point and cut short. Because the assignments are temporary, there is little incentive to provide benefits and the pay is low in situations where there is a lot of labor flexibility. (Nurses are an exception to this as there is currently a shortage). Workers can refuse assignment but risk going through an indeterminate period of downtime since work is based on availability of assignments, which the agency cannot \"create\" only fill.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23510063", "title": "Swedish Fortifications Agency", "section": "Section::::Economy.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 533, "text": "The SFA does not receive an allowance from the government budget. Instead, it covers its expenses by charging rent for the real estate it leases. In 2008, the agency's revenue was 3.0 billion SEK, and its net income 67 million SEK. The rent is adjusted so that the net income — which goes into the state treasury — conforms to a predefined level of return on equity, as set by the Ministry of Finance. To finance investments, the SFA borrows money from the National Debt Office (), which acts as the internal bank of the government.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "643551", "title": "Australian Taxation Office", "section": "Section::::History.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 263, "text": "According to its 2013–14 Annual Plan, the ATO employs an average of 22,022 people. In the 2012–13 financial year, the ATO collected revenues totalling $313.082 billion in individual income tax, company income tax, goods and services (GST) tax, excise and others.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "757040", "title": "U.S. Customs and Border Protection", "section": "Section::::Organization.:Overview.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 410, "text": "BULLET::::- Nearly 2,500 employees in CBP revenue positions collect over $30 billion annually in entry duties and taxes through the enforcement of trade and tariff laws. In addition, these employees fulfill the agency's trade mission by appraising and classifying imported merchandise. 
These employees serve in positions such as import specialist, auditor, international trade specialist, and textile analyst.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "23510063", "title": "Swedish Fortifications Agency", "section": "Section::::Personnel.\n", "start_paragraph_id": 8, "start_character": 0, "end_paragraph_id": 8, "end_character": 631, "text": "The SFA employed 689 people in 2008. The majority of the employees work on a local level in real estate units linked to garrisons, where employees work in areas such as project management, property development and maintenance services. At the regional and national level, employees work in real estate purchasing and sales, defense facility development, and various management functions. The SFA considers its core competencies to be security and protective technology. The agency has stated that it aims to increase the amount of outsourcing, and as an experiment in 2006, it outsourced the property maintenance of two garrisons.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "643551", "title": "Australian Taxation Office", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 392, "text": "As the Australian government's principal revenue collection body, the ATO collects income tax, goods and services tax (GST) and other federal taxes. The ATO also has responsibility for managing the Australian Business Register, delivering the Higher Education Loan Program, delivering many Australian government payments and administering key components of Australia's superannuation system.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "51275928", "title": "Adecco Staffing, USA", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 344, "text": "Adecco Staffing, USA is the second largest provider of recruitment and staffing services in the United States, offering human resource services such as temporary staffing, permanent placement, outsourcing, career transition or outplacement. Based in Jacksonville, FL, it serves small- and mid-sized businesses as well as Fortune 500 companies.\n", "bleu_score": null, "meta": null } ] } ]
null
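The billing arithmetic in the answer above ($15/hr billed to the client, $8/hr paid to the worker) amounts to a simple markup model. A minimal sketch, reusing those same figures as assumptions:

```python
BILL_RATE = 15.00  # hypothetical: what the client company pays the agency per worker-hour
PAY_RATE = 8.00    # hypothetical: what the agency pays the temp worker per hour

def agency_gross_margin(hours_worked):
    """Spread the agency keeps before its own payroll taxes, insurance and overhead."""
    return (BILL_RATE - PAY_RATE) * hours_worked

print(agency_gross_margin(40))  # 280.0 -> gross margin on one 40-hour week
```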
1srplj
how does using a debit/credit card work internationally?
[ { "answer": "It displays in the currency of the ATM. So in the US its Dollars, in England its the Pound, in the Eurozone its the Euro and so on.", "provenance": null }, { "answer": "With a credit card, there is no balance of course. The currency is all in terms of the country you're in. The credit card company converts it and you see the amounts in your home currency in your monthly statement or on their website. Sometimes there is a fee, but generally this is actually the cheapest and best way to convert currency. Much cheaper than those booths/stores that convert it for you.\n\n\n\nNot sure about debit cards.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "9008", "title": "Debit card", "section": "Section::::Debit cards around the world.:Canada.\n", "start_paragraph_id": 75, "start_character": 0, "end_paragraph_id": 75, "end_character": 701, "text": "In Canada, the debit card is sometimes referred to as a \"bank card\". It is a client card issued by a bank that provides access to funds and other bank account transactions, such as transferring funds, checking balances, paying bills, etc., as well as point of purchase transactions connected on the Interac network. Since its national launch in 1994, Interac Direct Payment has become so widespread that, as of 2001, more transactions in Canada were completed using debit cards than cash. This popularity may be partially attributable to two main factors: the convenience of not having to carry cash, and the availability of automated bank machines (ABMs) and direct payment merchants on the network.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9008", "title": "Debit card", "section": "Section::::Debit cards around the world.:United Kingdom.\n", "start_paragraph_id": 213, "start_character": 0, "end_paragraph_id": 213, "end_character": 1508, "text": "In the United Kingdom, banks started to issue debit cards in the mid-1980s in a bid to reduce the number of cheques being used at the point of sale, which are costly for the banks to process; the first bank to do so was Barclays with the \"Barclays Connect\" card. As in most countries, fees paid by merchants in the United Kingdom to accept credit cards are a percentage of the transaction amount, which funds card holders' interest-free credit periods as well as incentive schemes such as points or cashback. For consumer credit cards issued within the EEA, the interchange fee is capped at 0.3%, with a cap of 0.2% for debit cards, although the merchant acquirers may charge the merchant a higher fee. Although merchants won the right through The Credit Cards (Price Discrimination) Order 1990 to charge customers different prices according to the payment method, few merchants in the UK charge less for payment by debit card than by credit card, the most notable exceptions being budget airlines and travel agents. Most debit cards in the UK lack the advantages offered to holders of UK-issued credit cards, such as free incentives (points, cashback etc. (the Tesco Bank debit card being one exception)), interest-free credit and protection against defaulting merchants under Section 75 of the Consumer Credit Act 1974. 
Almost all establishments in the United Kingdom that accept credit cards also accept debit cards, but a minority of merchants, for cost reasons, accept debit cards and not credit cards.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9008", "title": "Debit card", "section": "Section::::Debit cards around the world.:United States.\n", "start_paragraph_id": 222, "start_character": 0, "end_paragraph_id": 222, "end_character": 595, "text": "Some consumers prefer \"credit\" transactions because of the lack of a fee charged to the consumer/purchaser. A few debit cards in the U.S. offer rewards for using \"credit\". However, since \"credit\" transactions cost more for merchants, many terminals at PIN-accepting merchant locations now make the \"credit\" function more difficult to access. For example, if you swipe a debit card at Wal-Mart or Ross in the U.S., you are immediately presented with the PIN screen for online debit. To use offline debit you must press \"cancel\" to exit the PIN screen, and then press \"credit\" on the next screen.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9008", "title": "Debit card", "section": "Section::::Types of debit card systems.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 1133, "text": "Although the four largest bank card issuers (American Express, Discover Card, MasterCard, and Visa) all offer debit cards, there are many other types of debit card, each accepted only within a particular country or region, for example Switch (now: Maestro) and Solo in the United Kingdom, Interac in Canada, Carte Bleue in France, EC electronic cash (formerly Eurocheque) in Germany, UnionPay in China, RuPay in India and EFTPOS cards in Australia and New Zealand. The need for cross-border compatibility and the advent of the euro recently led to many of these card networks (such as Switzerland's \"EC direkt\", Austria's \"Bankomatkasse\", and Switch in the United Kingdom) being re-branded with the internationally recognized Maestro logo, which is part of the MasterCard brand. Some debit cards are dual branded with the logo of the (former) national card as well as Maestro (for example, EC cards in Germany, Switch and Solo in the UK, Pinpas cards in the Netherlands, Bancontact cards in Belgium, etc.). The use of a debit card system allows operators to package their product more effectively while monitoring customer spending.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9008", "title": "Debit card", "section": "Section::::Debit cards around the world.:Canada.\n", "start_paragraph_id": 74, "start_character": 0, "end_paragraph_id": 74, "end_character": 971, "text": "Canada has a nationwide EFTPOS system, called Interac Direct Payment (IDP). Since being introduced in 1994, IDP has become the most popular payment method in the country. Previously, debit cards have been in use for ABM usage since the late 1970s, with credit unions in Saskatchewan and Alberta introducing the first card-based, networked ATMs beginning in June 1977. Debit cards, which could be used anywhere a credit card was accepted, were first introduced in Canada by Saskatchewan Credit Unions in 1982. In the early 1990s, pilot projects were conducted among Canada's six largest banks to gauge security, accuracy and feasibility of the Interac system. Slowly in the later half of the 1990s, it was estimated that approximately 50% of retailers offered Interac as a source of payment. 
Retailers, many small transaction retailers like coffee shops, resisted offering IDP to promote faster service. In 2009, 99% of retailers offer IDP as an alternative payment form.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "9008", "title": "Debit card", "section": "Section::::Types of debit card systems.:Offline debit system.\n", "start_paragraph_id": 13, "start_character": 0, "end_paragraph_id": 13, "end_character": 545, "text": "Offline debit cards have the logos of major credit cards (for example, Visa or MasterCard) or major debit cards (for example, Maestro in the United Kingdom and other countries, but not the United States) and are used at the point of sale like a credit card (with payer's signature). This type of debit card may be subject to a daily limit, and/or a maximum limit equal to the current/checking account balance from which it draws funds. Transactions conducted with offline debit cards require 2–3 days to be reflected on users’ account balances.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "26425793", "title": "Alternative payments", "section": "Section::::Types.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 503, "text": "A debit card (also known as a bank card or check card) is a plastic card that provides an alternative payment method to cash when making purchases. A charge card is a plastic card that provides an alternative to cash when making purchases in which the issuer and the cardholder enter into an agreement that the debt incurred on the charge account will be paid in full and by due date. Debit and charge cards are used and accepted in many countries and can be used at a point of sale location or online.\n", "bleu_score": null, "meta": null } ] } ]
null
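The conversion described in the second answer above (the card network converts the foreign amount and the issuer may add a fee) is simple to model. The exchange rate and the 3% fee below are assumptions for illustration only; actual rates and fees vary by issuer and network.

```python
def home_currency_charge(amount_foreign, exchange_rate, foreign_tx_fee=0.03):
    """Convert a foreign purchase to the home currency and add the issuer's fee."""
    converted = amount_foreign * exchange_rate
    return round(converted * (1 + foreign_tx_fee), 2)

# Hypothetical: a 50 EUR purchase, an assumed rate of 1.10 USD per EUR, and a 3% fee.
print(home_currency_charge(50, 1.10))  # 56.65 (USD shown on the monthly statement)
```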
3lxqik
how does the volkswagen emission software work?
[ { "answer": "VW and Audi included software that could sense when the engine was being emission tested. Once a testing situation was detected, the engine was electronically governed to operate in a manner that would pass emissions testing - this would come at the expense of performance. \n\nWhen the software detected normal operating behavior (daily driving), the safeguards were removed and the vehicle was allowed to pollute at 40x the allowed limit. The final result was increased MPG's and performance at the expense of reduced air quality. \n\nA comprehensive FAQ can be found on [Jalopnik](_URL_0_).", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "685912", "title": "Rebadging", "section": "Section::::Luxury vehicles.\n", "start_paragraph_id": 23, "start_character": 0, "end_paragraph_id": 23, "end_character": 805, "text": "The business strategy of Volkswagen is to standardize platforms, components, and technologies, thus improving the firm's profitability and growth. For example, Audi uses components from their more pedestrian counterparts, sold as Volkswagen Group's mass-market brands. As an effort to place Audi as a \"premium\" marque, Volkswagen introduces new technologies in Audi-branded cars before fitting them to mainstream products (such as the Direct-Shift Gearbox). Nevertheless, Volkswagen uses platform sharing extensively. For example, the basic A platform underpins the Golf, Jetta, New Beetle, Audi TT and A3, SEAT Leon and Toledo, as well as the Škoda Octavia, while the \"top end\" D platform served the VW Phaeton and Bentley Continental GT in steel form, and the Audi A8 in aluminum form during the 2000s.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "47896187", "title": "Volkswagen emissions scandal", "section": "Section::::Background.:VW anti-pollution system.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 904, "text": "With the addition of a diesel particulate filter to capture soot, and on some vehicle models, a urea-based exhaust aftertreatment system, Volkswagen described the engines as being as clean as or cleaner than US and Californian requirements, while providing good performance. In reality, the system failed to combine good fuel economy with compliant emissions, and Volkswagen chose around 2006 to program the Engine Control Unit to switch from good fuel economy and high emissions to low-emission compliant mode when it detected an emissions test, particularly for the EA 189 engine. This caused the engine to emit levels above limits in daily operation, but comply with US standards when being tested, constituting a defeat device. In 2015 the news magazine \"Der Spiegel\" reported that at least 30 people at management level in Volkswagen knew about the deceit for years which Volkswagen denied in 2015.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "47896187", "title": "Volkswagen emissions scandal", "section": "Section::::Volkswagen's response.:Initial response August, September 2015.\n", "start_paragraph_id": 39, "start_character": 0, "end_paragraph_id": 39, "end_character": 1050, "text": "Volkswagen announced that 11 million cars were involved in the falsified emission reports, and that over seven billion dollars would be earmarked to deal with the costs of rectifying the software at the heart of the pollution statements. 
The newly appointed CEO of Volkswagen Mathias Müller stated that the software was only activated in a part of those 11 million cars, which has yet to be determined. The German tabloid \"Bild\" claimed that top management had been aware of the software's use to manipulate exhaust settings as early as 2007. Bosch provided the software for testing purposes and warned Volkswagen that it would be illegal to use the software to avoid emissions compliance during normal driving. \"Der Spiegel\" followed \"Bild\" with an article dated 30 September 2015 to state that some groups of people were aware of this in 2005 or 2006. \"Süddeutsche Zeitung\" had similarly reported, that Heinz-Jakob Neusser, one of Volkswagen's top executives, had ignored at least one engineer's warnings over \"possibly illegal\" practices in 2011.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "20648089", "title": "Digital Millennium Copyright Act", "section": "Section::::Criticisms.:Abuse of the anti-circumvention provision.\n", "start_paragraph_id": 141, "start_character": 0, "end_paragraph_id": 141, "end_character": 289, "text": "In 2015 Volkswagen abused the DMCA to hide their vehicles emissions cheat. It has been suggested that had the DMCA not prevented access to the software \"..a researcher with legal access to Volkswagen's software could have discovered the code that changed how the cars behave in testing..\"\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "47896187", "title": "Volkswagen emissions scandal", "section": "Section::::Volkswagen's response.:Other irregularities, November 2015.:3.0 liter TDI emissions.\n", "start_paragraph_id": 46, "start_character": 0, "end_paragraph_id": 46, "end_character": 1154, "text": "On 20 November 2015, the EPA said Volkswagen officials told the agency that all 3.0-liter TDI diesel engines sold in the US from 2009 through 2015 were also fitted with emissions-cheating software, in the form of \"alternate exhaust control devices\". These are prohibited in the United States, however the software is legal in Europe. Volkswagen acknowledges these devices' existence, but maintains that they were not installed with a \"forbidden purpose\". On 4 January 2016, the US Department of Justice filed a complaint in a federal court against VW, alleging that the respective 3.0-liter diesel engines only meet the legal emission requirements in a \"temperature conditioning\" mode that is automatically switched on during testing conditions, while at \"all other times, including during normal vehicle operation, the vehicles operate in a 'normal mode' that permits emissions of up to nine times the federal standard\". 
The complaint covers around 85,000 3.0 liter diesel vehicles sold in the United States since 2009, including the Volkswagen Touareg, Porsche Cayenne, Audi A6 Quattro, Audi A7 Quattro, Audi A8, Audi A8L, Audi Q5, and Audi Q7 models.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "47896187", "title": "Volkswagen emissions scandal", "section": "Section::::Volkswagen's response.:New orders, September 2015.\n", "start_paragraph_id": 60, "start_character": 0, "end_paragraph_id": 60, "end_character": 236, "text": "In September 2015, Volkswagen's Belgian importer, D'Ieteren, announced that it would offer free engine upgrades to 800 customers who had ordered a vehicle with a diesel engine which was likely to have been fitted with illegal software.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "528080", "title": "FADEC", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 342, "text": "A full authority digital engine (or electronics) control (FADEC) is a system consisting of a digital computer, called an \"electronic engine controller\" (EEC) or \"engine control unit\" (ECU), and its related accessories that control all aspects of aircraft engine performance. FADECs have been produced for both piston engines and jet engines.\n", "bleu_score": null, "meta": null } ] } ]
null
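The answers and the cited passages above describe the defeat device as software that detects test-like conditions and switches engine calibrations. The toy sketch below illustrates only that conditional idea; it is not Volkswagen's actual implementation, and the "test detection" signals used here (a stationary steering wheel and a speed trace matching a standard drive cycle) are assumptions based on public reporting.

```python
def looks_like_emissions_test(steering_angle_deg, speed_matches_drive_cycle):
    # On a dynamometer the wheels spin but the steering wheel never moves,
    # and speed follows a fixed, published test cycle.
    return steering_angle_deg == 0 and speed_matches_drive_cycle

def select_engine_map(steering_angle_deg, speed_matches_drive_cycle):
    # Pick an engine calibration based on whether conditions resemble a lab test.
    if looks_like_emissions_test(steering_angle_deg, speed_matches_drive_cycle):
        return "low-NOx calibration (full exhaust treatment, reduced performance)"
    return "road calibration (better mileage and performance, far higher NOx)"

print(select_engine_map(0, True))    # test-like conditions -> compliant mode
print(select_engine_map(12, False))  # ordinary driving     -> polluting mode
```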
46c4zy
what is biochemistry, and what do biochemists do?
[ { "answer": "Biochemistry is, simply put, the study of the chemical reactions that underlie biological systems. Think of the way proteins are formed, or how glucose is broken down into ATP. Besides academic study and aid of related fields like pharmacology, biochemistry has a lot of industrial applications nowadays, such as the development of compostable plastics or biological remediation of pollution.", "provenance": null }, { "answer": "At the tiniest level everything is made up of atoms. Imagine them as simple Lego building blocks. Some things are really easy to make, like building a wall out of Lego bricks, but living things are VERY complex. \n\nA living cell is like a crazy complex Lego machine that has all sorts of moving gears and machinery that can put itself together.\n\nBio-Chemistry is the study of those crazy complex machines, what pieces they are made up of and how those pieces can connect to each other, again like a Lego brick.\n\nBio-Chemists do that actual studying. They try and figure out what the pieces are and find new ways they could connect. They try to build different machines by swapping some of the pieces out and seeing what happens.\n\nThen they try and see how useful their new machines are for doing things in the real world.\n\n/u/TokyoJokeyo explains some of the uses for this in the real world.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "22531935", "title": "Faculty of Agriculture, Kagawa University", "section": "Section::::Department of Applied Biological Science.:Bioresource Chemistry and Bioenvironmental Science.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 285, "text": "Bioresource Chemistry and Bioenvironmental Science focuses on essential knowledge and application of various chemical substances with biological functions, and on developing a solid foundation in the chemistry and biology of various ecosystems (from the terrestrial land to the seas).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "616670", "title": "Biochemical engineering", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 597, "text": "Biochemical engineering, also known as bioprocess engineering, is a field of study with roots stemming from chemical engineering and biological engineering. It mainly deals with the design, construction, and advancement of unit processes that involve biological organisms or organic molecules and has various applications in areas of interest such as biofuels, food, pharmaceuticals, biotechnology, and water treatment processes. The role of a biochemical engineer is to take findings developed by biologists and chemists in a laboratory and translate that to a large-scale manufacturing process.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1686272", "title": "Chemical biology", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 560, "text": "Chemical biology is a scientific discipline spanning the fields of chemistry and biology. The discipline involves the application of chemical techniques, analysis, and often small molecules produced through synthetic chemistry, to the study and manipulation of biological systems. 
In contrast to biochemistry, which involves the study of the chemistry of biomolecules and regulation of biochemical pathways within and between cells, chemical biology deals with chemistry \"applied to\" biology (synthesis of biomolecules, simulation of biological systems etc.).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "4502", "title": "Biotechnology", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 552, "text": "Biotechnology (commonly abbreviated as biotech) is the broad area of biology involving living systems and organisms to develop or make products, or \"any technological application that uses biological systems, living organisms, or derivatives thereof, to make or modify products or processes for specific use\" (UN Convention on Biological Diversity, Art. 2). Depending on the tools and applications, it often overlaps with the (related) fields of molecular biology, bio-engineering, biomedical engineering, biomanufacturing, molecular engineering, etc.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "5180", "title": "Chemistry", "section": "", "start_paragraph_id": 2, "start_character": 0, "end_paragraph_id": 2, "end_character": 620, "text": "In the scope of its subject, chemistry occupies an intermediate position between physics and biology. It is sometimes called the central science because it provides a foundation for understanding both basic and applied scientific disciplines at a fundamental level. For example, chemistry explains aspects of plant chemistry (botany), the formation of igneous rocks (geology), how atmospheric ozone is formed and how environmental pollutants are degraded (ecology), the properties of the soil on the moon (astrophysics), how medications work (pharmacology), and how to collect DNA evidence at a crime scene (forensics).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "3954", "title": "Biochemistry", "section": "Section::::Relationship to other \"molecular-scale\" biological sciences.\n", "start_paragraph_id": 55, "start_character": 0, "end_paragraph_id": 55, "end_character": 407, "text": "BULLET::::- 'Chemical biology' seeks to develop new tools based on small molecules that allow minimal perturbation of biological systems while providing detailed information about their function. Further, chemical biology employs biological systems to create non-natural hybrids between biomolecules and synthetic devices (for example emptied viral capsids that can deliver gene therapy or drug molecules).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1686272", "title": "Chemical biology", "section": "Section::::Introduction.\n", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 386, "text": "Chemical biology is one of several interdisciplinary sciences that tend to differ from older, reductionist fields and whose goals are to achieve a description of scientific holism. Chemical biology has scientific, historical and philosophical roots in medicinal chemistry, supramolecular chemistry, bioorganic chemistry, pharmacology, genetics, biochemistry, and metabolic engineering.\n", "bleu_score": null, "meta": null } ] } ]
null
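To make the "glucose is broken down into ATP" example in the first answer concrete, the overall process biochemists trace through dozens of enzyme-catalysed steps can be summarised as one balanced equation. A minimal sketch; the ~30–32 ATP figure is the commonly cited estimate for aerobic respiration and is given here only as an illustration, not a value taken from the answers above.

```latex
% Net reaction of aerobic glucose breakdown (the sum of glycolysis, the
% citric acid cycle and oxidative phosphorylation), written as one
% balanced equation:
\[
\mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \;\longrightarrow\; 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O}
\]
% Cells capture the released energy stepwise rather than as heat,
% storing it in roughly 30--32 molecules of ATP per molecule of glucose
% (a commonly cited estimate, stated here only for illustration).
```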
qh46x
why are astronomers now using radio telescopes more than optical ones?
[ { "answer": "Light and radio waves are the same thing, photons. \n\n\n\nOptical telescopes were used in the past because they were easy to make, and you don't have to process the image, you just take the photo and then look at it.\nRadio telescopes pick up light in the 'radio' range of the light spectrum, the data needs to be processed and interpreted for us to 'see' what they are looking at.\n\n\n\nWe use Radio telescopes over Optical telescopes for certain things, because radio frequencies have a longer wavelength, and have less chance of interacting with things, or getting blocked. It's basically easier to pick up a radio signal than visible light. (Think of how radio can pass through the walls in your house to reach your stereo, but your tv remote doesn't work if someone walks in the way)\n\n\n\nThere are many types of Telescopes, including Infrared, Ultraviolet, xray, etc.", "provenance": null }, { "answer": "Are you talking about deep space telescopes, like the hubble?\n\nThere's a thing called [redshift](_URL_1_) which relates to the ever expanding universe causing the frequency of photons hitting our sensors to decrease to the point of falling below the visible spectrum, requiring modern telescopes to look for infrared frequencies or below.\n\nBut since you're five, [here's a site](_URL_0_) with some really great videos that make it all pretty while it's explained :)", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "213607", "title": "History of the telescope", "section": "Section::::Other wavelengths.:Radio telescopes.\n", "start_paragraph_id": 76, "start_character": 0, "end_paragraph_id": 76, "end_character": 335, "text": "Because radio telescopes have low resolution, they were the first instruments to use interferometry allowing two or more widely separated instruments to simultaneously observe the same source. Very long baseline interferometry extended the technique over thousands of kilometers and allowed resolutions down to a few milli-arcseconds.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "50650", "title": "Astronomy", "section": "Section::::Amateur astronomy.\n", "start_paragraph_id": 119, "start_character": 0, "end_paragraph_id": 119, "end_character": 562, "text": "Most amateurs work at visible wavelengths, but a small minority experiment with wavelengths outside the visible spectrum. This includes the use of infrared filters on conventional telescopes, and also the use of radio telescopes. The pioneer of amateur radio astronomy was Karl Jansky, who started observing the sky at radio wavelengths in the 1930s. A number of amateur astronomers use either homemade telescopes or use radio telescopes which were originally built for astronomy research but which are now available to amateurs (\"e.g.\" the One-Mile Telescope).\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "507266", "title": "Observational astronomy", "section": "Section::::Developments and diversity.:Radio astronomy.\n", "start_paragraph_id": 26, "start_character": 0, "end_paragraph_id": 26, "end_character": 438, "text": "Radio astronomy has continued to expand its capabilities, even using radio astronomy satellites to produce interferometers with baselines much larger than the size of the Earth. However, the ever-expanding use of the radio spectrum for other uses is gradually drowning out the faint radio signals from the stars. 
For this reason, in the future radio astronomy might be performed from shielded locations, such as the far side of the Moon.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "748", "title": "Amateur astronomy", "section": "Section::::Objectives.\n", "start_paragraph_id": 6, "start_character": 0, "end_paragraph_id": 6, "end_character": 745, "text": "Most amateur astronomers work at visible wavelengths, but a small minority experiment with wavelengths outside the visible spectrum. An early pioneer of radio astronomy was Grote Reber, an amateur astronomer who constructed the first purpose built radio telescope in the late 1930s to follow up on the discovery of radio wavelength emissions from space by Karl Jansky. Non-visual amateur astronomy includes the use of infrared filters on conventional telescopes, and also the use of radio telescopes. Some amateur astronomers use home-made radio telescopes, while others use radio telescopes that were originally built for astronomical research but have since been made available for use by amateurs. The One-Mile Telescope is one such example.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "46656", "title": "Radio telescope", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 1324, "text": "A radio telescope is a specialized antenna and radio receiver used to receive radio waves from astronomical radio sources in the sky. Radio telescopes are the main observing instrument used in radio astronomy, which studies the radio frequency portion of the electromagnetic spectrum emitted by astronomical objects, just as optical telescopes are the main observing instrument used in traditional optical astronomy which studies the light wave portion of the spectrum coming from astronomical objects. Radio telescopes are typically large parabolic (\"dish\") antennas similar to those employed in tracking and communicating with satellites and space probes. They may be used singly or linked together electronically in an array. Unlike optical telescopes, radio telescopes can be used in the daytime as well as at night. Since astronomical radio sources such as planets, stars, nebulas and galaxies are very far away, the radio waves coming from them are extremely weak, so radio telescopes require very large antennas to collect enough radio energy to study them, and extremely sensitive receiving equipment. Radio observatories are preferentially located far from major centers of population to avoid electromagnetic interference (EMI) from radio, television, radar, motor vehicles, and other man-made electronic devices.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "249388", "title": "Open spectrum", "section": "Section::::Radio astronomy needs.\n", "start_paragraph_id": 5, "start_character": 0, "end_paragraph_id": 5, "end_character": 559, "text": "Astronomers use many radio telescopes to look up at objects such as pulsars in our own Galaxy and at distant radio galaxies up to about half the distance of the observable sphere of our Universe. The use of radio frequencies for communication creates pollution from the point of view of astronomers, at best, creating noise or, at worst, totally blinding the astronomical community for certain types of observations of very faint objects. 
As more and more frequencies are used for communication, astronomical observations are getting more and more difficult.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "58968", "title": "Observatory", "section": "Section::::Astronomical observatories.:Ground-based observatories.:Radio observatories.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 867, "text": "Beginning in 1930s, radio telescopes have been built for use in the field of radio astronomy to observe the Universe in the radio portion of the electromagnetic spectrum. Such an instrument, or collection of instruments, with supporting facilities such as control centres, visitor housing, data reduction centers, and/or maintenance facilities are called \"radio observatories\". Radio observatories are similarly located far from major population centers to avoid electromagnetic interference (EMI) from radio, TV, radar, and other EMI emitting devices, but unlike optical observatories, radio observatories can be placed in valleys for further EMI shielding. Some of the world's major radio observatories include the Socorro, in New Mexico, United States, Jodrell Bank in the UK, Arecibo in Puerto Rico, Parkes in New South Wales, Australia, and Chajnantor in Chile.\n", "bleu_score": null, "meta": null } ] } ]
null
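The redshift point in the second answer above can be made quantitative with the standard wavelength-stretch relation. A minimal sketch in Python, assuming the usual definition λ_obs = (1 + z) · λ_emit; the Lyman-alpha wavelength and the sample redshifts are illustrative choices, not values taken from the answer or its sources.

```python
# Illustrative redshift calculation (values are examples, not from the answer above).
# Light emitted at a short (ultraviolet/visible) wavelength by a very distant
# source arrives stretched to a longer wavelength, which is why far-away
# objects are often easier to study in the infrared or radio bands.

def observed_wavelength(emitted_nm: float, z: float) -> float:
    """Observed wavelength in nm for light emitted at emitted_nm by a source at redshift z."""
    return emitted_nm * (1.0 + z)

if __name__ == "__main__":
    lyman_alpha_nm = 121.6           # hydrogen Lyman-alpha line, emitted in the ultraviolet
    for z in (0.5, 3.0, 7.0):        # hypothetical redshifts chosen for illustration
        obs = observed_wavelength(lyman_alpha_nm, z)
        print(f"z = {z}: emitted {lyman_alpha_nm} nm -> observed {obs:.1f} nm")
    # At z = 7 the line arrives near 973 nm, already in the infrared,
    # consistent with the answer's point that light from very distant
    # sources falls below the visible band.
```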
3v5kic
how do recoilless rifles work?
[ { "answer": "When the round is launched, it blows expanding gas out of the back of the tube. This balances the expanding gas that blows out of the front, along with the round.\n\nIt can't be used in tanks and the like because the back of the tube is literally where the crew sits. It would kill everyone inside.\n\nIt's also just not necessary. The weight of the tank takes the brunt of the recoil, along with other systems designed to minimize it.", "provenance": null }, { "answer": "Half the energy of a round goes out the front, half the energy goes out the back leaving nothing to affect the gun. In a simplified regular gun all the energy goes out the front leaving the same amount to push back the gun. \n\nNewton's third law for each reaction there is an equal but opposite reaction.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "36624250", "title": "Man-portable anti-tank systems", "section": "Section::::Recoilless rifles.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 999, "text": "A recoilless rifle (RR) or recoilless gun is a type of lightweight artillery system or man-portable launcher that is designed to eject some form of countermass, such as propellant gas, from the rear of the weapon at the moment of firing, creating forward thrust that counteracts most of the weapon's recoil. Technically, only devices that use a rifled barrel are recoilless \"rifles\". Smoothbore variants are recoilless \"guns\". This distinction is often lost, and both are often called recoilless rifles. Though similar in appearance to a rocket launcher, a recoilless weapon fires shells that use conventional gun propellant. The key difference from rocket launchers (whether man-portable or not) is that the projectile of the recoilless rifle is initially launched using conventional explosive propellant rather than a rocket motor. While there are rocket-assisted rounds for recoilless launchers, they are still ejected from the barrel by the detonation of an initial explosive propelling charge.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "682676", "title": "List of artillery by type", "section": "Section::::Recoilless guns.:Recoilless guns.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 367, "text": "A recoilless gun or recoilless rifle (RCL) is a lightweight weapon that fires a heavier projectile than would be practical to fire from a recoiling weapon of comparable size. Technically, only devices that use a rifled barrel are recoilless rifles. Smoothbore variants are recoilless guns. This distinction is often lost, and both are often called recoilless rifles.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "833572", "title": "Internal ballistics", "section": "Section::::General concerns.:Ratio of propellant to projectile mass.\n", "start_paragraph_id": 72, "start_character": 0, "end_paragraph_id": 72, "end_character": 706, "text": "There is a solution to the recoil issue, though it is not without cost. A muzzle brake or recoil compensator is a device which redirects the powder gas at the muzzle, usually up and back. This acts like a rocket, pushing the muzzle down and forward. The forward push helps negate the feel of the projectile recoil by pulling the firearm forwards. The downward push, on the other hand, helps counteract the rotation imparted by the fact that most firearms have the barrel mounted above the center of gravity. 
Overt combat guns, large-bore high-powered rifles, long-range handguns chambered for rifle ammunition, and action-shooting handguns designed for accurate rapid fire, all benefit from muzzle brakes.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "236060", "title": "Recoilless rifle", "section": "", "start_paragraph_id": 1, "start_character": 0, "end_paragraph_id": 1, "end_character": 982, "text": "A recoilless rifle, recoilless launcher or recoilless gun, sometimes abbreviated \"RR\" or \"RCL\" (for ReCoilLess) is a type of lightweight artillery system or man-portable launcher that is designed to eject some form of countermass such as propellant gas from the rear of the weapon at the moment of firing, creating forward thrust that counteracts most of the weapon's recoil. This allows for the elimination of much of the heavy and bulky recoil-counteracting equipment of a conventional cannon as well as a thinner-walled barrel, and thus the launch of a relatively large projectile from a platform that would not be capable of handling the weight or recoil of a conventional gun of the same size. Technically, only devices that use spin-stabilized projectiles fired from a rifled barrel are recoilless rifles, while smoothbore variants (which can be fin-stabilized or unstabilized) are recoilless guns. This distinction is often lost, and both are often called recoilless rifles.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "236060", "title": "Recoilless rifle", "section": "", "start_paragraph_id": 3, "start_character": 0, "end_paragraph_id": 3, "end_character": 924, "text": "Because some projectile velocity is inevitably lost to the recoil compensation, recoilless rifles tend to have inferior range to traditional cannons, although with a far greater ease of transport, making them popular with paratroop, mountain warfare and special forces units, where portability is of particular concern, as well as with some light infantry and infantry fire support units. The greatly diminished recoil allows for devices that can be carried by individual infantrymen: heavier recoilless rifles are mounted on light tripods, wheeled light carriages, or small vehicles, and intended to be carried by crew of two to five. The largest versions retain enough bulk and recoil to be restricted to a towed mount or relatively heavy vehicle, but are still much lighter and more portable than cannons of the same scale. Such large systems have mostly been replaced by guided anti-tank missiles in first-world armies.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1257062", "title": "Transitional ballistics", "section": "Section::::Altering transitional ballistics.:Suppressing the blast.\n", "start_paragraph_id": 14, "start_character": 0, "end_paragraph_id": 14, "end_character": 648, "text": "A \"recoil compensator\" is designed to direct the gases upwards at roughly a right angle to the bore, in essence making it a small rocket that pushes the muzzle downwards, and counters the \"flip\", or rise of the muzzle caused by the high bore line of most firearms. These are often found on \"raceguns\" used for action shooting and in heavy, rifle caliber handguns used in metallic silhouette shooting. 
In the former case, the compensator serves to keep the sights down on target for a quick follow-up shot, while in the latter case they keep the heavy recoil directed backwards, preventing the pistol from trying to twist out of the shooter's grip.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "421016", "title": "Rocket launcher", "section": "Section::::Types.:Shoulder-fired.\n", "start_paragraph_id": 12, "start_character": 0, "end_paragraph_id": 12, "end_character": 217, "text": "Recoilless rifles are sometimes confused with rocket launchers. A recoilless rifle launches its projectile using an explosive powder charge, not a rocket engine, though some such systems have sustainer rocket motors.\n", "bleu_score": null, "meta": null } ] } ]
null
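The "equal and opposite" idea behind a recoilless gun in the answers above can be checked with back-of-the-envelope numbers. A minimal sketch in Python, assuming simple conservation of momentum; the projectile mass, countermass, and velocities are made-up round numbers for illustration, not specifications of any real weapon.

```python
# Toy momentum balance for a recoilless gun (all numbers are illustrative).
# The projectile's forward momentum is cancelled by the momentum of the
# propellant gas / countermass ejected out of the back, so ideally no net
# impulse is left over to shove the launcher (or its operator) rearward.

def net_recoil_momentum(m_proj: float, v_proj: float,
                        m_counter: float, v_counter: float) -> float:
    """Net rearward momentum left on the launcher in kg*m/s (0 means recoilless)."""
    forward = m_proj * v_proj         # round leaving the muzzle
    backward = m_counter * v_counter  # gas/countermass leaving the breech
    return forward - backward

if __name__ == "__main__":
    # Hypothetical figures: a 3 kg round at 300 m/s balanced by 1.5 kg of
    # gas ejected rearward at 600 m/s.
    print(net_recoil_momentum(3.0, 300.0, 1.5, 600.0))  # 0.0 -> no recoil
    # A conventional gun ejects essentially nothing rearward, so the full
    # 900 kg*m/s must be absorbed by the gun mount or the shooter.
    print(net_recoil_momentum(3.0, 300.0, 0.0, 0.0))    # 900.0
```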
3q130z
why do night vision cameras cast a shadow?
[ { "answer": "Many night vision cameras use an infrared lamp to illuminate the scene being recorded. You can't see that light, but the camera can. Hth.", "provenance": null }, { "answer": null, "provenance": [ { "wikipedia_id": "46953", "title": "Chroma key", "section": "Section::::Tolerances.:Even lighting.\n", "start_paragraph_id": 45, "start_character": 0, "end_paragraph_id": 45, "end_character": 417, "text": "Sometimes a shadow can be used to create a visual effect. Areas of the bluescreen or greenscreen with a shadow on them can be replaced with a darker version of the desired background video image, making it look like the person is casting a shadow on them. Any spill of the chroma key color will make the result look unnatural. A difference in the focal length of the lenses used can affect the success of chroma key.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "40854179", "title": "Neutron microscope", "section": "Section::::Shadowgraphs.\n", "start_paragraph_id": 10, "start_character": 0, "end_paragraph_id": 10, "end_character": 626, "text": "Shadowgraphs are images produced by casting a shadow on a surface, usually taken with a pinhole camera and are widely used for nondestructive testing. Such cameras provide low illumination levels that require long exposure times. They also provide poor spatial resolution. The resolution of such a lens cannot be smaller than the hole diameter. A good balance between illumination and resolution is obtained when the pinhole diameter is about 100 times smaller than the distance between the pinhole and the image screen, effectively making the pinhole an f/100 lens. The resolution of an f/100 pinhole is about half a degree.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "17357470", "title": "Photographic lighting", "section": "Section::::Perceptual cause and effect.\n", "start_paragraph_id": 11, "start_character": 0, "end_paragraph_id": 11, "end_character": 643, "text": "The goal in all photographs is not to create an impression of normality. But as with magic, knowing what the audience normally expects to see required to pull off a lighting strategy which fools the brain or creates an other than normal impression. Light direction relative to the camera can make a round ball appear to be a flat disk or a sphere. The position of highlights and direction and length of shadows will provide other clues to shape and outdoors the time of day. The tone of the shadows on an object or provide contextual clues about the time of day or environment and by inference based on personal experience the mood of person.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "7340495", "title": "Eye shadow", "section": "", "start_paragraph_id": 4, "start_character": 0, "end_paragraph_id": 4, "end_character": 555, "text": "Many people use eye shadow simply to improve their appearance, but it is also commonly used in theatre and other plays, to create a memorable look, with bright, bold colors. Depending on skin tone and experience, the effect of eye shadow usually brings out glamour and gains attention. The use of eye shadow attempts to replicate the natural eye shadow that some women exhibit due to a natural contrasting pigmentation on their eyelids. 
Natural eye shadow can range anywhere from a glossy shine to one's eyelids, to a pinkish tone, or even a silver look.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "2369975", "title": "Catch light", "section": "Section::::Usage in film and television.\n", "start_paragraph_id": 9, "start_character": 0, "end_paragraph_id": 9, "end_character": 745, "text": "This method most often appears as bright spots and reflections of surroundings that can contain entire images in the subject's eyes. This property is sometimes used as a plot point in movies and television. Typically, this trope (or cliché) is represented by computer magnification of an image to gain information about the surroundings of the person being photographed, essentially using the eye as a mirror. Audiences usually perceive eyes without specular highlights to be lifeless or evil, and for this reason many cinematographers specifically eliminate catch lights on antagonistic characters. It is also commonly found in anime, usually used in an over-dramatized manner to show different emotions accompanied by exaggerated expressions.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "166868", "title": "Exposure (photography)", "section": "Section::::Optimum exposure.\n", "start_paragraph_id": 22, "start_character": 0, "end_paragraph_id": 22, "end_character": 902, "text": "In a scene with strong or harsh lighting, the \"ratio\" between highlight and shadow luminance values may well be larger than the \"ratio\" between the film's maximum and minimum useful exposure values. In this case, adjusting the camera's exposure settings (which only applies changes to the whole image, not selectively to parts of the image) only allows the photographer to choose between underexposed shadows or overexposed highlights; it cannot bring both into the useful exposure range at the same time. Methods for dealing with this situation include: using some kind of fill lighting to gently increase the illumination in shadow areas; using a graduated ND filter or gobo to reduce the amount of light coming from the highlight areas; or varying the exposure between multiple, otherwise identical, photographs (exposure bracketing) and then combining them afterwards in some kind of HDRI process.\n", "bleu_score": null, "meta": null }, { "wikipedia_id": "1578810", "title": "Photogram", "section": "Section::::History.:Prehistory.\n", "start_paragraph_id": 7, "start_character": 0, "end_paragraph_id": 7, "end_character": 1119, "text": "The phenomenon of the shadow has always aroused human curiosity and inspired artistic representation, as recorded by Pliny the Elder, and various forms of shadow play since the 1st millennium BCE. The photogram in essence is a means by which the fall of light and shade on a surface may be automatically captured and preserved. To do so required a substance that would react to light, and from the 17th century photochemical reactions were progressively observed or discovered in salts of silver, iron, uranium and chromium. In 1725 Johann Heinrich Schulze was the first to demonstrate a temporary photographic effect in silver salts, confirmed by Carl Wilhhelm Scheele in 1777, who found that violet light caused the greatest reaction in silver chloride. 
Humphry Davy and Thomas Wedgwood reported that they had produced pictures from stencils on leather and paper, but had no means of fixing them and some organic substances respond to light, as evidenced in sunburn (an effect used by Dennis Oppenheim in his 1970 \"Reading Position for Second Degree Burn\") and photosynthesis (with which Lloyd Godman forms images).\n", "bleu_score": null, "meta": null } ] } ]
null