id (string, length 5-6) | input (string, length 3-301) | output (list) | meta (null)
---|---|---|---|
fh6po0
|
Do historians generally believe that the terracotta army found near Qin Shi Huang's mausoleum was made using imported 'Hellenistic' expertise?
|
[
{
"answer": "There was a great answer on this question about a year ago by u/kungming2 in response to someone who watched the same documentary:\n\n_URL_0_",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "55985255",
"title": "Li Jian (art historian)",
"section": "Section::::The Terracotta Army.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 638,
"text": "In 2017 Li Jian was curator of the \"Terracotta Army: Legacy of the First Emperor of China\", the Qin Dynasty terracotta soldiers exhibition at the Virginia Museum of Fine Arts. The VMFA's director, Alex Neryges, stated that the Terracotta Army was the biggest archaeological discovery of all time calling the Qin Shi Huang dynasty “one of the most amazing civilizations in the history of our planet.” Discovered in 1974, the realistic terracotta portraits of the soldiers were uncovered by a farmer digging a well near the early capital city of China, Xianyang. Neryges reported record-breaking attendance at the VMFA for this exhibition.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "66297",
"title": "Chinese art",
"section": "Section::::History and development of Chinese art.:Early Imperial China (221 BC–AD 220).:Qin art.\n",
"start_paragraph_id": 37,
"start_character": 0,
"end_paragraph_id": 37,
"end_character": 837,
"text": "The Terracotta Army, inside the Mausoleum of the First Qin Emperor, consists of more than 7,000 life-size tomb terra-cotta figures of warriors and horses buried with the self-proclaimed first Emperor of Qin (Qin Shi Huang) in 210–209 BC. The figures were painted before being placed into the vault. The original colors were visible when the pieces were first unearthed. However, exposure to air caused the pigments to fade, so today the unearthed figures appear terracotta in color. The figures are in several poses including standing infantry and kneeling archers, as well as charioteers with horses. Each figure's head appears to be unique, showing a variety of facial features and expressions as well as hair styles. The spectacular realism displayed by the sculptures is an evidence of the advancement of art during the Qin Dynasty.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "53864454",
"title": "Tang Standing Horse figure, Canberra",
"section": "Section::::Iconology.:Caring for the afterlife.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 528,
"text": "For instance, numerous tomb artefacts of soldiers (also known as a \"terracotta army\") have been found in the Mausoleum of the First Qin Emperor. The Qin dynasty was militaristic, heavy-handed and bureaucratic - it was a time of intense and constant warfare with its neighbours and its military was the most powerful and technologically advanced in the world. Therefore, it is unsurprising to find an overwhelming quantity of tomb figurines cast in the image of soldiers in the tomb of Qin Shi Huang, the first emperor of China.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "148653",
"title": "Terracotta Army",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 257,
"text": "The Terracotta Army is a collection of terracotta sculptures depicting the armies of Qin Shi Huang, the first Emperor of China. It is a form of funerary art buried with the emperor in 210–209 BCE with the purpose of protecting the emperor in his afterlife.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "249686",
"title": "New Chronology (Rohl)",
"section": "Section::::Rohl's New Chronology.:Evidence adduced.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 632,
"text": "BULLET::::- Rohl notes that no Apis bull burials are recorded in the Lesser Vaults at Saqqara for the Twenty-first and early Twenty-second Dynasties. He also argues that the reburial sequence of the mummies of the New Kingdom pharaohs in the Royal Cache (TT 320) shows that these two dynasties were contemporary (thus explaining why there are too few Apis burials for the period). Rohl finds that in the royal burial ground at Tanis it appears that the tomb of Osorkon II of the 22nd Dynasty was built before that of Psusennes I of the 21st Dynasty; in Rohl's view this can only be explained if the two dynasties were contemporary.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "35333953",
"title": "Mausoleum of the First Qin Emperor",
"section": "Section::::Archaeological studies.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 1070,
"text": "The necropolis complex of Qin Shi Huang is a microcosm of the Emperor's empire and palace, with the tomb mound at the center. There are two walls, the inner and outer walls, surrounding the tomb mound, and a number of pits containing figures and artifacts were found inside and outside the walls. To the west inside the inner wall were found bronze chariots and horses. Inside the inner wall were also found terracotta figures of courtiers and bureaucrats who served the Emperor. Outside of the inner wall but inside the outer wall, pits with terracotta figures of entertainers and strongmen, as well as a pit containing a stone suit of armour were found. To the north of the outer wall were found the imperial park with bronze cranes, swan and ducks with groups of musicians. Outside the outer walls were also found imperial stables where real horses were buried with terracotta figures of grooms kneeling beside them. To the west were found mass burial grounds for the labourers forced to build the complex. The Terracotta Army is about 1.5 km east of the tomb mound.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34555",
"title": "1970s",
"section": "Section::::Popular culture.:Architecture.\n",
"start_paragraph_id": 311,
"start_character": 0,
"end_paragraph_id": 311,
"end_character": 357,
"text": "Terracotta Army figures, dating from 210 BC, were discovered in 1974 by some local farmers in Lintong District, Xi'an, Shaanxi Province, China, near the Mausoleum of the First Qin Emperor (Chinese: 秦始皇陵; pinyin: Qín Shǐhuáng Ling). In 1978, electrical workers in Mexico City found the remains of the Great Pyramid of Tenochtitlan in the middle of the city.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2t3dcz
|
how does liquid soap turn into foam when i pump it from its container? what exactly is happening?
|
[
{
"answer": "When you push the pump (which contains one chamber for soap and one for air) it creates negative pressure that brings the liquid and air together, creating foam. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "268420",
"title": "Foam",
"section": "Section::::Formation.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 341,
"text": "One of the ways foam is created is through dispersion, where a large amount of gas is mixed with a liquid. A more specific method of dispersion involves injecting a gas through a hole in a solid into a liquid. If this process is completed very slowly, then one bubble can be emitted from the orifice at a time as shown in the picture below.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2035017",
"title": "Polybenzimidazole fiber",
"section": "Section::::Synthesis.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 529,
"text": "This foam can be reduced by conducting the polycondensation at a high temperature around 200 °C and under the pressure of 2.1-4.2 MPa. The foam can also be controlled by adding high boiling point liquids such as diphenylether or cetane to the polycondesation. The boiling point can make the liquid stay in the first stage of polycondesation but evaporate in the second stage of solid condensation. The disadvantage of this method is that there are still some liquids which remain in PBI and it is hard to remove them completely.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "36550176",
"title": "Vanishing spray",
"section": "Section::::Technical details.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 639,
"text": "The can contains water (~80%), butane gas (~17%), surfactant (~1%), and other ingredients including vegetable oil (~2%). The liquefied butane expands when the product is ejected from the can. The butane evaporates instantly, forming bubbles of gas in the water/surfactant mixture. The surfactant(s) cause the bubbles to have stability and hence a gas-in-liquid colloid (foam) forms. The bubbles eventually collapse and the foam disappears, leaving only water and surfactant residue on the ground. More technical details can be found in the US patent applications for two of the commercial products available: Spuni (2001) and 9-15 (2010).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "268420",
"title": "Foam",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 444,
"text": "Solid foams can be closed-cell or open-cell. In closed-cell foam, the gas forms discrete pockets, each completely surrounded by the solid material. In open-cell foam, gas pockets connect to each other. A bath sponge is an example of an open-cell foam: water easily flows through the entire structure, displacing the air. A camping mat is an example of a closed-cell foam: gas pockets are sealed from each other so the mat cannot soak up water.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41136515",
"title": "Foam pump",
"section": "Section::::Operation.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 298,
"text": "Foamers can be purchased alone, or filled with a liquid product like soap. When the liquid is mixed with air, the liquid product can be dispersed through the pump-top as a foam. Foamers can also be re-used with different liquid products to extend the mass of the liquid by creating a foam-version.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12935867",
"title": "Spray foam",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 403,
"text": "Spray foam is a chemical product created by two materials, isocyanate and polyol resin, which react when mixed with each other and expand up to 30-60 times its liquid volume after it is sprayed in place. This expansion makes it useful as a specialty packing material which forms to the shape of the product being packaged and produces a high thermal insulating value with virtually no air infiltration.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41136515",
"title": "Foam pump",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 381,
"text": "A foam pump, or squeeze foamer and dispensing device is a non-aerosol way of dispensing liquid materials. The foam pump outputs the liquid in the form of foam and it is operated by squeezing. The parts of the foam pump are similar to those of the other pump devices. Many times the foaming pump comes with a protective cap. Most of the components are made from polypropylene (PP).\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2jdyud
|
why do i sometimes wake up in the middle of the night with the sole thought of remembering i forgot to set my alarm?
|
[
{
"answer": "I'd love to know the answer to this too. Sometimes I can't even get to sleep until I *know* I set the alarm.",
"provenance": null
},
{
"answer": "You wake up in the middle of the night. You don't hear your alarm. You think you need to hear your alarm. You think that you didn't set the alarm.\n\nNow there are two possibilities:\n\n1. You have set your alarm and you go back to sleep, annoyed that you woke up too early.\n\n2. You didn't set your alarm, set it now and go back to sleep, happy that you woke up too early instead of too late.\n\nSleep well tonight :-)",
"provenance": null
},
{
"answer": "Okay. It might have something to do with this thing I read a while ago. I haven't tried it, so I'm not sure it works. \n\nYou wake up a minute before your alarm goes off because you think about it the night before enough for your brain to set up a sort of psychic \"alarm\" that wakes you up in the time. Maybe your brain realizes that you depend on your alarm to wake up and your subconscious remembers that you did not set it? \n\nI'm not sure if this is right, but it makes the most sense to me. As for the psychic alarm, I kinda believe that works. You do have an internal clock of sorts, but that doesn't explain why I always wake up late for classes... Oops. ",
"provenance": null
},
{
"answer": "This is your subconscious reminding you of an important task. Your brain does not have the memory of having set the alarm, knows that the alarm needs to be set to keep you out of trouble, and it wakes you up to solve the problem. Similar to waking up when you hear your baby crying in the night, except your brain provides the stimulus, not your baby. :o) ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "27834",
"title": "Sleep",
"section": "Section::::Physiology.:Awakening.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 536,
"text": "Today, many humans wake up with an alarm clock; however, people can also reliably wake themselves up at a specific time with no need for an alarm. Many sleep quite differently on workdays versus days off, a pattern which can lead to chronic circadian desynchronization. Many people regularly look at television and other screens before going to bed, a factor which may exacerbate disruption of the circadian cycle. Scientific studies on sleep have shown that sleep stage at awakening is an important factor in amplifying sleep inertia.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "826127",
"title": "False awakening",
"section": "Section::::Types.:Type 1.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 382,
"text": "A common false awakening is a \"late for work\" scenario. A person may \"wake up\" in a typical room, with most things looking normal, and realize he or she overslept and missed the start time at work or school. Clocks, if found in the dream, will show time indicating that fact. The resulting panic is often strong enough to jar the person awake for real (much like from a nightmare).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1105247",
"title": "Alarm clock",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 841,
"text": "An alarm clock is a clock that is designed to alert an individual or group of individuals at specified time. The primary function of these clocks is to awaken people from their night's sleep or short naps; they are sometimes used for other reminders as well. Most use sound; some use light or vibration. Some have sensors to identify when a person is in a light stage of sleep, in order to avoid waking someone who is deeply asleep, which causes tiredness, even if the person has had adequate sleep. To stop the sound or light, a button or handle on the clock is pressed; most clocks automatically stop the alarm if left unattended long enough. A classic analog alarm clock has an extra hand or inset dial that is used to specify the time at which to activate the alarm. Alarm clocks are also found on mobile phones, watches, and computers.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6449",
"title": "Clock",
"section": "Section::::Purposes.\n",
"start_paragraph_id": 118,
"start_character": 0,
"end_paragraph_id": 118,
"end_character": 543,
"text": "The primary purpose of a clock is to \"display\" the time. Clocks may also have the facility to make a loud alert signal at a specified time, typically to waken a sleeper at a preset time; they are referred to as \"alarm clocks\". The alarm may start at a low volume and become louder, or have the facility to be switched off for a few minutes then resume. Alarm clocks with visible indicators are sometimes used to indicate to children too young to read the time that the time for sleep has finished; they are sometimes called \"training clocks\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16794864",
"title": "Nomophobia",
"section": "Section::::Research evidence.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 201,
"text": "According to one study, the first thing that 61% of people do after waking up in the morning is check their smartphones. Further, 77% of the teens reported anxiety when they are without mobile phones.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "28192562",
"title": "Diary of a Wimpy Kid: The Ugly Truth",
"section": "Section::::Summary.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 1018,
"text": "Greg is also given the responsibility of waking himself up. He tries a better alarm clock than his older one, which didn't work, a wind-up clock. He put it under his bed so he would have to get out of bed to find it. But with the clock ticking loudly under his bed, he feels like he is on top of a bomb and therefore gets no sleep. As a result, Greg accidentally sets off the fire alarm at school in his sleep-deprived state. The entire school has to evacuate, and the fire brigade is called. After everyone goes back in, the head teacher says that whoever set off the alarm will be suspended and should turn themselves in. Greg does not get caught, but a rumor goes around that the fire alarm squirts out an invisible liquid when you pull the handle and the teachers could detect it with a special wand. Then everyone thinks that the teachers used this as a trick to see which kid goes to wash his hands first. No one goes to wash their hands, and since it is the middle of flu season, the school has to close early.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31760228",
"title": "Go the Fuck to Sleep",
"section": "Section::::Themes.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 574,
"text": "\"Whatever the cause, it is definitely the case that, when faced with a kid who refuses to go to sleep, we get annoyed, like all parents before us, but, rather than just abandoning the child to the dark and telling it that it can go to sleep or stay awake as it likes but it is staying in the bed until morning (remember Proust at the opening of \"Swann’s Way\"?), we sit there with it, reading to it and singing to it and distracting it with swirling night lights until it decides it feels like going to sleep, all the while thinking to ourselves, Go the fuck to sleep, kid.\"\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1imb4r
|
what is the point of those things that appear in your eyes after sleeping?
|
[
{
"answer": "They're called eye poop in Sweden",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "340429",
"title": "Rheum",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 273,
"text": "When the individual is awake, blinking of the eyelid causes rheum to be washed away with tears via the nasolacrimal duct. The absence of this action during sleep, however, results in a small amount of dry rheum accumulating in corners of the eye, most notably in children.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31507871",
"title": "Acute haemorrhagic conjunctivitis in Ghana",
"section": "Section::::Signs of the disease.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 447,
"text": "Acute Haemmorrhagic Conjunctivitis is normally recognized by the affected individual upon waking. The eyelids stick together requiring great effort in separating them. Intense whitish mucopurulent discharge is observed throughout the day with the eye having a reddish hue. There is pain which is worse upon looking up or at light. Other symptoms include sore eyes, feeling of grittiness or burning, redness, watery discharge, swelling of eyelids.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4404875",
"title": "Mucopurulent discharge",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 317,
"text": "BULLET::::- In ophthalmology, mucopurulent discharge from the eyes, and caught in the eyelashes, is a hallmark sign of bacterial conjunctivitis. The normal buildup of tears, mucus, and dirt (compare rheum) that appears at the edge of the eyelids after sleep is not mucopurulent discharge, as it does not contain pus.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2530463",
"title": "Recurrent corneal erosion",
"section": "Section::::Treatment.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 213,
"text": "Nocturnal Lagophthalmos (where one’s eyelids don’t close enough to cover the eye completely during sleep) may be an exacerbating factor, in which case using surgical tape to keep the eye closed at night can help.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2530463",
"title": "Recurrent corneal erosion",
"section": "Section::::Prevention.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 211,
"text": "BULLET::::- learn to wake with eyes closed and still and keeping artificial tear drops within reach so that they may be squirted under the inner corner of the eyelids if the eyes feel uncomfortable upon waking.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "56533",
"title": "Diabetic retinopathy",
"section": "Section::::Signs and symptoms.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 435,
"text": "These spots are often followed within a few days or weeks by a much greater leakage of blood, which blurs the vision. In extreme cases, a person may only be able to tell light from dark in that eye. It may take the blood anywhere from a few days to months or even years to clear from the inside of the eye, and in some cases the blood will not clear. These types of large hemorrhages tend to happen more than once, often during sleep.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24807127",
"title": "Healing the man blind from birth",
"section": "Section::::Biblical accounts.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 202,
"text": "\"How then were your eyes opened?\" they asked. He replied, \"The man they call Jesus made some mud and put it on my eyes. He told me to go to Siloam and wash. So I went and washed, and then I could see.\"\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3k414s
|
why do companies spend excessive amounts of money on logos that look barely different?
|
[
{
"answer": "Peter Drucker, a very famous business professor, was often quoted as saying: \"There are only two things in a business that make money - innovation and marketing. Everything else is a cost.\"\n\nA company's logo is its face to the world. Anything that people think of when they glimpse a company's logo is probably going to strongly impact that consumers perceptions of the company.\n\nOne of my firm's clients recently heard in a focus group that a new logo they were testing out made someone think of the Nazi Swastika. This one guy's random impression was enough for our client to want to get us to speak to hundreds more people to ensure that this was just one random guy thinking that and not evocative of a larger trend. It's pretty clear how the client would have lost a lot of money if their logo made people think of nazis and they didn't catch it before making the change. That type of thing is a pretty compelling case to spend money researching a logo. A bit of money spent on research can avoid huge costs in the future if the logo/marketing idea doesn't work out.",
"provenance": null
},
{
"answer": "Keep in mind that sometimes *barely different* is a good thing. When you have an established, recognizable brand, the last thing you want to do is make your new logo unrecognizable. \n\nInstead of looking at it as *barely changing*, look at it as *incremental change*. Comparing two sequential iterations of the Google or Windows logo seems like a small change, but comparing two a few iterations apart can be a quite drastic difference. Jumping from the first version to the last would have saved costs along the way, but most likely would have alienated a lot of people because they are suddenly unable to recognize the brand.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "355011",
"title": "Icon (computing)",
"section": "Section::::Types.:Brand icons for commercial software.\n",
"start_paragraph_id": 34,
"start_character": 0,
"end_paragraph_id": 34,
"end_character": 296,
"text": "Because these company and program logos represent the company and product itself, much attention is given to their design, done frequently by commercial artists. To regulate the use of these brand icons, they are trademark registered and are considered part of the company intellectual property.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3740888",
"title": "Celebrity branding",
"section": "Section::::Effectiveness.:Example one (touchpoints).\n",
"start_paragraph_id": 141,
"start_character": 0,
"end_paragraph_id": 141,
"end_character": 1032,
"text": "Consumers decipher the cultural codes embodied in celebrity images and actively identify personal, social and cultural meaning in these idols. Therefore, this is why celebrity branding and endorsing through technology has become increasingly more of a trend with initial touch points of communicational advertising. More and more corporate brands are enlisting celebrities to differentiate their brand and create a more competitive advantage through media (IIicic & M. Webster, 2015). For example, if there are two brands that have a similar or identical product, it is almost guaranteed that the brand with the more established and well-known celebrity will be more successful in sales and interest (Ambroise, Pantin-Sohler, Valette – Florence & Albert, 2014). Big companies such as Adidas and Nike use high-profile celebrities to appeal to the emotional side of the average consumer. Celebrities provide much more than entertainment. They influence consumers' perceptions, behaviours, values and decisions (Choi and Rifon, 2012).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1029281",
"title": "Monochrom",
"section": "Section::::Main projects (in chronological order).\n",
"start_paragraph_id": 57,
"start_character": 0,
"end_paragraph_id": 57,
"end_character": 207,
"text": "BULLET::::- How well do people remember the logos of large corporations that sell consumer goods? An attempt to evaluate the actual power of commercial brands by making people draw famous logos from memory.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26248376",
"title": "Color psychology",
"section": "Section::::Brand meaning.\n",
"start_paragraph_id": 40,
"start_character": 0,
"end_paragraph_id": 40,
"end_character": 1074,
"text": "Company logos can portray meaning just through the use of color. Color affects people's perceptions of a new or unknown company. Some companies such as Victoria's Secret and H&R Block used color to change their corporate image and create a new brand personality for a specific target audience. Research done on the relationship between logo color and five personality traits had participants rate a computer-made logo in different colors on scales relating to the dimensions of brand personality. Relationships were found between color and sincerity, excitement, competence, sophistication, and ruggedness. A follow up study tested the effects of perceived brand personality and purchasing intentions. Participants were presented with a product and a summary of the preferred brand personality and had to rate the likelihood of purchasing a product based on packaging color. Purchasing intent was greater if the perceived personality matched the marketed product or service. In turn color affects perceived brand personality and brand personality affects purchasing intent.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "53759777",
"title": "Cannabis Act",
"section": "Section::::Act and its provisions.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 463,
"text": "Promotion and packaging: Companies are allowed to brand their products, but they must avoid anything that would appear to appeal directly to youth such as cartoon characters, animals, or celebrity endorsements. Event sponsorship is also not allowed. Companies can also use factual information on their packaging, such as THC levels, that would help consumers make a decision on what product to buy. Promotion is only allowed in places where youth cannot view it.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5303377",
"title": "Victory Auto Wreckers",
"section": "Section::::Advertising.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 237,
"text": "Kyle Weisner said of the advertisement, \"There are companies that do a lot of advertising, like, for example Empire, but they change their commercials weekly. With us, our message is the same, so we've never felt the need to change it.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16759967",
"title": "Branded asset management",
"section": "Section::::Background.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 460,
"text": "Branding emerged as a top management priority in the last decade due to the growing realization that a brand is one of the most valuable assets that firms can have. A brand is more than just a name on a stationery, clothes, plant, or equipment. It carries meaning to all stakeholders and represents a set of values, promises, even a personality of its own. Most companies with the biggest increases in brand value operate as single brands all over the world. \n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
5hdpq6
|
britons of reddit, can someone explain the "first past the post system"?
|
[
{
"answer": "The country is divided into \"constituencies\".\n\nEach constituency elects a single representative, or MP. (Edit, as pointed out below): they do this by voting on the candidates, and the candidate with the most votes wins. The winner doesn't need a majority of votes, they just need more votes than anyone else.\n\nMost MPs represent a party (although independent candidates are allowed to stand, and occasionally win). The party with more than 50% of MPs gets to form the government.\n\nIf no party has more than 50% of MPs, the party with the most MPs gets to try to form a government by going into coalition with other parties, so that the parties in the coalition have more than 50% of the MPs between them.",
"provenance": null
},
{
"answer": "It's the system used in the US house of representatives effectively if that helps.\n\nBut yes, other posters are correct, it's a division of the electorate into areas, called constituencies in the UK, and a series of candidates run in each area. The one with the most votes (even if it's only 20% of those who voted) wins, and represents that area in parliament.",
"provenance": null
},
{
"answer": "I guess I'll add the corrollary: what is the \"post\" in this system? If it's just plurality voting, then it sounds like you don't have to get past any particular amount. So what's the \"post\"?",
"provenance": null
},
{
"answer": "I know this may be a tad controversial, but I quite like the system we have, as there is an MP who represents your area in terms of national issues, and who you can go directly to.\n\nProportional representation is good, but by it's very nature it dilutes the ability of the electorate to have direct representation. However this is somewhat alleviated by the council system, at least in regard to local issues.\n\nIt's certainly not perfect by any means, but I don't think any system is.",
"provenance": null
},
{
"answer": "It's also called simple plurality. Basically whoever gets the most votes in the contest wins 100% of the win. So if there are 10 votes and A gets 3, B gets 4, C gets 1 and D gets 2 then B wins with 40% of the vote, not a majority but a plurality.\n\nIt's a very common form of democratic contest, also used in most US states for the electoral college votes (but not all) and pretty much all other US elections. It's more common in the older Anglo nations, less common in nations with newer constitutions because it's kinda shit and obsolete because it produces a lot of really dysfunctional behaviour like spoilers, tactical voting and gerrymandering.\n\nIn the UK we have constituency FPTP which means that the overall contest is made up of hundreds of small FPTP contests for individual seats, rather than it just being a nationwide vote and whoever wins that wins 100% of the power. So whoever wins a constituency wins 100% of that constituency but whoever gets a plurality of votes nationally does not win 100% of the nation. Similar to the US Houses in that regard.\n\nThis is worth watching\n\n_URL_0_",
"provenance": null
},
{
"answer": "It's also worth mentioning that the votes are actually made on paper, collected up and counted. The polling stations all close at the same time, although most will have ballot boxes returned to 'HQ', where ever that is for the election, throughout the day and stacked ready. Then there is a competition between some of the Returning Officers (literally, they 'return' the vote result) to be the first to get their votes counted and to announce the result. I say some because some areas don't want to race! There can be recounts if the count is close.\n\nThere are elections every year in most places because local Council members tend to be replaced in 'thirds\" over a three year cycle as well as the General Election (for Parliament) and European Elections, plus County Council elections in some places.\n\nI worked in an area where a Council member election was tied and it was decided on the flip of a coin.",
"provenance": null
},
{
"answer": "First of all, it's debatably a poor system. It suits the 2 bigger parties (Conservative and Labour) that win, and hinders the smaller parties that don't, so makes it almost impossible to get the voting system changed as those parties in power vote against the change. Bare this in mind, will explain more later.\n\nThere are 650 'seats' in the UK elections, literally meaning the number of seats up for grabs in the Houses of Parliament. So, there are 650 areas of the UK that vote for an individual to be their area's MP (Member of Parliament). The individual with the most votes wins. That individual can stand as a representative of a specific party (Conservative, Labour, Lib Dem, Green, UKIP, SNP, DUP, Monster Raving Looney Party, etc.) or run as an independent. For example, the MP for Birmingham Yardley is Labour's Jess Phillips as she received the most votes of all the candidates standing for election in that area, called a constituency. She therefore takes 1 seat in the Houses of Parliament. \n\nThe process of separate constituencies voting for an individual to represent their area happens 650 times across the country during 1 election day, typically in May, once every 5 years. For 1 party to win outright, they must win 50% of the seats +1 (326, this figure being the 'post' in the first past the post phrase). This gives them a majority and can govern alone, as happened in 2015 when the Conservatives won. However, if a party doesn't get 50% of seats, as in 2010, the party with the most seats has the opportunity to form a government with another party. This is not necessarily with the next most voted party. The top 2 are usually Labour and Conservative, Labour being much more left wing, Conservative much further right, making a coalition here effectively impossible. \n\nFor there to be a more effective working relationship between parties, the party with the most seats try to combine their total of seats with a smaller parties total of seats to then make the magic 326 seat figure. In 2010, the Conservatives combined their 306 seats with the Lib Dems 57 seats, creating a majority, and they worked together for the next 5 years. \n\nNow, the reason the system is deemed unfair is that it disproportionately awards a high number of seats to the big parties in relation to number of votes and a disproportionally low number of seats to smaller parties. For example, the Conservatives got 330 seats from 11,000,000 votes in 2015. Whereas UKIP got 1 seat from 3,800,000 votes. This happens because the smaller parties can get a consistently solid number of votes across the country, but very rarely enough to come 1st in any given constituency. It can also be seen as unfair if a smaller party is very prevalent in a highly specific area, for example the SNP in Scotland (who only stand in Scotland and not the rest of the UK) got 59 seats from 1,500,000 votes. \n\nSigned up to answer, never had a ELI5 that I've been qualified to answer until now!\n\nEDIT: Removed opinions ",
"provenance": null
},
{
"answer": "Not in Britain, but this [CPG Grey video](_URL_0_) is an outstanding explanation",
"provenance": null
},
{
"answer": "_URL_0_\n\nBest video for understanding this system!\n",
"provenance": null
},
{
"answer": "Why just Britons? The US and many other \"democratic\" countries use the same system.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "22714412",
"title": "2011 New Zealand voting system referendum",
"section": "Section::::Referendum.:Alternative voting systems.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 207,
"text": "First past the post was used in New Zealand prior to MMP, and the three other systems were recommended by the Royal Commission on the Electoral System for further scrutiny in 1986 and were voted on in 1992.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "43747946",
"title": "2015 United Kingdom general election",
"section": "Section::::Opinion polling.:Predictions one month before the vote.\n",
"start_paragraph_id": 92,
"start_character": 0,
"end_paragraph_id": 92,
"end_character": 408,
"text": "The first-past-the-post system used in UK general elections means that the number of seats won is not closely related to vote share. Thus, several approaches were used to convert polling data and other information into seat predictions. The table below lists some of the predictions. ElectionForecast was used by \"Newsnight\" and FiveThirtyEight. May2015.com is a project run by the \"New Statesman\" magazine.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "261709",
"title": "First-past-the-post voting",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 627,
"text": "A first-past-the-post (FPTP and sometimes abbreviated to FPP) electoral system is one in which voters indicate on a ballot the candidate of their choice, and the candidate who receives the most votes wins. This is sometimes described as \"winner takes all\". First-past-the-post voting is a plurality voting method. FPTP is a common, but not universal, feature of electoral systems with single-member electoral divisions, and is practised in close to one third of countries. Notable examples include Canada, India, the United Kingdom, and the United States, as well as most of their current or former colonies and protectorates.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14538121",
"title": "The First Post",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 491,
"text": "The First Post was a British daily online news magazine based in London. Launched in August 2005, it was sold to Dennis Publishing in 2008 and retitled \"The Week\" at the end of 2014. In its current format, it publishes news, current affairs, lifestyle, opinion, arts and sports pages, and features an online games arcade and a cinema featuring short films, virals, trailers and eyewitness news footage. There are also quick-read digests of the UK newspapers' news, opinion and sports pages.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14538121",
"title": "The First Post",
"section": "Section::::Contributors.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 459,
"text": "\"The First Post\" has no discernible political bias. Regular writers have included the left wing Alexander Cockburn, commenting on US politics, and Sir Peregrine Worsthorne, generally perceived as a conservative, writing on UK and international issues. Contributors are based in a wide range of countries. \"The First Post\" was devised by Mark Law who was the editor until September 2009. It is edited by Nigel Horne, former editor of the \"Telegraph\" magazine.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "46497615",
"title": "2017 United Kingdom general election",
"section": "Section::::Opinion polling and seat projections.:Predictions three weeks before the vote.\n",
"start_paragraph_id": 138,
"start_character": 0,
"end_paragraph_id": 138,
"end_character": 284,
"text": "The first-past-the-post system used in UK general elections means that the number of seats won is not directly related to vote share. Thus, several approaches are used to convert polling data and other information into seat predictions. The table below lists some of the predictions.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1902917",
"title": "Sub-Roman Britain",
"section": "Section::::Meaning of terms.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 813,
"text": "This period has attracted a great deal of academic and popular debate, in part because of the scarcity of the written source material. The term \"post-Roman Britain\" is also used for the period, mainly in non-archaeological contexts; \"sub-Roman\" and \"post-Roman\" are both terms that apply to the old Roman province of Britannia, i.e. Britain south of the Forth–Clyde line. The history of the area between Hadrian's Wall and the Forth–Clyde line is similar to that of Wales (see Rheged, Bernicia, Gododdin and Strathclyde). North of the line lay a thinly-populated area including the kingdoms of the Maeatae (in Angus), Dalriada (in Argyll), and the kingdom whose \"kaer\" (castle) near Inverness was visited by Saint Columba. The Romans referred to these peoples collectively as \"Picti\" Picts, meaning Painted Ones.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
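
The plurality rule described in the answers above lends itself to a short worked example. The sketch below (Python, with invented constituency names and vote counts that are not taken from the record) tallies votes per constituency and awards each seat to the candidate with the most votes, which is how a party can win a seat, and ultimately a majority of seats, without a majority of the overall vote.

```python
# Minimal first-past-the-post sketch. Constituencies and vote counts are
# hypothetical, used only to illustrate plurality ("most votes wins") counting.
from collections import Counter

constituencies = {
    "Constituency A": {"Party 1": 3, "Party 2": 4, "Party 3": 1, "Party 4": 2},
    "Constituency B": {"Party 1": 6, "Party 2": 3, "Party 3": 1},
    "Constituency C": {"Party 2": 5, "Party 3": 4, "Party 4": 2},
}

seats = Counter()
for name, votes in constituencies.items():
    # The seat goes to whoever has the most votes, even without a majority.
    winner = max(votes, key=votes.get)
    seats[winner] += 1
    share = votes[winner] / sum(votes.values())
    print(f"{name}: {winner} wins the seat with {share:.0%} of the vote")

print("Seat totals:", dict(seats))
```

With these made-up numbers, Party 2 takes two of the three seats on roughly 39% of the overall vote, mirroring the disproportion between votes and seats that the answers discuss.
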
b45v11
|
Did any military powers use light (most likely the reflection of it) in military tactics in an attempt to blind or burn the opposition?
|
[
{
"answer": "Sorry, we don't allow [\"example seeking\" questions](_URL_0_). It's not that your question was bad; it's that these kinds of questions tend to produce threads that are collections of disjointed, partial, inadequate responses. If you have a question about a specific historical event, period, or person, feel free to rewrite your question and submit it again. If you don't want to rewrite it, you might try submitting it to /r/history, /r/askhistory, or /r/tellmeafact. \n\nFor further explanation of the rule, feel free to consult [this META thread](_URL_1_).",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "41706552",
"title": "Battlefield illumination",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 325,
"text": "Battlefield illumination is technology that improves visibility for military forces operating in difficult light conditions. The risks and dangers to armies fighting in poor light have been known since Ancient Chinese times. Prior to the advent of the electrical age, fire was used to improve visibility on the battlefield. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41706552",
"title": "Battlefield illumination",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 369,
"text": "Modern armies use a variety of equipment and discharge devices to create artificial light. If natural light is not present searchlights, whether using visible light or infrared, and flares can be used. As light can be detected electronically, modern warfare has accordingly seen increased use of night vision through the use of infrared cameras and image intensifiers.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41706552",
"title": "Battlefield illumination",
"section": "Section::::Theory.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 447,
"text": "Ancient military strategists knew that natural light created shadows that can hide form while bright areas would expose a military force's size and number of a military force. Ancient armies would always prefer to fight with the Sun behind them in order to use the visual glare to partially blind an opposing enemy. Backlight would also obscure movement and numbers making it more difficult for an enemy to react quickly to any tactical assault. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29586670",
"title": "Counter-illumination",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 237,
"text": "Counter-illumination has not so far come into widespread military use, but during the Second World War it was trialled in ships in the Canadian Diffused lighting camouflage project, and in aircraft in the American Yehudi lights project.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1869769",
"title": "Blackout (wartime)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 506,
"text": "A blackout during war, or in preparation for an expected war, is the practice of collectively minimizing outdoor light, including upwardly directed (or reflected) light. This was done in the 20th century to prevent crews of enemy aircraft from being able to identify their targets by sight, for example during the London Blitz of 1940. In coastal regions a shore-side blackout of city lights also helped protect ships from being seen in silhouette against the shore by enemy submarines farther out at sea.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3456662",
"title": "Dazzler (weapon)",
"section": "Section::::History.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 230,
"text": "Handgun or rifle-mounted lights may also be used to temporarily blind an opponent and are sometimes marketed for that purpose. In both cases the primary purpose is to illuminate the target and their use to disorient is secondary.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "52200141",
"title": "Moonlight Batteries, Royal Artillery",
"section": "Section::::World War II.:Sicily & Italy.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 542,
"text": "Battlefield illumination was also used in the campaigns in Sicily and Italy up to and including the Gothic line. It was used to illuminate the battlefield for not only infantry attack but also, because of the ridge nature of the terrain, catching out German Artillery in the full glare of light on the opposite slopes. Careful reconnoitring of the area and individual placement achieved excellent results. The 1st Canadian Group had with them the 422nd Search-Light (Independent) Battery Royal Artillery who undertook this task successfully.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1md4ht
|
quantum computers (you can explain it like i'm a dumbass 26yr old too if that suits you more.)
|
[
{
"answer": "_URL_0_\n\nThis video explains it fairly well.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "15426942",
"title": "Quantum technology",
"section": "Section::::Applications.:Computing.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 397,
"text": "Quantum computers are the ultimate quantum network, combining 'quantum bits' or 'qubit' which are devices that can store and process quantum data (as opposed to binary data) with links that can transfer quantum information between qubits. In doing this, quantum computers are predicted to calculate certain algorithms significantly faster than even the largest classical computer available today.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12589161",
"title": "Neural cryptography",
"section": "Section::::Neural key exchange protocol.:Security against quantum computers.\n",
"start_paragraph_id": 65,
"start_character": 0,
"end_paragraph_id": 65,
"end_character": 447,
"text": "A quantum computer is a device that uses quantum mechanisms for computation. In this device the data are stored as qubits (quantum binary digits). That gives a quantum computer in comparison with a conventional computer the opportunity to solve complicated problems in a short time, e.g. discrete logarithm problem or factorization. Algorithms that are not based on any of these number theory problems are being searched because of this property.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "15426942",
"title": "Quantum technology",
"section": "Section::::Applications.:Computing.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 316,
"text": "Quantum computers are expected to have a number of significant uses in computing fields such as optimization and machine learning. They are famous for their expected ability to carry out 'Shor's Algorithm', which can be used to factorise large numbers which are mathematically important to secure data transmission.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "323392",
"title": "Theoretical computer science",
"section": "Section::::Topics.:Quantum computation.\n",
"start_paragraph_id": 47,
"start_character": 0,
"end_paragraph_id": 47,
"end_character": 960,
"text": "A quantum computer is a computation system that makes direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. Quantum computers are different from digital computers based on transistors. Whereas digital computers require data to be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1), quantum computation uses qubits (quantum bits), which can be in superpositions of states. A theoretical model is the quantum Turing machine, also known as the universal quantum computer. Quantum computers share theoretical similarities with non-deterministic and probabilistic computers; one example is the ability to be in more than one state simultaneously. The field of quantum computing was first introduced by Yuri Manin in 1980 and Richard Feynman in 1982. A quantum computer with spins as quantum bits was also formulated for use as a quantum space–time in 1968.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "25220",
"title": "Quantum computing",
"section": "Section::::Basics.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 611,
"text": "A classical computer has a memory made up of bits, where each bit is represented by either a one or a zero. A quantum computer, on the other hand, maintains a sequence of qubits, which can represent a one, a zero, or any quantum superposition of those two qubit states; a pair of qubits can be in any quantum superposition of 4 states, and three qubits in any superposition of 8 states. In general, a quantum computer with n qubits can be in any superposition of up to 2^n different states. (This compares to a normal computer that can only be in \"one\" of these 2^n states at any one time).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "25220",
"title": "Quantum computing",
"section": "Section::::Obstacles.\n",
"start_paragraph_id": 44,
"start_character": 0,
"end_paragraph_id": 44,
"end_character": 274,
"text": "There are a number of technical challenges in building a large-scale quantum computer, and thus far quantum computers have yet to solve a problem faster than a classical computer. David DiVincenzo, of IBM, listed the following requirements for a practical quantum computer:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "47734869",
"title": "Counterfactual quantum computation",
"section": "Section::::Outline of the method.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 688,
"text": "The quantum computer may be physically implemented in arbitrary ways but the common apparatus considered to date features a Mach–Zehnder interferometer. The quantum computer is set in a superposition of \"not running\" and \"running\" states by means such as the Quantum Zeno Effect. Those state histories are quantum interfered. After many repetitions of very rapid projective measurements, the \"not running\" state evolves to a final value imprinted into the properties of the quantum computer. Measuring that value allows for learning the result of some types of computations such as Grover's algorithm even though the result was derived from the non-running state of the quantum computer.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
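
The state-counting claim quoted above (n qubits correspond to up to 2^n basis states in superposition) can be illustrated numerically. The sketch below assumes nothing beyond NumPy; it builds a random normalised amplitude vector for an n-qubit register and checks its length and normalisation. It is only an illustration of the bookkeeping, not a simulation of any particular quantum computer.

```python
# Illustration of the 2**n state-counting claim from the quoted passage:
# an n-qubit register is described by 2**n complex amplitudes whose squared
# magnitudes sum to 1. Assumes only NumPy.
import numpy as np

def random_state(n_qubits: int) -> np.ndarray:
    """Return a random normalised state vector over 2**n_qubits basis states."""
    dim = 2 ** n_qubits
    amplitudes = np.random.randn(dim) + 1j * np.random.randn(dim)
    return amplitudes / np.linalg.norm(amplitudes)

state = random_state(3)                              # 3 qubits -> 8 amplitudes
print(len(state))                                    # 8
print(np.isclose(np.sum(np.abs(state) ** 2), 1.0))   # probabilities sum to 1
```
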
174ug2
|
eli: the current debate in the uk about the eu.
|
[
{
"answer": "[This](_URL_0_) excellent post by /u/loudribs over at /r/unitedkingdom gives a good overview of the pros and cons.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "50539063",
"title": "2015–16 United Kingdom renegotiation of European Union membership",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 873,
"text": "The United Kingdom renegotiation of European Union membership was a package of changes to the United Kingdom's terms of European Union (EU) membership and changes to EU rules which was first proposed by Prime Minister David Cameron in January 2013, with negotiations beginning in the summer of 2015 following the outcome of the UK General Election. The package was agreed by the President of the European Council Donald Tusk, and approved by EU leaders of all 27 other countries at the European Council session in Brussels on 18–19 February 2016 between the United Kingdom and the rest of the European Union. The changes were intended to take effect following a vote for \"Remain\" in the UK's in-out referendum, at which point suitable legislation would be presented by the European Commission. Due to the Leave result of the referendum, the changes were never implemented.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "50115800",
"title": "Issues in the 2016 United Kingdom European Union membership referendum",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 372,
"text": "Issues in the United Kingdom European Union membership referendum, 2016 are the economic, human and political issues that were discussed during the campaign about the withdrawal of the United Kingdom from the European Union, during the period leading up to the Brexit referendum of 23 June 2016. [Issues that have arisen since then are outside the scope of this article].\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9317",
"title": "European Union",
"section": "Section::::History.:Lisbon Treaty (2007present).\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 673,
"text": "From the beginning of the 2010s, the cohesion of the European Union has been tested by several issues, including a debt crisis in some of the Eurozone countries, increasing migration from the Middle East, and the United Kingdom's withdrawal from the EU. A referendum in the UK on its membership of the European Union was held in 2016, with 51.9% of participants voting to leave. The UK formally notified the European Council of its decision to leave on 29 March 2017, initiating the formal withdrawal procedure for leaving the EU, committing the UK in principle to leave the EU two years later, on 29 March 2019, unless an extension was sought and granted, which occurred.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "15770832",
"title": "United Kingdom competition law",
"section": "Section::::European Union law.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 884,
"text": "The United Kingdom joined the European Community (EC) with the European Community Act 1972, and through that became subject to EC competition law. Since the Maastricht Treaty of 1992, the EC was renamed the European Union (EU). Competition law falls under the social and economic pillar of the treaties. After the introduction of the Treaty of Lisbon the pillar structure was abandoned and competition law was subsumed in the Treaty on the Functioning of the European Union (TFEU). So where a British company is carrying out unfair business practices, is involved in a cartel or is attempting to merge in a way which would disrupt competition across UK borders, the Commission of the European Union will have enforcement powers and exclusively EU law will apply. The first provision is Article 101 TFEU, which deals with cartels and restrictive vertical agreements. Prohibited are...\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "42880700",
"title": "Big Four (Western Europe)",
"section": "Section::::Overview.:Brexit.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 536,
"text": "A European Union membership referendum took place on Thursday 23 June 2016 in the UK and resulted in an overall vote to leave the EU, by 51.9%. The British government have triggered Article 50 of the Treaty on European Union to start the process to leave the EU, which is expected to take several years. The G4 now consists of the UK and the new EU big three (Germany, France and Italy), the large founding members of the European Communities that have retaken a leading role in Europe following the decision of the UK to leave the EU.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9580",
"title": "European Free Trade Association",
"section": "Section::::Membership.:Other negotiations.:United Kingdom.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 631,
"text": "The United Kingdom was a co-founder of EFTA in 1960, but ceased to be a member upon joining the European Economic Community. The country held a referendum in 2016 on withdrawing from the EU (popularly referred to as \"Brexit\"), resulting in a 51.9% vote in favour of withdrawing. A 2013 research paper presented to the Parliament of the United Kingdom proposed a number of alternatives to EU membership which would continue to allow it access to the EU's internal market, including continuing EEA membership as an EFTA member state, or the Swiss model of a number of bilateral treaties covering the provisions of the single market.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "55974639",
"title": "2018 in the United Kingdom",
"section": "Section::::Events.:March.\n",
"start_paragraph_id": 67,
"start_character": 0,
"end_paragraph_id": 67,
"end_character": 265,
"text": "BULLET::::- The EU rejects Theresa May's proposal for \"mutual recognition\" of standards between the UK and EU as part of a post-Brexit trade relationship, while also ruling out British membership of EU regulators such as the European Medicines Agency after Brexit.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2ghbxh
|
Is there a chance of being struck by lightning in a room with the windows and doors closed?
|
[
{
"answer": "Yes, but probably not in the way you think. A lightning bolt will most likely not shoot through a window, leave the wall unmolested, shoot through the air in your room and then head right towards you. Lighting is electricity and takes the path of least electrical resistance. Your walls have metal plumbing pipes and metal household electrical wires running through them that are the path of least resistance. Therefore, a lightning bolt will mostly likely touch down from air onto your roof or the outside of a wall, and then run along the wiring and plumbing towards the ground. For this reason, being inside a building is safer than being out in an open field. But, lightning bolts carry a lot of electrical current, so not all of the current is contained exactly in the pipes and wiring that it is traveling down. A lot of the current spills out, and dissipates in all directions. So if you are touching your metal sink knob right when a bolt hits your house and runs down the plumbing in the wall just behind your sink, some of current can travel through you and give you a shock. You may not be \"struck by lightning\" in the sense of lightning first touching down from air directly on you if you are in a building, but you can still be \"struck by lightning\" in the sense that some of the electrical current from the bolt travels through you as it makes its way down through the structure of the house. \n\nIn short, to stay safe from lightning when indoors, avoid touching plumbing or the water coming out of plumbing (sinks, showers, toilets) and avoid touching plugged-in electrical equipment (appliances, corded phones, charging devices).",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "48254512",
"title": "Jochen Gerz",
"section": "Section::::Installations.:News to News (Ashes to Ashes) (1995).\n",
"start_paragraph_id": 46,
"start_character": 0,
"end_paragraph_id": 46,
"end_character": 649,
"text": "Upon entering the darkened room, the gaze falls on a black “picture”, which, surrounded by a vibrating light, appears to float in front of the wall. The image is made up of 16 monitors arranged as a compact rectangular block at a distance of 30 centimetres from the wall – the screens facing the wall. A crackling sound is audible, automatically suggesting fire and a threat. Those who venture a peek behind the tableau will note that the monitors show 16 lighted fireplaces. The banality of this domestic idyll comes as a disappointment, contrasting sharply with the spectacle of fascination and horror generated by the concealment of the reality.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8103336",
"title": "Light leak",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 258,
"text": "A light leak, considered as a problem, is a kind of stray light. It is possible to have a \"virtual\" light leak in spectral regions, like portions of the IR spectrum at room temperature, where surfaces inside the system emit significant amounts of radiation.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30115275",
"title": "Illumination problem",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 602,
"text": "The illumination problem is a resolved mathematical problem attributed to Ernst Straus in the 1950s. Straus asked if a room with mirrored walls can always be illuminated by a single point light source, allowing for repeated reflection of light off the mirrored walls. Alternatively, the question can be stated as asking that if a billiard table can be constructed in any required shape, is there a shape possible such that there is a point where it is impossible to the billiard ball in a at another point, assuming the ball is point-like and continues infinitely rather than stopping due to friction.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "49618176",
"title": "Republican Seismic Survey Center of Azerbaijan National Academy of Sciences",
"section": "Section::::Operations and research.:Behaviour during an earthquake.\n",
"start_paragraph_id": 85,
"start_character": 0,
"end_paragraph_id": 85,
"end_character": 276,
"text": "BULLET::::2. If you indoors lay down under a table or a bed. If doors are opened, stand under a doorway or in a corner inside a room. Do not forget that at jolting first of all burst and break external walls of buildings. Therefore to run or hide under a window is forbidden.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1860895",
"title": "False alarm",
"section": "Section::::Types.:Residential burglar alarms.:Causes and prevention.:Unsecured windows and doors.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 345,
"text": "Windows and doors that are not fully closed can cause the alarm contacts to be misaligned which can result in a false alarm. In addition, if a door or window is left slightly ajar, wind may be able to blow them open which will also cause a false alarm. To prevent this from happening, door and windows should always be shut securely and locked.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3743953",
"title": "Digital integration",
"section": "Section::::Applications.:Building services integration for energy management and building control.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 339,
"text": "BULLET::::- An intruder detection or access control system could be used in conjunction with light level sensors to turn lights on and off. So when you walk into a dark room the lights turn on (if you are allowed to be there) and when you leave they turn off behind you, thus making energy savings by preventing lights from being left on.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5699083",
"title": "Survival Under Atomic Attack",
"section": "Section::::Center Insert.:Five Keys To Household Safety (18).\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 287,
"text": "BULLET::::- 4. Close All Windows And Doors And Draw The Blinds: If you have time when an alert sounds, close the house up tight in order to keep out fire sparks and radioactive dusts and to lessen the chances of being cut by flying glass. Keep the house closed until all danger is past.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
168iuq
|
why do people enjoy the smell of their own farts??
|
[
{
"answer": "It's just you.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "424956",
"title": "Perfume (novel)",
"section": "Section::::Plot.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 688,
"text": "The effect his scent has had now confirms to Grenouille how much he hates people, especially as he realizes that they worship him now and that even this degree of control does not give him satisfaction. He decides to return to Paris, intending to die there, and after a long journey ends up at the fish market where he was born. He approaches a crowd of criminals gathered in a cemetery and pours the entire bottle of his final perfume on himself. The people are so drawn to him that they are compelled to obtain parts of his body, eventually tearing him to pieces and eating them. The story ends with the crowd, now embarrassed by their actions, agreeing that they did it out of \"love\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "49322461",
"title": "The Arab of the Future",
"section": "Section::::Sensory symbolism.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 413,
"text": "Smell is also vividly represented throughout the novel. The young Riad associates new places and especially new people with their smells, ranging from perfume and incense to sweat, spoiled food, and flatulence. These odors tend to convey the quality of relationships, with Sattouf explaining, \"the people whose odor I preferred were generally the ones who were the kindest to me. I find that’s still true today.”\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2269070",
"title": "Simoom",
"section": "Section::::Figurative use of the word or phenomenon.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 686,
"text": "\"Walden\" (1854), by Henry David Thoreau, references a simoom; he uses it to describe his urge to escape something most unwanted. \"There is no odor so bad as that which arises from goodness tainted. It is human, it is divine, carrion. If I knew for a certainty that a man was coming to my house with the conscious design of doing me good, I should run for my life, as from that dry and parching wind of the African deserts called the simoom, which fills the mouth and nose and ears and eyes with dust till you are suffocated, for fear that I should get some of his good done to me — some of its virus mingled with my blood. No — in this case I would rather suffer evil the natural way.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24536042",
"title": "Feces",
"section": "Section::::Characteristics.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 306,
"text": "The perceived bad odor of feces has been hypothesized to be a deterrent for humans, as consuming or touching it may result in sickness or infection. Human perception of the odor may be contrasted by a non-human animal's perception of it; for example, an animal who eats feces may be attracted to its odor.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "58403719",
"title": "Bathroom reading",
"section": "Section::::Bathroom reading and psychology.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 502,
"text": "Even when people read for extended periods of time during defecation, it is rare for bathroom readers to feel disgusted by the smell of their own feces, or even to consciously notice the smell. Sigmund Freud also noted this phenomenon in \"Civilization and Its Discontents\", though he described lack of awareness of fecal smell in general, not just while reading: \"in spite of all man's developmental advances, he scarcely finds the smell of his own excreta repulsive, but only that of other people's.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5910868",
"title": "Gus (Psych)",
"section": "Section::::Fictional biography.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 672,
"text": "Gus has a very refined sense of smell and has nicknamed his nose \"the Super Sniffer\". He is able to recognize the base component of a perfume by smelling it and can perform the same trick with food. The talent seems to be hereditary, as it has been displayed by both of his parents, and has led to the uncovering of crucial evidence in several cases. He has been shown to have a fear of dead people, having run away from a scene where a dead person is present on more than one occasion. He has showed the dislike of seeing blood on occasions. He is also well-versed in high-tech locks or safes, as demonstrated by his ability to crack an electronic lock on his first try.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "28323652",
"title": "Management and Training Corporation",
"section": "Section::::Reported incidents of violence, abuse and poor conditions.\n",
"start_paragraph_id": 56,
"start_character": 0,
"end_paragraph_id": 56,
"end_character": 217,
"text": "They have a lot of people in here. Sometimes it smells. It's too many people. Some people even talk about burning this place down. They just don't have enough space for all of us here. Sometimes it makes me go crazy.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
5geddr
|
why was earth more subject to cosmic debris impacts billions of years ago compared to today?
|
[
{
"answer": "Because there was a whole lot more debris rocking around when the solar system was new. Earth and the other planets spent the better part of 4 billion years cleaning up the solar system by either smashing into things or slinging them out into orbits where they won't intersect planets.",
"provenance": null
},
{
"answer": "Because there were a lot more cosmic debris back than, in fact Earth is a collection of cosmic debris. Over the course of billions of years Earth along with the rest of the planets in the Solar System have cleaned out the space debris adjacent to their orbits. That being said there are still billions of space rocks left in the Solar System, but since space is so big they rarely hit us.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "9228",
"title": "Earth",
"section": "Section::::Habitability.:Natural and environmental hazards.\n",
"start_paragraph_id": 100,
"start_character": 0,
"end_paragraph_id": 100,
"end_character": 411,
"text": "Large areas of Earth's surface are subject to extreme weather such as tropical cyclones, hurricanes, or typhoons that dominate life in those areas. From 1980 to 2000, these events caused an average of 11,800 human deaths per year. Many places are subject to earthquakes, landslides, tsunamis, volcanic eruptions, tornadoes, sinkholes, blizzards, floods, droughts, wildfires, and other calamities and disasters.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "38623877",
"title": "Asteroid Terrestrial-impact Last Alert System",
"section": "Section::::Context.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 582,
"text": "Major astronomical impact events have significantly shaped Earth's history, having been implicated in the formation of the Earth–Moon system, the origin of water on Earth, the evolutionary history of life, and several mass extinctions. Notable prehistorical impact events include the Chicxulub impact, 66 million years ago, believed to be the cause of the Cretaceous–Paleogene extinction event. The 37 million years old asteroid impact that caused Mistastin crater generated temperatures exceeding 2,370 °C, the highest known to have naturally occurred on the surface of the Earth.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "63794",
"title": "Impact event",
"section": "Section::::Impacts and the Earth.:Geological significance.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 719,
"text": "These modified views of Earth's history did not emerge until relatively recently, chiefly due to a lack of direct observations and the difficulty in recognizing the signs of an Earth impact because of erosion and weathering. Large-scale terrestrial impacts of the sort that produced the Barringer Crater, locally known as Meteor Crater, northeast of Flagstaff, Arizona, are rare. Instead, it was widely thought that cratering was the result of volcanism: the Barringer Crater, for example, was ascribed to a prehistoric volcanic explosion (not an unreasonable hypothesis, given that the volcanic San Francisco Peaks stand only to the west). Similarly, the craters on the surface of the Moon were ascribed to volcanism.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "240542",
"title": "History of Siberia",
"section": "Section::::Russian Empire.:Tunguska event.\n",
"start_paragraph_id": 96,
"start_character": 0,
"end_paragraph_id": 96,
"end_character": 413,
"text": "Although the Tunguska event is believed to be the largest impact event on land in Earth's recent history, impacts of similar size in remote ocean areas would have gone unnoticed before the advent of global satellite monitoring in the 1960s and 1970s. Because the event occurred in a remote area, there was little damage to human life or property, and it was in fact some years until it was properly investigated.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24749",
"title": "Permian–Triassic extinction event",
"section": "Section::::Theories about cause.:Impact event.\n",
"start_paragraph_id": 63,
"start_character": 0,
"end_paragraph_id": 63,
"end_character": 843,
"text": "An impact crater on the sea floor would be evidence of a possible cause of the P–Tr extinction, but such a crater would by now have disappeared. As 70% of the Earth's surface is currently sea, an asteroid or comet fragment is now perhaps more than twice as likely to hit ocean as it is to hit land. However, Earth's oldest ocean-floor crust is 200 million years old because it is continually destroyed and renewed by spreading and subduction. Thus, craters produced by very large impacts may be masked by extensive flood basalting from below after the crust is punctured or weakened. Yet, subduction should not be entirely accepted as an explanation of why no firm evidence can be found: as with the K-T event, an ejecta blanket stratum rich in siderophilic elements (such as iridium) would be expected to be seen in formations from the time.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "63794",
"title": "Impact event",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 470,
"text": "Impact events appear to have played a significant role in the evolution of the Solar System since its formation. Major impact events have significantly shaped Earth's history, have been implicated in the formation of the Earth–Moon system, the evolutionary history of life, the origin of water on Earth and several mass extinctions. The famous prehistoric Chicxulub impact, 66 million years ago, is believed to be the cause of the Cretaceous–Paleogene extinction event.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "578057",
"title": "Tollmann's bolide hypothesis",
"section": "Section::::Megatsunami.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 1030,
"text": "The cataclysmic scale of physical and ecological destruction that a megatsunami, like the one proposed by Kristan-Tollmann and Tollmann, would have caused, has not been recognized within the majority of long-term environmental records. Over a thousand cores from North America for which Holocene paleoclimatic and paleoenvironmental records have been reconstructed do not show evidence for the drastic environmental changes resulting from a large Holocene impact. There is a similar lack of evidence for mega-tsunami related, Holocene, catastrophic environmental disruptions and deposits reported from environmental records reconstructed from thousands of locations from all over the world. Other megatsunamis have been shown in coastal sediments analyzed by geologists and palynologists and point to tsunamis locally caused by either earthquakes, volcanic eruptions, or submarine slides. These non-impact related tsunamis show abundant records of their environmental effects through the study of pollen from cores and exposures.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3q307v
|
is it really true that cockroaches find humans repulsive?
|
[
{
"answer": "Cockroaches don't have the neurons required to feel revulsion, or much of anything, really. They're little machines with a bunch of hardwired responses and little to no ability to actually make decisions. That said, one of their responses is to remove foreign smells from their bodies. The oils on your finger qualify, so they try to remove them.",
"provenance": null
},
{
"answer": "Cockroaches wash themselves all the time, when anything gets on them. They can't talk, so statements of their opinions are all made up.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "2499027",
"title": "Cockroach",
"section": "Section::::Behavior.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 969,
"text": "Cockroaches are social insects; a large number of species are either gregarious or inclined to aggregate, and a slightly smaller number exhibit parental care. It used to be thought that cockroaches aggregated because they were reacting to environmental cues, but it is now believed that pheromones are involved in these behaviors. Some species secrete these in their feces with gut microbial symbionts being involved, while others use glands located on their mandibles. Pheromones produced by the cuticle may enable cockroaches to distinguish between different populations of cockroach by odor. The behaviors involved have been studied in only a few species, but German cockroaches leave fecal trails with an odor gradient. Other cockroaches follow such trails to discover sources of food and water, and where other cockroaches are hiding. Thus, cockroaches have emergent behavior, in which group or swarm behavior emerges from a simple set of individual interactions.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18824980",
"title": "Cockroaches in popular culture",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 1002,
"text": "Because of their long, persistent association with humans, cockroaches are frequently referred to in art, literature, folk tales and theater and film. In Western culture, cockroaches are often depicted as vile and dirty pests. Their size, long antennae, shiny appearance and spiny legs make them disgusting to many humans, sometimes even to the point of phobic responses. This is borne out in many depictions of cockroaches, from political versions of the song \"La Cucaracha\" where political opponents are compared to cockroaches, through the 1982 movie \"Creepshow\" and TV shows such as \"The X-Files\", to the Hutu extremists' reference to the Tutsi minority as cockroaches during the Rwandan Genocide in 1994 and the controversial cartoons published in the \"Iran weekly magazine\" in 2006 which implied a comparison between Iranian Azeris and cockroaches. In Dutch Soccer the term \"kakkerlakken\" (Dutch for \"cockroaches\") is used as a colloquial, often derogatory term for the supporters of Feyenoord. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "201258",
"title": "Cockatoo",
"section": "Section::::Relationship with humans.:Aviculture.\n",
"start_paragraph_id": 93,
"start_character": 0,
"end_paragraph_id": 93,
"end_character": 968,
"text": "Cockatoos are often very affectionate with their owner and at times other people but can demand a great deal of attention. Furthermore, their intense curiosity means they must be given a steady supply of objects to tinker with, chew, dismantle and destroy. Parrots in captivity may suffer from boredom, which can lead to stereotypic behaviour patterns, such as feather-plucking. Feather plucking is likely to stem from psychological rather than physical causes. Other major drawbacks include their painful bites, and their piercing screeches. The salmon-crested and white cockatoo species are particular offenders. All cockatoos have a fine powder on their feathers, which may induce allergies in certain people. In general, the smaller cockatoo species such as Goffin's and quieter Galah's cockatoos are much easier to keep as pets. The cockatiel is one of the most popular and easiest parrots to keep as a pet, and many colour mutations are available in aviculture.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2499027",
"title": "Cockroach",
"section": "Section::::Relationship with humans.:In research and education.\n",
"start_paragraph_id": 54,
"start_character": 0,
"end_paragraph_id": 54,
"end_character": 714,
"text": "Because of their ease of rearing and resilience, cockroaches have been used as insect models in the laboratory, particularly in the fields of neurobiology, reproductive physiology and social behavior. The cockroach is a convenient insect to study as it is large and simple to raise in a laboratory environment. This makes it suitable both for research and for school and undergraduate biology studies. It can be used in experiments on topics such as learning, sexual pheromones, spatial orientation, aggression, activity rhythms and the biological clock, and behavioral ecology. Research conducted in 2014 suggests that humans fear cockroaches the most, even more than mosquitoes, due to an evolutionary aversion.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2499027",
"title": "Cockroach",
"section": "Section::::Relationship with humans.:As pests.\n",
"start_paragraph_id": 56,
"start_character": 0,
"end_paragraph_id": 56,
"end_character": 933,
"text": "The Blattodea include some thirty species of cockroaches associated with humans; these species are atypical of the thousands of species in the order. They feed on human and pet food and can leave an offensive odor. They can passively transport pathogenic microbes on their body surfaces, particularly in environments such as hospitals. Cockroaches are linked with allergic reactions in humans. One of the proteins that trigger allergic reactions is tropomyosin. These allergens are also linked with asthma. About 60% of asthma patients in Chicago are also sensitive to cockroach allergens. Studies similar to this have been done globally and all the results are similar. Cockroaches can live for a few days up to a month without food, so just because no cockroaches are visible in a home does not mean they are not there. Approximately 20-48% of homes with no visible sign of cockroaches have detectable cockroach allergens in dust.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2499027",
"title": "Cockroach",
"section": "Section::::Biology.:Reproduction.\n",
"start_paragraph_id": 43,
"start_character": 0,
"end_paragraph_id": 43,
"end_character": 340,
"text": "Cockroaches use pheromones to attract mates, and the males practice courtship rituals, such as posturing and stridulation. Like many insects, cockroaches mate facing away from each other with their genitalia in contact, and copulation can be prolonged. A few species are known to be parthenogenetic, reproducing without the need for males.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2499027",
"title": "Cockroach",
"section": "Section::::Relationship with humans.:In culture.\n",
"start_paragraph_id": 75,
"start_character": 0,
"end_paragraph_id": 75,
"end_character": 367,
"text": "Because of their long association with humans, cockroaches are frequently referred to in popular culture. In Western culture, cockroaches are often depicted as dirty pests. In a 1750–1752 journal, Peter Osbeck noted that cockroaches were frequently seen and found their way to the bakeries, after the sailing ship \"Gothenburg\" ran aground and was destroyed by rocks.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2j7n9g
|
Historians, how do you feel, in general, the accuracy and completeness of wikipedia entries compares to High School history textbooks?
|
[
{
"answer": "James Loewen's *Lies My Teacher Told Me* is an indictment of the high school history textbook, including on your chosen topic of Christopher Columbus. He'd picked old textbooks and criticized them for outdated scholarship (which is one of several issues I had with his book), but apparently the situation has not improved a whole lot since then.\n\nThe problem essentially boils down to a flawed textbook writing system that is based on incremental change that only allows new scholarship to slowly percolate into the books.\n\nWikipedia pages, at least in theory, can be updated every time a new paper or book comes up.",
"provenance": null
},
{
"answer": "It depends on the article, especially on how popular the topic is. I'd say, in general, that wikipedia articles are more complete than a typical textbook. This is due to space more than anything. A textbook can't spend 1200 words on Julius Caesar's [early career](_URL_0_) before becoming consul. But the rest of his career is pretty comparable to what you might find in a textbook.\n\nThankfully you do not have to rest on my anecdotal data. Check out this [study by Roy Rosenzweig on Wikipedia](_URL_1_). This is widely read by teachers and professors and informs many opinions on the website. He concludes that Wikipedia is just as accurate as a comparable encyclopedia, a tad less accurate than a topical encyclopedia, but more exhaustive than either. The issue is that the articles often focus on material that many academics would consider \"beside the point\" for a person looking for an overview on a topic and its importance to historians.\n\nEdit: the study is about Wikipedia, not at Wikipedia.",
"provenance": null
},
{
"answer": "I think a big difference is Wikipedia encourages you to easily look further at subjects. That was a big part of why I went into history, and also helps the development of history as a long tapestry of interrelated events, not a series of dates to remember.",
"provenance": null
},
{
"answer": "This is a little bit of an unfair comparison, because most high school textbooks (at least American ones) are pretty terrible, even when they try to get things right. So in areas where Wikipedia articles even have a reasonable stab towards neutrality, recent scholarship, and so forth, they automatically win, hand's down, because the textbooks are so poor. \n\nIt's also a bit of an apples-to-oranges comparison because Wikipedia articles are not so limited in brevity as a high school textbook, and brevity is responsible for a lot of the worst aspects of textbooks in my view (when you cannot say much about something, and cannot provide evidence, then you end up with the kind of mealy-mouthed summaries that make textbooks boring and vague). \n\nAn example: textbook coverage of the use of the atomic bomb in my experience is limited to a brief account of the end of WWII, a brief framing of the \"decision to use the bomb,\" and then say it was used and the war ended so hurrah. Wikipedia, by comparison, has separate articles about all aspects of this matter, and even discusses the more recent (e.g. last decade or so) historiography of the bomb which has generally rejected the idea that there was a distinct \"decision\" to use the bomb and has cast doubt on whether the bombs actually were responsible for the Japanese surrender. Now, it's not impossible for a textbook to cover at least _some_ of what it does better (it could, for example, at least acknowledge that the timing of the atomic bombings and the Soviet declaration of war were identical, and the latter may have been at least as influential, if not more so, for the Japanese), but no matter what it does it is going to be a lot more limited in its coverage because it only has maybe one or two paragraphs to devote to the subject, as opposed to Wikipedia's pages and pages.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "2666752",
"title": "James W. Loewen",
"section": "Section::::Career.:\"Lies My Teacher Told Me\".\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 369,
"text": "The book reflects Loewen's belief that history should not be taught as straightforward facts and dates to memorize, but rather as analysis of the context and root causes of events. Loewen recommends that teachers use two or more textbooks, so that students may realize the contradictions and ask questions, such as, \"Why do the authors present the material like this?\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "254108",
"title": "Textbook",
"section": "Section::::K-12 textbooks.:High school.\n",
"start_paragraph_id": 73,
"start_character": 0,
"end_paragraph_id": 73,
"end_character": 742,
"text": "In recent years, high school textbooks of United States history have come under increasing criticism. Authors such as Howard Zinn (\"A People's History of the United States\"), Gilbert T. Sewall (\"\") and James W. Loewen (\"\"), make the claim that U.S. history textbooks contain mythical untruths and omissions, which paint a whitewashed picture that bears little resemblance to what most students learn in universities. Inaccurately retelling history, through textbooks or other literature, has been practiced in many societies, from ancient Rome to the Soviet Union (USSR) and the People's Republic of China. The content of history textbooks is often determined by the political forces of state adoption boards and ideological pressure groups.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "767082",
"title": "Historical thinking",
"section": "Section::::The Role of History Textbooks in Learning to Think Historically.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 589,
"text": "Still other critics believe that using textbooks undermines the process of learning history by sacrificing thinking skills for content—that textbooks allow teachers to cover vast amounts of names, dates, and places while encouraging students simply to memorize instead of question or analyze. For example, Sam Wineburg argues: \"Traditional history instruction constitutes a form of information, not a form of knowledge. Students might master an agreed-upon narrative, but they lacked any way of evaluating it, of deciding whether it, or any other narrative, was compelling or true” (41). \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5043734",
"title": "Wikipedia",
"section": "Section::::Reception.:Quality of writing.\n",
"start_paragraph_id": 88,
"start_character": 0,
"end_paragraph_id": 88,
"end_character": 1606,
"text": "In 2008, researchers at Carnegie Mellon University found that the quality of a Wikipedia article would suffer rather than gain from adding more writers when the article lacked appropriate explicit or implicit coordination. For instance, when contributors rewrite small portions of an entry rather than making full-length revisions, high- and low-quality content may be intermingled within an entry. Roy Rosenzweig, a history professor, stated that \"American National Biography Online\" outperformed Wikipedia in terms of its \"clear and engaging prose\", which, he said, was an important aspect of good historical writing. Contrasting Wikipedia's treatment of Abraham Lincoln to that of Civil War historian James McPherson in \"American National Biography Online\", he said that both were essentially accurate and covered the major episodes in Lincoln's life, but praised \"McPherson's richer contextualization [...] his artful use of quotations to capture Lincoln's voice [...] and [...] his ability to convey a profound message in a handful of words.\" By contrast, he gives an example of Wikipedia's prose that he finds \"both verbose and dull\". Rosenzweig also criticized the \"waffling—encouraged by the NPOV policy—[which] means that it is hard to discern any overall interpretive stance in Wikipedia history\". While generally praising the article on William Clarke Quantrill, he quoted its conclusion as an example of such \"waffling\", which then stated: \"Some historians [...] remember him as an opportunistic, bloodthirsty outlaw, while others continue to view him as a daring soldier and local folk hero.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3949711",
"title": "Lies My Teacher Told Me",
"section": "Section::::Themes.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 1000,
"text": "In \"Lies My Teacher Told Me\", Loewen criticizes modern American high school history textbooks for containing incorrect information about people and events such as Christopher Columbus, the lies and inaccuracies in the history books regarding the dealings between the Europeans and the Native Americans, and their often deceptive and inaccurate teachings told about America's commerce in slavery. He further criticizes the texts for a tendency to avoid controversy and for their \"bland\" and simplistic style. He proposes that when American history textbooks elevate American historical figures to the status of heroes, they unintentionally give students the impression that these figures are superhumans who live in the irretrievable past. In other words, the history-as-myth method teaches students that America's greatest days have already passed. Loewen asserts that the muting of past clashes and tragedies makes history boring to students, especially groups excluded from the positive histories.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16734943",
"title": "Association of Christian Schools International v. Stearns",
"section": "Section::::Decision.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 351,
"text": "The August 2008 ruling concluded that various books offered by the school shouldn't be used for a college-preparatory history class because \"it didn't encourage critical thinking skills and failed to cover 'major topics, themes and components' of U.S. history\", Otero wrote. The judge said Calvary provided little admissible evidence to the contrary.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6014851",
"title": "Reliability of Wikipedia",
"section": "Section::::Assessments.:Expert opinion.:Academe.\n",
"start_paragraph_id": 66,
"start_character": 0,
"end_paragraph_id": 66,
"end_character": 284,
"text": "In 2007, the \"Chronicle of Higher Education\" published an article written by Cathy Davidson, Professor of Interdisciplinary Studies and English at Duke University, in which she asserts that Wikipedia should be used to teach students about the concepts of reliability and credibility.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
920cra
|
how come some animals, like kangaroos, bulls and some apes, can get so jacked by eating almost no protein?
|
[
{
"answer": "One benefit they have is that they don’t sit around staring at lasers and poking at buttons all day",
"provenance": null
},
{
"answer": "They don't eat \"no\" proteins, but their diet consists of food with only a small amount of protein in it (grasses, etc do still have a small amount of protein)\n\nSo they eat a LOT of it. And have digestive systems (and muscles) that have evolved to extract the maximum benefit out of what they do eat. ",
"provenance": null
},
{
"answer": "All plant cells have protein. No always a lot of it, but it's plenty if you eat all day and can break down all the fiber. Lettuce and cabbage for example have 1% protein, that means we'd have to eat 5 kg of it each day for our recommended daily intake - assuming that we can digest it all. Sounds like a lot, but grazing animals usually eat a lot more food than we do. Hay usually has somewhere around 10% protein content. Since it's dehydrated, the protein is concentrated a lot. Corn silage, a very important cattle feed, has around 5%, and usually is mixed with a bit of high protein fodder like soy, the byproducts of vegetable oil extraction or other protein rich plants.",
"provenance": null
},
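To make the arithmetic in the answer above explicit, here is a small back-of-the-envelope sketch. The 50 g/day protein requirement and the protein fractions are the rough figures quoted in that answer (or common ballpark values), not measurements, and the function name is purely illustrative.

```python
# Rough illustration of the "low protein fraction, large intake" point.
# All figures are ballpark values echoed from the answer above.

def forage_needed_kg(daily_protein_g, protein_fraction):
    """Mass of forage (kg) needed to supply daily_protein_g grams of protein,
    given the forage's protein content as a fraction (e.g. 0.01 for 1%)."""
    return daily_protein_g / (protein_fraction * 1000.0)  # 1000 g per kg of forage

if __name__ == "__main__":
    daily_protein_g = 50.0  # assumed human-scale requirement, for comparison
    for name, fraction in [("lettuce (~1%)", 0.01),
                           ("corn silage (~5%)", 0.05),
                           ("hay (~10%)", 0.10)]:
        kg = forage_needed_kg(daily_protein_g, fraction)
        print(f"{name}: ~{kg:.1f} kg/day to reach {daily_protein_g:.0f} g of protein")
```

With these numbers, lettuce at 1% protein works out to roughly 5 kg a day, matching the figure in the answer, while hay at 10% needs only about 0.5 kg for the same protein total.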
{
"answer": "Plants do actually have a considerable amount of protein in them. The limiting factor there is how it’s extracted. Humans for example, it takes us far more energy to break down plant matter than it does meat. It’s one of those arguments vegans use a lot. X plant has more protein per 100g than red meat. But neglect the fact it takes twice as much (citation needed) energy to get that protein. Digestive systems of other animals function in a way that maximises the energy input/output of consuming entirely vegetation.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "17064",
"title": "Kangaroo",
"section": "Section::::Biology and behaviour.:Diet.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 352,
"text": "Kangaroos have single-chambered stomachs quite unlike those of cattle and sheep, which have four compartments. They sometimes regurgitate the vegetation they have eaten, chew it as cud, and then swallow it again for final digestion. However, this is a different, more strenuous, activity than it is in ruminants, and does not take place as frequently.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4554427",
"title": "Kangaroo meat",
"section": "Section::::Traditional Aboriginal use.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 470,
"text": "The kangaroo is chopped up so that many people can eat it. The warm blood and fluids from the gluteus medius and the hollow of the thoracic cavity are drained of all fluids. People drink these fluids, which studies have shown are quite harmless. Kangaroos are cut in a special way; into the two thighs, the two hips, the two sides of ribs, the stomach, the head, the tail, the two feet, the back and lower back. This is the way the Arrernte people everywhere cut it up.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "229914",
"title": "Ape",
"section": "Section::::Biology.:Diet.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 856,
"text": "Apart from humans and gorillas, apes eat a predominantly frugivorous diet, mostly fruit, but supplemented with a variety of other foods. Gorillas are predominately folivorous, eating mostly stalks, shoots, roots and leaves with some fruit and other foods. Non-human apes usually eat a small amount of raw animal foods such as insects or eggs. In the case of humans, migration and the invention of hunting tools and cooking has led to an even wider variety of foods and diets, with many human diets including large amounts of cooked tubers (roots) or legumes. Other food production and processing methods including animal husbandry and industrial refining and processing have further changed human diets. Humans and other apes occasionally eat other primates. Some of these primates are now close to extinction with habitat loss being the underlying cause.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12515769",
"title": "Tenkile",
"section": "Section::::Diet.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 524,
"text": "Since the Tenkile Tree Kangaroos are critically endangered, not much is known about their diets due to the lack of Tenkile in the wild. What is known about them is that they are mainly herbivores, which differs from other tree kangaroos. The Tenkile have been known to look for their food either in the treetops or on the ground. Their known diet is made up of tree leaves, ferns, and soft vines [10]. Other tree kangaroos have been known to diet on the same vegetation with a little variety, fruits, eggs, and young birds.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2422141",
"title": "Matschie's tree-kangaroo",
"section": "Section::::Ecology and behavior.:Diet.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 719,
"text": "The Matschie's tree-kangaroos are mainly folivorous, eating anything from leaves, sap, insects, flowers, and nuts. It was also found that they have eaten chickens in captivity as well as feeding on a variety of plants, carrots, lettuce, bananas, potatoes, hard-boiled eggs, and yams. Since they eat high fiber foods, they only eat maybe about 1 to 2 hours throughout the day and the other time of the day they are resting and digesting their food. Their digestion is similar to that of the ruminants; they have a large, “tubiform forestomach”, where most of the fermentation and breakdown of tough material takes place at; in the hind stomach, there is a mucosa lining with many glands that help absorption begin here.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17143",
"title": "Koala",
"section": "Section::::Description.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 944,
"text": "Unlike kangaroos and eucalyptus-eating possums, koalas are hindgut fermenters, and their digestive retention can last for up to 100 hours in the wild, or up to 200 hours in captivity. This is made possible by the extraordinary length of their caecum— long and in diameter—the largest proportionally of any animal. Koalas can select which food particles to retain for longer fermentation and which to pass through. Large particles typically pass through more quickly, as they would take more time to digest. While the hindgut is proportionally larger in the koala than in other herbivores, only 10% of the animal's energy is obtained from fermentation. Since the koala gains a low amount of energy from its diet, its metabolic rate is half that of a typical mammal, although this can vary between seasons and sexes. The koala conserves water by passing relatively dry faecal pellets high in undigested fibre, and by storing water in the caecum.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "178953",
"title": "Tree-kangaroo",
"section": "Section::::Behaviour.:Diet.\n",
"start_paragraph_id": 33,
"start_character": 0,
"end_paragraph_id": 33,
"end_character": 349,
"text": "The main diet of the tree-kangaroo is leaves and fruit that it gathers from the trees, but occasionally scavenged from the ground. Tree-kangaroos will also eat grains, flour, various nuts, sap and tree bark. Some captive tree-kangaroos (perhaps limited to New Guinea species) eat protein foods such as eggs, birds and snakes, making them omnivores.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1r8l2n
|
What would be considered the greatest political "blunders" of U.S.'s 1st President, George Washington?
|
[
{
"answer": "Over reliance on Alexander Hamilton. Hamilton and Washington's relationship is well known and predates his time as President but it was his over reliance on Hamilton during and after his Presidency that greatly increased political tensions, contributed to the rise of the Republican party and contributed to Adams loss in 1800. \n\nInitially Washington had leaned very heavily on James Madison, who was the most effective politician in the house, but Washington eventually turned to his cabinet largely out of constitutional reasons. His cabinet was initially a good mix of geography and political persuasion. The political arguing between Jefferson and Hamilton is known well, but initially Washington was far less biased than one would think, for instance he very nearly vetoed BOTUS I. As political tensions mounted Washington increasingly relied on Hamilton at the expense of the moderates and what would be Jeffersonians within his cabinet. For instance the negotiations between Jay and the British were run through Hamilton( the secretary of the treasury) and not state. Edmund Randolph (the secretary of state) was left almost entirely in the dark and consequently so was the ambasaddor to France, James Monroe. Jay and Hamilton's working around of the Pro-French Virginians put both Monroe and Randolph into the near impossible position of attempting to explain the Jay Treaty to their still nominal allies in France which contributed to the more famous hostilities of Adams presidency.\n\nAfter Jefferson's departure there was no remaining Republican leaning members of Washington's cabinet, only Randolph and mostly high federalists. In 1795 a letter written by Edmund Randolph to France had been seized by the British who turned it over to Alexander Hamilton, who informed Washington that it contained treasonous material to the United States. Washington, never actually read the letter nor did he have someone else read it relying solely on the opinion and word of Hamilton. He confronted Randolph in a cabinet meeting forcing him to resign. Within the 18th century society of honor, Washington's actions were highly offensive made evident by Randolph taking the unusual steps of publicly attacking Washington in his [A Vindication of Mr. Randolph's Resignation](_URL_0_) at about the same time Monroe was also removed from France. These removals meant that there was now no Republicans holding high office within the United States, and only one moderate Federalist(Lee). Jefferson and Madison in particular saw the removal of Randolph as the last straw, and any lingering sentiments towards reconciling with the Federalists was highly unlikely. Jefferson in a pained letter even wrote to Washington, telling him of how dangerous he felt Hamilton was to Washington. \n\nAdams too many problems during his presidency to get into at any length but one of the ones not of his creation was because of Washington. Adams had expanded the armed forces of the United States (either to oppose France or as some Federalists hopped to stop domestic violence ie: Republican). At this highly partisan time Adams thought that only Washington could be offered command of the army to ensure that the army held the support of all Americans. Perhaps showing the severity of the time, Washington actually accepted but refused to have anyone other than Hamilton as his second in command. Adams huffed and puffed but was left with little choice but to accept Hamilton. 
Hamilton was not the man to bring the nation together, especially at the head of an army. One of his best friends, Gouverneur Morris, wrote of Hamilton:\n\n\"Our poor friend Hamilton bestrode his hobby to the great annoyance of his friends, and not without injury to himself. He well knew that his favorite form (of government) was inadmissible, unless as the result of civil war; and I suspect that his belief in that which he called an 'approaching crisis' arose from a conviction that the kind of government most suitable, in his opinion, to this extensive country, could be established no other way\".\n\nHamilton's appointment panicked Republicans even further; many now openly thought that the army's purpose was to crush the Republican movement. Jefferson sent out letters to Republican leaders imploring them to avoid giving the Federalists any reason whatsoever for using the army against them. In doing so Jefferson allowed the public to arrive at their own conclusion regarding the army, which contributed to his victory in 1800. Adams had planned to appoint moderates in Pennsylvania and New York to high offices within the army (the two states he needed to win to secure his election) but Hamilton ensured that only high Federalists and no Republicans were appointed to command, again reinforcing the Jeffersonian fear. It was at this point that Adams realized he had lost complete control of the Federalist party, and he attempted to reassert his authority through sacking his cabinet (minus Lee) and negotiations with France, causing a civil war within the party.\n\nHad Washington continued to rely on moderate Republicans like Madison and appointed a bipartisan cabinet, much of the political infighting that resulted in the two-party system could have been lessened or delayed.\n\nOutside of that I'm not sure what else you can really hold Washington accountable for in terms of major political mistakes. Ellis in *American Creation* holds the entirety of the founding generation responsible for failing to find a solution for the natives and slavery. However, Ellis notes in the book (for the Creek confederacy for instance) that the Federal government held few tools on hand to deal with many of the issues affecting the Creek.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "36960695",
"title": "Minor characters in the Revolution at Sea Saga",
"section": "Section::::George Washington.\n",
"start_paragraph_id": 67,
"start_character": 0,
"end_paragraph_id": 67,
"end_character": 267,
"text": "George Washington, best known for being the first official president of the United States and a great military leader, appears in \"The Maddest Idea\", and is the one who assigns Major Edward Fitzgerald to the job of flushing out the traitor. \"(See George Washington)\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5897716",
"title": "His Excellency: George Washington",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 293,
"text": "His Excellency: George Washington is a 2004 biography of the first President of the United States, General George Washington. It was written by Joseph Ellis, a professor of History at Mount Holyoke College, who specializes in the founding fathers and the revolutionary and federalist periods.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "544496",
"title": "Richard Reeves (American writer)",
"section": "Section::::Career.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 514,
"text": "In November 2005, Reeves theorized that George W. Bush could be regarded as the worst president in U.S. history, noting: \"The History News Network at George Mason University has just polled historians informally on the Bush record. Four hundred and fifteen, about a third of those contacted, answered, making the project as unofficial as it was interesting. These were the results: 338 said they believed Bush was failing, while 77 said he was succeeding. Fifty said they thought he was the worst president ever.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33790475",
"title": "John Thornton Augustine Washington",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 274,
"text": "John Thornton Augustine Washington (May 20, 1783 – October 9, 1841) was a prominent Virginia (now West Virginia) landowner, farmer, and statesman and a member of the Washington family. Washington was a grandnephew of George Washington, first President of the United States.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5095780",
"title": "Sean Wilentz",
"section": "Section::::Politics.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 316,
"text": "In 2006, he wrote an article denouncing the George W. Bush presidency that was titled \"The Worst President in History?\" which appeared in \"Rolling Stone\" magazine. The article received a response from \"National Review\", attacking Wilentz's analysis as \"blinkered\" and calling him \"the modern Arthur Schlesinger Jr.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8702096",
"title": "Presidency of George Washington",
"section": "Section::::Historical evaluation.\n",
"start_paragraph_id": 139,
"start_character": 0,
"end_paragraph_id": 139,
"end_character": 395,
"text": "George Washington's presidency has generally been viewed as one of the most successful, and he is often considered to be one of the three greatest American presidents ever. When historians began ranking the presidents in 1948, Washington ranked 2nd in Arthur M. Schlesinger Sr.'s poll, and has subsequently been ranked 3rd in the Riders-McIver Poll (1996), and 2nd in the 2017 survey by C-SPAN.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2899869",
"title": "Historical characters in the Southern Victory Series",
"section": "Section::::Mentioned Historical Characters from Before the Change.:Washington, George.\n",
"start_paragraph_id": 405,
"start_character": 0,
"end_paragraph_id": 405,
"end_character": 412,
"text": "As a military hero and the first President of the United States, George Washington was universally revered as a major Founding Father and one of the most memorable presidents in American history. Also after it broke into two mutually antagonistic nations, U.S. historians continued to so regard Washington, alongside Thomas Jefferson, Abraham Lincoln, and Theodore Roosevelt as the most memorable of presidents.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
908pcg
|
how can video games produce sounds from specific areas in the game?
|
[
{
"answer": "Science.meme\n\nBut seriously, it’s all in just using the right balance between left and right to mimic what he hear and how we perceive direction in real life. We experience the sounds we hear all the time in stereo (i.e. out of each of our 2 ears), so it’s *relatively* straightforward to create this same effect by splitting the sounds just right between 2 (or more, but a minimum of 2) speakers. \n\nYou can confirm by playing one of these games with headphones on backwards. It’s pretty trippy, to be honest. ",
"provenance": null
},
{
"answer": "humans have 2 ears, TV's have 2 (or sometimes more) speakers. Our brains use differences between what each of our 2 ears hear to determine where an object is in real space. That's why objects echoing fuck up where we think a sound is coming from. \n\nTVs do the same thing with their stereo speakers. They use the left and right channel to make sounds that come from the left or the right. But manufacturers are even smarter than that and they can mimic how a sound from behind you sounds kind of muffled vs something in front. So it just mimics that. Or if you have a \"surround sound\" system or headphones it can use actual speakers located in front or behind you to make sounds come from there. ",
"provenance": null
},
{
"answer": "You need at least two speakers in order to achieve directional sound. Sounds can be panned from left to right, depending on where the sound source is located (data that the game engine has easy access to). If your sound equipment has additional speakers, those can also be used to produce sound coming from a certain direction.\n\nPsychoacoustics is the study of sound traveling through your body to your ears. The shape of your ears determines how sounds might be heard when coming from other directions. Slight changes to the sound can make it seem like the sound is coming from above, below, or behind, while still using one speaker per ear. Sound can also pass through your body instead of through air, so a voice can be made to sound like your own character's by increasing the bass tones and other frequencies that would travel through your flesh and bones (sounds you can't hear from other people unless they were in intimate contact).\n\nSound can also be shaped by the environment. The hard walls of a cave or masonry building can add reverb (the same sound being heard in quick succession), allowing you to tell whether a sound is coming from inside the same room. Distant mountains or buildings can create echoes, allowing you to tell that the sound came from outdoors.\n\nDifferent frequencies of sound travel through air in different ways and have different effects. Bass sounds can travel for great distances, but can be overpowered by other sounds. Treble sounds are often absorbed by materials, and so can indicate that a sound came from nearby.\n\nSome newer games like *Rainbow 6 Siege* try to model how sounds travel through a building, bass sounds traveling easily through walls and floors/ceilings, while treble sounds travel through air. Different parts of the sound may take different paths to the player.",
"provenance": null
},
{
"answer": "Let's start with a simple example. We'll generate a pure tone that's somewhere out in front of you (and your simplified-for-the-sake-of-argument omnidirectional ears).\n\nThe time it takes that tone to reach your left and right ears will be slightly different because the distance it travels will be slightly different.\n\nThis has two effects:\n\nAttenuation. The volume of a sound declines as the reciprocal of range. So while the original sound is a single volume, your ears are hearing it at slightly different volumes.\n\nTime delay (phase discrepancy). Sound is transmitted in waves. The slightly greater/lower distance means that the wave 'starts' at a different place for your two ears. This phase discrepancy also clues us in where a sound is coming from.\n\nOnce you start to add complexity, you start to get a lot more information.\n\nFor example, if our ears are cardoid instead of omnidirectional, the attenuation varies with the angle of arrival but the phase discrepancy does not.\n\nDifferent materials (and width of materials) reflect and absorb sound in different ways (in general, low frequencies penetrate while high frequencies reflect). Your actual ears evolved the way they did to create differential reflections and allow you to use two receivers to fix a point in 3-dimensional space.\n\nYou also end up with constructive/destructive interference effects which can be incredibly helpful when you're tracking sound from moving sources/receivers over time.\n\nThat being said, the people who design sound effects for video games generally aren't the type of people with the mathematical background to design sonar arrays. Instead, they start with sampled sounds and toss a bunch of canned signal processing tools until it sounds right to them.",
"provenance": null
},
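As a rough numeric illustration of the two cues described in the answer above (distance attenuation and arrival-time difference), here is a minimal Python sketch. The ear spacing, the 1/r gain law, and the clamp value are illustrative assumptions, not how any particular game engine models hearing:

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second in air at roughly 20 C

def ear_cues(source_xy, left_ear_xy=(-0.09, 0.0), right_ear_xy=(0.09, 0.0)):
    """Toy model: per-ear gain from 1/r attenuation and per-ear arrival delay,
    ignoring the head's acoustic shadow, ear shape, and reflections."""
    def cue(ear_xy):
        distance = math.hypot(source_xy[0] - ear_xy[0], source_xy[1] - ear_xy[1])
        gain = 1.0 / max(distance, 0.1)      # clamp so a very close source stays finite
        delay_s = distance / SPEED_OF_SOUND  # seconds until the wavefront arrives
        return gain, delay_s
    return {"left": cue(left_ear_xy), "right": cue(right_ear_xy)}

# A source about 2 m away and off to the right arrives slightly louder and
# slightly earlier at the right ear than at the left ear.
print(ear_cues((1.5, 1.5)))
```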
{
"answer": "There are two main ways to produce directional sound and positional audio. The theory for each is more or less the same in both stereo speaker configurations and surround sound configurations.\n\nThe first method is called *amplitude positioning* and is by far the simpler and less computationally intensive of the two. In amplitude positioning, a single sequence of audio samples, such as a voice or gunshot, is played back in each of the speakers at different volumes.\n\nAmplitude positioning on a stereo speaker configuration allows for reasonable positioning across one dimension (left/right). positioning across a surround sound setup allows for positioning across two dimensions.\n\nThe second method is called *head related transfer function*, or HRTF, and is much more computationally complex. HRTFs adjust the pitch, delay, and amplitude (volume) of sounds in order to position them in 3D space.\n\nAlthough surround sound speaker configurations still provide the best positioning, HRTFs can provide excellent and highly accurate positioning on stereo speakers as well. Many \"virtual 7.1\" headsets are in fact stereo headphones with a digital signal processor that uses a set of HRTFs to convert a 7.1 audio signal into a 2.0 audio signal.\n\nRunning HRTFs in real time is computationally complex and thus demanding of CPU resources, so many sound cards provide hardware acceleration for doing so. Reverberation and other sound processing techniques are often supported in hardware as well.",
"provenance": null
},
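For the simpler technique above, amplitude positioning, a constant-power pan law is a common textbook choice. This is a minimal sketch under that assumption; real engines and middleware differ in their pan laws and channel layouts:

```python
import math

def constant_power_pan(pan):
    """Map pan in [-1.0 (hard left), +1.0 (hard right)] to (left, right) gains.
    The cos/sin law keeps the summed power, and so perceived loudness, roughly
    constant as the source sweeps across the stereo field."""
    angle = (pan + 1.0) * math.pi / 4.0  # 0 at hard left, pi/2 at hard right
    return math.cos(angle), math.sin(angle)

def pan_mono_block(samples, pan):
    """Turn a mono block of samples into (left, right) pairs at the given pan."""
    left_gain, right_gain = constant_power_pan(pan)
    return [(s * left_gain, s * right_gain) for s in samples]

# Centre pan gives ~0.707 per channel; halfway right favours the right channel.
print(constant_power_pan(0.0), constant_power_pan(0.5))
```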
{
"answer": null,
"provenance": [
{
"wikipedia_id": "311632",
"title": "Video game programmer",
"section": "Section::::Disciplines.:Sound programmer.\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 325,
"text": "Many games use advanced techniques such as 3D positional sound, making audio programming a non-trivial matter. With these games, one or two programmers may dedicate all their time to building and refining the game's sound engine, and sound programmers may be trained or have a formal background in digital signal processing.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "45204814",
"title": "Development of The Last of Us",
"section": "Section::::Production.:Music and sound production.\n",
"start_paragraph_id": 37,
"start_character": 0,
"end_paragraph_id": 37,
"end_character": 2290,
"text": "The sound design team began working on the game early in development, in order to achieve the best results; they immediately realised that it would be challenging. Early in development, Druckmann told the sound team to \"make it subtle\", and underplay ideas. Audio lead Phillip Kovats was excited to completely create all sounds; no sounds were carried across from previous games. The team looked at ways to create sounds from a naturalistic point-of-view, and how to introduce minimalism into a game. By doing so, they found that it added feelings of tension, loss and hope, and that the game appeared to be a typical \"action game\" without the minimalism approach. They used a high dynamic range, allowing them the opportunity to inform players on tactical information, and locations to explore. The game's sound design was created to reflect a more \"grounded\" and subtle mood than \"Uncharted\", particularly focusing on the lack of sound. Taking inspiration from \"No Country for Old Men\", the team attempted to \"do more with less\"; Kovats said that the team was trying to tell a story by \"going for a reductive quality\". Straley stated that the audio is vital to some scenes in the game; \"It's more about the psychology of what's happening on the audioscape than what you're seeing,\" he stated. He felt that this decision allowed a more impactful and meaningful effect with sound occurred. The sound team also attempted to portray the game's dark themes through sound. The team felt that it was important to let sounds play for as long as possible in the game, drawing tension. The team used a propagation technique to help players determine the exact locations of enemies, using this as a tactical advantage. This system, created by the team at Naughty Dog, is processed at random in the game engine. For the game's audio, the engine throws out 1500–2500 ray casts per frame; though most games avoid this, the game's engine allowed it to work. The team spent a lot of time recording sounds for the game, namely doors, and rusty metal. Sound designer Neil Uchitel traveled to Rio de Janeiro, discovering locations to record sounds; he recorded chickens, which were used in the game as the voices of rats. The team continued to add and change the game's sounds until the end of development.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "44029501",
"title": "Development of Red Dead Redemption",
"section": "Section::::Production.:Music production.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 1641,
"text": "From the beginning of development, the sound development team wished to achieve authenticity in the game's sounds. After the art department sent artwork to the sound department, the latter were inspired to achieve realism, researching all sounds that were to be used in the game. Throughout development, sound editors often presented ideas, which would then be effortlessly achieved by the audio programmers. In the three main areas of the game world, there are unique ambiences; these are broken down into smaller sounds, such as bugs and animals, which are further refined to reflect the weather and time. The sound department was given specific instructions for the tone of game locations; for example, Thieves' Landing was to feel \"creepy\" and \"off-putting\". The sounds of the game's weapons were also intricately developed; in order to feel as realistic as possible, each weapon has a variety of similar firing sounds. The development of the game's Foley began with a week-long session, where two Foley artists from Los Angeles were sent to record thousands of sounds relating to the game's setting. The sound department also spent time on specific gameplay elements; Dead Eye was meant to sound \"organic\" as opposed to \"sci-fi or electronic\", while animals—a feature that the team found challenging—was to immerse players in the experience. For the final sound mix, audio director Jeffrey Whitcher and lead sound designer Matthew Smith worked together to balance and blend the three main aspects of the soundtrack: dialogue, sound effects, and music. Smith coded systems to blend the three aspects, in order to keep the mix \"dynamic\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "311632",
"title": "Video game programmer",
"section": "Section::::Disciplines.:Sound programmer.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 283,
"text": "Not always a separate discipline, sound programming has been a mainstay of game programming since the days of \"Pong\". Most games make use of audio, and many have a full musical score. Computer audio games eschew graphics altogether and use sound as their primary feedback mechanism.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33161585",
"title": "IEZA Framework",
"section": "Section::::Description.:Effect.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 562,
"text": "Sound in the Effect domain expresses activity in the game world. Sound usually consists of a mix of one-shot sound events in the game world (either triggered by the player or by the game itself), such as the sound of an explosion, and continuous sound streams, such as the sound of a continuously burning fire. Sound of the Effect category often mimics the realistic behavior of sound in the real world. In many games it is the part of game audio that is dynamically processed using techniques such as real-time volume changes, panning, filtering and acoustics.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5186567",
"title": "Advanced Dungeons & Dragons: Cloudy Mountain",
"section": "Section::::Gameplay.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 606,
"text": "Sound is an integral part of the game. Although most of the map is in darkness, when approaching certain adversaries it is possible to hear them before seeing them. Snakes make a hissing sound, for example. However, every cave contains a number of bats; although harmless to the player, bats create a loud flapping sound with their wings that obscures the sound of any other monster, making it more likely for the player to run into one and be taken by surprise. One particularly troublesome adversary is the giant spider, which makes no noise at all but it has the ability to consume the player's arrows.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22010244",
"title": "Spore: Galactic Adventures",
"section": "Section::::Gameplay.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 395,
"text": "Players can select a range of special effects and drop them into the level. Sounds can be added to the game in the same manner, although new sound files cannot be added to the game. A complexity meter exists to prevent too many objects being dropped into the game; it also enables the player to beam down and experience a planet firsthand rather than exploring it with a holographic projection.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
8ylwcs
|
what does the wow! alien signal mean? the image is just a bunch of numbers and letters so what is the significance of those specific letters?
|
[
{
"answer": "[Here is a better visualization of the WOW signal](_URL_0_)\n\nBasically those numbers and letters are a printout of raw data because you can fit more datapoints on a printed sheet of paper this way. The numbers go up and then into the alphabet to represent the strength of a signal. \n\nAs you saw on the printout, the norm was nothing and maybe 1's 2's and 3's. Suddenly you get a 7, and then start seeing letters! Not just one, but a sustained 70 second signal. That was something of note. ",
"provenance": null
},
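To make the encoding described above concrete, here is a small Python sketch of the single-character intensity code used on the Big Ear printouts: digits for values below 10, then A = 10, B = 11, and so on. Treating background-level bins as a blank is my assumption about the printout convention; the digit-then-letter mapping itself is the widely described one:

```python
def encode_intensity(value):
    """One printout character per 12-second bin: digits below 10, letters from 10 up."""
    v = int(value)            # intensities were truncated to whole numbers
    if v <= 0:
        return " "            # assumed: background-level bins printed as blanks
    if v < 10:
        return str(v)
    return chr(ord("A") + v - 10)

def decode_char(ch):
    """Inverse mapping: printout character back to an intensity bucket."""
    if ch == " ":
        return 0
    if ch.isdigit():
        return int(ch)
    return ord(ch.upper()) - ord("A") + 10

# The famous "6EQUJ5" column, read back as intensity values (signal-to-noise
# ratio in each 12-second bin):
print([decode_char(c) for c in "6EQUJ5"])  # [6, 14, 26, 30, 19, 5]
```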
{
"answer": "Thanks everyone this has been very enlightening!",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "43991244",
"title": "Arrival (film)",
"section": "Section::::Plot.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 300,
"text": "Donnelly discovers that the symbol for time is present throughout the message, and that the writing occupies exactly one-twelfth of the space in which it is projected. Banks suggests that the full message is split among the twelve craft, and the aliens want all the nations to share what they learn.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1211349",
"title": "Wow! signal",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 362,
"text": "The Wow! signal was a strong narrowband radio signal received on August 15, 1977, by Ohio State University's Big Ear radio telescope in the United States, then used to support the search for extraterrestrial intelligence. The signal appeared to come from the direction of the constellation Sagittarius and bore the expected hallmarks of extraterrestrial origin.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "28153",
"title": "Search for extraterrestrial intelligence",
"section": "Section::::Post detection disclosure protocol.\n",
"start_paragraph_id": 97,
"start_character": 0,
"end_paragraph_id": 97,
"end_character": 547,
"text": "The SETI Institute does not officially recognize the Wow! signal as of extraterrestrial origin (as it was unable to be verified). The SETI Institute has also publicly denied that the candidate signal Radio source SHGb02+14a is of extraterrestrial origin though full details of the signal, such as its exact location have never been disclosed to the public. Although other volunteering projects such as Zooniverse credit users for discoveries, there is currently no crediting or early notification by SETI@Home following the discovery of a signal.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "52581728",
"title": "Alien language in science fiction",
"section": "",
"start_paragraph_id": 38,
"start_character": 0,
"end_paragraph_id": 38,
"end_character": 637,
"text": "BULLET::::- In the 2016 science-fiction movie, \"Arrival\", a linguist is tasked by the U.S. Army to try and understand an alien language of complex symbols. The film received significant media attention for its unique and detailed portrayal of what human communication with aliens might resemble. Film production went as far as employing several linguistic professors from McGill University, including Jessica Coon, who serves as Canada Research Chair in Syntax and Indigenous Languages, and Wolfram Research Founder and CEO, Stephen Wolfram and his son, Christopher, to analyze the symbols which served as the language used in the film.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3189412",
"title": "Pocoyo",
"section": "Section::::Characters.:Recurring.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 227,
"text": "BULLET::::- Aliens are sweet, big and little green, pink, and blue tripedal beings that Pocoyo finds in space in search of his toy plane and other adventures. They communicate with one another using staccato 'clicking' noises.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "236268",
"title": "Pioneer plaque",
"section": "Section::::Fiction.\n",
"start_paragraph_id": 62,
"start_character": 0,
"end_paragraph_id": 62,
"end_character": 217,
"text": "BULLET::::- In the TV show \"The Big Bang Theory\", on episode 21 of season 8 (The Communication Deterioration), the characters mention the plaque while discussing ideas about sending a message to alien races in space.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "757130",
"title": "Stargate (device)",
"section": "Section::::Operation.:Addresses.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 961,
"text": "The symbols dialed are often referred to as \"coordinates\", and are written as an ordered string; for example, this is the address used in the show for the planet Abydos: (corresponding to the constellations of Taurus, Serpens Caput, Capricornus, Monoceros, Sagittarius and Orion). As explained by Dr. Daniel Jackson in the movie, the Stargate requires seven correct symbols to connect to another Stargate. As shown in the picture opposite, the first six symbols act as co-ordinates, creating three intersecting lines, the destination. The Stargate uses the seventh symbol as the point of origin allowing one to plot a straight line course to the destination. With the stargates of the Milky Way, with 38 address symbols and one point of origin, there are 1,987,690,320 possible six symbol co-ordinates. With the stargates of the Pegasus or Destiny, with 35 address symbols and one point of origin, there are only 1,168,675,200 possible six symbol co-ordinates.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2l1vlx
|
how does the chinese economy work/differs from the rest of the world.
|
[
{
"answer": "Its basically American capitalism with loose or non-existent civil rights laws (child labor, unequal wages and hours ) and the government can own and compete in business. Imagine if the US government decided to start building super cheap economy cars, built with unpaid slave labor provided by the prison system and super ultra subsidized by itself, and than sold those cars with huge tax incentives to the customer within America in direct competition with American manufacturers. Substitute analogous industry with any alternative you like. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "19284336",
"title": "Economy of China",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 853,
"text": "The socialist market economy of the People's Republic of China is the world's second largest economy by nominal GDP and the world's largest economy by purchasing power parity. Until 2015, China was the world's fastest-growing major economy, with growth rates averaging 6% over 30 years. Due to historical and political facts of China's developing economy, China's public sector accounts for a bigger share of the national economy than the burgeoning private sector. According to the IMF, on a per capita income basis China ranked 67th by GDP (nominal) and 73rd by GDP (PPP) per capita in 2018. The country has an estimated $23 trillion worth of natural resources, 90% of which are coal and rare earth metals. China also has the world's largest total banking sector assets of $39.93 trillion (268.76 trillion CNY) with $27.39 trillion in total deposits.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5405",
"title": "China",
"section": "Section::::Economy.\n",
"start_paragraph_id": 93,
"start_character": 0,
"end_paragraph_id": 93,
"end_character": 1467,
"text": "China had the largest economy in the world for most of the past two thousand years, during which it has seen cycles of prosperity and decline. As of 2018, China had the world's second-largest economy in terms of nominal GDP, totaling approximately US$13.5 trillion (90 trillion Yuan). In terms of purchasing power parity (PPP GDP), China's economy has been the largest in the world since 2014, according to the World Bank. Since economic reforms began in 1978, China has developed into a highly diversified economy and one of the most consequential players in international trade. Major sectors of competitive strength include manufacturing, retail, mining, steel, textiles, automobiles, energy generation, green energy, banking, electronics, telecommunications, real estate, e-commerce, and tourism. China has been the world's #1 manufacturer since 2010, after overtaking the US, which had been #1 for the previous hundred years. China has also been #2 in high-tech manufacturing since 2012, according to US National Science Foundation. China is the second largest retail market in the world, next to the United States. China leads the world in e-commerce, accounting for 40% of the global market share. China is the leader in electric vehicles, manufacturing and buying half of all the plug-in electric cars (BEV and PHEV) in the world in 2018. China had 174 GW of installed solar capacity by the end of 2018, which amounts to more than 40% of the global capacity.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5405",
"title": "China",
"section": "Section::::Economy.\n",
"start_paragraph_id": 94,
"start_character": 0,
"end_paragraph_id": 94,
"end_character": 1177,
"text": "China has been the world's second-largest economy in terms of nominal GDP since 2010. In terms of purchasing power parity (PPP) GDP, China's economy has been the largest in the world since 2014. As of 2018, China was second in the world in total number of billionaires and millionaires—there were 338 Chinese billionaires and 3.5 million millionaires. However, it ranks behind over 70 countries (out of around 180) in per capita economic output, making it a middle income country. Additionally, its development is highly uneven. Its major cities and coastal areas are far more prosperous compared to rural and interior regions. China brought more people out of extreme poverty than any other country in history—between 1978 and 2018, China reduced extreme poverty by 800 million. China reduced the extreme poverty rate—per international standard, it refers to an income of less than $1.90/day—from 88% in 1981 to 1.85% by 2013. According to the World Bank, the number of Chinese in extreme poverty fell from 756 million to 25 million between 1990 and 2013. China's own national poverty standards are higher and thus the national poverty rates were 3.1% in 2017 and 1% in 2018.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "249863",
"title": "Dumping (pricing policy)",
"section": "Section::::Anti-dumping actions.:Actions in the European Union.:Chinese economic situation.\n",
"start_paragraph_id": 38,
"start_character": 0,
"end_paragraph_id": 38,
"end_character": 547,
"text": "However, China has one of the world's cheapest labour costs. Criticisms have argued that it is quite unreasonable to compare China's goods price to the United States as analogue. China is now developing to a more free and open market, unlike its planned-economy in the early 1960s, the market in China is more willing to embrace the global competition. It is thus required to improve its market regulations and conquer the free trade barriers to improve the situation and produce a properly judged pricing level to assess the \"dumping\" behaviour.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26847",
"title": "Socialism",
"section": "Section::::Economics.:Market socialism.\n",
"start_paragraph_id": 186,
"start_character": 0,
"end_paragraph_id": 186,
"end_character": 1239,
"text": "The current economic system in China is formally referred to as a [[socialist market economy with Chinese characteristics]]. It combines a large state sector that comprises the commanding heights of the economy, which are guaranteed their public ownership status by law, with a private sector mainly engaged in commodity production and light industry responsible from anywhere between 33% to over 70% of GDP generated in 2005. Although there has been a rapid expansion of private-sector activity since the 1980s, privatisation of state assets was virtually halted and were partially reversed in 2005. The current Chinese economy consists of 150 [[corporatised]] state-owned enterprises that report directly to China's central government. By 2008, these state-owned corporations had become increasingly dynamic and generated large increases in revenue for the state, resulting in a state-sector led recovery during the 2009 financial crises while accounting for most of China's economic growth. However, the Chinese economic model is widely cited as a contemporary form of state capitalism, the major difference between Western capitalism and the Chinese model being the degree of state-ownership of shares in publicly listed corporations.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26067550",
"title": "List of the largest trading partners of China",
"section": "Section::::Background.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 551,
"text": "'China developed a network of economic relations with both the industrial economies and those constituting the semi-periphery and periphery of the world system.' As Chinese economy growing so fast, China also have many trading partners in the world. All of them are important partners of China in trading and they all contributed to the development of Chinese economy, but the largest partners of China are always changing because of the alteration of policy or other reasons. This article will show you the largest partners of China in recent years.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5405",
"title": "China",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 1187,
"text": "Since the introduction of economic reforms in 1978, China's economy has been one of the world's fastest-growing with annual growth rates consistently above 6 percent. According to the World Bank, China's GDP grew from $150 billion in 1978 to $12.24 trillion by 2017. According to official data, China's GDP in 2018 was 90 trillion Yuan ($13.28 trillion). Since 2010, China has been the world's second-largest economy by nominal GDP and since 2014, the largest economy in the world by purchasing power parity (PPP). China is also the world's largest exporter and second-largest importer of goods. China is a recognized nuclear weapons state and has the world's largest standing army and second-largest defense budget. The PRC is a permanent member of the United Nations Security Council as it replaced the ROC in 1971, as well as an active global partner of ASEAN Plus mechanism. China is also a leading member of numerous formal and informal multilateral organizations, including the Shanghai Cooperation Organization (SCO), WTO, APEC, BRICS, the BCIM, and the G20. China has been characterized as a potential superpower, mainly because of its massive population, economy, and military.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
s099a
|
why do non-hearing people sound "that way" when they speak?
|
[
{
"answer": "I would assume it's because they have never heard the language spoken before and this leads to an accent based, not on the sound of it, but based on the movements of the mouth to create the sounds. ",
"provenance": null
},
{
"answer": "They can see how lips move, but they can't see how throat, tongue, windpipe, etc move. Observing mouth movement is not enough to recreate all the sounds, you need to be able to hear it in order to accurately replicate what others are saying. Not to mention you can't get the tone or pacing correctly either by just looking at mouth movement. In addition some of them can't hear their own voice either, which makes it even harder to know if they pronounce something correctly or not. ",
"provenance": null
},
{
"answer": "Constantly while you speak, you hear your own voice and you correct the placement of your tongue and lips to tailor the sounds coming out so it sounds like you think it should and how everyone else sounds, too.\n\nWhen a deaf person is speaking, they don't get to hear much or anything of the sound they are making - maybe only feel the vibrations in their skull from the sound, but that's it. So they've learned to put their tongues and lips in pretty much the same position, but they don't have the same kind of feedback to help them tailor it into the specific sounds we take for granted.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "33742208",
"title": "Models of communication",
"section": "Section::::Constructionist.\n",
"start_paragraph_id": 48,
"start_character": 0,
"end_paragraph_id": 48,
"end_character": 238,
"text": "BULLET::::- psychological noise are the preconception bias and assumptions such as thinking someone who speaks like a valley girl is dumb, or someone from a foreign country can’t speak English well so you speak loudly and slowly to them.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "104433",
"title": "Schwa",
"section": "Section::::Schwa syncope.:Hindi.\n",
"start_paragraph_id": 62,
"start_character": 0,
"end_paragraph_id": 62,
"end_character": 257,
"text": "While native speakers correctly pronounce the sequence differently in different contexts, non-native speakers and voice-synthesis software can make them \"sound very unnatural\", making it \"extremely difficult for the listener\" to grasp the intended meaning.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "563664",
"title": "Hearing (person)",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 1205,
"text": "However, when examined in the context of Deaf culture, the term “hearing” often does not hold the same meaning as when one thinks simply of a person's ability to hear sounds. In Deaf culture, “hearing”, being the opposite of “Deaf” (which is used inclusively, without the many gradations common to mainstream culture), is often used as a way of differentiating those who do not view the Deaf community as a linguistic minority, do not embrace Deaf values, history, language, mores, and sense of personal dignity as Deaf people do themselves. Among language minorities in the United States – for example, groups such as Mexicans, Koreans, Italians, Chinese, or Deaf users of sign language – the minority language group itself has a “we” or “insider” view of their cultural group as well as a “they” or “outsider” view of those who do not share the values of the group. So, in addition to using “hearing” to identify a person who can detect sounds, Deaf culture uses this term as a \"we and they\" distinction to show a difference in attitude between people who embrace the view of deaf people who use sign language as a language minority, and those who view deafness strictly from its pathological context. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30692239",
"title": "Schwa deletion in Indo-Aryan languages",
"section": "Section::::Hindi.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 248,
"text": "While native speakers pronounce the sequences differently in different contexts, non-native speakers and voice-synthesis software can make them \"sound very unnatural\", making it \"extremely difficult for the listener\" to grasp the intended meaning.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "57049891",
"title": "Mock language",
"section": "Section::::Definition.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 461,
"text": "The speaker is unintentionally indexing a language ideology that all Americans should speak English or that other languages are secondary in the US. Using words outside the speaker's native language neglects context of the conversion, meaning of the word or phrase, or conceptual knowledge including historical injustices to the borrowed language, culture, and physical surroundings. It is a borrowing words in different languages and using it in your context.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3237319",
"title": "Auditory verbal agnosia",
"section": "Section::::Presentation.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 314,
"text": "Patients with pure word deafness complain that speech sounds simply do not register, or that they tend not to come up. Other claims include speech sounding as if it were in a foreign language, the words having a tendency to run together, or the feeling that speech was simply not connected to the patient's voice.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5366050",
"title": "Speech perception",
"section": "Section::::Basics.:Acquired brain disabilities.:Agnosia.\n",
"start_paragraph_id": 46,
"start_character": 0,
"end_paragraph_id": 46,
"end_character": 660,
"text": "Speech agnosia: Pure word deafness, or speech agnosia, is an impairment in which a person maintains the ability to hear, produce speech, and even read speech, yet they are unable to understand or properly perceive speech. These patients seem to have all of the skills necessary in order to properly process speech, yet they appear to have no experience associated with speech stimuli. Patients have reported, \"I can hear you talking, but I can't translate it\". Even though they are physically receiving and processing the stimuli of speech, without the ability to determine the meaning of the speech, they essentially are unable to perceive the speech at all.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
26yhzu
|
why do road workers cut strips out of the interstate and refill them?
|
[
{
"answer": "Potholes are partially caused by the stuff under the road settling and no longer supporting it. They need to cut out the road and repair the road bed or else the problem will just come back.\n\nA freeway isn't just a strip of concrete, they have to dig out the ground and provide a stable foundation when building it. Just putting blacktop on a pothole is like putting a bandaid on a gunshot wound.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "48223709",
"title": "Plastic roads",
"section": "Section::::Construction.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 763,
"text": "These roads are made from recycled plastics, and the first step in constructing them is to collect and manage the plastic material. The plastics involved in building these roads consists mainly of common post-consumer products such as product packaging. Some of the most common plastics used in packaging are polyethylene terephthalate (PET or PETE), polyvinyl chloride (PVC), polypropylene (PP), and high and low density polyethylene (HDPE and LDPE). These materials are first sorted from plastic waste. After sorting, the material is cleaned, dried, and shredded. The shredded plastic is mixed and melted at around 170°C. Hot bitumen is then added and mixed with the melted plastic. After mixing the mixture is laid as one would with regular asphalt concrete. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22317788",
"title": "Highway strip",
"section": "Section::::Design.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 1079,
"text": "The strips are usually straight sections of the highway, where any central reservation is made of crash barriers that can be removed quickly (in order to allow airplanes to use the whole width of the road), and other features of an airbase (taxiways, airport ramps) can be built. The road will need a thicker than normal surface and a solid concrete base. The specialized equipment of a typical airfield are stored somewhere nearby and only carried there when airfield operations start. The highway strips can be converted from motorways to airbases typically within 24 to 48 hours. The road would need to be swept to remove all debris before any aircraft movement could take place. Road runways can however also be quite small—the short runways built in the Swedish Bas 90 system are commonly only 800 meters (0.5 miles) in length. The STOL-capability of the Viggen and Gripen allowed for such short runways. In the case of Finnish road airbases, the space needed for landing aircraft is reduced by means of a wire, similar to the CATOBAR system used on some aircraft carriers.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34085624",
"title": "Road recycler",
"section": "Section::::Types of equipment.:Road recycler.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 397,
"text": "A road recycler or road reclaimer is an asphalt pavement grinder or a combination grinder and soil stabilizer when it is equipped to blending cement, foamed asphalt and/or lime and water with the existing pavement (usually only very thin asphalt) to create a new, recycled road surface. It usually refers to the process of blending the asphalt road with a binder and base course in a single pass.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2796239",
"title": "Full depth recycling",
"section": "Section::::Processes.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 289,
"text": "Since this method recycles the materials \"in situ\", there is no need to haul in aggregate or haul out old material for disposal. The vehicle movements are reduced and there is no need for detours since it can be done under traffic, making this process more convenient for local residents.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20399788",
"title": "Haslem v. Lockwood",
"section": "Section::::Argument of the defendant-respondent.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 226,
"text": "(1) The manure mixed with the dirt and ordinary scrapings of the highway, being spread out over the surface of the highway, was a part of the real estate, and belonged to the owner of the fee, subject to the public easement. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "420502",
"title": "Frontage road",
"section": "Section::::Overview.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 535,
"text": "Frontage roads provide access to homes and businesses which would otherwise be cut off by a limited-access road and connect these locations with roads which have direct access to the main roadway. Frontage roads give indirect access to abutting property along a freeway, either preventing the commercial disruption of an urban area that the freeway traverses or allowing commercial development of abutting property. At times, they add to the cost of building an expressway due to costs of land and the costs of paving and maintenance.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "870957",
"title": "Single-track road",
"section": "Section::::Types.:Temporary one-lane restrictions.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 720,
"text": "When reconstruction is being done on 2-lane highways where traffic is moderately heavy, a worker will often stand at each end of the construction zone, holding a sign with \"SLOW\" or \"GO\" written on one side and \"STOP\" on the reverse. The workers, who communicate through yelling, hand gestures, or radio, will periodically reverse their signs to allow time for traffic to flow in each direction. A modification of this for roadways that have heavier traffic volumes is to maintain one direction on the existing roadway, and detour the other, thus not requiring the use of flaggers. An example of this is the M-89 reconstruction project in Plainwell, MI, where westbound traffic is detoured via county roads around town.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
7bypun
|
Does anyone know a good read on snipers in WWII?
|
[
{
"answer": "Karabiner 98k by Karem and Steves is a great resource for K98k snipers, particularly Volume IIa. There are currently 3 volumes, Volume 1 covers pre-war (banner model, standard modell, etc) to 1938 rifles at Mauser-Oberndorf, JP Sauer, Mauser-Borsigwalde, Ermawerke, Berlin Lubecker, and BSW. Volume 2 covers wartime production, and Volume 3 covers the Kriegsmodell. 2A specifically has a lot of info regarding sniper rifle development at Mauser-Oberndorf, specifically the ZF-39 (low and high turret), the ZF-41, and the Jung Prismatic Optics prototypes. The 3 book set is an excellent resource for the history and engineering of the German K98k. The only downside to the books is that they are pretty pricey, at $345 for the 4 book set (Volume 2 is so large that they had to split it into 2 volumes). \nLink: _URL_1_ \nThe only other \"WW2 German Sniper\" book I know about is Backbone of the Wehrmacht Vol 2. \nLink: _URL_0_ \nI personally do not own volume 2 (I own volume 1), but it seems to be a decent source on the rifles. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "28123",
"title": "Sniper",
"section": "Section::::Military history.:World War II.\n",
"start_paragraph_id": 58,
"start_character": 0,
"end_paragraph_id": 58,
"end_character": 395,
"text": "Common sniper rifles used during the Second World War include: the Soviet M1891/30 Mosin–Nagant and, to a lesser extent, the SVT-40; the German Mauser Karabiner 98k and Gewehr 43; the British Lee–Enfield No. 4 and Pattern 1914 Enfield; the Japanese Arisaka 97; the American M1903A4 Springfield and M1C Garand. The Italians trained few snipers and supplied them with a scoped Carcano Model 1891.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "28123",
"title": "Sniper",
"section": "Section::::Military history.:World War I.\n",
"start_paragraph_id": 46,
"start_character": 0,
"end_paragraph_id": 46,
"end_character": 270,
"text": "The main sniper rifles used during the First World War were the German Mauser Gewehr 98; the British Pattern 1914 Enfield and Lee–Enfield SMLE Mk III, the Canadian Ross Rifle, the American M1903 Springfield, the Italian M1891 Carcano, and the Russian M1891 Mosin–Nagant\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5521772",
"title": "The Sniper (poem)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 316,
"text": "\"The Sniper\" is a World War I poem by Scottish poet W D Cocker, written in 1917 about the impact a sniper has had not only on the life of the young soldier, but also on that soldier's family back home. It is not revealed which side the sniper is on, as the deed is the same, whether the victim is German or British.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4490386",
"title": "Sniper Elite (video game)",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 307,
"text": "The main character of \"Sniper Elite\" is Karl Fairburne, a German-born American OSS secret agent disguised as a German sniper. He is inserted into the Battle of Berlin in 1945, during the final days of World War II, with the critical objective of obtaining German nuclear technology before the Soviet Union.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7665043",
"title": "Sniper equipment",
"section": "Section::::Sniper rifles.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 650,
"text": "Historic military sniper rifles up to and including the Second World War were usually based on the standard service rifle of the country in question. They included the German Mauser Gewehr 98K, U.S. M1903 Springfield and M1 Garand, Soviet Mosin–Nagant, Norwegian Krag–Jørgensen, Japanese Arisaka, and British Lee–Enfield No. 4. Models used for sniping were generally factory tested for accuracy and fitted with specialized components, including not just optics but also such items as slings, cheek pieces, and flash eliminators, which disperse gases at the muzzle away from the sniper's view, helping avoiding having the sniper blinded by the flash.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7771606",
"title": "John Plaster",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 638,
"text": "John L. Plaster (born 1949) is a former United States Army Special Forces officer regarded as one of the leading sniper experts in the world. A decorated Vietnam War veteran who served in the covert Studies and Observations Group (SOG), Plaster co-founded a renowned sniper school that trains military and law enforcement personnel in highly specialized sniper tactics. He is the author of \"The Ultimate Sniper: An Advanced Training Manual for Military and Police Snipers\", \"The History of Sniping and Sharpshooting\", and \"Secret Commandos: Behind Enemy Lines with the Elite Warriors of SOG\", a memoir of his 3 years of service with SOG.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3024878",
"title": "The Sniper (story)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 612,
"text": "The Sniper is a short story written by Irish writer Liam O'Flaherty, set during the early weeks of the Irish Civil War, during the Battle of Dublin. It is O'Flaherty's first published work of fiction, published in a small London-based socialist weekly \"The New Leader\" (12 January 1923) while the war it depicted was still ongoing. The favorable notice it generated helped get other works by O'Flaherty published, and started his career. It is widely read today in secondary schools of many English-speaking countries, owing to its being easy to read, its short length, and its having a notable surprise ending.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
42xszv
|
Iroquois vs. Haudenosaunee
|
[
{
"answer": "The exact etymology of \"Iroquois\" is debatable, and there are plenty of Iroquoian people who use it and related terms for various purposes. If you're talking about an individual person, it's best to go with their specific nation. Joseph Brant is Mohawk (or *Kanien'kehá:ka* if you're feeling particularly ambitious). Haudenosaunee is the accepted English variation for the name of the confederacy itself, based on the Mohawk name (*Rotinonshonni*). As such, it's reserved of usage for the political entity or its citizens as a whole. Joseph Brant was a defender of *the* Haudenosaunee, but he was not *a* Haudenosaunee.\n\nIroquois is used more generically, to refer to people culturally but often outside the context of the Haudenosaunee as a political entity. After the revolution, Joseph Brant led many Iroquois refugees to Ontario and helped found the Six Nations of Grand River (now home to the Iroquois Nationals lacrosse team) as one of the successors to the Haudenosaunee. In the derivative form \"Iroquian\" is extends to cover culturally and linguistically related peoples including the Wendat (Huron), the Chonnonton (Neutral), and the Cherokee. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "19195965",
"title": "Iroquois",
"section": "Section::::Iroquois Confederacy.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 534,
"text": "The Iroquois Confederacy or Haudenosaunee is believed to have been founded by the Peacemaker in 1142 or 1451 AD, bringing together five distinct nations in the southern Great Lakes area into \"The Great League of Peace\". Each nation within this Iroquoian confederacy had a distinct language, territory, and function in the League. Iroquois power at its peak extended into present-day Canada, westward along the Great Lakes and down both sides of the Allegheny mountains into present-day Virginia and Kentucky and into the Ohio Valley.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "384160",
"title": "Beaver Wars",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 767,
"text": "The Iroquois sought to expand their territory into the Ohio Country and monopolize the fur trade and the trade between European markets. They originally were a confederacy of five nations—Mohawk, Oneida, Onondaga, Cayuga and Seneca, inhabiting the lands in upstate New York along the shores of Lake Ontario east to Lake Champlain and Lake George on the Hudson river, and the lower-estuary of the St Lawrence river. The Iroquois Confederation, led by the dominant Mohawk, mobilized against the largely Algonquian-speaking tribes and Iroquoian speaking Huron and related tribes of the Great Lakes region. The Iroquois were armed by their Dutch and much later, English trading partners; the Algonquians and Hurons were backed by the French, their chief trading partner.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19195965",
"title": "Iroquois",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 536,
"text": "The Iroquois ( or ) or Haudenosaunee (; \"People of the Longhouse\") are a historically powerful northeast Native American confederacy in North America. They were known during the colonial years to the French as the Iroquois League, and later as the Iroquois Confederacy, and to the English as the Five Nations, comprising the Mohawk, Onondaga, Oneida, Cayuga, and Seneca. After 1722, they accepted the Tuscarora people from the Southeast into their confederacy, as they were also Iroquoian-speaking, and became known as the Six Nations.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "57193",
"title": "Indian Territory",
"section": "Section::::Tribes in Indian Territory.:Tribes from the Great Lakes and Northeastern Woodlands.:Iroquois Confederacy.\n",
"start_paragraph_id": 88,
"start_character": 0,
"end_paragraph_id": 88,
"end_character": 588,
"text": "The Iroquois Confederacy was an alliance of tribes, originally from the upstate New York area consisting of the Seneca, Cayuga, Onondaga, Oneida, Mohawk, and, later, Tuscarora. In pre-revolutionary war days, their confederacy expanded to areas from Kentucky and Virginia north. All of the members of the Confederacy, except the Oneida and Tuscarora, allied with the British during the Revolutionary War, and were forced to cede their land after the war. Most moved to Canada after the Treaty of Canandaigua in 1794, some remained in New York, and some moved to Ohio, joining the Shawnee.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22748201",
"title": "Timeline of Montreal history",
"section": "Section::::Pre-European period.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 208,
"text": "BULLET::::- The Iroquois, or \"Haudenosaunee\", were centred, from at least 1000 CE, in northern New York, and their influence extended into what is now southern Ontario and the Montréal area of modern Quebec.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "52477",
"title": "History of Canada",
"section": "Section::::Pre-colonization.:Indigenous peoples.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 463,
"text": "The Five Nations of the Iroquois (Haudenosaunee) were centred from at least 1000 CE in northern New York, but their influence extended into what is now southern Ontario and the Montreal area of modern Quebec. They spoke varieties of Iroquoian languages. The Iroquois Confederacy, according to oral tradition, was formed in 1142 CE. In addition, there were other Iroquoian-speaking peoples in the area, including the St. Lawrence Iroquoians, the Erie, and others.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "255627",
"title": "List of capitals in the United States",
"section": "Section::::Native American capitals.:Iroquois Confederacy.\n",
"start_paragraph_id": 111,
"start_character": 0,
"end_paragraph_id": 111,
"end_character": 746,
"text": "The Iroquois Confederacy or Haudenosaunee, which means \"People of the Longhouse,\" was an alliance between the Five and later Six-Nations of Iroquoian language and culture of upstate New York. These include the Seneca, Cayuga, Onondaga, Oneida, Mohawk, and, after 1722, the Tuscarora Nations. Since the Confederacy's formation around 1450, the Onondaga Nation has held privilege of hosting the Iroquois Grand Council and the status of Keepers of the Fire and the Wampum —which they still do at the official Longhouse on the Onondaga Reservation. Now spread over reservations in New York and Ontario, the Six Nations of the Haudenosaunee preserve this arrangement to this day in what they claim to be the \"world's oldest representative democracy.\"\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2xh2ow
|
How did the various Viking colonizers of parts of Britain react to different forms of Paganism in the areas they commanded?
|
[
{
"answer": "They didn't react to them, because as far as we can tell, there were no surviving population groups practising pre-Christian religions in the British Isles when the Vikings first began making incursions in 793. Ireland and Scotland had fully Christianised over the course of the fifth to seventh centuries and developed pervasive Christian identities of their own right -- especially so in Ireland, where sharing religion and language across the many hundreds of small states across the island facilitated a degree of common ethnic consciousness that made Norse settlers initially quite unwelcome. When the Vikings began appearing at the end of the eighth century, there were no pagans left in the British Isles, or at least any remaining communities were small enough to escape documentation and to leave no trace in the archaeolgical record.\n\nIf you're interested in how Scandinavian pagans interacted with other polytheistic communities, it might be smart to direct your inquiries into their early voyages into Eastern Europe, especially the Kievan Rus. I haven't studied that part of the world to the extent I've studied Ireland (and to a lesser extent, Scotland), so I can't speak with any authority there, nor can I recommend you any texts.\n\nSources: Stefan Brink and Neil Price (eds.), *The Viking World*, various articles\n\nJ. H. Barrett (ed.), *Contact, Continuity and Collapse: The Norse Colonisation of the North Atlantic*, various aritcles\n\nClare Downham, *Viking Kings of Britain and Ireland: The Dynasty of Ivarr*",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "1606288",
"title": "Religion in Medieval England",
"section": "Section::::Christianisation.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 818,
"text": "The Viking invasions of the eighth and ninth centuries reintroduced paganism to North-East England, leading in turn to another wave of conversion. Indigenous Scandinavian beliefs were very similar to other Germanic groups, with a pantheon of gods including Odin, Thor and Ullr, combined with a belief in a final, apocalyptic battle called Ragnarok. The Norse settlers in England were converted relatively quickly, assimilating their beliefs into Christianity in the decades following the occupation of York, of which the Archbishop had survived. The process was largely complete by the early tenth century and enabled England's leading Churchmen to negotiate with the warlords. As the Norse in mainland Scandinavia started to convert, many mainland rulers recruited missionaries from England to assist in the process.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "969337",
"title": "England in the Middle Ages",
"section": "Section::::Religion.:Rise of Christianity.\n",
"start_paragraph_id": 51,
"start_character": 0,
"end_paragraph_id": 51,
"end_character": 809,
"text": "The Viking invasions of the 8th and 9th centuries reintroduced paganism to North-East England, leading in turn to another wave of conversion. Indigenous Scandinavian beliefs were very similar to other Germanic groups, with a pantheon of gods including Odin, Thor and Ullr, combined with a belief in a final, apocalyptic battle called Ragnarok. The Norse settlers in England were converted relatively quickly, assimilating their beliefs into Christianity in the decades following the occupation of York, which the Archbishop had survived. The process was largely complete by the early 10th century and enabled England's leading Churchmen to negotiate with the warlords. As the Norse in mainland Scandinavia started to convert, many mainland rulers recruited missionaries from England to assist in the process.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "621178",
"title": "Christianization",
"section": "Section::::Christianization of Europe (7th-15th centuries).:Great Britain and Ireland.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 259,
"text": "The Viking invasions of Britain and Ireland destroyed many monasteries and new Viking settlers restored paganism—though of a different variety to the Saxon or classical religions—to areas such as Northumbria and Dublin for a time before their own conversion.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "674375",
"title": "Christianisation of the Germanic peoples",
"section": "Section::::History.:England.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 361,
"text": "During the prolonged period of Viking incursions and settlement of Anglo-Saxon England pagan ideas and religious rites made something of a comeback, mainly in the Danelaw during the 9th century and particularly in the Kingdom of Northumbria, whose last king to rule it as an independent state was Eric Bloodaxe, a Viking, probably pagan and ruler until 954 AD.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14674095",
"title": "Scotland in the Early Middle Ages",
"section": "Section::::Religion.:Viking paganism.\n",
"start_paragraph_id": 56,
"start_character": 0,
"end_paragraph_id": 56,
"end_character": 1818,
"text": "The Viking occupation of the islands and coastal regions of modern Scotland brought a return to pagan worship in those areas. Norse paganism had some of the same gods as had been worshipped by the Anglo-Saxons before their conversion and is thought to have been focused around a series of cults, involving gods, ancestors and spirits, with calendric and life cycle rituals often involving forms of sacrifice. The paganism of the ruling Norse elite can be seen in goods found in 10th century graves in Shetland, Orkney and Caithness. There is no contemporary account of the conversion of the Vikings in Scotland to Christianity. Historians have traditionally pointed to a process of conversion to Christianity among Viking colonies in Britain dated to the late 10th century, for which later accounts indicate that Viking earls accepted Christianity. However, there is evidence that conversion had begun before this point. There are a large number of isles called Pabbay or Papa in the Western and Northern Isles, which may indicate a \"hermit's\" or \"priest's isle\" from this period. Changes in patterns of grave goods and Viking place names using -kirk also suggest that the Christianity had begun to spread before the official conversion. Later documentary evidence suggests that there was a Bishop operating in Orkney in the mid-9th century and more recently uncovered archaeological evidence, including explicitly Christian forms such as stone crosses, suggest that Christian practice may have survived the Viking take over in parts of Orkney and Shetland and that the process of conversion may have begun before Christianity was officially accepted by Viking leaders. The continuity of Scottish Christianity may also explain the relatively rapid way in which Norse settlers were later assimilated into the religion.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34142436",
"title": "North Germanic peoples",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 258,
"text": "With the end of the Viking Age in the 11th century, the North Germanic peoples were converted from their native Norse paganism to Christianity, while their previously tribal societies were centralized into the modern kingdoms of Denmark, Norway and Sweden. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "32610",
"title": "Vikings",
"section": "Section::::History.:End of the Viking Age.\n",
"start_paragraph_id": 34,
"start_character": 0,
"end_paragraph_id": 34,
"end_character": 525,
"text": "During the Viking Age, Scandinavian men and women travelled to many parts of Europe and beyond, in a cultural diaspora that left its traces from Newfoundland to Byzantium. This period of energetic activity also had a pronounced effect in the Scandinavian homelands, which were subject to a variety of new influences. In the 300 years from the late 8th century, when contemporary chroniclers first commented on the appearance of Viking raiders, to the end of the 11th century, Scandinavia underwent profound cultural changes.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2heab8
|
what does a magnetic polar reversal mean for everyday life?
|
[
{
"answer": "The poles don't actually switch like flipping around. One or both weaken, then wander. During the weak time, the Earth is bathed in radiation that a strong field prevents. Time to go underground.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "39750637",
"title": "Gregg Braden",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 572,
"text": "Gregg Braden (born June 28, 1954) is an American author of Consciousness literature, who wrote about the 2012 phenomenon and became noted for his claim that the magnetic polarity of the earth was about to reverse. Braden argued that the change in the earth's magnetic field might have effects on human DNA. He has also argued that human emotions affect DNA and that collective prayer may have healing physical effects. He has published many books through the Hay House publishing house. In 2009, his book \"Fractal Time\" was on the bestseller list of \"The New York Times\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16864766",
"title": "Polar wander",
"section": "Section::::True polar wander.:Earth.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 576,
"text": "True polar wander represents the shift in the geographical poles relative to Earth's surface, after accounting for the motion of the tectonic plates. This motion is caused by the rearrangement of the mantle and the crust in order to align the maximum inertia with the current rotation axis (fig.1). This is similar to a spinning top; when its rotation is disturbed, it slowly recovers and it will realign its rotation axis to its position of maximum inertia. The difference is that unlike Earth, the spinning top's mass distribution is constant through its volume over time. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "439821",
"title": "Polar drift",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 240,
"text": "Polar drift is a geological phenomenon caused by variations in the flow of molten iron in Earth's outer core, resulting in changes in the orientation of Earth's magnetic field, and hence the position of the magnetic north- and south poles.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2203131",
"title": "Geomagnetic reversal",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 454,
"text": "A geomagnetic reversal is a change in a planet's magnetic field such that the positions of magnetic north and magnetic south are interchanged (not to be confused with geographic north and geographic south). The Earth's field has alternated between periods of \"normal\" polarity, in which the predominant direction of the field was the same as the present direction, and \"reverse\" polarity, in which it was the opposite. These periods are called \"chrons\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17574685",
"title": "History of geophysics",
"section": "Section::::20th century.:Geomagnetism.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 722,
"text": "The motion of the conductive molten metal beneath the Earth's crust, or the Earth's dynamo, is responsible for the existence of the magnetic field. The interaction of the magnetic field and solar radiation has an impact on how much radiation reaches the surface of Earth and the integrity of the atmosphere. It has been found that the magnetic poles of the Earth have reversed several times, allowing researchers to get an idea of the surface conditions of the planet at that time. The cause of the magnetic poles being reversed is unknown, and the intervals of change vary and do not show a consistent interval. It is believed that the reversal is correlated to the Earth's mantle, although exactly how is still debated.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "308058",
"title": "Magnetic declination",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 410,
"text": "Magnetic declination, or magnetic variation, is the angle on the horizontal plane between magnetic north (the direction the north end of a magnetized compass needle points, corresponding to the direction of the Earth's magnetic field lines) and true north (the direction along a meridian towards the geographic North Pole). This angle varies depending on position on the Earth's surface and changes over time.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24944",
"title": "Plate tectonics",
"section": "Section::::Development of the theory.:Floating continents, paleomagnetism, and seismicity zones.\n",
"start_paragraph_id": 74,
"start_character": 0,
"end_paragraph_id": 74,
"end_character": 809,
"text": "Meanwhile, debates developed around the phenomena of polar wander. Since the early debates of continental drift, scientists had discussed and used evidence that polar drift had occurred because continents seemed to have moved through different climatic zones during the past. Furthermore, paleomagnetic data had shown that the magnetic pole had also shifted during time. Reasoning in an opposite way, the continents might have shifted and rotated, while the pole remained relatively fixed. The first time the evidence of magnetic polar wander was used to support the movements of continents was in a paper by Keith Runcorn in 1956, and successive papers by him and his students Ted Irving (who was actually the first to be convinced of the fact that paleomagnetism supported continental drift) and Ken Creer.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2rywbs
|
if i find an undiscovered island in international waters, is it mine?
|
[
{
"answer": "I imagine you could make a claim assuming it is in international waters. ",
"provenance": null
},
{
"answer": "follow up question, is that even possible with all the satellites we have?",
"provenance": null
},
{
"answer": "Essentially, on the scale of nations, there's not really legal rules exactly. Its more like, if you can convince everyone that it is yours, then it's yours.\n\nFor my money, I would instead make a deal with the US or your large power of choice. \"Let me own this whole island, and I'll be part of your country.\" The odds of the international community recognizing the US's claim on an undiscovered island is way better than them recognizing yours.",
"provenance": null
},
{
"answer": "If you claim it and no other state disputes your claim then congratulations it is yours. You aren't a state yet though unless other states recognize that you are a state and don't try to use their authority over you. ",
"provenance": null
},
{
"answer": "You could theoretically claim it. However, the chance of finding one would be almost nonexistent as countries will always be looking for islands at sea. Why? Not because of the land itself but due to the economic exclusion zone that grants the country exclusive resource exploitation rights within a 200 nautical mile radius.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "20024604",
"title": "Howland and Baker islands",
"section": "Section::::Economic potential.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 543,
"text": "The only immediate mining potential is on and immediately offshore of the islands themselves (phosphates, sand, gravel, and coral) which would conflict with their protected status per the study. Iron deposits on a few seamounts are also mentioned as an \"intermediate\" possibility but no energy resources are identified. The islands have phosphorite and guano resources. However, all commercial extraction activities, including fishing and deep-sea mining, are prohibited in the wildlife refuges and submerged lands and waters of the Monument.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "198146",
"title": "Macquarie Island",
"section": "Section::::History.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 244,
"text": "On 5 December 1997, Macquarie Island was inscribed on the UNESCO World Heritage List as a site of major geoconservation significance, being the only place on earth where rocks from the earth's mantle are being actively exposed above sea-level.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26031988",
"title": "Tjärven",
"section": "Section::::History.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 441,
"text": "SMA has confirmed that many Russian mines dating from the first world war may still lie on the bottom of the sea east of the light station, making anchoring or diving dangerous in the area. The island can be visited by boat travelers under acceptable weather circumstances, but it is difficult to dock this remote and slippery island. And the areas surrounding it is heavily trafficked by the cruise ships plying between Sweden and Finland.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1518241",
"title": "Taiping Island",
"section": "Section::::Geography.:Natural resources.\n",
"start_paragraph_id": 42,
"start_character": 0,
"end_paragraph_id": 42,
"end_character": 339,
"text": "The island has historically been mined for phosphates to the point of exhaustion, and today has no major natural resources. There is potentially a large amount of undiscovered reserves of oil and natural gas beneath surrounding waters within the South China Sea Basin, however, there has yet to be formal exploration and mining conducted.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10431114",
"title": "Deep sea mining",
"section": "Section::::Brief history.:Laws and regulations.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 301,
"text": "Within the EEZ of nation states seabed mining comes under the jurisdiction of national laws. Despite extensive exploration both within and outside of EEZs, only a few countries, notably New Zealand, have established legal and institutional frameworks for the future development of deep seabed mining.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "15028",
"title": "International Seabed Authority",
"section": "Section::::Activities.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 536,
"text": "Contrary to early hopes that seabed mining would generate extensive revenues for both the exploiting countries and the Authority, no technology has yet been developed for gathering deep-sea minerals at costs that can compete with land-based mines. Until recently, the consensus has been that economic mining of the ocean depths might be decades away. Moreover, the United States, with some of the most advanced ocean technology in the world, has not yet ratified the Law of the Sea Convention and is thus not a member of the Authority.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19209894",
"title": "Naval Station Treasure Island",
"section": "Section::::Environmental issues.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 551,
"text": "After the Naval Station closed in 1997, Treasure Island was opened to residential and other uses, but according to the United States Environmental Protection Agency and the state Department of Toxic Substances Control, the ground at various locations on the island is contaminated with toxic substances. Caesium-137 levels three times higher than previously recorded were found in April 2013. These are thought to date from the base's use by ships contaminated in post-war nuclear testing, and from a nuclear training facility previously based there.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1ucez7
|
how do music editing programs change the pitch without changing the speed?
|
[
{
"answer": "Sound can be converted between \"time domain\" and \"frequency domain\" using Fourier transform (sort of a spectrum analyzer). So you can convert the sound to frequencies, do modifications there (for example shift frequencies) and convert back to time domain, and you get pitch shift without speed change. In practice doing this is very time consuming, so it's done in small blocks and quality will not be perfect.",
"provenance": null
},
{
"answer": "You take the audio signal and extract a set of parameters from it (usually it's a fourier transform but other methods exist). This set of parameters help 'describe' the song (if it's a fourier transform each number you get tells you how much of each pitch you have), so you can then use them in a mathematical formula to recreate the original sound (though there tends to be some distortion so you don't get exactly the same thing as the original).\n\nThese parameters are extracted from a set of 'windows' of the song (very small segments of the sound, as the sounds change through time so you want only a small section of the song each time so it changes as little as possible). Also, windows are sometimes overlapped to help with recreating the original sound as close as possible.\n\nNow, the value you get is not for a single pitch but for a group of pitches. Here we have a compromise, you can either get a very detailed map or a small window. Each will have an effect on what you get back, so a decent middle point needs to be found, be it from information you already know (this changes relatevely fast/slowly) or trial and error.\n\n\nTo change only pitch you just change this values (shift them by the amount of pitch you want to change, so the amount you had of one pitch is now the amout you have at the new pitch) and run them through the formula to get a new pitched sound.\n\nIf you want to change the speed without changing pitch you can make each window larger or shorter when recreating the sound or delete / duplicate windows to reach the wanted lenght. \n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "2421675",
"title": "Transcription (music)",
"section": "Section::::Transcription aids.:Slow-down software.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 456,
"text": "The software generally goes through a two-step process to accomplish this. First, the audio file is played back at a lower sample rate than that of the original file. This has the same effect as playing a tape or vinyl record at slower speed - the pitch is lowered meaning the music can sound like it is in a different key. The second step is to use Digital Signal Processing (or DSP) to shift the pitch back up to the original pitch level or musical key.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "46943",
"title": "Audio time stretching and pitch scaling",
"section": "Section::::Resampling.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 960,
"text": "The simplest way to change the duration or pitch of a digital audio clip is through sample rate conversion. This is a mathematical operation that effectively rebuilds a continuous waveform from its samples and then samples that waveform again at a different rate. When the new samples are played at the original sampling frequency, the audio clip sounds faster or slower. Unfortunately, the frequencies in the sample are always scaled at the same rate as the speed, transposing its perceived pitch up or down in the process. In other words, slowing down the recording lowers the pitch, speeding it up raises the pitch. This is analogous to speeding up or slowing down an analogue recording, like a phonograph record or tape, creating the Chipmunk effect. Using this method the two effects cannot be separated. A drum track containing no pitched instruments can be moderately sample rate converted for tempo without adverse effects, but a pitched track cannot.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3404398",
"title": "Pitch shift",
"section": "Section::::Pitch shifter and harmonizer.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 318,
"text": "Pitch correction is a form of pitch shifting and is found in software such as Auto-Tune to correct intonation inaccuracies in a recording or performance. Pitch shifting may raise or lower all sounds in a recording by the same amount, whereas in practice, pitch correction may make different changes from note to note.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34658245",
"title": "Multi-image",
"section": "Section::::Multi-image production technologies.:Audio production.\n",
"start_paragraph_id": 33,
"start_character": 0,
"end_paragraph_id": 33,
"end_character": 334,
"text": "Audio editing of the music or voice-over was done manually to create a scratch track, usually with a cutting block and tape. Once the audio edits were completed, the final version would be copied onto another tape; either to inch, cassette or other format so that there tape used to run the presentation would be a fresh uncut tape.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9874889",
"title": "De-essing",
"section": "Section::::Process of de-essing.:De-essing with automation.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 305,
"text": "This method is made feasible by editing automation points directly, as opposed to programming by manipulating gain sliders in a write-mode. An audio engineer would not be able to react fast enough to precisely reduce and restore vocal levels for the brief duration of sibilants during real-time playback.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2774090",
"title": "Pitch correction",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 592,
"text": "Pitch correction is an electronic effects unit or audio software that changes the intonation (highness or lowness in pitch) of an audio signal so that all pitches will be notes from the equally tempered system (i.e., like the pitches on a piano). Pitch correction devices do this without affecting other aspects of its sound. Pitch correction first detects the pitch of an audio signal (using a live pitch detection algorithm), then calculates the desired change and modifies the audio signal accordingly. The widest use of pitch corrector devices is in Western popular music on vocal lines.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "700160",
"title": "Remaster",
"section": "Section::::Remastering.:Music.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 303,
"text": "When the remastering starts, engineers use software tools such as a limiter, an equaliser, and a compressor. The compressor and limiters are ways of controlling the loudness of a track. However, this is not to be confused with the volume of a track, which is controlled by the listener during playback.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
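A note on the pitch-shifting answers above (1ucez7): the frequency-domain approach they describe can be sketched in a few lines. This is a deliberately naive illustration, not how production pitch shifters (phase vocoders, PSOLA, Auto-Tune) actually work: it shifts every STFT bin by a fixed offset, which only approximates a pitch change for a single tone, and it ignores phase coherence, so artifacts are expected, matching the answer's caveat that quality will not be perfect. The sample rate, window size, test tone, and use of numpy/scipy are all assumptions made for the sketch.

```python
# A minimal sketch (assumed parameters, not production DSP): convert to the
# frequency domain with an STFT, shift the frequency bins, convert back.
# Duration is preserved, so the speed does not change, but the pitch does.
import numpy as np
from scipy.signal import stft, istft

fs = 44100                                   # assumed sample rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 440.0 * t)            # one second of a 440 Hz tone

nperseg = 2048                               # the "small blocks" from the answer
f, _, Z = stft(x, fs=fs, nperseg=nperseg)    # time domain -> frequency domain

ratio = 2 ** (4 / 12)                        # aim roughly four semitones up
bin_width = f[1] - f[0]                      # Hz covered by each STFT bin
shift = int(round(440.0 * (ratio - 1) / bin_width))

Z_shifted = np.roll(Z, shift, axis=0)        # move energy to higher-frequency bins
Z_shifted[:shift, :] = 0                     # clear the bins that wrapped around

_, y = istft(Z_shifted, fs=fs, nperseg=nperseg)  # frequency domain -> time domain
print(len(x), len(y))    # roughly the same length: same speed, higher pitch
```

For real material, the shift has to scale each frequency multiplicatively and keep phases consistent between blocks, which is why practical tools use phase vocoders or time-domain methods such as PSOLA instead.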
3achmo
|
Is the total mass of oxygen atoms greater in the ocean or in the air?
|
[
{
"answer": "Oxygen is about 23.20% of the atmosphere by mass, and the atmosphere has a total mass of about 5 x 10^(18) kg. So the atmosphere contains about 1.16 x 10^(18) kg of oxygen.\n\nOxygen is about 88.89% (16/18) of water by mass, and the oceans have an approximate mass of 1.35 x 10^(21) kg. So the oceans contain about 1.2 x 10^(21) kg.\n\nIn other words, the oceans contain roughly 1000 times more oxygen than the atmosphere does, by mass.",
"provenance": null
},
{
"answer": "Just using some quick values from Wikipedia:\n\nMass of Earth's oceans: 1.35x10^21 kg\n\n[Percent oxygen by mass](_URL_1_): 85.84%\n\nMultiply these values to get 1.2x10^21 kg of oxygen in the oceans.\n\nFor the atmosphere, oxygen is contained in O2, H2O, CO2, and some other minor gases such as O3, N2O, etc. For the purposes of estimation, we will neglect all of these except for O2, [which comprises](_URL_0_) about 21% of the atmosphere by volume. We will assume air is an ideal gas, so fraction by volume is equivalent to fraction by moles. The total mass of the atmosphere is 5.15x10^18 kg; multiplying these values we get 1.1x10^18 kg of oxygen in the atmosphere.\n\nSo there is more oxygen in the oceans, by about a factor of 1000.",
"provenance": null
},
{
"answer": "The pressure of the atmosphere is equal to the pressure of 10m water. The average depth of oceans is more than 4km, and if there was no land the average depth of oceans would be around 2.5 km. So a rough estimation is that the mass of oceans is 250x that of the atmosphere.\n\n~~Water~~ *EDIT: Oxygen* is almost 90% of water by mass, and only around 25% of the atmosphere. There is no comparison, oxygen in oceans is 1000 more than oxygen in the atmosphere.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "583598",
"title": "Oxygen cycle",
"section": "Section::::Reservoirs.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 551,
"text": "Oxygen is one of the most abundant elements on Earth and represents a large portion of each main reservoir. By far the largest reservoir of Earth's oxygen is within the silicate and oxide minerals of the crust and mantle (99.5% by weight). The Earth's atmosphere, hydrosphere and biosphere together weigh less than 0.05% of the Earth's total mass. Besides O, additional oxygen atoms are present in various forms spread throughout the surface reservoirs in the molecules of biomass, HO, CO, HNO, NO, NO, CO, HO, O, SO, HSO, MgO, CaO, AlO, SiO, and PO.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "28742",
"title": "Supercontinent",
"section": "Section::::Supercontinents and atmospheric gases.\n",
"start_paragraph_id": 47,
"start_character": 0,
"end_paragraph_id": 47,
"end_character": 966,
"text": "The process of Earth's increase in atmospheric oxygen content is theorized to have started with continent-continent collision of huge land masses forming supercontinents, and therefore possibly supercontinent mountain ranges (supermountains). These supermountains would have eroded, and the mass amounts of nutrients, including iron and phosphorus, would have washed into oceans, just as we see happening today. The oceans would then be rich in nutrients essential to photosynthetic organisms, which would then be able to respire mass amounts of oxygen. There is an apparent direct relationship between orogeny and the atmospheric oxygen content). There is also evidence for increased sedimentation concurrent with the timing of these mass oxygenation events, meaning that the organic carbon and pyrite at these times were more likely to be buried beneath sediment and therefore unable to react with the free oxygen. This sustained the atmospheric oxygen increases.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3268926",
"title": "Great Oxidation Event",
"section": "Section::::Geological evidence.:Iron speciation.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 584,
"text": "The concentration of ferruginous and euxinic states in iron mass can also provide clues of the oxygen level in the atmosphere. When the environment is anoxic, the ratio of ferruginous and euxinic out of the total iron mass is lower than the ratio in an anoxic environment such as the deep ocean. One of the hypotheses suggests that microbes in the ocean already oxygenated the shallow waters before the GOE event around 2.6- 2.5 Ga. The high concentration ferruginous and euxinic states of sediments in the deep ocean showed consistency with the evidence from banded iron formations.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24377796",
"title": "Geological history of oxygen",
"section": "Section::::Effects on life.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 753,
"text": "The large size of insects and amphibians in the Carboniferous period, when the oxygen concentration in the atmosphere reached 35%, has been attributed to the limiting role of diffusion in these organisms' metabolism. But Haldane's essay points out that it would only apply to insects. However, the biological basis for this correlation is not firm, and many lines of evidence show that oxygen concentration is not size-limiting in modern insects. There is no significant correlation between atmospheric oxygen and maximum body size elsewhere in the geological record. Ecological constraints can better explain the diminutive size of post-Carboniferous dragonflies - for instance, the appearance of flying competitors such as pterosaurs, birds and bats.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20903424",
"title": "Breathing",
"section": "Section::::Effects of ambient air pressure.:Breathing at altitude.\n",
"start_paragraph_id": 36,
"start_character": 0,
"end_paragraph_id": 36,
"end_character": 903,
"text": "The atmospheric pressure decreases exponentially with altitude, roughly halving with every rise in altitude. The composition of atmospheric air is, however, almost constant below 80 km, as a result of the continuous mixing effect of the weather. The concentration of oxygen in the air (mmols O per liter of air) therefore decreases at the same rate as the atmospheric pressure. At sea level, where the ambient pressure is about 100 kPa, oxygen contributes 21% of the atmosphere and the partial pressure of oxygen () is 21 kPa (i.e. 21% of 100 kPa). At the summit of Mount Everest, , where the total atmospheric pressure is 33.7 kPa, oxygen still contributes 21% of the atmosphere but its partial pressure is only 7.1 kPa (i.e. 21% of 33.7 kPa = 7.1 kPa). Therefore, a greater volume of air must be inhaled at altitude than at sea level in order to breath in the same amount of oxygen in a given period.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "47463",
"title": "Thermosphere",
"section": "Section::::Neutral gas constituents.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 583,
"text": "The density of the Earth's atmosphere decreases nearly exponentially with altitude. The total mass of the atmosphere is M = ρ H ≃ 1 kg/cm within a column of one square centimeter above the ground (with ρ = 1.29 kg/m the atmospheric density on the ground at z = 0 m altitude, and H ≃ 8 km the average atmospheric scale height). 80% of that mass is concentrated within the troposphere. The mass of the thermosphere above about 85 km is only 0.002% of the total mass. Therefore, no significant energetic feedback from the thermosphere to the lower atmospheric regions can be expected.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "15056636",
"title": "Atlantic Data Base for Exchange Processes at the Deep Sea Floor",
"section": "",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 283,
"text": "BULLET::::- The largest differences in estimates of oxygen fluxes at the sea floor are found at continental margins which export large amounts of organic carbon to the deep sea. The knowledge of transport and biological utilization at the continental margins need further attention.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
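For the oxygen comparison above (3achmo), the arithmetic in the answers can be reproduced directly. The masses and mass fractions below are the rough figures quoted in those answers, not independently sourced data, and the variable names are just for this sketch.

```python
# Back-of-the-envelope check of the ocean-vs-atmosphere oxygen comparison,
# using the approximate figures quoted in the answers above.
OCEAN_MASS_KG = 1.35e21          # total mass of the oceans
ATMOSPHERE_MASS_KG = 5.15e18     # total mass of the atmosphere

O_FRACTION_OF_WATER = 16.0 / 18.0   # oxygen's share of H2O by mass (~88.9%)
O_FRACTION_OF_AIR = 0.232           # oxygen's share of air by mass (~23.2%)

oxygen_in_oceans = OCEAN_MASS_KG * O_FRACTION_OF_WATER          # ~1.2e21 kg
oxygen_in_atmosphere = ATMOSPHERE_MASS_KG * O_FRACTION_OF_AIR   # ~1.2e18 kg

print(f"oxygen in oceans:     {oxygen_in_oceans:.2e} kg")
print(f"oxygen in atmosphere: {oxygen_in_atmosphere:.2e} kg")
print(f"ratio: about {oxygen_in_oceans / oxygen_in_atmosphere:.0f} to 1")
```

The ratio comes out near 1000, matching the conclusion all three answers reach.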
5h84ex
|
All this talk about humans on Mars someday. How will that be possible with such extreme cold temperatures on Mars?
|
[
{
"answer": "Well, the temperature isn't even the biggest problem. Radiation and (lack of) pressure are at least as problematic. All of these limit the extent of extravehicular surface activities feasible to humans. We do need heated, insulated pressure suits on Mars, but they can be quite a bit more lightweight and less clumsy than the spacesuits used on the Moon and in orbit.",
"provenance": null
},
{
"answer": "I find this an odd question. It applies equally well to living in Reykjavik, Chicago, or Moscow. Humans will, of course, live in heated and insulated habitats or wear protective clothing and gear to protect them from the natural cold.\n\nConsider how many people on Earth live lives that would not be possible or practical without modern technology. People who earn their living based on technologies that didn't exist 10, 20, 50, or 100 years ago. People who live in places at population levels that would be untenable without modern buildings, heating or cooling, long distance water transport and irrigation, long distance food transport, and so on. In the US 3/4 of workers drive alone to work, and nearly half of all commutes are farther than 10 miles in one direction. A ubiquitous lifestyle made possible by the advancement in automobile development and manufacturing. Today cars are common and relatively inexpensive but only a century ago they were incredibly rare and costly. From the perspective of 1916 or even more so 1891 (125 years ago) the world of today or even the 1950s and its extensive reliance on the automobile is one that seems unnatural and difficult to comprehend. Just as folks in the 1980s would find our modern dependence on computers and the internet seemingly baffling and troubling.\n\nFor people who are unfamiliar with it relying on new technology is like walking on thin ice. You have no confidence in it and you will have fear that it will fail spectacularly at any moment leaving you without it or worse off than before. It's only after extensive experience with a technology and finding out its limitations as well as its strengths that it becomes easier to rely on. People have come to rely on automobiles and the internet because they've both consistently delivered. Every single year in America the automobiles there drive over a trillion cumulative miles, and have been doing so since the 1970s. That experience tells you more than enough about the reliability and dependability of the automobile as a part of the US socio-industrial infrastructure. Similarly, an unimaginable amount of network traffic is constantly being handled by the internet and trillions of dollars in business is happening through or on the internet.\n\nToday we see the technology of spaceflight and space colonization as outside our bubble of familiarity. As experimental, as potentially unreliably, as risky to rely on. And for now to some degree it is. But over time we will improve that technology, scale up our manufacturing of it, and scale up its use. Just as happened with the automobile, the telephone, the internet, nitrate fertilizers, artificial irrigation, green houses, transcontinental shipping, and so many other technological wonders that our modern world relies on. And over time the measurements corresponding to usage of those technologies (liters of water recycled, cubic meters of habitable volume manufactured, kilograms of CO2 scrubbed) will grow exponentially, to thousands, millions, billions, perhaps trillions. And over time those technologies that are an essential part of space colonization but not, currently, essential to life on Earth will move inside the bubble of familiarity.\n\nMillions of people will live off Earth, and their lives will depend on different technologies and different infrastructure than ours just as ours depends on different things than those living a hundred or a thousand years ago on Earth. But for them it will be normal. 
They will have developed the technologies to sufficient levels of reliability and performance to depend on them comfortably. For them it will just be the normal way of life.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "12395",
"title": "Greenhouse effect",
"section": "Section::::Bodies other than Earth.\n",
"start_paragraph_id": 42,
"start_character": 0,
"end_paragraph_id": 42,
"end_character": 467,
"text": "In complete contast the mean temparature on Mars is very cold at -63 deg C [-82 deg F] This is despite having over 95% atmospheric CO2, almost the same as Venus, but at a much lower pressure than both earth an Venus. Mars is further from the Sun than earth but still receives about 44% of the suns heat [approx 500w/msq] when compared to earth. Also any atmospheric or surface heating from solar flares and cosmic radiation affects Mars as it has no magnetic field. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "42302371",
"title": "Mars habitat",
"section": "Section::::Overview.:Air.\n",
"start_paragraph_id": 50,
"start_character": 0,
"end_paragraph_id": 50,
"end_character": 438,
"text": "One of the challenges for a Mars habitat is for it to maintain suitable temperatures in the right places in a habitat. Things like electronics and lights generate heat that rises in the air, even as there are extreme temperature fluctuation outside. There can be large temperature swings on Mars, for example at the equator it may reach 70 degrees F (20 degrees C) in the daytime but then go down to minus 100 degrees F (−73 C) at night.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1744360",
"title": "Colonization of Mars",
"section": "Section::::Conditions for human habitation.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 346,
"text": "Conditions on the surface of Mars are closer to the conditions on Earth in terms of temperature and sunlight than on any other planet or moon, except for the cloud tops of Venus. However, the surface is not hospitable to humans or most known life forms due to the radiation, greatly reduced air pressure, and an atmosphere with only 0.1% oxygen.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "463835",
"title": "Life on Mars",
"section": "Section::::Survival under simulated Martian conditions.\n",
"start_paragraph_id": 112,
"start_character": 0,
"end_paragraph_id": 112,
"end_character": 435,
"text": "Although numerous studies point to resistance to some of Mars conditions, they do so separately, and none has considered the full range of Martian surface conditions, including temperature, pressure, atmospheric composition, radiation, humidity, oxidizing regolith, and others, all at the same time and in combination. Laboratory simulations show that whenever multiple lethal factors are combined, the survival rates plummet quickly.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31028310",
"title": "Interplanetary contamination",
"section": "Section::::Evidence for possible habitats outside Earth.:Mars.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 435,
"text": "Although numerous studies point to resistance to some of Mars conditions, they do so separately, and none has considered the full range of Martian surface conditions, including temperature, pressure, atmospheric composition, radiation, humidity, oxidizing regolith, and others, all at the same time and in combination. Laboratory simulations show that whenever multiple lethal factors are combined, the survival rates plummet quickly.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "56677683",
"title": "Mars suit",
"section": "Section::::Environmental design requirements.:Temperature.\n",
"start_paragraph_id": 33,
"start_character": 0,
"end_paragraph_id": 33,
"end_character": 265,
"text": "There can be large temperature swings on Mars; for example, at the equator, daytime temperature may reach in the Martian summer, and drop down to at night. According to a 1958 NASA report, long-term human comfort requires temperatures in the range at 50% humidity.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21051206",
"title": "Carbonate–silicate cycle",
"section": "Section::::The cycle on other planets.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 911,
"text": "Mars is such a planet, since it is located at the edge of our solar system’s habitable zone, which means its surface is too cold for liquid water to form without a greenhouse effect. With its thin atmosphere, Mars's mean surface temperature is -55 °C. In attempting to explain Mars’ topography that resembles fluvial channels despite seemingly insufficient incoming solar radiation, some have suggested that a cycle similar to Earth's carbonate-silicate cycle could have existed – similar to a retreat from Snowball Earth periods. It has been shown using modeling studies that gaseous CO and HO acting as greenhouse gases could not have kept Mars warm during its early history when the sun was fainter because CO would condense out into clouds. Even though CO clouds do not reflect in the same way that water clouds do on Earth, which means it could not have had much of a carbonate-silicate cycle in the past.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2ep6o3
|
why do apples make me hungrier?
|
[
{
"answer": "The problem with apples is that they are purely carbs and nothing else, and carbs will always leave you with that \"empty\" feeling. Granted, they have a lot of health benefits, are highly nutritious, and I definitely eat them all the time. However, foods that contain fats and proteins are the ones that make us feel full. What you need to do, and what I do, is eat the apple with a couple of almonds or a piece of low calorie string cheese. That way you are getting a good mix of carbs, fats, and proteins as part of a healthy snack that will leave you feeling full for quite some time. Hope this helps! ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "9393968",
"title": "Esopus Spitzenburg",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 349,
"text": "It is fairly large, oblong and has red skin and crisp flesh. Like many late-season apples, it improves with a few weeks of cool storage, which brings it to its full, rich flavor. Hedrick praised this apple as attractive and keeping well in cold storage, but added that it was imperfect in that the trees lack vigor and are vulnerable to apple scab.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "55377540",
"title": "Russeting",
"section": "Section::::Causes.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 505,
"text": "Apples are particularly susceptible to russet. Many naturally-occurring varieties exhibit the feature consistently, while other cultivars may develop russet due to environmental stresses. As a result, cuticular structure is impaired, leading to reduced strength of the peel, which impacts handling and post-harvest processing. Russeting and cuticular cracks may accelerate the development of flesh browning due to oxidation, as well as softening of internal tissue due to the loss of an external support.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4392124",
"title": "Macoun apple",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 369,
"text": "Aside from its short season of availability, the popularity of the apple is somewhat compromised by the problems it gives orchardists. The 'Macoun' has a short stem, and there is a tendency for the apple to push itself off the branch as the fruit matures; also, the 'Macoun' tends not to produce reliable crops each year, with a good harvest followed by a sparser one.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4022319",
"title": "Schnitz un knepp",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 387,
"text": "Today, commercial producers of apple snitz use named-variety apples that cannot be sold as fresh because of blemishes, and they peel the apples. The peelings do not go to waste; they are pressed for cider. Some home orchards may have a tree that produces tart apples, prized for the flavorful snitz they make. They may also choose to only core and slice their apples, not peeling them. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "44912613",
"title": "Prima apple",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 215,
"text": "It has a juicy flesh with a balanced mild sub-acid flavour, a red flushed skin over yellow background. It does not fall off the tree, and like most early harvest apples, does not keep well, even with refrigeration.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4022319",
"title": "Schnitz un knepp",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 300,
"text": "Apples other than named varieties grafted from a parent tree, were usually small, misshapen and rather tart - because of Johnny Appleseed's Swedenborgian faith, he sold only ungrafted trees - but drying the snitz concentrates the fruit sugars, making them a bright spot in an otherwise dreary diet. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1659702",
"title": "Mammea americana",
"section": "Section::::Description.:Fruit.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 399,
"text": "Mammee apples' diameter ranges from to . When unripe, the fruit is hard and heavy, but its flesh slightly softens when fully ripe. Beneath the skin, there is a white, dry membrane, whose taste is astringent, that adheres to the flesh. The flesh is orange or yellow, not fibrous, and can have various textures (crispy or juicy, firm or tender). Generally, the flesh smell is pleasant and appetizing.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
6g7wor
|
Why is there a dotted image on the side of public bus windows?
|
[
{
"answer": "It's called frit. Has a number of purposes- it's ceramic based paint that helps the adhesive bond to the window in the mount. It also minimizes UV reducing its ability to break down the sealant. \n\nAnd, I've heard they think it makes a car more appealing- so you don't go from black window gasket to window- it's a slow transition. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "26483185",
"title": "Spider map",
"section": "Section::::Design.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 574,
"text": "At the centre of the map is a rectangular area with a yellow background which shows the local street layout and bus stops labelled with letters (A to Z, and if necessary AA to ZZ) of all the bus-stops in the local area. Beyond this is a schematic bus map for an area about radius with a pale yellow background, which shows all bus stops in their relative positions. Further out of the map shows the remainder of the route against a white background, but without showing all bus stops. Bus routes themselves are shown as distinctive coloured lines, and are clearly numbered.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "633072",
"title": "School bus",
"section": "Section::::Features.:Livery.\n",
"start_paragraph_id": 68,
"start_character": 0,
"end_paragraph_id": 68,
"end_character": 360,
"text": "To specifically identify them as such, purpose-built school buses are painted a specific shade of yellow, designed to optimize their visibility for other drivers. In addition to \"School Bus\" signage in the front and rear above the window line, vehicles are marked with the name of the operator (school district or bus contractor) and an identification number.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4146",
"title": "Bus",
"section": "Section::::Design.:Liveries.\n",
"start_paragraph_id": 33,
"start_character": 0,
"end_paragraph_id": 33,
"end_character": 449,
"text": "Transit buses are normally painted to identify the operator or a route, function, or to demarcate low-cost or premium service buses. Liveries may be painted onto the vehicle, applied using adhesive vinyl technologies, or using decals. Vehicles often also carry bus advertising or part or all of their visible surfaces (as mobile billboard). Campaign buses may be decorated with key campaign messages; these can be to promote an event or initiative.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "46629097",
"title": "See-through graphics",
"section": "Section::::Applications.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 513,
"text": "See-through graphics are used for Out of Home (OOH) advertising campaigns as part of vehicle wraps on buses, trams and the back window of taxis. It is also used for advertising on static sites such as telephone kiosks, bus shelters and on glass windows and partitions in airports and other transport hubs. The main benefit is that advertisers can install larger and more impactful graphics which cover windows as well as standard walls. There are a number of tips and tricks to ensure a successful \"window wrap\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "240496",
"title": "Bus stop",
"section": "Section::::Information.:Public facing information.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 528,
"text": "The bus stop \"flag\" (a panel usually projecting from the top of a bus stop pole) will sometimes contain the route numbers of all the buses calling at the stop, optionally distinguishing frequent, infrequent, 24-hour, and night services. The flag may also show the logo of the dominant bus operator, or the logo of a local transit authority with responsibility for bus services in the area. Additional information may include an unambiguous, unique name for the stop, and the direction/common destination of most calling routes.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10239864",
"title": "New York City Subway tiles",
"section": "Section::::Original IRT and BMT tiles.:Heins & LaFarge (1901–1907).\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 579,
"text": "Their bas-reliefs in the subway have been likened to the work of the Italian Renaissance artist Andrea Della Robbia. Much of their tile work was station-identifying signs to guide passengers. Besides serving an aesthetic function, the images are helpful to New York City's large population of non-English speakers and those who can't read. A traveler can be told to \"get off at the stop with the picture of a beaver.\" As well as pictorial plaques and ceramic signs, Heins and LaFarge designed the running decorative motifs, such as egg-and-dart patterns, along station ceilings.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4313702",
"title": "CNA Center",
"section": "Section::::Lighted window messages.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 518,
"text": "Utilizing a combination of lights on/off and 1,600 window blinds open/closed (and sometimes foamboard cutouts), the windows on CNA Center are often used to display lighted window messages, typically denoting holidays, remembrances, and other events denoting Chicago civic pride, such as when the Blackhawks played in and won the 2010 Stanley Cup Finals and when the Cubs made their 2016 World Series run. Building engineers use a computer program to plot which windows need to be lighted to create the proper message.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
4zgu02
|
how does private individual buy stocks offline/online? and how does the process work?
|
[
{
"answer": "\"private individuals\" don't really buy stocks, you need to go through a broker. You place an order with one of a thousand or more investment houses/companies that you have made an account with, the company then goes through the process of buying you that stock through their electronic systems. It takes only milliseconds for a transaction to occur.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "19372783",
"title": "Stock",
"section": "Section::::Trading.:Buying.\n",
"start_paragraph_id": 51,
"start_character": 0,
"end_paragraph_id": 51,
"end_character": 638,
"text": "There are other ways of buying stock besides through a broker. One way is directly from the company itself. If at least one share is owned, most companies will allow the purchase of shares directly from the company through their investor relations departments. However, the initial share of stock in the company will have to be obtained through a regular stock broker. Another way to buy stock in companies is through Direct Public Offerings which are usually sold by the company itself. A direct public offering is an initial public offering in which the stock is purchased directly from the company, usually without the aid of brokers.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19372783",
"title": "Stock",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 904,
"text": "Stock can be bought and sold privately or on stock exchanges, and such transactions are typically heavily regulated by governments to prevent fraud, protect investors, and benefit the larger economy. As new shares are issued by a company, the ownership and rights of existing shareholders are diluted in return for cash to sustain or grow the business. Companies can also buy back stock, which often lets investors recoup the initial investment plus capital gains from subsequent rises in stock price. Stock options, issued by many companies as part of employee compensation, do not represent ownership, but represent the right to buy ownership at a future time at a specified price. This would represent a windfall to the employees if the option is exercised when the market price is higher than the promised price, since if they immediately sold the stock they would keep the difference (minus taxes).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41915",
"title": "Primary market",
"section": "Section::::Concept.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 502,
"text": "In a primary market, companies, governments or public sector institutions can raise funds through bond issues and corporations can raise capital through the sale of new stock through an initial public offering (IPO). This is often done through an investment bank or finance syndicate of securities dealers. The process of selling new shares to investors is called underwriting. Dealers earn a commission that is built into the price of the security offering, though it can be found in the prospectus. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "32956655",
"title": "Bought out deal",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 565,
"text": "A bought out deal is a method of offering securities to the public through a sponsor or underwriter (a bank, financial institution, or an individual). The securities are listed in one or more stock exchanges within a time frame mutually agreed upon by the company and the sponsor. This option saves the issuing company the costs and time involved in a public issue. The cost of holding the shares can be reimbursed by the company, or the sponsor can offer the shares to the public at a premium to earn profits. Terms are agreed upon by the company and the sponsor.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "53595317",
"title": "Securities market participants (United States)",
"section": "Section::::Parties to transactions.:Investor.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 757,
"text": "Investors may not be members of stock exchanges. Rather they must buy and sell securities through broker-dealers which are registered with the appropriate regulatory body for that purpose. In accepting investors as clients, broker-dealers take on the risks of their clients not being able to meet their financial obligations. Hence retail (individual) investors generally are required to keep their investment assets in custody with the broker-dealer through which they buy and sell securities. A broker-dealer would normally not accept an order to buy from a retail clients unless there is sufficient cash on deposit with the broker-dealer to cover the cost of the order, nor sell unless the client already has the security in the broker-dealer's custody.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2785607",
"title": "Stock trader",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 258,
"text": "Stock traders can trade on their own account, called proprietary trading, or through an agent authorized to buy and sell on the owner’s behalf. Trading through an agent is usually through a stockbroker. Agents are paid a commission for performing the trade.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "35337618",
"title": "STOCK Act",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 447,
"text": "The Stop Trading on Congressional Knowledge (STOCK) Act () is an Act of Congress designed to combat insider trading. It was signed into law by President Barack Obama on April 4, 2012. The bill prohibits the use of non-public information for private profit, including insider trading by members of Congress and other government employees. It confirms changes to the Commodity Exchange Act, specifies reporting intervals for financial transactions.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
6kfnbt
|
According to a History Channel documentry, during the Vietnam war, US ground troops would face 240 days of combat a year, versus 11 days of combat a year during WW2. Is this claim true?
|
[
{
"answer": "I just recently [made a post](_URL_0_) concerning the psychological effects of long-term frontline service in the U.S. Army during WWII.\n\n > Being assigned to the Infantry branch (as roughly half of all men entering the Army were in mid-1944) was an assignment fraught with danger;\n\nDeployed overseas|Total battle casualties|*Deaths among battle casualties*|*KIA*|*DOW*|*Died while MIA*|*Died while POW*|WIA|MIA|POW\n:--|:--|:--|:--|:--|:--|:--|:--|:--|:--\n757,712|661,059|*142,962*|*117,641*|*19,613*|*1,795*|*3,913*|471,376|15,830|56,212\n\n > There was no such thing as a \"tour of duty\" for U.S. soldiers during WWII; those on the front lines saw service until they were killed, wounded, taken prisoner, disabled, or otherwise too mentally broken to continue. It was estimated that a soldier began to lose his effectiveness after about 90 combat days (American troops were often on the front lines for as long as two months without a visit to a rear area rest center; the British had a policy of allowing four days of rest for every twelve days of combat) and was completely ineffective after 200-240 combat days. After 180 combat days, less than three percent of the original men in a unit remained, the rest lost to various causes. Neuropsychiatric casualties accounted for 15 to 25 percent of all non-battle losses (the Army hospitalized about 900,000 men for neuropsychiatric reasons during the war, a number Eisenhower demanded not be released to the press) and it was estimated that for every ten days of combat, three to ten percent of men in a unit became neuropsychiatric casualties. The \"rate of replacement\" for men varied wildly; many were killed nearly immediately, while others made it through the entire war without suffering any bodily harm whatsoever. \n\n > Men who showed particular aptitude for certain tasks were often sent back to the United States on temporary duty to attend specialist schools; they then returned to their units. Week-long passes were often offered to rear areas such as London or Paris. Combat veterans who had been made unsuitable for frontline service through wounds or other causes were often assigned to apply their expertise in training new soldiers at replacement training centers stateside. \n\n > In early 1945, to improve the morale of replacements (by now called \"reinforcements\"), General Joseph Stilwell proposed that they be pre-designated for assignment to certain units while still in the United States, and shipped in groups. Four men would form a squad, four squads a platoon, and four platoons a company. Companies and platoons would be broken up as needed, but the basic unit of four men would always remain intact. The plan was begrudgingly adopted by theater authorities in Europe in March 1945, and it is not possible to tell how effective it was since it only operated for a short time. The Surgeon General of the United States proposed that soldiers be given a six-month non-combat furlough after 200 to 240 days and have the option of serving it in the United States. \n\n > Before the invasion of Japan, it was proposed that infantry divisions be augmented with a fourth regiment (many being the \"orphan\" regiments detached from divisions when they moved from a \"square\" to \"triangular\" structure in 1940) so that one could be shifted completely out of the combat zone, as well as a policy that soldiers only serve 120 combat days before a substantial rest. As Japan surrendered, this was never implemented. 
A proposal was also made to rotate divisions completely out of the front line to rear areas, something done in the German *Heer*.\n\n > The replacement system in the *Heer* operated differently than in the U.S. Army, and it can be argued that there was an effect on morale. Divisions in the German Army were raised in certain geographic areas (not dissimilar to U.S. National Guard units) known as *wehrkreise* and men were assigned to divisional replacement units and took their training together, shipping to the front in groups, where they received more training before being assigned to their unit. The U.S. replacement system was rather impersonal. Men from all over the country (remember that the United States is massive; Germany is only about the size of Montana) were shipped to replacement training centers to receive instruction, and then shipped overseas to wherever they were needed, be it Europe or the Pacific. A port of embarkation (many replacements were assigned to augment divisions less than a month before they shipped out; the only significant small-unit training these men received would be combat) gave way to a trans-Atlantic or -Pacific voyage, intermediate and then field army replacement depots, replacement battalions, and then final units, each step of the way another obstacle to unit cohesion. Men often arrived scared and isolated, having little, if any, training upkeep offered along the way.\n\n > **Sources:** \n\n > * Atkinson, Rick. *The Guns at Last Light: The War in Western Europe 1944-1945*. New York: Picador, 2013.\n\n > * Ruppenthal, Roland G. *United States Army in World War II, European Theater of Operations, Logistical Support of the Armies Volume II: September 1944-May 1945*. Washington: United States Army Center of Military History, 1959.\n\n > * United States. United States Army Adjutant General's Corps. *Army Battle Casualties\nand Nonbattle Deaths in World War II Final Report, 7 December 1941-31 December 1946*. Washington: Statistical and Accounting Branch, Office of the Adjutant General, 1953.\n\n > * United States, United States Army Medical Department. *Neuropsychiatry in World War II Volume I, Zone of Interior*. Washington: Office of the Surgeon General, Department of the Army, 1973.\n\n > * United States, War Department, *Army Ground Forces Historical Study No. 7: The Provision of Enlisted Replacements*. Washington: Army Ground Forces Historical Section, 1946.\n\nThis is a very interesting claim that is supposed to have originated in a speech given by Army general Barry McCaffrey to Vietnam War veterans at the Vietnam War Veterans' Memorial in 1993. Was raw data collected from a sample of a large enough number of men to form a representative conclusion, or an assertion made based on interviews with just a few veterans? The method of data collection is thus unknown, and calls the results into question. The possible decision to include rear-echelon service troops in the World War II situation and not the Vietnam War one (as that average appears *very* high in comparison) makes the average significantly smaller; frontline infantrymen in the ETO were routinely in the line for as long as 30 days (and sometimes up to 60 or 90 days) at a time without relief, and never truly left as divisions were not moved completely to the rear (\"in reserve\" often meant only a mile or two from the front) except in extraordinary circumstances. In the Pacific it might have been a little different, as the number of pitched battles (i.e. 
Kwajalein or Saipan) were relatively spaced out and divisions were often moved back to Hawaii or Australia for several months of rest and refit. What criteria were used to obtain a definition of what \"combat\" was, and were they applied relatively equally in both scenarios?\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "1176764",
"title": "Tactical Air Command",
"section": "Section::::History.:Operation Desert Shield/Desert Storm.\n",
"start_paragraph_id": 190,
"start_character": 0,
"end_paragraph_id": 190,
"end_character": 446,
"text": "The ground war began in late February 1991 and lasted approximately 100 hours. TAC close air support A-10 aircraft supported ground forces as they had trained for in the United States and Europe for well over a decade. Military planners and Washington officials were correct when they proclaimed that the war in the desert would \"...not be another Viet Nam,\" and Desert Storm would go into the history books as one of TAC's most shining moments.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "15358000",
"title": "Vietnam: The Ten Thousand Day War",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 259,
"text": "Vietnam: The Ten Thousand Day War, a 26-part half-hour Canadian television documentary on the Vietnam War, and was produced in 1980 by Michael Maclear. The series aired in Canada on CBC Television, in the United States and in the United Kingdom on Channel 4.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24809784",
"title": "1965 in the United States",
"section": "Section::::Events.:August.\n",
"start_paragraph_id": 78,
"start_character": 0,
"end_paragraph_id": 78,
"end_character": 350,
"text": "BULLET::::- August 18 – Vietnam War – Operation Starlite: 5,500 United States Marines destroy a Viet Cong stronghold on the Van Tuong peninsula in Quang Ngai Province, in the first major American ground battle of the war. The Marines were tipped-off by a Viet Cong deserter who said that there was an attack planned against the U.S. base at Chu Lai.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34750",
"title": "1965",
"section": "Section::::Events.:August.\n",
"start_paragraph_id": 147,
"start_character": 0,
"end_paragraph_id": 147,
"end_character": 350,
"text": "BULLET::::- August 18 – Vietnam War – Operation Starlite: 5,500 United States Marines destroy a Viet Cong stronghold on the Van Tuong peninsula in Quảng Ngãi Province, in the first major American ground battle of the war. The Marines were tipped-off by a Viet Cong deserter who said that there was an attack planned against the U.S. base at Chu Lai.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "46725249",
"title": "November 1967",
"section": "Section::::November 23, 1967 (Thursday).\n",
"start_paragraph_id": 158,
"start_character": 0,
"end_paragraph_id": 158,
"end_character": 580,
"text": "BULLET::::- After a five-day fight, American troops captured Hill 875 overlooking Dak To, in a one-hour charge on Thanksgiving Day to end the Battle of Dak To, one of the deadliest engagements of the Vietnam War. In all, 361 Americans were killed, 15 missing in action, and 1,441 had been wounded. The South Vietnamese Army suffered 73 deaths. The North Vietnamese and Viet Cong lost more than 1,200 troops, with an indeterminate number of wounded, indicating, as one historian would note, that \"A loss rate of 4 to 1\" was \"clearly acceptable to the North Vietnamese leadership.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "42935108",
"title": "July 1966",
"section": "Section::::July 6, 1966 (Wednesday).\n",
"start_paragraph_id": 39,
"start_character": 0,
"end_paragraph_id": 39,
"end_character": 656,
"text": "BULLET::::- Vietnam War (\"Operation Washington\"): Lieutenant Colonel Arthur J. Sullivan, battalion commander of 1st Recon Battalion, moved his battalion headquarters to Hau Doc, 25 km west of Chu Lai. In eight days his reconnaissance teams would cover 400 square kilometers of his area of operation, sighting 46 enemy forces scattered throughout the dense jungle terrain, roughly equating to 200 soldiers at most. The ground combat and supporting elements resulted in 13 of the enemy soldiers dead, with four prisoners. Because of the poor results, General Lewis J. Fields, the commanding general of the Chu Lai TOAR, ended the operation on July 14, 1966.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "42675525",
"title": "1966 in Vietnam",
"section": "Section::::Events.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 235,
"text": "BULLET::::- June 15 - Vietnam War: Battle of Hill 488 - A small United States Marine Corps reconnaissance platoon inflicts large casualties on regular North Vietnamese Army and Viet Cong fighters before withdrawing with fourteen dead.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3pin59
|
Did the Byzantines ever consider building a 'Great Wall' of their own to repel the Arabs and Turks?
|
[
{
"answer": "The situation for the Byzantines was very different.\n\nIn the cases of the Romans versus the Picts/Germans/other peoples, or the Chinese versus the various nomadic peoples to their north, you have a big, strong, rich, centralised empire that has to deal with endemic raiding from less organised but mobile and elusive enemies. These states have resources to throw at the problem (though not infinite ones) and have border control as their main priority.\n\nThe Byzantines, on the other hand, were a regional power sitting right next to, depending on the timeframe, far greater world-class empires. \n\nSure, the Arabs would raid Byzantine territory year after year in the 7th and 8th century. But it was a different kind of raid: the Byzantine problem wasn't finding the raiders and delaying them until overwhelming force could be brought to bear against them. They didn't have overwhelming force. The Caliphate was much bigger and stronger. The Byzantine priority was survival and minimising the damage they suffered.\n\nThey were never in a position to expend the massive resources it would take to create giant walls. At times their enemies were weaker and divided, but that usually prompted the Byzantines to try and re-take their lost territory rather than construct big static defences. \n\nThat said, in the Balkans they did construct the [Long Walls of Thrace](_URL_0_) in the 5th century, a 56 kilometer stretch of wall west of Constantinople, protecting its peninsula from the Black Sea to the Sea of Marmara. These defences don't seem to have been very effective.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "20431259",
"title": "Sack of Constantinople (1204)",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 303,
"text": "The Byzantine Empire was left much poorer, smaller, and ultimately less able to defend itself against the Turkish conquests that followed; the actions of the Crusaders thus directly accelerated the collapse of Christendom in the east, and in the long run facilitated the expansion of Islam into Europe.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "47624813",
"title": "Fortifications of Chania",
"section": "Section::::History.:Hellenistic and Byzantine walls.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 212,
"text": "Eventually, the Byzantines retook the city, and built a new fortress on the hill of Kastelli in the 10th century, to prevent a second Arab invasion. Some parts of the Byzantine wall still exist in Sifaka Street.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5634481",
"title": "Thagaste",
"section": "Section::::History.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 213,
"text": "The Byzantines fortified the city with walls. It fell to the Umayyad Caliphate toward the end of the seventh century. After centuries of neglect, French colonists rebuilt the city, which is now called Souk Ahras.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3592736",
"title": "Siege of Constantinople (717–718)",
"section": "Section::::Historical assessment and impact.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 779,
"text": "The second Arab siege of Constantinople was far more dangerous for Byzantium than the first as, unlike the loose blockade of 674–678, the Arabs launched a direct, well-planned attack on the Byzantine capital, and tried to cut off the city completely from land and sea. The siege represented a final effort by the Caliphate to \"cut off the head\" of the Byzantine Empire, after which the remaining provinces, especially in Asia Minor, would be easy to capture. The reasons for the Arab failure were chiefly logistical, as they were operating too far from their Syrian bases, but the superiority of the Byzantine navy through the use of Greek fire, the strength of Constantinople's fortifications, and the skill of Leo III in deception and negotiations also played important roles.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "867736",
"title": "Anatolian beyliks",
"section": "Section::::History.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 387,
"text": "As the Byzantine empire weakened, their cities in Asia Minor could resist the assaults of the beyliks less and less, and many Turks gradually settled in the western parts of Anatolia. As a result, many more beyliks were founded in these newly conquered western regions who entered into power struggles with the Byzantines, the Genoese, the Knights Templar as well as between each other.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3592736",
"title": "Siege of Constantinople (717–718)",
"section": "Section::::Historical assessment and impact.\n",
"start_paragraph_id": 33,
"start_character": 0,
"end_paragraph_id": 33,
"end_character": 1520,
"text": "The outcome of the siege was of considerable macrohistorical importance. The Byzantine capital's survival preserved the Empire as a bulwark against Islamic expansion into Europe until the 15th century, when it fell to the Ottoman Turks. Along with the Battle of Tours in 732, the successful defence of Constantinople has been seen as instrumental in stopping Muslim expansion into Europe. Historian Ekkehard Eickhoff writes that \"had a victorious Caliph made Constantinople already at the beginning of the Middle Ages into the political capital of Islam, as happened at the end of the Middle Ages by the Ottomans—the consequences for Christian Europe [...] would have been incalculable\", as the Mediterranean would have become an Arab lake, and the Germanic successor states in Western Europe would have been cut off from the Mediterranean roots of their culture. Military historian Paul K. Davis summed up the siege's importance as follows: \"By turning back the Moslem invasion, Europe remained in Christian hands, and no serious Moslem threat to Europe existed until the fifteenth century. This victory, coincident with the Frankish victory at Tours (732), limited Islam's western expansion to the southern Mediterranean world.\" Thus the historian John B. Bury called 718 \"an ecumenical date\", while the Greek historian Spyridon Lambros likened the siege to the Battle of Marathon and Leo III to Miltiades. Consequently, military historians often include the siege in lists of the \"decisive battles\" of world history.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "15190852",
"title": "Byzantine army (Palaiologan era)",
"section": "Section::::Strategy and tactics.:Fortifications and siege warfare.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 1094,
"text": "The Byzantine army regained an increasingly offensive role against the crusaders in the mid to late 13th century but many fortifications regained by the Byzantines fell out of use; a lack of manpower and multiple pressing fronts relegated these castles to abandonment. Some of the castles captured in Greece were used to control the local hostile Greek, Albanian, Vlach or other tribal peoples that opposed Frankish rule and since the Byzantines were both Greek and Orthodox, the threat that the Crusaders had to contend with existed on a lesser scale for the Byzantines, giving them another reason not to repair them. Constantinople's fortifications remained formidable, but repairing them proved impossible after 1370 due to the destructive nature of an ongoing civil war. By the time the Byzantines emerged from it, they were forced to acknowledge the suzerainty of the Ottoman Sultan, who threatened military action if any repairs were made to the millennium-old Walls of Constantinople. Heavily outnumbered, the walls of the capital provided the defenders in 1453 with 6 weeks of defense.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
271aue
|
why does america mostly use traffic lights at intersections when europe uses a lot of traffic circles? what are the benefits to either?
|
[
{
"answer": "Roundabouts have a lot less points of intersection/wreck opportunities as well. And the results show they are far more safe",
"provenance": null
},
{
"answer": "_URL_0_\n\n\nThe Mythbusters answer this. Something to do with Roundabouts being able to have more cars in them at once. \n\n\n*Edit - added synopsis of video.",
"provenance": null
},
{
"answer": "A roundabout (or traffic circle) can usually handle a greater volume of traffic. It's relatively rare, at least outside of the rush hour, for traffic to come to a complete standstill, which is the case with traffic lights. However, roundabouts do force traffic to slow down: at traffic lights, drivers may be tempted to speed through a green light, but at a roundabout you have to slow down. So you have this twin benefit of traffic that slows down (which is safer) but flows more smoothly (which is quicker). Also, while at a standard intersection you have to watch for traffic coming from several different directions, at a roundabout you only need to worry about traffic coming from one direction (from the left in countries that drive on the right, and vice versa).\n\nBut there are downsides. First of all, if you have an intersection where lots of pedestrians are likely to be milling around, traffic lights can be programmed to allow pedestrians to cross safely. This isn't possible with roundabouts, where foot bridges or foot tunnels become necessary unless you want to put in crossings on every road just before the roundabout, completely negating most of the advantages. Also, roundabouts can actually become choked at times, and while traffic lights can be programmed to take account of this, roundabouts can't be regulated in this way. Roundabouts also need more space, especially where large trucks and other vehicles have to use them.",
"provenance": null
},
{
"answer": "In general, I like roundabouts, they are a great solution to many residential and medium roads. They allow the roads to intersect without causing too many hold ups. Traffic lights allow bigger roads to be feasible but are still a pain because once you get lots of traffic people have to wait no matter what the solution. \n\nRoundabouts:\n\nUpsides: Constant traffic flow, self-regulating, cheap, generally good on small-medium sized roads, reduces accidents/high speed accidents, multiple directions can be in the roundabout at any one time (depends on size)\nDownsides: Have to slow down considerably to negotiate, bad for slightly mismatched road ends (sometimes a 'kidney' roundabout is used which really slows down traffic), bad if one traffic flow direction is very dominant and stops traffic flow in the other direction, usually bad in large intersections due to visibility/multiple lanes, inhibit bus/truck flow\n\n\n\nTraffic lights:\n\nUpsides: Lets all directions have a go, if green allows continuous flow, allows multiple lanes with ease, can handle slightly offset roads and poor visibility, good for large roads, night cycles can allow good flow even at night if well programmed, creates construction work/jobs\n\nDownsides: Stops traffic completely, long waits, often causes traffic jams, only one direction/line at a time, expensive, increase in accidents/high speed accidents\n\n\nBonus! Ramps:\n\nUpsides: Free-flowing traffic on large roads/highways!\n\nDownsides: Expensive :(\n",
"provenance": null
},
{
"answer": "Roundabouts are safer but cause more accidents at a much slower speed.",
"provenance": null
},
{
"answer": "I'm not sure this is necessarily true for all of Europe, but here in Britain we don't have as many four way intersections as you do in the States. A lot of our intersections are meeting points of three, five, six or more roads, so roundabouts seem a more obvious choice. As for whether either is more efficient than the other, I'm not sure. A quite major junction near to where I live was recently replaced with traffic lights (it used to be a roundabout) and, ignoring all the commotion caused by the construction work, the new traffic light system seems incredibly less efficient than what used to exist.\n\nHowever, some of our roundabouts (particularly the larger ones with a higher rate of traffic) have traffic lights on them. Some of these are only turned on during rush hour, but some of them are permanent. For the most part (at least the ones I've used), these traffic lights actually seem to make the junction less efficient than before the traffic lights were installed. \n\nIt's fair to say though that here in Britain we're not massive fans of traffic lights over roundabouts - putting traffic lights on roundabouts/replacing them always seems to be met with disagreement from locals. I don't know what the case is in the rest of Europe, though.\n\nEDIT ~ [Here's](_URL_0_) a rather extreme demonstration of the lengths we'll go to avoid traffic lights.",
"provenance": null
},
{
"answer": "I think roundabouts are fucking horrible, but people seem to like them. IMO, roundabouts trust more heavily on the good judgement of drivers to be able to merge into and out of traffic, whereas light just dictate exactly what to do at any given moment. This makes you less susceptible to minor accidents but enables people who don't give a fuck to try and cheat the system and fly through intersections at the wrong time. When used correctly, lights provide a much easier and less stressful driving environment by removing almost all judgement decisions. It also allows coordination between multiple intersections on a larger systems level, and helps when there's poor visibility and awkward/atypical intersection geometries. ",
"provenance": null
},
{
"answer": "Traffic lights allow for protected pedestrian crossings. They also take up less space. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "163395",
"title": "Traffic light",
"section": "Section::::Technology.:Mounting.\n",
"start_paragraph_id": 175,
"start_character": 0,
"end_paragraph_id": 175,
"end_character": 282,
"text": "In other countries like Australia, New Zealand, Lebanon and the United Kingdom, traffic lights are mounted at the stop line before the intersection and also after the intersection. Some busy intersections have an overhead traffic light for heavy vehicles and vehicles further away.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "163395",
"title": "Traffic light",
"section": "Section::::Technology.:Mounting.\n",
"start_paragraph_id": 172,
"start_character": 0,
"end_paragraph_id": 172,
"end_character": 303,
"text": "In North America, there is often a pole-mounted signal on the same side of the intersection, but additional pole-mounted and overhead signals are usually mounted on the far side of the intersection for better visibility. Most traffic lights are mounted that way in the Western United States and Canada.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "163395",
"title": "Traffic light",
"section": "Section::::Technology.:Mounting.\n",
"start_paragraph_id": 171,
"start_character": 0,
"end_paragraph_id": 171,
"end_character": 346,
"text": "In Spain, the mounted traffic lights on the far side of the intersection is meant for the traffic that exits the intersection in that particular direction. This is often done due to the pedestrian crossings, so that traffic has to wait if they get a red light. These intersections also come with a stop line in the exit area of the intersection.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "163395",
"title": "Traffic light",
"section": "Section::::Implementation.\n",
"start_paragraph_id": 177,
"start_character": 0,
"end_paragraph_id": 177,
"end_character": 788,
"text": "According to transportation engineers, traffic lights can have both positive and negative effects on traffic safety and traffic flow. The separation of conflicting streams of traffic in time can reduce the chances of right-angle collisions. But also the frequency of rear-end crashes can be increased by the installation of traffic lights, and they can adversely affect the safety of bicycle and pedestrian traffic. They can increase the traffic capacity at intersections, but can also result in excessive traffic delay. Hans Monderman, the innovative Dutch traffic engineer, and pioneer of shared space schemes, was sceptical of their role, and is quoted as having said of them: \"We only want traffic lights where they are useful and I haven't found anywhere where they are useful yet.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "59257",
"title": "Roundabout",
"section": "Section::::Modern roundabout.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 319,
"text": "Because of the requirement for low speeds, roundabouts usually are not used on controlled-access highways, but may be used on lower grades of highway such as limited-access roads. When such roads are redesigned to take advantage of roundabouts, traffic speeds must be reduced via tricks such as curving the approaches.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24515301",
"title": "Traffic Light Tree",
"section": "",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 369,
"text": "Although some motorists were initially confused by the traffic lights, mistaking them for real signals, the sculpture soon became a favourite among both tourists and locals. In 2005, Saga Motor Insurance commissioned a survey asking British motorists about the best and worst roundabouts in the country. The one containing \"Traffic Light Tree\" was the clear favourite.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "163395",
"title": "Traffic light",
"section": "Section::::Technology.:Mounting.\n",
"start_paragraph_id": 170,
"start_character": 0,
"end_paragraph_id": 170,
"end_character": 826,
"text": "Traffic signals in most areas of Europe are located at the stop line on same side of the intersection as the approaching traffic (there being both right- and left-hand traffic) and are often mounted overhead as well as on side of the road. At particularly busy junctions for freight, higher lights may be mounted specifically for trucks. The stop line alignment is done to prevent vehicles blocking any crosswalk and allow for better pedestrian traffic flow. There may also be a special area a few meters in advance of the stop line where cyclists may legally wait but not motor vehicles; this advanced stop line is often painted with a different road surface with greater friction and a high colour, both for the benefit of cyclists and for other vehicles. The traffic lights are mounted so that cyclists can still see them.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1c079f
|
Are (biological) children of gay people more likely to be gay?
|
[
{
"answer": " > Is this an inheritable trait?\n\nis a different question than\n\n > Are (biological) children of gay people more likely to be gay?\n\nThe evidence is overwhelming that there are genetic factors at play in homosexuality, if your identical twin is gay, there's something like a 52% chance that you will be gay[1], compared with something like 2-5% of people in the general population. Sexuality is an incredibly complex trait - in other words, it probably involves many different genes and interaction with the environment - but it is almost certainly heritable in some degree.\n\nHowever, this does not mean that the biological children of gay people are more (or at least significantly more) likely to be gay. To take a simple example, there's some evidence that there's a gene or genes on the X-chromosome that may make women with the gene more fertile, but also make men with the gene more likely to be gay. If this was the sole determinant (it's not, but pretend it is), then a gay man would be no more likely to have a gay son than a straight man, since fathers only contribute their Y chromosome. In this case, it would increase the chances that a gay man's male grandchildren (by his daughters only) might be more likely to be gay. \n\nThe genetics of inheritance for complex traits get super complicated super fast, and if a trait is determined by 10 different genes in different combinations in addition to environmental factors, and where the population is extenisvely outbred (in other words, you don't have cousins marrying etc) it's possible that there would be little if any change in the measured probability.\n\nAll of this is a long way of saying that I've never seen a study that suggested that the offspring of gay people are more likely to be gay. It's possible that there just aren't enough known examples, and once more research is done, it will show that gay parents *are* more likely to have gay children, though it might be like a 7% chance instead of a 5% chance. \n\n[1] _URL_0_",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "5488304",
"title": "Homosexuality",
"section": "Section::::Parenting.\n",
"start_paragraph_id": 115,
"start_character": 0,
"end_paragraph_id": 115,
"end_character": 768,
"text": "A 2001 review suggested that the children with lesbian or gay parents appear less traditionally gender-typed and are more likely to be open to homoerotic relationships, partly due to genetic (80% of the children being raised by same-sex couples in the US are not adopted and most are the result of heterosexual marriages.) and family socialization processes (children grow up in relatively more tolerant school, neighborhood, and social contexts, which are less heterosexist), even though majority of children raised by same-sex couples identify as heterosexual. A 2005 review by Charlotte J. Patterson for the American Psychological Association found that the available data did not suggest higher rates of homosexuality among the children of lesbian or gay parents.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7466395",
"title": "LGBT parenting",
"section": "Section::::Research.:Sexual orientation and gender role.\n",
"start_paragraph_id": 52,
"start_character": 0,
"end_paragraph_id": 52,
"end_character": 1250,
"text": "A number of studies have examined whether the children of lesbian and gay parents are themselves more likely to identify as lesbian and gay. In a 2001 review of 21 studies, Judith Stacey and Timothy Biblarz found that researchers frequently downplay findings indicating difference regarding children's gender, sexual preferences and behavior, suggesting that an environment of heterosexism has hampered scientific inquiry in the area. Their findings indicate that the children with lesbian or gay parents appear less traditionally gender-typed and are more likely to be open to homoerotic relationships, which may be partly due to genetic or family socialization processes or \"contextual effects,\" even though children raised by same-sex couples are not more likely to self-identify as bisexual, lesbian, or gay and most of them identify as heterosexual. According to US Census, 80% of the children being raised by same-sex couples in US are their biological children. When it comes to family socialization processes and \"contextual effects,\" Stacey and Biblarz point out that children with such parents are disproportionately more likely to grow up in relatively more tolerant school, neighborhood, and social contexts, which are less heterosexist.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "36523292",
"title": "LGBT adoption in the United States",
"section": "Section::::Professional assessments.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 542,
"text": "A common fear of many persons who oppose the rearing of children by a homosexual couple will result in the child becoming homosexual themselves. However, this is not the case as when comparing children from heterosexual parents to those raised with same-sex parents there is no increase in the number of children who identify as homosexual. However, there are differences seen as children from same-sex relationships tend to not conform to standard gender roles. Which can be another argument brought about by opponents of same-sex adoption.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19049979",
"title": "Environment and sexual orientation",
"section": "Section::::Family influences.:General.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 450,
"text": "Researchers have provided evidence that gay men report having had less loving and more rejecting fathers, and closer relationships with their mothers, than non-gay men. Some researchers think this may indicate that childhood family experiences are important determinants to homosexuality, or that parents behave this way in response to gender-variant traits in a child. Michael Ruse suggests that both possibilities might be true in different cases.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9919443",
"title": "Dating",
"section": "Section::::As a social relationship.:Gender differences.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 351,
"text": "In studies comparing children with heterosexual families and children with homosexual families, there have been no major differences noted; though some claims suggest that kids with homosexual parents end up more well adjusted than their peers with heterosexual parents, purportedly due to the lack of marginalizing gender roles in same-sex families.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3062837",
"title": "Sexual ethics",
"section": "Section::::Gender identity and sexuality.:Homosexuality.\n",
"start_paragraph_id": 62,
"start_character": 0,
"end_paragraph_id": 62,
"end_character": 366,
"text": "Although there has been a lot of debate regarding homosexuality, there is evidence that supports the notion that individuals are born with their sexual orientation. There was a study in 1991 that showed the hypothalamus of a gay man differed with that of a straight man. Also there is however a debate that environmental aspects impact the sexuality of individuals.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5488304",
"title": "Homosexuality",
"section": "Section::::Parenting.\n",
"start_paragraph_id": 114,
"start_character": 0,
"end_paragraph_id": 114,
"end_character": 332,
"text": "Scientific research has been generally consistent in showing that lesbian and gay parents are as fit and capable as heterosexual parents, and their children are as psychologically healthy and well-adjusted as children reared by heterosexual parents. According to scientific literature reviews, there is no evidence to the contrary.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1kc6yj
|
Can it be that life is originating constantly since it first originated 3.5 billion years ago?
|
[
{
"answer": "There is no real way of knowing, but until we find another form of life that does not use DNA/RNA, we have no reason to believe that abiogenesis (the process of life originating) is still occurring.\n\nI would also add that it is very very very unlikely that any life from recent abiogenesis would ever be found, if it ever did occur the resulting organisms would be incredibly simple and would likely die out in competition for resources.",
"provenance": null
},
{
"answer": "One reason abiogenesis was able to occur was that there were no other organisms around to absorb nutrients. While it is theoretically possible that it could be occurring right now, there aren't many places on earth where some form of microbial life isn't already present to eat the potential new life. ",
"provenance": null
},
{
"answer": "Most models of abiogenesis use the Oparin-Haldane hypothesis which suggests atmosphere that is chemically reducing, with O2 rare or absent. This is due to the fact that atmospheric oxygen prevents the synthesis of basic organic compounds. Some models also suggest other places of reducing environments such as outer space or deep-sea thermal vents. \n\nSo atleast synthesis of organic compounds is highly unlikely, unless in some reducing environments. And this is just in terms of organic compound synthesis that has been demonstrated experimentally. Other elements of abiogenesis, such as polymerization, self-replication and formation of cellular membrane are less understood. [Wiki](_URL_0_)\n",
"provenance": null
},
{
"answer": "I believe if a new lifeform did originate it would probably be consumed by the already abundant lifeforms like bacteria almost immediately, so there would be virtually no opportunity to detect it unless we create it ourselves in a lab.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "23678",
"title": "Panspermia",
"section": "Section::::Extraterrestrial life.\n",
"start_paragraph_id": 54,
"start_character": 0,
"end_paragraph_id": 54,
"end_character": 807,
"text": "The chemistry of life may have begun shortly after the Big Bang, 13.8 billion years ago, during a habitable epoch when the Universe was only 10–17 million years old. According to the panspermia hypothesis, microscopic life—distributed by meteoroids, asteroids and other small Solar System bodies—may exist throughout the universe. Nonetheless, Earth is the only place in the universe known by humans to harbor life. The sheer number of planets in the Milky Way galaxy, however, may make it probable that life has arisen somewhere else in the galaxy and the universe. It is generally agreed that the conditions required for the evolution of intelligent life as we know it are probably exceedingly rare in the universe, while simultaneously noting that simple single-celled microorganisms may be more likely.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21830",
"title": "Nature",
"section": "Section::::Life.:Evolution.\n",
"start_paragraph_id": 53,
"start_character": 0,
"end_paragraph_id": 53,
"end_character": 465,
"text": "The origin of life on Earth is not well understood, but it is known to have occurred at least 3.5 billion years ago, during the hadean or archean eons on a primordial Earth that had a substantially different environment than is found at present. These life forms possessed the basic traits of self-replication and inheritable traits. Once life had appeared, the process of evolution by natural selection resulted in the development of ever-more diverse life forms.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "53365898",
"title": "Earliest known life forms",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 640,
"text": "The earliest known life forms on Earth are putative fossilized microorganisms found in hydrothermal vent precipitates. The earliest time that life forms first appeared on Earth is unknown. They could have lived earlier than 3.77 billion years ago, possibly as early as 4.28 billion years ago, or nearly 4.5 billion years ago according to some; in any regards, not long after the oceans formed 4.41 billion years ago, and not long after the formation of the Earth 4.54 billion years ago. The earliest \"direct\" evidence of life on Earth are microfossils of microorganisms permineralized in 3.465-billion-year-old Australian Apex chert rocks.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19179706",
"title": "Abiogenesis",
"section": "Section::::Early geophysical conditions on Earth.:Earliest biological evidence for life.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 682,
"text": "The earliest life on Earth existed more than 3.5 billion years ago, during the Eoarchean Era when sufficient crust had solidified following the molten Hadean Eon. The earliest physical evidence so far found consists of microfossils in the Nuvvuagittuq Greenstone Belt of Northern Quebec, in \"banded iron formation\" rocks at least 3.77 billion and possibly 4.28 billion years old. This finding suggested that there was almost instant development of life after oceans were formed. The structure of the microbes was noted to be similar to bacteria found near hydrothermal vents in the modern era, and provided support for the hypothesis that abiogenesis began near hydrothermal vents.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9228",
"title": "Earth",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 971,
"text": "Within the first billion years of Earth's history, life appeared in the oceans and began to affect the Earth's atmosphere and surface, leading to the proliferation of anaerobic and, later, aerobic organisms. Some geological evidence indicates that life may have arisen as early as 4.1 billion years ago. Since then, the combination of Earth's distance from the Sun, physical properties and geological history have allowed life to evolve and thrive. In the history of life on Earth, biodiversity has gone through long periods of expansion, occasionally punctuated by mass extinction events. Over 99% of all species that ever lived on Earth are extinct. Estimates of the number of species on Earth today vary widely; most species have not been described. Over 7.6 billion humans live on Earth and depend on its biosphere and natural resources for their survival. Humans have developed diverse societies and cultures; politically, the world has around 200 sovereign states.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5551560",
"title": "Biotic material",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 688,
"text": "The earliest life on Earth arose at least 3.5 billion years ago. Earlier physical evidences of life include graphite, a biogenic substance, in 3.7 billion-year-old metasedimentary rocks discovered in southwestern Greenland, as well as, \"remains of biotic life\" found in 4.1 billion-year-old rocks in Western Australia. Earth's biodiversity has expanded continually except when interrupted by mass extinctions. Although scholars estimate that over 99 percent of all species of life (over five billion) that ever lived on Earth are extinct, there are still an estimated 10–14 million extant species, of which about 1.2 million have been documented and over 86% have not yet been described.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "827792",
"title": "Rare Earth hypothesis",
"section": "Section::::Requirements for complex life.:The right time in evolution.\n",
"start_paragraph_id": 54,
"start_character": 0,
"end_paragraph_id": 54,
"end_character": 974,
"text": "While life on Earth is regarded to have spawned relatively early in the planet's history, the evolution from multicellular to intelligent organisms took around 800 million years. Civilizations on Earth have existed for about 12,000 years and radio communication reaching space has existed for less than 100 years. Relative to the age of the Solar System (~4.57 Ga) this is a short time, in which extreme climatic variations, super volcanoes, and large meteorite impacts were absent. These events would severely harm intelligent life, as well as life in general. For example, the Permian-Triassic mass extinction, caused by widespread and continuous volcanic eruptions in an area the size of Western Europe, led to the extinction of 95% of known species around 251.2 Ma ago. About 65 million years ago, the Chicxulub impact at the Cretaceous–Paleogene boundary (~65.5 Ma) on the Yucatán peninsula in Mexico led to a mass extinction of the most advanced species at that time.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
5pb7n9
|
why hasn't usa adopted nordic countries education and health care system?
|
[
{
"answer": "Because such a system requires a Big Government. And it is an accepted wisdom in US politics that a Big Government is a horrendously inefficient waste of resources that is only good for maintaining tyranny. Americans prefer their government lean, and their taxes low; they expect the market to sort everything out. Under this dogma, Finnish education cannot possibly be good because there is no private competition to make it stick to a high standard and not just waste your money.\n\nSome Americans think all taxes are illegal, for chrissakes!",
"provenance": null
},
{
"answer": "cause it ain't 'murrican!\n\nBut seriously, our House of Rep's and Senate have totally failed the USA. They are only concerned with increasing the wealth of the already wealthy and maintaining their power.",
"provenance": null
},
{
"answer": "*Sigh*. Some of us have, it just doesn't go anywhere for a variety of reasons.\n\nThe \"core\" reason is that when the European democracies rebuilt after WWII, they developed universal healthcare systems to ensure everyone had care. This was largely an expansion of the systems Germany used to strikebreak or otherwise help industrial workers even before WWI, and it spread relatively easily in the wake of WWII - it was also seen as a comfortable political compromise with socialism, which had arrived rather dramatically on the political scene with the influence of an emboldened and expanded USSR. Giving people free healthcare and high taxes but keeping most of a capitalist economy while rebuilding from a grueling war was where people would up.\n\nIn the US, things were a little different. We have a very long history of wealthy people and their corporations having a lot of say in how things are run through several mechanisms. During WWII, it was really hard to get anyone to work for a company - most young men were at war, and there were only so many women who could or wanted to work in the factories or other businesses. This caused some problems, and part of that solution was the government stating outright that nobody could be paid more than a certain amount - but healthcare insurance policies were not considered \"pay\" by this rule. So companies started competing about health insurance instead of just wages. \n\nThis eventually led to a system where the government, insurance companies, and the medical industry all have roughly-equal say in how things are run - and the medical industry and insurers both have a lot of say in how the government is run, since they're fairly wealthy and we still defer to the wealthy in a lot of things. \n\nIt's not necessarily smart, but it's where we wound up.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "28244758",
"title": "Welfare in Finland",
"section": "Section::::History.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 940,
"text": "According to Finnish sociologist Erik Allardt, the hallmark of the Nordic welfare system was its comprehensiveness. Unlike the welfare systems of the United States or most West European countries, those of the Nordic countries cover the entire population, and they are not limited to those groups unable to care for themselves. Examples of this universality of coverage are national flat-rate pensions available to all once they reached a certain age, regardless of what they had paid into the plan, and national health plans based on medical needs rather than on financial means. In addition, the citizens of the Nordic countries have a legal right to the benefits provided by their welfare systems, the provisions of which were designed to meet what was perceived as a collective responsibility to ensure everyone a decent standard of living. The Nordic system also is distinguished by the many aspects of people's lives it touched upon.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1043143",
"title": "Single-payer healthcare",
"section": "Section::::Regions with 'Beveridge Model' systems.:Nordic countries.\n",
"start_paragraph_id": 33,
"start_character": 0,
"end_paragraph_id": 33,
"end_character": 311,
"text": "The Nordic countries are sometimes considered to have single-payer health care services, as opposed to single-payer national health care insurance like Taiwan or Canada. This is a form of the 'Beveridge Model' of health care systems that features public health providers in addition to public health insurance.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1043143",
"title": "Single-payer healthcare",
"section": "Section::::Regions with 'Beveridge Model' systems.:Nordic countries.\n",
"start_paragraph_id": 34,
"start_character": 0,
"end_paragraph_id": 34,
"end_character": 472,
"text": "The term 'Scandinavian model' or 'Nordic model' of health care systems has a few common features: largely public providers, limited private health coverage, and regionally-run, devolved systems with limited involvement from the central government. Due to this third characteristic, they can also be argued to be single-payer only on a regional level, or to be multi-payer systems, as opposed to the nationally run health coverage found in Canada, Taiwan, and South Korea.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11595783",
"title": "Nordic model",
"section": "Section::::Reception.:Misconceptions.\n",
"start_paragraph_id": 51,
"start_character": 0,
"end_paragraph_id": 51,
"end_character": 315,
"text": "Americans imagine that \"welfare state\" means the U.S. welfare system on steroids. Actually, the Nordics scrapped their American-style welfare system at least 60 years ago, and substituted universal services, which means everyone—rich and poor—gets free higher education, free medical services, free eldercare, etc.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "25391",
"title": "Russia",
"section": "Section::::Demographics.:Health.\n",
"start_paragraph_id": 208,
"start_character": 0,
"end_paragraph_id": 208,
"end_character": 649,
"text": "The Russian Constitution guarantees free, universal health care for all its citizens. In practice, however, free health care is partially restricted because of mandatory registration. While Russia has more physicians, hospitals, and health care workers than almost any other country in the world on a per capita basis, since the dissolution of the Soviet Union the health of the Russian population has declined considerably as a result of social, economic, and lifestyle changes; the trend has been reversed only in the recent years, with average life expectancy having increased 5.2 years for males and 3.1 years for females between 2006 and 2014.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23711165",
"title": "Nordic countries",
"section": "Section::::Politics.:Nordic model.\n",
"start_paragraph_id": 50,
"start_character": 0,
"end_paragraph_id": 50,
"end_character": 313,
"text": "The Nordic model is distinguished from other types of welfare states by its emphasis on maximizing labour force participation, promoting gender equality, egalitarian and extensive benefit levels, the large magnitude of income redistribution and liberal use of expansionary fiscal policy. Trade unions are strong.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13250",
"title": "Health care reform",
"section": "Section::::Russia.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 1145,
"text": "Following the collapse of the Soviet Union, Russia embarked on a series of reforms intending to deliver better healthcare by compulsory medical insurance with privately owned providers in addition to the state run institutions. According to the OECD none of 1991-93 reforms worked out as planned and the reforms had in many respects made the system worse. Russia has more physicians, hospitals, and healthcare workers than almost any other country in the world on a per capita basis, but since the collapse of the Soviet Union, the health of the Russian population has declined considerably as a result of social, economic, and lifestyle changes. However, after Putin became president in 2000 there was significant growth in spending for public healthcare and in 2006 it exceed the pre-1991 level in real terms. Also life expectancy increased from 1991-93 levels, infant mortality rate dropped from 18.1 in 1995 to 8.4 in 2008. Russian Prime Minister Vladimir Putin announced a large-scale health care reform in 2011 and pledged to allocate more than 300 billion rubles ($10 billion) in the next few years to improve health care in the country.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
25yl5u
|
how do grizzly bears (or any other animal) know that we are not a threat to them?
|
[
{
"answer": " > How do they know that we are weaker than them?\n\nWe are smaller, no claws in sight.\n\n > They don't get taught by their parents to hunt human, so why they are not afraid to attack us?\n\nSee answer #1\n\n > How do they know that we don't have some poison or something.\n\nanimals have very limited reasoning abilities. We don't display nature's poison colors so we are not poisonous. \n\n > Is it our body language that shows that we're scared?\n\nYes\n\n > What if we acted super confident and crazy, would they run from us then?\n\nMaybe depends on the bear. Imagine they are like dumb drunk people. Some are rational, some are not. Some are angry, some are cowards, etc.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "35783123",
"title": "Grizzly bear",
"section": "Section::::Interaction with humans.:Conflicts with humans.\n",
"start_paragraph_id": 62,
"start_character": 0,
"end_paragraph_id": 62,
"end_character": 405,
"text": "Grizzly bears normally avoid contact with people. In spite of their obvious physical advantage they rarely actively hunt humans. Most grizzly bear attacks result from a bear that has been surprised at very close range, especially if it has a supply of food to protect, or female grizzlies protecting their offspring. A bear killing a human in a national park may be killed to prevent its attacking again.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11250523",
"title": "Bear danger",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 457,
"text": "Although most bears are alpha predators in their own habitat, most do not, under normal circumstances, hunt and feed on humans. Most bear attacks occur when the animal is defending itself against anything it perceives as a threat to itself or its territory. For instance, bear sows can become extremely aggressive if they feel their cubs are threatened. Any solitary bear is also likely to become agitated if surprised or cornered, especially while eating.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "35783123",
"title": "Grizzly bear",
"section": "Section::::Interaction with humans.:Conflicts with humans.\n",
"start_paragraph_id": 68,
"start_character": 0,
"end_paragraph_id": 68,
"end_character": 215,
"text": "Grizzly bears are especially dangerous because of the force of their bite, which has been measured at over 8 megapascals (1160 psi). It has been estimated that a bite from a grizzly could even crush a bowling ball.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "35783123",
"title": "Grizzly bear",
"section": "Section::::Ecology.:Interspecific competition.\n",
"start_paragraph_id": 51,
"start_character": 0,
"end_paragraph_id": 51,
"end_character": 1073,
"text": "The relationship between grizzly bears and other predators is mostly one-sided; grizzly bears will approach feeding predators to steal their kill. In general, the other species will leave the carcasses for the bear to avoid competition or predation. Any parts of the carcass left uneaten are scavenged by smaller animals. Cougars generally give the bears a wide berth. Grizzlies have less competition with cougars than with other predators, such as coyotes, wolves, and other bears. When a grizzly descends on a cougar feeding on its kill, the cougar usually gives way to the bear. When a cougar does stand its ground, it will use its superior agility and its claws to harass the bear, yet stay out of its reach until one of them gives up. Grizzly bears occasionally kill cougars in disputes over kills. There have been several accounts, primarily from the late 19th and early 20th centuries, of cougars and grizzly bears killing each other in fights to the death. The other big cat that is present in the United States, which may pose as a threat to bears, is the jaguar.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4400",
"title": "Bear",
"section": "Section::::Relationship with humans.:Attacks.\n",
"start_paragraph_id": 242,
"start_character": 0,
"end_paragraph_id": 242,
"end_character": 320,
"text": "Several bear species are dangerous to humans, especially in areas where they have become used to people; elsewhere, they generally avoid humans. Injuries caused by bears are rare, but are widely reported. Bears may attack humans in response to being startled, in defense of young or food, or even for predatory reasons.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20632326",
"title": "Bear attack",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 349,
"text": "A bear attack is an attack by any mammal of the family Ursidae, on another animal, although it usually refers to bears attacking humans or domestic pets. Bear attacks are of particular concern for those who are in bear habitats. They can be fatal and often hikers, hunters, fishers, and others in bear country take precautions against bear attacks.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4590230",
"title": "Southern reedbuck",
"section": "Section::::Ecology.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 507,
"text": "Their main predators include lions, leopards, cheetahs, spotted hyenas, Cape hunting dogs, pythons, and crocodiles. They can camouflage themselves in the grasslands due to their coats, which are almost the same color. If startled or attacked, they stand still, then either hide or flee with an odd rocking-horse movement, and cautiously look back to ensure the danger is gone, generally. They use vocalizations like a shrill whistle through their nostrils and a clicking noise to alert others about danger.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
4to2bt
|
peer reviewed journals
|
[
{
"answer": "Yes, anyone can submit. But be aware that a paper written by Prof's Snuffsky and Huffsky from Oxbridge University has a much higher chance of getting through than a paper from Mr. Smith, Main Street 10025.\n \nYou need 'enough' evidence. I'm getting the gist that you've come across something that's a bit left field and hoping to surprise the world. If you're a 'nobody' the burden of evidence is going to be higher on you, so it really has to be beyond reasonable doubt.\n \nSpeculation is strongly frowned upon. If you can verify it, why haven't you? If you only have speculation, expect rejection. A journal will require you to have put in some time and effort into verifying. Even mathematical proofs will be good. Just \"I wonder if the earth is actually flat, I mean, the horizon looks flat when I look at it\" will be rejected without a second glance. Put some effort into it.\n \nSubmitting will get you to a larger audience, provided it's the right quality paper. I don't know all the branches of science, so I can't tell you which journals are good and which are bad. Some journals seem to take anything, but they're rarely read.\n \nIf there is anything in there that's useful for industrial purposes, you shouldn't publish it, you should patent it. If you have published something, it has your name, but what are you expecting that to do for you?\n \nSome high quality journals accept 'letters', and if your theory isn't sufficiently developed to merit a full article, that may be the way to go.\n \nWhat else? For heaven's sake: Do your literature survey. Read what other people have written on the subject, and reference them so that people know you have done your homework. Citeseer is a good place to start.\n \nProper journals have a review process. They receive your submission, they then pass it on to 3 or so reviewers, usually people with a good track record themselves, who then grade it in quality. The grade can be Good, revolutionary, a solution to a problem that doesn't exist, etc. I think that varies between journals.",
"provenance": null
},
{
"answer": " > Can anyone submit?\n\nTechnically yes, although your qualifications to publish will be subject to scrutiny.\n\n > How much evidence do you need to support your hypothesis?\n\nDepends on the field and the type of paper. Some publications don't really posit much of a hypothesis at all, but rather report on experimental findings, offer suggestions as to why a given set of behavior is seen, and use it as a means to test *other* people's hypotheses.\n\n > Is speculation allowed?\n\nAgain, depends on the field. This is where your own qualifications come in handy, as well as the qualifications of those you cite. You need enough evidence to back your claims, but what constitutes \"enough\" is subject to what field you're publishing in. Use other papers in the same field as a litmus test.\n\n > What is the whole process?\n\nYou write a paper and submit it to a given journal, who then send it out to reviewers (often, you can choose who you'd like to review your paper, which typically will be people you cite in your work). The reviewers give their input, and pass it on to the editor, who decides between the following;\n\nAccept.\n\nRequest minor revisions.\n\nRequest major revisions.\n\nReject.\n\nOf the four, the 2nd and 4th are the most common, and are up to the editor's discretion (typically, a \"major revision\" request won't happen unless the subject matter is particularly good, or the name(s) attached to the paper are particularly noteworthy). Assuming revisions are necessary, you revise the paper as per the reviewers'/editors' critiques, and then send it back in for another round. This can technically go on forever, but typically after the first round of revisions your paper is either accepted or rejected. Once it's accepted, you fill out the remaining paperwork, and the paper is published in the next edition of the journal. If rejected, you try again somewhere else.\n\nRejections aren't necessarily because the paper is bad, but often because the paper doesn't fit the journal's subject area. Because many of the journals are affiliated with other journals, they sometimes make a recommendation, or will outright hand your manuscript to the editor of another journal on your behalf.\n\n > Is it better to submit or self publish?\n\nSelf-published material is generally seen as worthless in the sciences, unless you have already have a name for yourself.\n\n > Whats the benefit to submitting?\n\nYou get citations and readership, ultimately to get your name out there. If you have more journal papers, it's easier to convince Universities and funding institutions that you're worth investing in.\n\n > What copyright rights do you lose?\n\nDepends on the journal. Many reputable journals hold the copyright themselves, but leave intellectual property rights to you. Less reputable journals with less stringent peer review may ask you for a fee to publish, and you sign away all intellectual property rights based on your work.\n\n > Is a peer review journal the only place to submit if you have a hypothesis, or are their other platforms?\n\nPeer-reviewed journals are the only place that's worth anything if you want to be taken seriously.\n\n > And anything else worth knowing?\n\nBe careful with anything that starts with \"International Journal of ...\" These are typically scam journals (particularly in engineering fields), and are based out of India. There are exceptions of course (Int. Journal of Hydrogen Energy is one that I deal with occasionally), but generally IJXX journals are shifty. 
Open-source journals also can be highly suspect, as they are often patent trolls who get you to sign your IP away in return for \"expedited publishing\" (i.e. no peer review).\n\n > Can you recommend good journals to submit to for certain fields?\n\nLiterally depends on the field. The rule-of-thumb is to check the journal's Impact Factor, which is just a number representing the likelihood that your work will be read and cited if you publish in that journal. The higher the impact factor, the better, but what constitutes a \"high\" impact factor depends on the precise field. For example, in medicine, the *really* good impact factors are up in the 30-60 range, while for my field (engine research), I start breathing heavy when a journal with an impact factor of 4 or more starts talking to me.\n\nEdit: On names and author order; there are two schools of thought on this.\n\nThe first way, which seems to be more common in the US, is that the first author is the person responsible for writing the work (typically a graduate student), the middle names are people who helped out with the work (typically, the 2nd or 3rd name is the PhD student who runs the lab, and who was supervising when the experiments were taking place), and the last names are the professors who actually funded the work. Thus, it's best to either be the first author (as the person who was responsible for the work), or the last author (as the most important PI who has the biggest name in the field, or who contributed the most money to the research).\n\nThe second school, which seems particularly common in India, is that the author order should go in importance of the authors, so PI first, second PI second, and so on, down to the graduate students who likely did the bulk of the work, but don't have a name for themselves.\n\nOf the two, I prefer the American version of it, although it has led to me getting shitloads of resumes and job requests from graduate students and postdocs (typically from India, as they are used to the *other* system) wanting to work in \"my\" lab, thinking I'm some hot-shit professor, when in reality I'm just a PhD student ruffling through my friends' couches looking for quarters for the laundry machine. It's a fun world.",
"provenance": null
},
{
"answer": "\nAnyone can submit a paper to a journal, its just like how anyone can submit a manuscript for a book to a publisher. Its pretty much the same process you submit your work, an editor reads it and either turns it down out right or accepts it with suggestions for revisions. Then the author(s) and editor(s) go back and forth with edits until a final draft is accepted by all parties. At the same time if the work is copyright-able/patent-able the paperwork is being completed to restrict others from stealing the idea once published. If you don't copyright before its published you essentially lose your rights if someone copyrights it and makes money on it without your permission before you get it copyrighted. Journals are really the only way to disseminate such work into the world, although another way would be patents for new inventions. You can self publish but most likely nobody would ever see your work. And as for content for your work in the biological field(I'm in) the only speculation you should really have is your hypothesis, and then the rest of your paper is all the evidence you can muster to prove your hypothesis. A paper is 5 basic parts: the introduction to what you're talking about, your hypothesis, the methods you used to experiment(so others can repeat your experiment, the result/data, and finally the conclusion where you discuss why your result prove your hypothesis. Anything extra is just fluff and really shouldn't be in your paper. I'm not sure what you're planning on doing with this information because unless you're an engineer most people that I know aren't submitting their work to journals unless they have a doctorate and are experts in their field. And I really cant give you any suggestions for journals, but look for ones with high impact factors(IF), its how often their articles are cited by other people doing research. And the higher the number the more reputable and renown the journal is. ",
"provenance": null
},
{
"answer": "Anyone can submit but they're really meant for professional researchers (grad students, professors, or those who work in research organizations). If you're not one of these, your chances of being accepted are incredibly low.\n\nThe benefit of being published in a peer review article is that it gives your work some kind of prestige. It's essentially a sign that other highly educated people have vetted your work and agree that it is a significant contribution to the field. Especially if you're seeking a career in academia, being published in peer reviewed journals is a necessity for promotion and tenure (this is colloquially referred to as \"publish or perish\"). How important it is to be published in peer review journals varies if you work in research outside of the academy; in some fields it's still important, while in others it's not important at all.\n\nPeer reviewed journals are not the only way of getting your ideas out there: we also have what are referred to as \"gray literature\" which are reports and other publications produced in academic and research contexts outside of peer review. Gray literature is easier to produce because there are fewer boundaries, but is generally considered less impactful to the field (e.g. preliminary research findings, policy briefs, etc). Self-publishing is another option but is even less prestigious than gray literature, as all that means is that you felt like writing a paper.\n\nThe peer review process starts by submitting your paper to a journal. Generally you wouldn't just submit a hypothesis; you would've actually done the research and have some findings - no one really cares about an untested hypothesis. The editor will decide whether your paper is good enough to review or if it's an automatic rejection. If it goes on to further review, it will be sent to usually two, sometimes more, reviewers who will be given a set of criteria to score your paper. Depending on the journal's own procedures, they either will or will not be told who you are. They will then return their comments to the editor, who will summarize and send you the decision, which can either be (in order of least common to most common):\n\n* Accepted without revision\n* Accepted with minor revisions\n* Revise and resubmit\n* Rejected\n\nIf you receive a decision of accepted with minor revisions, it just means do what the reviewers said and they'll take it. If it's R & R, it means make the suggested revisions, send it back, and then the editor and reviewers will review it again to decide if the revisions were good enough. Most commonly, papers are rejected outright.\n\nEach journal will also have its own copyright guidelines. In some cases, the journal maintains full copyright of the work. In others (I think this is more common), the author maintains copyright of the work, but the journal may retain certain rights such as the exact typesetting of the published paper (i.e. you're not supposed to freely distribute the PDF of the paper as it's published, but you can send your own personal Word file to people).\n\nTo be completely honest with you, if you have to ask whether you should be submitting your work to a peer reviewed journal or not, the answer is probably that your work is not relevant enough. People who are submitting their work to peer reviewed journals know that they have the educational training and skill level to do so.",
"provenance": null
},
{
"answer": "Will tackle some of these...\n\n-\n\n > Can anyone submit?\n\nYes, but lack of credentials (using the term loosely here to mean that at least one of the authors should be a scientist/engineer at a university or at a company) will typically be considered a \"red flag\" and will make the reviewers scrutinize the submission more carefully. A small fraction of journals have a \"double-blind\" review process in which this would be a non-issue, because the reviewers would not know who submitted the manuscript. Much more common is a \"single-blind\" review process in which the reviewers are anonymous but the authors are not.\n\n-\n\n > How much evidence do you need to support your hypothesis?\n\nDepends in part on the hypothesis. \"Extraordinary claims require extraordinary proof\", [as the saying goes](_URL_0_). Unfortunately, a hypothesis that makes a relatively \"obvious\" claim (and therefore would not require much work to support) would most likely not be of much interest to most journals. The answer depends also on the field. Experimental research in biomedicine, astronomy, and many other areas often require years of work before one obtains publication-quality data. Other areas of research may allow for publishable data to be generated over the course of months or perhaps weeks (if one is lucky!). \n\nBy reading published journal papers in the areas that interest you, you can get a sense of what is expected.\n\n-\n\n > Is speculation allowed?\n\nSpeculation is generally frowned upon. However, if a paper has made several solid conclusions fully supported by the authors' results, reviewers and readers may indulge the author in a speculative comment or two (assuming these are based somewhat in actual observations).\n\n-\n\n > What is the whole process?\n\nWill have to save that one for later (or for someone else to tackle), so I will just say that almost every scientific journal has a website that explains in detail the process that must be followed to submit a manuscript.\n\n-\n\n > Is it better to submit or self publish? Whats the benefit to submitting? Is a peer review journal the only place to submit if you have a hypothesis, or are their other platforms?\n\nIf you want to be taken seriously by scientists (and scientifically literate non-scientists), then your work should be published in a peer-reviewed journal.\n\n-\n\n > What copyright rights do you lose?\n\nGenerally, the publisher of the journal takes ownership of the copyright, but you will retain some rights (e.g., the right to re-publish your work in a compendium, etc.). Nowadays, you often have the option to have your journal article published as an \"open access\" publication, which sometimes means that you retain the copyright. The drawback is that you are required to pay the publisher a hefty fee (several thousand dollars) to exercise this option.\n\n-\n\nI hope this answers some of your questions.\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "36581992",
"title": "Andrea Alù",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 253,
"text": "Some of the scientific journals for which he peer reviews are Nature, \"Science\", \"Physical Review\" journals (A,B,E, Letters), and includes journals produced by the American Chemical Society, the Optical Society of America, IEEE, IOP Science and others.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10049346",
"title": "Journal of the Royal Society of Medicine",
"section": "Section::::Content.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 271,
"text": "In 2006, the journal introduced open peer review, a system in which authors and reviewers know each other's identities on the assumption that this improves openness in scientific discourse. This made it one of the few medical journals in the world with open peer review.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11421155",
"title": "List of University of Chicago Press journals",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 541,
"text": "The Journals Division of the University of Chicago Press, in partnership with 27 learned and professional societies and associations, foundations, museums, and other not-for-profit organizations, currently publishes and distributes 68 peer-reviewed academic journal titles. These influential scholarly publications present original research in the social sciences, the humanities, education, and the biological, medical, and physical sciences. The following list includes the journals currently published by the University of Chicago Press.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18700697",
"title": "Open peer review",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 425,
"text": "Open peer review is a process in which names of peer reviewers of papers submitted to academic journals are disclosed to the authors of the papers in question. In some cases, as with the \"BMJ\" and BioMed Central, the process also involves posting the entire pre-publication history of the article online, including not only signed reviews of the article, but also its previous versions and author responses to the reviewers.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4817717",
"title": "Gatekeeper",
"section": "Section::::Academic peer review.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 661,
"text": "Peer review is a practice widely used by specialized journals that publish articles reporting new research, new discoveries, or new analyses in a specific academic field or area of focus. Journal editors ask one or more subject matter experts deemed to be \"peers\" of an article's author or authors to assess an article's suitability for publication in the journal. Notwithstanding the fact that the intent of peer review is to insure suitability and editorial quality, issues of preference or exclusion of articles are raised from time to time relating to the intellectual prejudices, career rivalries, or other biases of the journal editors or peer reviewers.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8541748",
"title": "List of social science journals",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 612,
"text": "The following is a partial list of social science journals, including history and area studies. There are thousands of academic journals covering the social sciences in publication, and many more have been published at various points in the past. The list given here is far from exhaustive, and contains the most influential, currently publishing journals in each field. As a rule of thumb, each field should be represented by at most ten positions, chosen by their impact factors and other ratings. There are many important academic magazines that are not true peer-reviewed journals. They are not listed here.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33618685",
"title": "Nordic Journal of Human Rights",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 264,
"text": "In the Norwegian Association of Higher Education Institutions’ ranking of scientific journals (Norwegian Scientific Index), the journal is ranked as a Level 2 journal (Level 2 comprises up to the 20% most prestigious journals in any discipline, in this case law).\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
267ke7
|
Any recommendations for a good espionage book?
|
[
{
"answer": "I'm a fan of the atomic spies myself. Some favorites that focus on individuals (which often makes for better stories than big, all-encompassing books on Soviet espionage, like _The Haunted Wood_): \n\n* _Bombshell: The Secret Story of America's Unknown Atomic Spy Conspiracy_. This focuses primarily on the spying of Ted Hall, a Harvard undergraduate who worked at Los Alamos. The guy is barely out of high school and he decides to spy on the atomic bomb for the USSR. Why'd he do it? How'd he do it? And why did he never go to jail, even though the FBI figured out he was a spy? The book gives interesting answers to these questions.\n\n* _The Catcher was a Spy_. Moe Berg was a Princeton-educated catcher for the Boston Red Sox. He was also a spy for the US during WWII. One of his jobs was to decide whether or not he should assassinate the famous German physicist Wernher Heisenberg who was thought to be working on an atomic bomb for the Nazis. \n\n* _The Invisible Harry Gold_. Gold was not a spy himself per se, but he was part of the Rosenberg/Greenglass/Fuchs network that got a lot of information out of Los Alamos, working as a courier. What makes him a great study is that he is not some kind of trained agent or even an ideological die-hard, but just a psychologically kind of messed up guy who falls in with \"the wrong crowd\" and aims to please. A much more nuanced story than you usually get with spy accounts, a great psychological portrait.\n",
"provenance": null
},
{
"answer": "I would recommend *Agent Zigzag* by Ben McIntyre. It's a biography of a gangster from the east end of London who ended up serving as a m & amp;s agent for the Germans... And then the British. McIntyre writes ridiculously well, and Eddie Chapman's life was like a thriller anyway, you'll finish it in a day or two, and wish it was longer.\n\nFavorite example of Chapman's ridiculousness: he was parachuted by his German spymasters into England (oxfordshire, I think). He landed in a field, and walked up to an old stone farmhouse with two spinster sisters inside. He knocked on the door, and said (paraphrasing): \"hello. I'm a German spy. Would you please call to police\". That was how he came to work for the British.\n\nChapman almost single-handedly save central London from the blitz. He reported back to the Germans that all their bombs were falling on hampstead and kilburm (north-west London), so the Germans dialed down their V2 range, obliterating south-east London (and many innocent people), but leaving the seat of power in central London *relatively* unscathed.",
"provenance": null
},
{
"answer": "Also if WWII is your thing, still in the historical fiction area I'd suggest Robert Harris' [Enigma](_URL_0_).\n\nFor non-fiction: \n\n[The Defense of the Realm](_URL_2_) is a fascinating, if incredibly long, history of MI5. Bear in mind it is an \"authorized\" history.\n\n[The Puzzle Palace](_URL_1_) comes highly recommended by people I know in the intelligence community, though I have not read it.\n\n",
"provenance": null
},
{
"answer": "maybe not as exciting as some other books listed here, but Steve Coll's *Ghost Wars* is an excellent (and Pulitzer prize winning) description of the CIA's efforts in assisting the Mujahideen in Afghanistan against the Taliban - and the subsequent intelligence operations and failures that led up to 9/11. Fascinating depiction of how intelligence organizations work to achieve goals in very difficult fields of operations having to negotiate with tenuous allies - like communist China",
"provenance": null
},
{
"answer": "\"Spycatcher\" by Peter Wright - Autobiography of a Senior Intelligence Officer of British MI5 1954-1965. It contains a great mix of personal struggles, technical details, overviews of (and some nitty gritty details of) specific surveillance and counter-surveillance operations, spy-turning, detailed accounts of double-agents, and the technology employed in espionage and counter-espionage, it doesn't touch particularly heavily on any one subject, but it's generally a very fascinating read. \n\nBut most of all it was known for it's controversy in exposing just how *astonishingly* (and I can't emphasize this enough, you need to read the book to understand) badly infiltrated by Soviet Counter-Intelligence MI5 were (and to lesser extents MI6 and GCHQ) back in the early days of the cold war. \n\nThe book was officially banned from sale in England by the government of the time before the books first attempted publication in 1985, at the time of it's first actual publication 1987, English newspapers were banned via gag order from even mentioning the book, and it was difficult to get hold of for at least a few years [untill 1988.](_URL_0_)\n",
"provenance": null
},
{
"answer": "Ken Follet writes some great spy stuff. The Eye of the Needle immediately comes to mind. Jackdaws is also very good.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "161884",
"title": "The Spy Who Came in from the Cold",
"section": "Section::::Cultural impact.\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 331,
"text": "\"Time\" magazine, while including \"The Spy Who Came in from the Cold\" in its top 100 novels list, stated that the novel was \"a sad, sympathetic portrait of a man who has lived by lies and subterfuge for so long, he's forgotten how to tell the truth.\" The book also headed the \"Publishers Weekly\"s list of 15 top spy novels in 2006.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34285624",
"title": "The Spy (Cussler novel)",
"section": "Section::::Reception.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 538,
"text": "\"The Spy\" reached the \"USA Today\" best-selling book list on June 10, 2010, and remained on the list for twelve weeks, at one point reaching number sixteen on the list. The Book Reporter website said in early 2011, \"The ship-shape writing duo heaps on more excitement and thrills than a Coney Island roller coaster ride.\" \"The Citizen\", a Key West, Florida, daily newspaper said of \"The Spy\", \"Clive Cussler and Justin Scott have succeeded in writing another page-turning historical thriller filled with suspense and great period detail.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "27646",
"title": "Spy fiction",
"section": "Section::::For children and adolescents.\n",
"start_paragraph_id": 81,
"start_character": 0,
"end_paragraph_id": 81,
"end_character": 276,
"text": "Leading examples include the \"Agent Cody Banks\" film, the Alex Rider adventure novels by Anthony Horowitz, and the CHERUB series, by Robert Muchamore. Ben Allsop, one of England's youngest novelists, also writes spy fiction. His titles include \"Sharp\" and \"The Perfect Kill\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3821456",
"title": "Gérard de Villiers",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 207,
"text": "De Villiers' books are well known in French-speaking countries for their in-depth insider knowledge of such subjects as espionage, geopolitics, and terrorist threats, as well as their hard-core sex scenes. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "161884",
"title": "The Spy Who Came in from the Cold",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 310,
"text": "\"The Spy Who Came in from the Cold\" portrays Western espionage methods as morally inconsistent with Western democracy and values. The novel received critical acclaim at the time of its publication and became an international best-seller; it was selected as one of the \"All-Time 100 Novels\" by \"Time\" magazine.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6851513",
"title": "Linda Melvern",
"section": "",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 712,
"text": "Other books include \"United Nations\", a book for children (Franklin Watts World Organisations Series, 2001); \"Techno-Bandits\" (co-authored; Boston Houghton Mifflin, 1983), an account of the campaign by the US Department of Defense to stop the illicit Soviet efforts to acquire American technology; and \"The End of the Street\", published in London, in 1986 (Methuen), exposing the secret planning by Rupert Murdoch to destroy the British print unions and move his newspapers to a modern printing plant at Wapping. \"The Ultimate Crime\" (Allison and Busby, 1995) was a secret history of the UN’s first 50 years and was the basis of a TV series for Channel Four, the three-part \"UN Blues\" broadcast in January 1995.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2837816",
"title": "The Terror Timeline",
"section": "Section::::Influence.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 223,
"text": "His work is also cited in the books \"Bad News\" by Tom Fenton and \"Fog Facts\" by Larry Beinhart. Richard Clarke has put it on his reading list for his course on \"Terrorism, Security, and Intelligence\" at Harvard University.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
5ojav0
|
why do certain parts of audio disappear when the headphone jack isn't all the way in?
|
[
{
"answer": "The headphone plug actually has multiple connectors on it--those are the colored ridges on the plug itself. Usually, there are three main connectors, but I suppose others can exist.\n\nWhen the plug isn't all the way in, some of those connectors don't line up, and others might line up with another input. This causes the audio to be messed up.\n\nThe audio is typically split into several channels; each channel contains some audio information. If the song is encoded such that the voices are on one channel and the bass is on another, moving the headphone plug out a little bit might cause the lyrics to cut out but not the bass, or the bass to cut out but not the lyrics, etc.",
"provenance": null
},
{
"answer": "Check out [this diagram](_URL_0_) of a 3.5mm stereo audio plug. The tip is connected to the left audio channel, and the ring is connected to the right audio channel; and the rest of the plug is connected to ground. If your plug isn't inserted all the way, then your left earpiece is connected to the right stereo channel, and the right earpiece is shorted to ground.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "7186698",
"title": "Gamate",
"section": "Section::::Hardware.:Sound.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 260,
"text": "The Gamate's mono internal speaker is of poor quality, giving off sound that is quite distorted, particularly at low volumes. However, if a user plugs into the headphone jack, the sound is revealed to be programmed in stereo, and of a relatively high quality.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "191884",
"title": "Headphones",
"section": "Section::::Ambient noise reduction.\n",
"start_paragraph_id": 55,
"start_character": 0,
"end_paragraph_id": 55,
"end_character": 798,
"text": "Active noise-cancelling headphones use a microphone, amplifier, and speaker to pick up, amplify, and play ambient noise in phase-reversed form; this to some extent cancels out unwanted noise from the environment without affecting the desired sound source, which is not picked up and reversed by the microphone. They require a power source, usually a battery, to drive their circuitry. Active noise cancelling headphones can attenuate ambient noise by 20 dB or more, but the active circuitry is mainly effective on constant sounds and at lower frequencies, rather than sharp sounds and voices. Some noise cancelling headphones are designed mainly to reduce low-frequency engine and travel noise in aircraft, trains, and automobiles, and are less effective in environments with other types of noise.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "102676",
"title": "Binaural recording",
"section": "Section::::Playback.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 389,
"text": "Once recorded, the binaural effect can be reproduced using headphones. It does not work with mono playback; nor does it work while using loudspeaker units, as the acoustics of this arrangement distort the channel separation via natural crosstalk (an approximation can be obtained if the listening environment is carefully designed by employing expensive crosstalk cancellation equipment.)\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "191884",
"title": "Headphones",
"section": "Section::::Types.:Open or closed back.\n",
"start_paragraph_id": 33,
"start_character": 0,
"end_paragraph_id": 33,
"end_character": 247,
"text": "Open-back headphones have the back of the earcups open. This leaks more sound out of the headphone and also lets more ambient sounds into the headphone, but gives a more natural or speaker-like sound, due to including sounds from the environment.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "102676",
"title": "Binaural recording",
"section": "Section::::Playback.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 748,
"text": "Any set of headphones that provides good right and left channel isolation is sufficient to hear the immersive effects of the recording. Several high-end head set manufacturers have created some units specifically for the playback of binaural. It is also found that even normal headphones suffer from poor externalization, especially if the headphone completely blocks the ear from outside. A better design for externalization found in experiments is the open-ear one, where the drivers are sitting in front of the pinnae with the ear canal connected to the air. The hypothesis is that when the ear canal is completely blocked, the radiation impedance seen from the eardrum to the outside has been altered, which negatively affects externalization.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "191884",
"title": "Headphones",
"section": "Section::::Types.:Ear-fitting headphones.:In-ear headphones.\n",
"start_paragraph_id": 41,
"start_character": 0,
"end_paragraph_id": 41,
"end_character": 440,
"text": "The outer shells of in-ear headphones are made up of a variety of materials, such as plastic, aluminum, ceramic and other metal alloys. Because in-ear headphones engage the ear canal, they can be prone to sliding out, and they block out much environmental noise. Lack of sound from the environment can be a problem when sound is a necessary cue for safety or other reasons, as when walking, driving, or riding near or in vehicular traffic.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1208732",
"title": "Dell XPS",
"section": "Section::::Laptops.:Gen 1.\n",
"start_paragraph_id": 272,
"start_character": 0,
"end_paragraph_id": 272,
"end_character": 342,
"text": "This model also suffers from a whine on the headphone and microphone jacks that are located on the left of the unit. This is because of shared space with the leftmost fan, and the spinning of said fan causes interference. There is no known fix than to otherwise use a USB, FireWire/1394 or PCMCIA-based audio device or card for sound output.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2dib9v
|
Have the Amish always been significantly different than other rural Midwestern farmers? At what point did technological and societal changes really set the Amish apart?
|
[
{
"answer": "From the time they first settled in the US, the Amish have been different. They spoke German and eschewed the clothing that was popular. They had specific rules about dress and behavior, like no buttons, which set them apart even before modern technology. They also have their own religion that is a branch of Protestantism.\nAs a result, they'd be going to their own church and hanging out in their own social circles.",
"provenance": null
},
{
"answer": " > Also, has there ever been any anti-Amish sentiment?\n\nBackground: I grew up Mennonite (similar to Amish), and I have Amish in my extended family.\n\nBefore I go trying to pass off family anecdotes as history, here is a source with more info and examples: _URL_0_\n\nNow, you asked whether there has ever been any anti-Amish sentiment, the answer is yes, particularly in WW1.\n\nTwo things to understand about Amish:\n\n1. Amish don't believe in going to war. Conscientious objectors, in other words.\n2. Amish to this day speak a dialect of German.\n\nSo in WW1, you have Amish and Mennonites who speak the enemy language, and on top of that they're refusing to join the military and go fight. It was not uncommon for them to be viewed as traitors because of it.\n\nThere are a number of stories in my family of people getting tarred and feathered, jailed, or even killed. Some of it is no doubt exaggerated with the passage of time, but the sentiment certainly existed for a time.\n\nFor WW2, I don't have any sources so I can't say anything definitively...that said, I've certainly been told by my older relatives that the sentiment was much reduced compared to WW1. Why, I have no idea",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "7424210",
"title": "Northkill Amish Settlement",
"section": "Section::::Settlement.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 423,
"text": "The first Amish began migrating to the United States in the 18th century, largely to avoid religious persecution and compulsory military service. The Northkill Creek watershed, in eastern Province of Pennsylvania, was opened for settlement in 1736 and that year Melchior Detweiler and Hans Seiber settled near Northkill. Shortly thereafter many Amish began to move to Northkill with large groups settling in 1742 and 1749.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "49195999",
"title": "Somerset Amish Settlement",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 430,
"text": "The Amish from Somerset County became the \"vanguard of Amish settlers in Midwest\", because \"out of and through it most Midwest Amish settlements were founded\". This movement either to Lancaster or Somerset resulted in a first major divide in the family tree of the Amish. The two groups differ not only in dialect (Midwestern vs. Pennsylvania forms of Pennsylvania German) but also in the selection of typical Amish family names.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19346935",
"title": "Amish",
"section": "Section::::History.:Migration to North America.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 533,
"text": "Amish began migrating to Pennsylvania, then known for its religious toleration, in the 18th century as part of a larger migration from the Palatinate and neighboring areas. This migration was a reaction to religious wars, poverty, and religious persecution in Europe. The first Amish immigrants went to Berks County, Pennsylvania, but later moved, motivated by land issues and by security concerns tied to the French and Indian War. Many eventually settled in Lancaster County. Other groups later settled elsewhere in North America.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7424210",
"title": "Northkill Amish Settlement",
"section": "Section::::Legacy.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 281,
"text": "Although it existed for only a brief period, the Northkill settlement was fundamental in establishing the Amish in North America. The Northkill settlers included the progenitors of many widespread Amish families, such as the Yoders, Burkeys, Troyers, Hostetlers, and Hershbergers.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7424210",
"title": "Northkill Amish Settlement",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 304,
"text": "The Northkill Amish Settlement was established in 1740 in Berks County, Pennsylvania. As the first identifiable Amish community in the new world, it was the foundation of Amish settlement in the Americas. By the 1780s it had become the largest Amish settlement, but declined as families moved elsewhere.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "49195999",
"title": "Somerset Amish Settlement",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 419,
"text": "Northkill Amish Settlement, founded around 1740, was the first Amish settlement in North America and remained the largest Amish settlement into the 1780s, but then declined as families moved on to areas of better farmland, mainly to Lancaster County, Pennsylvania and Somerset County, Pennsylvania in Pennsylvania, where they formed the Lancaster Amish Settlement around 1760 and the Somerset Amish Settlement in 1772.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "42418776",
"title": "Renno Amish",
"section": "Section::::History.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 318,
"text": "Amish settled in Mifflin County as early as 1791, coming from Lancaster County, Pennsylvania. In the 1840s there were three Amish congregations in the region. In 1849 one district diveded from the two others, forming the Byler Amish, the first subgroup in North America that divided because of doctrinal differences. \n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1n14jo
|
why does tire size change based on car size?
|
[
{
"answer": "Larger wheels allow the axle to be higher off the ground, letting you drive over larger irregularities without smashing into things. This is important in something like a pickup truck which might be driving onto an ungraded work site. A civic or a smart car would prefer to have smaller wheels because they aren't designed to leave a road, and it is more difficult to turn a larger tire because of the leverage involved.",
"provenance": null
},
{
"answer": "Traction is a function of tire pressure and wheel diameter. Notice I didn't mention width. Whether you have a wide tire or skinny, given the same diameter and pressure, you'll get the same surface area of the tire touching the ground.\n\nSo, a larger tire increases traction, something I would like an SUV to have as much as possible, so they don't lose it and rear end me (EDIT: again!). You can calculate traction, roughly, but engineers are going to test tires until they get the traction characteristics they're looking for.\n\nA Civic, by comparison, just doesn't need it, being a smaller, lighter car. Too much traction is just going to add to cost and wear.\n\nSo what does the width of the tire contribute? Heat management. A wider tire won't heat up as fast, it will soak heat better, and dissipate it faster. Heat will soften a tire, which contributes to traction, and in the case of a consumer vehicle, wear. Too much heat can actually cause a tire to fail. So this is another dimension engineers will calculate to get close, and then experiment with different tire widths until they find characteristics they want.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "6286739",
"title": "Crawl ratio",
"section": "",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 538,
"text": "Note that tire size (or dimensions of the road wheels) does not affect the gear ratio of a vehicle, and thus using a different size tire on the same vehicle does not affect the torque on the road wheels or the crawl ratio. However, for a given engine speed and a gear ratio, the output force on the road wheels decreases as the tire size increases. A lower force in turn decreases the acceleration of rotating wheels. Therefore, the smallest tires that are still big enough to drive over obstacles perform better for a given crawl ratio.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1039392",
"title": "Off-roading",
"section": "Section::::Vehicle modification.:Vehicle lifts.:Large tires.\n",
"start_paragraph_id": 66,
"start_character": 0,
"end_paragraph_id": 66,
"end_character": 282,
"text": "Increasing the tire size increases the ground clearance of all parts of vehicle including suspended components, such as the axles. It may be necessary to make modifications to vehicle's suspension or body depending on the size of the tires to be installed and the specific vehicle.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "779651",
"title": "Automobile handling",
"section": "Section::::Factors that affect a car's handling.:Tires and wheels.\n",
"start_paragraph_id": 34,
"start_character": 0,
"end_paragraph_id": 34,
"end_character": 368,
"text": "The amount a tire meets the road is an equation between the weight of the car and the type (and size) of its tire. A 1000 kg car can depress a 185/65/15 tire more than a 215/45/15 tire longitudinally thus having better linear grip and better braking distance not to mention better aquaplaning performance, while the wider tires have better (dry) cornering resistance.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "180624",
"title": "Vehicle dynamics",
"section": "Section::::Analysis and simulation.\n",
"start_paragraph_id": 74,
"start_character": 0,
"end_paragraph_id": 74,
"end_character": 393,
"text": "Vehicle motions are largely due to the shear forces generated between the tires and road, and therefore the tire model is an essential part of the math model. The tire model must produce realistic shear forces during braking, acceleration, cornering, and combinations, on a range of surface conditions. Many models are in use. Most are semi-empirical, such as the Pacejka Magic Formula model.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "827891",
"title": "Car tuning",
"section": "Section::::Areas of modification.:Tires.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 438,
"text": "Tires have large effects on a car's behavior and are replaced periodically; therefore, tire selection is a very cost-effective way to personalize an automobile. Choices include tires for various weather and road conditions, different sizes and various compromises between cost, grip, service life, rolling resistance, handling and ride comfort. Drivers also personalize tires for aesthetic reasons, for example, by adding tire lettering.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "917653",
"title": "Wheel sizing",
"section": "Section::::Tire Sizes.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 815,
"text": "Modern road tires have several measurements associated with their size as specified by tire codes like 225/70R14. The first number in the code (e.g., \"225\") represents the nominal tire width in millimeters. This is followed by the aspect ratio (e.g.,\"70\"), which is the height of the sidewall expressed as a percentage of the nominal tire width. \"R\" stands for radial and relates to the tire construction. The final number in the code (e.g.,\"14\") is the rim size measured in inches. The overall circumference of the tire will increase by increasing any of the tire's specifications. For example, increasing the width of the tire will also increase its circumference, because the sidewall height is a proportional length. Increasing the aspect ratio will increase the height of the tire and hence the circumference.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "917653",
"title": "Wheel sizing",
"section": "Section::::Wheel size.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 397,
"text": "Replacing the wheels on a car with larger ones can involve using tires with a smaller profile. This is done to keep the overall radius of the wheel/tire the same as stock to ensure the same clearances are achieved. Larger wheels are typically desired for their appearance but could also offer more space for brake components. This comes at a performance price though as larger wheels weigh more. \n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
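A minimal sketch for the record above: the "Wheel sizing" passage explains how a metric tire code such as 225/70R14 encodes section width (mm), aspect ratio (%), and rim diameter (inches), and the second answer argues that the contact patch is set mainly by load and inflation pressure rather than tread width. The snippet below works through both calculations; the corner load and inflation pressure in the demo are assumed values, not figures taken from the passages.

```python
import math


def tire_dimensions(code: str):
    """Decode a metric tire code like '225/70R14'.

    Format: section width (mm) / aspect ratio (%) R rim diameter (inches).
    Returns (sidewall height, overall diameter, circumference) in millimetres.
    """
    width, rest = code.upper().split("/")
    aspect, rim = rest.split("R")
    width_mm, aspect_pct, rim_in = float(width), float(aspect), float(rim)

    sidewall_mm = width_mm * aspect_pct / 100.0        # sidewall height is a % of width
    diameter_mm = rim_in * 25.4 + 2.0 * sidewall_mm    # rim plus a sidewall top and bottom
    return sidewall_mm, diameter_mm, math.pi * diameter_mm


def contact_patch_cm2(corner_load_kg: float, pressure_kpa: float) -> float:
    """Rough contact-patch area from load and inflation pressure alone
    (the approximation used in the second answer; ignores carcass stiffness)."""
    force_n = corner_load_kg * 9.81
    area_m2 = force_n / (pressure_kpa * 1000.0)
    return area_m2 * 1e4


if __name__ == "__main__":
    for code in ("225/70R14", "185/65R15", "215/45R15"):
        side, dia, circ = tire_dimensions(code)
        print(f"{code}: sidewall {side:.0f} mm, diameter {dia:.0f} mm, circumference {circ:.0f} mm")
    # Assumed demo values: ~1000 kg car, 250 kg per corner, 220 kPa (~32 psi).
    print(f"contact patch per tire: ~{contact_patch_cm2(250, 220):.0f} cm^2")
```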
2shw5r
|
why does our own body clog our nose, which is essential for breathing, during allergic reactions or when we've got a cold?
|
[
{
"answer": "The allergic reaction comes from the inflammatory response trying to stop the spread of the allergen. The cells that release these chemicals don't know where in the body they are located. Just that something foreign is there and the body doesn't like it.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "53417080",
"title": "Environmental health policy",
"section": "Section::::Health Risks.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 471,
"text": "One of the more common health risks that people encounter is a result of air pollutants and air quality. Allergic Asthma is a chronic disease that affects individual's inflammatory system when they are exposed to allergens resulting in shortness of breath, wheezing, and coughing. Environmental factors such as, air pollutants, tobacco smoke, emission fumes, and other allergens in the air when absorbed through the body are said to have an influence on allergic asthma.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "395877",
"title": "Histamine",
"section": "Section::::Roles in the body.:Effects on nasal mucous membrane.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 308,
"text": "Increased vascular permeability causes fluid to escape from capillaries into the tissues, which leads to the classic symptoms of an allergic reaction: a runny nose and watery eyes. Allergens can bind to IgE-loaded mast cells in the nasal cavity's mucous membranes. This can lead to three clinical responses:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "679350",
"title": "Food allergy",
"section": "Section::::Signs and symptoms.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 497,
"text": "A severe case of an allergic reaction, caused by symptoms affecting the respiratory tract and blood circulation, is called anaphylaxis. When symptoms are related to a drop in blood pressure, the person is said to be in anaphylactic shock. Anaphylaxis occurs when IgE antibodies are involved, and areas of the body that are not in direct contact with the food become affected and show symptoms. Those with asthma or an allergy to peanuts, tree nuts, or seafood are at greater risk for anaphylaxis.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "102359",
"title": "Pulmonary alveolus",
"section": "Section::::Clinical significance.:Diseases.\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 289,
"text": "BULLET::::- In asthma, the bronchioles, or the \"bottle-necks\" into the sac are restricted, causing the amount of air flow into the lungs to be greatly reduced. It can be triggered by irritants in the air, photochemical smog for example, as well as substances that a person is allergic to.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "482869",
"title": "Sulfite",
"section": "Section::::Health effects.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 359,
"text": "It may cause breathing difficulty within minutes after eating a food containing it. Asthmatics and possibly people with salicylate sensitivity (or aspirin sensitivity) are at an elevated risk for reaction to sulfites. Anaphylaxis and life-threatening reactions are rare. Other potential symptoms include sneezing, swelling of the throat, hives, and migraine.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4787648",
"title": "Levosalbutamol",
"section": "Section::::Adverse effects.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 319,
"text": "Rarer side effects may indicate a dangerous allergic reaction. These include: paradoxical bronchospasm (shortness of breath and difficulty breathing); skin itching, rash, or hives (urticaria); swelling (angioedema) of any part of the face or throat (which can lead to voice hoarseness), or swelling of the extremities.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12636107",
"title": "Antihistamine",
"section": "Section::::Medical uses.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 237,
"text": "Histamine produces increased vascular permeability, causing fluid to escape from capillaries into tissues, which leads to the classic symptoms of an allergic reaction — a runny nose and watery eyes. Histamine also promotes angiogenesis.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
ay3cxd
|
if you're swimming in a pool and lightning strikes the water, it'll most likely harm you. at what range, when swimming in a larger body of water like a lake or even an ocean, would lightning have an effect on a person swimming in it at the time?
|
[
{
"answer": "Lighting striking in an open water has a lethal range of about 6 - 10 meters with most of the energy being dispersed along the surface. If you're outside that range, you might still suffer burns. There's also a notable pressure wave (the underwater equivalent of thunder) that would be potentially dangerous at further distances.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "53855893",
"title": "Electric shock drowning",
"section": "Section::::Causes.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 445,
"text": "Besides boats and dockside power hookups, several other potential causes exist. Lightning strikes over or near water have caused electric shock drownings. Faulty hydroelectric generators or damaged underwater power lines can cause leakage currents, potentially creating a hazard. In general, anything electrically active that comes in contact with water has the potential to create leakage currents and contribute to this type of safety hazard.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1131151",
"title": "Heliosphere",
"section": "Section::::Structure.:Termination shock.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 577,
"text": "Other termination shocks can be seen in terrestrial systems; perhaps the easiest may be seen by simply running a water tap into a sink creating a hydraulic jump. Upon hitting the floor of the sink, the flowing water spreads out at a speed that is higher than the local wave speed, forming a disk of shallow, rapidly diverging flow (analogous to the tenuous, supersonic solar wind). Around the periphery of the disk, a shock front or wall of water forms; outside the shock front, the water moves slower than the local wave speed (analogous to the subsonic interstellar medium).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "53855893",
"title": "Electric shock drowning",
"section": "Section::::Signs.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 468,
"text": "There is no visible warning to electrified water. Swimmers will be able to feel the electricity if the current is substantial. If the swimmers notice any unusual tingling feeling or symptoms of electrical shock, it is highly likely that stray currents exist and everyone needs to get out. Swimmers should always swim away from the suspected current source. In most cases this means swimming away from docks and boats and toward another safer portion of the shoreline.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12113551",
"title": "Devil's Pool",
"section": "Section::::Incidents.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 248,
"text": "A sign warns of the dangers of swimming there because the water is deep and fast flowing through channels and over underwater rocks but deaths still occur – some by swimming, others by falling in unexpectedly, many being wedged in a rock \"chute\". \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1205325",
"title": "High voltage",
"section": "Section::::Lightning.\n",
"start_paragraph_id": 43,
"start_character": 0,
"end_paragraph_id": 43,
"end_character": 835,
"text": "Hazards due to lightning obviously include a direct strike on persons or property. However, lightning can also create dangerous voltage gradients in the earth, as well as an electromagnetic pulse, and can charge extended metal objects such as telephone cables, fences, and pipelines to dangerous voltages that can be carried many miles from the site of the strike. Although many of these objects are not normally conductive, very high voltage can cause the electrical breakdown of such insulators, causing them to act as conductors. These transferred potentials are dangerous to people, livestock, and electronic apparatus. Lightning strikes also start fires and explosions, which result in fatalities, injuries, and property damage. For example, each year in North America, thousands of forest fires are started by lightning strikes.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "440906",
"title": "Seiche",
"section": "Section::::Occurrence.:Lake seiches.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 976,
"text": "Earthquake-generated seiches can be observed thousands of miles away from the epicentre of a quake. Swimming pools are especially prone to seiches caused by earthquakes, as the ground tremors often match the resonant frequencies of small bodies of water. The 1994 Northridge earthquake in California caused swimming pools to overflow across southern California. The massive Good Friday earthquake that hit Alaska in 1964 caused seiches in swimming pools as far away as Puerto Rico. The earthquake that hit Lisbon, Portugal in 1755 caused seiches 2,000 miles (3,000 km) away in Loch Lomond, Loch Long, Loch Katrine and Loch Ness in Scotland and in canals in Sweden. The 2004 Indian Ocean earthquake caused seiches in standing water bodies in many Indian states as well as in Bangladesh, Nepal and northern Thailand. Seiches were again observed in Uttar Pradesh, Tamil Nadu and West Bengal in India as well as in many locations in Bangladesh during the 2005 Kashmir earthquake.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "172078",
"title": "Whitewater",
"section": "Section::::Safety.\n",
"start_paragraph_id": 59,
"start_character": 0,
"end_paragraph_id": 59,
"end_character": 285,
"text": "Running whitewater rivers is a popular recreational sport but is not without danger. In fast moving water there is always the potential for injury or death by drowning or hitting objects. Fatalities do occur; some 50+ people die in whitewater accidents in the United States each year.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
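The top answer above gives a rough lethal radius of 6 to 10 meters for a strike on open water. As a hedged illustration of why the hazard falls off quickly with distance, the sketch below uses the simple point-current-source model for a conducting half-space, V(r) = I*rho/(2*pi*r), and estimates the voltage across a swimmer's body span. This is a deliberate simplification (real strikes on water concentrate current near the surface), and the peak current, water resistivities, and body span are assumed values chosen only to show the trend, not figures from the passages.

```python
import math


def step_voltage(peak_current_a: float, resistivity_ohm_m: float,
                 distance_m: float, body_span_m: float = 0.5) -> float:
    """Voltage difference across a swimmer's body span at a given distance,
    treating the strike point as a point current source in a conducting
    half-space: V(r) = I * rho / (2 * pi * r)."""
    def v(r: float) -> float:
        return peak_current_a * resistivity_ohm_m / (2.0 * math.pi * r)
    return v(distance_m) - v(distance_m + body_span_m)


if __name__ == "__main__":
    peak_current = 30_000  # A, assumed typical order of magnitude for a return stroke
    for label, rho in (("seawater (~0.2 ohm-m, assumed)", 0.2),
                       ("freshwater lake (~100 ohm-m, assumed)", 100.0)):
        print(label)
        for r in (2, 5, 10, 20, 50):
            print(f"  {r:>3} m: ~{step_voltage(peak_current, rho, r):,.0f} V across a 0.5 m span")
```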
3hbp3q
|
in terms of evolution, why are peacocks' tails so big?
|
[
{
"answer": "a large tail would be a trait of a healthy bird, which in turn drives the success of the species.",
"provenance": null
},
{
"answer": "It was mostly likely an early positive feature due to making the peacock appear larger to predators, male competitors and potential mates. To the animal brain bigger generally means stronger so it would scare off predators and other males and be attractive to females.\n\nPeacocks are, probably, not smart enough creatures to actually possess any form of abstract thought. Which is to say it's not a deliberate choice on their part to be scared of, or attracted to larger members of their species. It's just sort of built in after many many years of evolution. There isn't really an upper limit on it either, as far as we can tell. There's no point where their brain goes \"hold on, that's just ridiculous\". Same way that a cats brain is super stimulated by a laser pointer and the cat will go nuts chasing that little dot even though if they were able to think about it at all, it is clearly not any sort of prey.\n\nThey've probably only ended up stopping where they are now because there's an opposing selective pressure against having too large a tail. Lack of mobility, most likely. But if you painted an enormous realistic mural of a male peacock on a wall, it would impress the hell out of female peacocks.",
"provenance": null
},
{
"answer": "A couple things. One is sexual selection. Peahens would be impressed and attracted to (and simply notice better) the males with the biggest, prettiest tails. MANY birds show this evolutionary trait. Check out the difference between males and females of [Long-Tailed Widowbirds](_URL_0_).\n\nAt the same time, it is a defense mechanism. Peacocks open their tails very quickly, and when a predator is near this, to the predator it looks like the peacock just grew to like 5x it's original size and got really fucking colorful. That is terrifying when you're about to try and kill something, so they run away. Birds with less impressive tails got eaten. Natural selection at it's best.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "333925",
"title": "Handicap principle",
"section": "Section::::Examples.:Directed at members of the same species.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 279,
"text": "The tail of a peacock makes the peacock more vulnerable to predators, and may therefore be a handicap. However, the message that the tail carries to the potential mate peahen may be 'I have survived in spite of this huge tail; hence I am fitter and more attractive than others'.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26054823",
"title": "Andrew Balmford",
"section": "Section::::Education and career.:Research.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 714,
"text": "In 1993, along with two other researchers, he investigated why the tails of birds are shaped as they are, aiming to test Charles Darwin's hypothesis that females have a preference for males with longer and more ornate tails using aerodynamic analysis. They reported that shallow forked shaped tails (such as those of the house martin) are aerodynamically optimal and that species with them had similar lengthed tails, indicating they could have developed through natural selection. In species with longer tails, males tend to have longer tails than females and which also create drag, since this is no advantage except for when courting, the authors suggested long tails may have evolved through sexual selection.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "464447",
"title": "Animal communication",
"section": "Section::::Other aspects.:Evolution.\n",
"start_paragraph_id": 63,
"start_character": 0,
"end_paragraph_id": 63,
"end_character": 1268,
"text": "One theory to explain the evolution of traits like a peacock's tail is 'runaway selection'. This requires two traits—a trait that exists, like the bright tail, and a preexisting bias in the female to select for that trait. Females prefer the more elaborate tails, and thus those males are able to mate successfully. Exploiting the psychology of the female, a positive feedback loop is enacted and the tail becomes bigger and brighter. Eventually, the evolution will level off because the survival costs to the male do not allow for the trait to be elaborated any further. Two theories exist to explain runaway selection. The first is the good genes hypothesis. This theory states that an elaborate display is an honest signal of fitness and truly is a better mate. The second is the handicap hypothesis. This explains that the peacock's tail is a handicap, requiring energy to keep and makes it more visible to predators. Thus, the signal is costly to maintain, and remains an honest indicator of the signaler's condition. Another assumption is that the signal is more costly for low quality males to produce than for higher quality males to produce. This is simply because the higher quality males have more energy reserves available to allocate to costly signaling.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "515152",
"title": "Fisherian runaway",
"section": "Section::::Peacocks and sexual dimorphism.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 1513,
"text": "The plumage dimorphism of the peacock and peahen of the species within the genus \"Pavo\" is a prime example of the ornamentation paradox that has long puzzled evolutionary biologists; Darwin wrote in 1860:The sight of a feather in a peacock’s tail, whenever I gaze at it, makes me sick!The peacock's colorful and elaborate tail requires a great deal of energy to grow and maintain. It also reduces the bird's agility, and may increase the animal's visibility to predators. The tail appears to lower the overall fitness of the individuals who possess it. Yet, it has evolved, indicating that peacocks with longer and more colorfully elaborate tails have some advantage over peacocks who don’t. Fisherian runaway posits that the evolution of the peacock tail is made possible if peahens have a preference to mate with peacocks that possess a longer and more colourful tail. Peahens that select males with these tails in turn have male offspring that are more likely to have long and colourful tails and thus are more likely to be sexually successful themselves. Equally importantly, the female offspring of these peahens are more likely to have a preference for peacocks with longer and more colourful tails. However, though the relative fitness of males with large tails is higher than those without, the absolute fitness levels of all the members of the population (both male and female) is less than it would be if none of the peahens (or only a small number) had a preference for a longer or more colorful tail.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "63610",
"title": "Peafowl",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 510,
"text": "The functions of the elaborate iridescent colouration and large \"train\" of peacocks have been the subject of extensive scientific debate. Charles Darwin suggested they served to attract females, and the showy features of the males had evolved by sexual selection. More recently, Amotz Zahavi proposed in his handicap theory that these features acted as honest signals of the males' fitness, since less-fit males would be disadvantaged by the difficulty of surviving with such large and conspicuous structures.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "548255",
"title": "Indian peafowl",
"section": "Section::::Description.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 1594,
"text": "Peacocks are a larger sized bird with a length from bill to tail of and to the end of a fully grown train as much as and weigh . The females, or peahens, are smaller at around in length and weigh . Indian peafowl are among the largest and heaviest representatives of the Phasianidae. So far as is known, only the wild turkey grows notably heavier. The green peafowl is slightly lighter in body mass despite the male having a longer train on average than the male of the Indian species. Their size, colour and shape of crest make them unmistakable within their native distribution range. The male is metallic blue on the crown, the feathers of the head being short and curled. The fan-shaped crest on the head is made of feathers with bare black shafts and tipped with bluish-green webbing. A white stripe above the eye and a crescent shaped white patch below the eye are formed by bare white skin. The sides of the head have iridescent greenish blue feathers. The back has scaly bronze-green feathers with black and copper markings. The scapular and the wings are buff and barred in black, the primaries are chestnut and the secondaries are black. The tail is dark brown and the \"train\" is made up of elongated upper tail coverts (more than 200 feathers, the actual tail has only 20 feathers) and nearly all of these feathers end with an elaborate eye-spot. A few of the outer feathers lack the spot and end in a crescent shaped black tip. The underside is dark glossy green shading into blackish under the tail. The thighs are buff coloured. The male has a spur on the leg above the hind toe.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5928942",
"title": "Tuckerella",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 299,
"text": "They also have long hair-like setae projecting from rear (caudal setae) that have been compared to a trailing peacock tail. The 5–7 pairs of caudal setae can be flicked over the body very quickly, so they are used like whips in defense against predators. They may also help in wind-borne dispersal.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3035as
|
on any given night in a city, we see a fairly small number of stars, but far away from city lights we see thousands of stars. what determines which stars we see in a city?
|
[
{
"answer": "The brightest stars are still visible. Cities produce a lot of light, dubbed \"light pollution\" that drowns out the light from dimmer stars.",
"provenance": null
},
{
"answer": "Light pollution.\n\nAll those cars and buildings radiate light, which gets scattered through the air to brighten the general area, reducing the contrast of the dark sky and limiting the numbers of faint objects we can see.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "3023823",
"title": "Star count",
"section": "Section::::Inherent luminosity complications.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 363,
"text": "Heavy, bright stars (both giants and blue dwarfs) are the most common stars listed in general star catalogs, even though on average they are rare in space. Small dim stars (red dwarfs) seem to be the most common stars in space, at least locally, but can only be seen with large telescopes, and then only when they are within a few tens of light-years from Earth.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6753004",
"title": "Epsilon Eridani in fiction",
"section": "Section::::General uses.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 228,
"text": "Many stars may be referred to in fictional works for their metaphorical or mythological associations, or else as bright points of light in the sky of the Earth, but not as locations in space or the centers of planetary systems.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6234266",
"title": "Alpha Centauri in fiction",
"section": "Section::::General uses of Alpha Centauri.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 228,
"text": "Many stars may be referred to in fictional works for their metaphorical or mythological associations, or else as bright points of light in the sky of the Earth, but not as locations in space or the centers of planetary systems.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6234591",
"title": "Tau Ceti in fiction",
"section": "Section::::General uses of Tau Ceti.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 224,
"text": "Many stars may be referred to in fictional works for their metaphorical or mythological associations, or else as bright points of light in the sky of Earth, but not as locations in space or the centers of planetary systems.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26808",
"title": "Star",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 758,
"text": "A star is an astronomical object consisting of a luminous spheroid of plasma held together by its own gravity. The nearest star to Earth is the Sun. Many other stars are visible to the naked eye from Earth during the night, appearing as a multitude of fixed luminous points in the sky due to their immense distance from Earth. Historically, the most prominent stars were grouped into constellations and asterisms, the brightest of which gained proper names. Astronomers have assembled star catalogues that identify the known stars and provide standardized stellar designations. However, most of the estimated 300 sextillion () stars in the observable universe are invisible to the naked eye from Earth, including all stars outside our galaxy, the Milky Way.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "997476",
"title": "Night sky",
"section": "Section::::Visual presentation.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 345,
"text": "The stars of the night sky cannot be counted unaided because they are so numerous and there is no way to track which have been counted and which have not. Further complicating the count, fainter stars may appear and disappear depending on exactly where the observer is looking. The result is an impression of an extraordinarily vast star field.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41516508",
"title": "Kepler-90h",
"section": "Section::::Host star.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 206,
"text": "The star's apparent magnitude, or how bright it appears from Earth's perspective, is 14. It is too dim to be seen with the naked eye, which typically can only see objects with a magnitude around 6 or less.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
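The Kepler-90h passage above notes that the naked-eye limit is around apparent magnitude 6. Because the magnitude scale is logarithmic (a difference of 5 magnitudes is a factor of 100 in brightness), even a modest rise in the limiting magnitude under a bright city sky removes most visible stars. The sketch below works the arithmetic; the whole-sky star counts are approximate, commonly quoted round numbers added here as assumptions, not values from the passages.

```python
def brightness_ratio(m_bright: float, m_faint: float) -> float:
    """How many times brighter a magnitude m_bright object is than a magnitude
    m_faint one; every 5 magnitudes is a factor of 100 (Pogson's ratio ~2.512)."""
    return 100.0 ** ((m_faint - m_bright) / 5.0)


if __name__ == "__main__":
    # The magnitude-6 naked-eye limit vs. the magnitude-14 star in the Kepler-90h passage:
    print(f"a magnitude-6 star is ~{brightness_ratio(6, 14):,.0f}x brighter than a magnitude-14 star")

    # Approximate whole-sky naked-eye star counts at different limiting magnitudes
    # (commonly quoted round numbers, included here as assumptions):
    approx_counts = {3: 150, 4: 500, 5: 1600, 6: 5000}
    for limit, count in approx_counts.items():
        print(f"limiting magnitude {limit}: roughly {count} stars over the whole sky")
```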
7pc2um
|
How does the photon of specific phase that causes stimulated emission in a laser device arise?
|
[
{
"answer": "You mostly have it already. The overall polarization of the beam depends on the lasing medium and the cavity design, but in general, a photon will be emitted by spontaneous emission, and that gets the stimulated emission going to start lasing. Like you said, you could have a cavity with Brewster windows in it, which will cause high loss for one polarization and force the laser to have an output with only one polarization. There are also lasing media which, due to their crystal structure, will only emit photons in a given polarization (relative to the crystal structure). ",
"provenance": null
},
{
"answer": "The first photon doesn't have to have a specific phase. Whatever it has determines the phase of the laser. In terms of direction and polarization: If it is not aligned with the laser cavity (or has the wrong polarization, if that is relevant), this chain of photons dies down quickly and another \"first photon\" will start the laser. Note that actual lasers do not emit *perfect* laser light. You can still get all sorts of weird effects in between.",
"provenance": null
},
{
"answer": "To start, there is no such thing as phase for a single photon. If you quantize the EM hamiltonian in the cavity and find the expectation value of the electric field for a single photon state (an eigenstate of the EM field hamiltonian in the cavity), you will find that it is zero everywhere (on the other hand, E^2 is not). It is the superposition of many of these single-photon states in a [coherent state](_URL_0_) that gives rise to a classical EM wave (which does have a phase). If you have trouble seeing why this is, consider the 1-D quantum harmonic oscillator: its excitations have discrete \"lumps\" of energy (energy hbar*omega) but have an expectation value for x of zero. It takes the superposition of at least two modes to have any sort of time-dependence, and many more modes in a coherent state to get classical harmonic motion (where the concept of phase is well defined). \n\nWhen a laser begins to lase after a population inversion has formed in the gain medium, a photon is spontaneously emitted in one of the [longitudinal modes](_URL_2_) of the cavity. Because the end-mirrors are not perfectly flat, there can be many thousands of these modes in the line width of the atomic transition that are above the [threshold gain](_URL_1_) of the cavity. The number of photons in this mode grows exponentially in time to the point where it decreases the inversion density, which effectively decreases the gain of other modes in the cavity below the threshold gain (hence killing these other modes). \n\nFor some applications involving q-switched (pulsed) lasers, the cavity is seeded (i.e. \"back-filled\" with photons already in a specific mode) by shining a bright fiber laser through the cavity end-mirror. This decreases the \"build-up\" time for laser light in the cavity when it turns on (by 10s of nanoseconds in most cases) and reduces shot-to-shot timing jitter that occurs when the laser starts on a spontaneously-emitted photon. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "24065",
"title": "Population inversion",
"section": "Section::::The interaction of light with matter.:Stimulated emission.\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 956,
"text": "The critical detail of stimulated emission is that the induced photon has the same frequency and phase as the incident photon. In other words, the two photons are coherent. It is this property that allows optical amplification, and the production of a laser system. During the operation of a laser, all three light-matter interactions described above are taking place. Initially, atoms are energized from the ground state to the excited state by a process called \"pumping\", described below. Some of these atoms decay via spontaneous emission, releasing incoherent light as photons of frequency, ν. These photons are fed back into the laser medium, usually by an optical resonator. Some of these photons are absorbed by the atoms in the ground state, and the photons are lost to the laser process. However, some photons cause stimulated emission in excited-state atoms, releasing another coherent photon. In effect, this results in \"optical amplification\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1114367",
"title": "Optical parametric amplifier",
"section": "Section::::Optical parametric generation (OPG).\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 540,
"text": "This light emission is based on the nonlinear optical principle. The photon of an incident laser pulse (pump) is, by a nonlinear optical crystal, divided into two lower-energy photons. The wavelengths of the signal and the idler are determined by the phase matching condition, which is changed e. g. by temperature or, in bulk optics, by the angle between the incident pump laser ray and the optical axes of the crystal. The wavelengths of the signal and the idler photons can, therefore, be tuned by changing the phase matching condition.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13232736",
"title": "Energy transfer upconversion",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 546,
"text": "If a laser-active ion is in an excited state, it can decay to a lower state either radiatively (i.e. energy is conserved by the emission of a photon, as required for laser operation) or nonradiatively. Nonradiative emission may be via Auger decay or via energy transfer to another laser-active ion. If this occurs, the ion receiving the energy will be excited to a higher energy state than that already achieved by absorption of a pump photon. This process of further exciting an already excited laser-active ion is known as photon upconversion.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "385621",
"title": "Plasma diagnostics",
"section": "Section::::Active spectroscopy.:Two-photon laser-induced fluorescence.\n",
"start_paragraph_id": 55,
"start_character": 0,
"end_paragraph_id": 55,
"end_character": 435,
"text": "The two-photon laser-induced fluorescence (TALIF) is a modification of the laser-induced fluorescence technique. In this approach the upper level is excited by absorbing two photons and registering the resulting emission from the excited state. The advantage of this approach is that the registered light from the fluorescence is with a different wavelength from the exciting laser beam, which leads to improved signal to noise ratio.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "17556",
"title": "Laser",
"section": "Section::::Laser physics.:Gain medium and cavity.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 1088,
"text": "If the gain (amplification) in the medium is larger than the resonator losses, then the power of the recirculating light can rise exponentially. But each stimulated emission event returns an atom from its excited state to the ground state, reducing the gain of the medium. With increasing beam power the net gain (gain minus loss) reduces to unity and the gain medium is said to be saturated. In a continuous wave (CW) laser, the balance of pump power against gain saturation and cavity losses produces an equilibrium value of the laser power inside the cavity; this equilibrium determines the operating point of the laser. If the applied pump power is too small, the gain will never be sufficient to overcome the cavity losses, and laser light will not be produced. The minimum pump power needed to begin laser action is called the \"lasing threshold\". The gain medium will amplify any photons passing through it, regardless of direction; but only the photons in a spatial mode supported by the resonator will pass more than once through the medium and receive substantial amplification.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "56045552",
"title": "Quantum dot single-photon source",
"section": "Section::::Theory of realizing a single-photon source.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 788,
"text": "Single photons are extracted out of a semiconductor by spontaneous emission from the decay of a single excitation. Inside the cavity spontaneous emission is increased due to the Purcell effect. The challenge in making in a single photon source is to make sure that there is only one excited state in the system at a time. To do that, a quantum dot is placed in a microcavity (Fig. 1). A quantum dot has discrete energy levels. An excitation from its ground state to an excited state will create an exciton. The eventual decay of this exciton due to spontaneous emission will result in the emission of a single photon. DBR’s are placed in the cavity to achieve a well-defined spatial mode and to reduce linewidth broadening due to the lifetime formula_1 of the excited state (see Fig. 2).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "869444",
"title": "Laser-induced fluorescence",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 292,
"text": "Laser-induced fluorescence (LIF) or laser-stimulated fluorescence (LSF) is a spectroscopic method in which an atom or molecule is excited to a higher energy level by the absorption of laser light followed by spontaneous emission of light. It was first reported by Zare and coworkers in 1968.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
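The "Laser" passage above describes threshold behaviour: the circulating photon number grows exponentially only while round-trip gain exceeds cavity loss, and gain saturation then clamps the inversion near its threshold value. The sketch below integrates a generic two-equation textbook rate model to show that behaviour; the parameter values are illustrative assumptions and do not describe any specific laser discussed above.

```python
# Minimal single-mode laser rate-equation sketch (forward-Euler integration).
# Below threshold the photon number stays tiny; above threshold it grows until
# gain saturation clamps the inversion near its threshold value.
# All parameter values are illustrative assumptions.

TAU_UP = 1e-6    # upper-state lifetime, s          (assumed)
TAU_CAV = 1e-8   # cavity photon lifetime, s        (assumed)
B = 1e-7         # stimulated-emission coefficient  (assumed)
SEED = 1e10      # crude spontaneous-emission seed into the lasing mode, photons/s


def simulate(pump_rate: float, steps: int = 200_000, dt: float = 1e-9):
    """Forward-Euler integration of the two coupled rate equations."""
    N, n = 0.0, 0.0   # population inversion, photon number in the mode
    for _ in range(steps):
        dN = pump_rate - N / TAU_UP - B * N * n   # pumping, decay, stimulated emission
        dn = B * N * n - n / TAU_CAV + SEED       # gain, cavity loss, spontaneous seed
        N, n = N + dN * dt, n + dn * dt
    return N, n


if __name__ == "__main__":
    N_th = 1 / (B * TAU_CAV)   # inversion at which gain just balances cavity loss
    P_th = N_th / TAU_UP       # pump rate that just sustains that inversion
    for label, pump in (("below threshold", 0.5 * P_th), ("above threshold", 2.0 * P_th)):
        N, n = simulate(pump)
        print(f"{label}: inversion {N:.2e} (threshold {N_th:.2e}), photons in mode {n:.2e}")
```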
2yg287
|
Who is the Japanese military leader in this picture?
|
[
{
"answer": " > My own investigation of Jappanese military leaders who killed themselves in circumstances where the Americans might quickly find their body makes me think it could be either Isamu Cho or Mitsuru Ushijima. The picture of Cho kind of looks like the picture I have.\n\nRight idea, but wrong guy, mainly because this is a suicide attempt, not a successful one! Hideki Tojo was seen as one of the principal war criminals of Japan by the Allies, and when American MPs went to arrest him in early September, he attempted to shoot himself in the heart, but missed and only wounded himself. He was arrested, and successfully treated by American medical personnel so that he could stand trial for war crimes, be found guilty, and finally be executed by hanging a few years later.\n\nNow of course it is possible Im wrong, but based on what I can see of the face, [with that trademark mustache](_URL_1_), as well as the apparent location of the wound based on the bloodied garments, I feel confident in my deduction here.\n\nEdit: reverse Google search failed, but \"Tojo suicide photos\" turned up a few that [look very similar](_URL_0_), although that specific one does not seem to be online (which would make it pretty interesting for a collector!)",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "39422490",
"title": "Commando Duck",
"section": "Section::::Analysis.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 221,
"text": "There are Japanese caricatures and depictions of the Imperial Japanese Army. There is also a reference to Hirohito. The Japanese soldiers speak in stereotypical dialect and advocate firing the first shot at a man's back.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "45660287",
"title": "The Legend of Tank Commander Nishizumi",
"section": "Section::::Film.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 390,
"text": "It shows the common Japanese soldier as an individual and as a family man, and even enemy Chinese soldiers are presented as individuals, sometimes fighting bravely. The film, based on a true story of the Sino-Japanese war, served as propaganda, instructing its audience in the correct way to endure loss without despair. To make the film, Yoshimura toured the actual battlefields in China.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1694661",
"title": "The Maze (painting)",
"section": "Section::::Interpretation.:Inside the Skull.:Politics (Upper Left).\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 265,
"text": "The Chinese soldier panel: The picture in this panel is designed as a shield with a crest on it. It depicts a Chinese soldier in Korea bayonetting Kurelek. This image is meant to represent his fear of war, which derived from his father keeping him out of the army.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "483232",
"title": "Prince Yasuhiko Asaka",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 419,
"text": "General was the founder of a collateral branch of the Japanese imperial family and a career officer in the Imperial Japanese Army. Son-in-law of Emperor Meiji and uncle by marriage of Emperor Hirohito, Prince Asaka was commander of Japanese forces in the final assault on Nanjing, then the capital city of Nationalist China, in December 1937. He was a perpetrator of the Nanking massacre in 1937 but was never charged.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13569092",
"title": "Al Chang",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 592,
"text": "He was a dock worker in 1941 when he witnessed the Japanese attack on Pearl Harbor, and would later work as a military photographer for the U.S. Army, serving in World War II, and the Korean War and the Vietnam War. He briefly left the armed forces to work for National Geographic and the Associated Press during the Vietnam War, but then returned to work for the Army during the war. His work includes photographs of the official surrender of Japan aboard the , and a photograph of an American sergeant embracing a fellow soldier which was featured in Edward Steichen's \"The Family of Man\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "47027278",
"title": "Zhang Shibo",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 256,
"text": "Zhang Shibo (; born February 1952) is a retired general of the Chinese People's Liberation Army of China. He served as Commander of the PLA Hong Kong Garrison, Commander of the Beijing Military Region, and President of the PLA National Defence University.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30516106",
"title": "Bloody Saturday (photograph)",
"section": "Section::::Legacy.:Allegations of falsehood.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 1238,
"text": "At the time, Japanese nationalists called the photograph a fake, and the Japanese government put a bounty of $50,000 on Wong's head: an amount equivalent to $ in 2020. Wong was known to be against the Japanese invasion of China and to have leftist political sympathies, and he worked for William Randolph Hearst who was famous for saying to his newsmen, \"You furnish the pictures and I'll furnish the war\" in relation to the Spanish–American War. Another of Wong's photos appeared in \"Look\" magazine on December 21, 1937, showing a man bent over a child of perhaps five years of age, both near the crying baby. The man was alleged to be Wong's assistant Taguchi who was arranging the children for best photographic effect. An article in \"The Japan Times and Mail\" said the man was a rescue worker who was posing the baby and the boy for the photographer. Wong described the man as the baby's father, coming to rescue his children as the Japanese aircraft returned following the bombing. Japanese propagandists drew a connection between what they claimed was a falsified image and the general news accounts by U.S. and Chinese sources reporting on the fighting in Shanghai, with the aim of discrediting all reports of Japanese atrocities.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
bd2ouf
|
Was suicide among "commoners" normal during time periods like the renaissance? Or is it something that became more prevalent recently?
|
[
{
"answer": "I have an earlier answer on [suicide in the Middle Ages in western Europe](_URL_1_). If you don't mind, I'll copy-paste it here for now so people have something to read while I work on one that extends into the Reformation/later Renaissance era. (Normally I'd just wait, but the topic seems to demand it.) ([Now posted!](_URL_0_))\n\n~~\n\n[1/2]\n\n*I'm borrowing some pieces from earlier answers [here](_URL_3_) and [here](_URL_2_), but it's mostly new.*\n\nIt's impossible to calculate the rate at which medieval people in the Latin West killed themselves or tried to. First, for the usual reasons--lack of records, bias of records that do survive in favor of focus on specific groups, the sketchily-drawn nature of calculating medieval demographics in general. Equally important, however, are the immense social, legal, and Christian religious consequences not just for the ones who killed themselves, but for those staring numbly at their loved one's body. While we can't say \"how commonly did medieval people kill themselves,\" it is evident that suicide was not only a common problem for survivors, but became an even bigger emotional burden over the course of the Middle Ages.\n\nThe central drumbeat of any examination of suicide in the Christian Middle Ages must be: suicide was a sin. And not just any sin, but an absolutely, fundamentally unforgiveable one. It was understood that the act of self-murder was the last thing that a person would do; there was no time for confession and absolution. No cleansing purgatorial fire awaited those who killed themselves: they were eternally bound to hell. As early as 570, Gregory of Tours writes that the body of a nobleman who had killed himself was taken to a monastery by his survivors, but the monks could \"not put [him] among the Christian dead, and no Mass was sung for him.\" The refusal of burial with the Christian community in consecrated ground is an earthly symbol of the theological belief that the count was separated from the Christian community in the afterlife.\n\nThis story shows us two further things. First, the intimate relationship of suicide and death means the theology of suicide was doctrine that wrapped itself around every level of Christian society. Even if not every single person over a thousand year span was excited to hear every last sermon or could recite the Paternoster (prayer) without prompting at their goddaughter's baptism, everyone dealt with death, whose aftermath was the domain of God and the Church.\n\nSecond, it shows the desperation of the count's family. They still took his body to the monastery even knowing he had killed himself, holding out some shard of hope for his soul, that the holy men might still be able to help. Already in the earliest years of the Middle Ages, we witness the desperation of the survivors.\n\nThe fallout of this desperation--even a generalized sadness of pious writers upset at the consignment of *any* soul to hell--permeates the medieval source record on suicide. As with Gregory, it's not that suicide isn't mentioned. We hear about it in monastic chronicles: a 12th century monk and prior of Le Dale monastery named Henry fell in love with a local woman and, officially absent from his house to earn money for it, moved in with her. 
When his affair was discovered and he was forced to return to the convent, \"Taking guidance from the Devil he got into a hot bath and opened veins in both arms; and by way of spontaneous, or rather foolish, death he put an end to life.\" From late medieval England, we have cases mentioned in coroners' rolls: A man sentenced to sit in the stocks overnight is found dead in the morning, having stabbed himself. \n\nMiracle stories attached to saints and shrines describe people who attempted suicide, maybe even appeared to have killed themselves, but were (literally) miraculously revived: a young woman was raped repeatedly by her uncle, who forced her to have an abortion each time she became pregnant. The third time, she did so directly, by ripping open her stomach with a knife. But when she cried to the Virgin Mary--here as both mother and *mediatrix*--Mary healed her external as well as internal wounds, and the woman took vows in a Cistercian convent to spend the rest of her days in praise of Mary/out of sight of mainstream society. And fictional literary sources talk of suicide, too: Boccaccio's *Elegy of Lady Fiammetta* describes a woman who decides to kill herself by jumping from a tower, because the people who find her body won't be able to tell whether it was suicide or an accident.\n\nBut in these stories, a clear pattern emerges: an emphasis on secrecy, privacy, and shame. A traveler who drops back from the group; a nun who barricades herself into a room for \"private prayer\" but slips out the window. Fiammetta (who is ultimately rescued) wanted to camouflage her death as an accident; the noblewoman in Gerard of Frachet's miracle tale hid herself away in the aftermath.\n\nThis only increases as one moves up the social scale in considering cases. Although typically we'd say the source record is *radically* denser for religious and the upper class than the small but growing middle class and peasants, with suicide this is not so. Alexander Murray, who composed the most important study of suicide in the Middle Ages (and to give you an idea of the weight of this project: he only ever made it through two volumes of a planned three before it was too much), instead says we must look to \"whispers\". \n\nThe sources ideologically and personally closest to a named noble or royal will shy away from mentioning suicide or suicidal ideation; those further removed in time and alliance will be less reticent. One example of this in operation is the possible attempted suicide of Henry IV, 11th (mostly) century Holy Roman Emperor. A lot of chronicles discuss his wars with the pope and his own son. But it is only one account, by known opponent Bernold of Constance, who includes this detail:\n\n > He betook himself to a castle and there remained without any regal trappings. He was in a state of extreme dejection and, as they say, he tried to give himself over to death, but was prevented by his men and could not bring his wish to effect. *(trans. Murray)*\n\nWhile the modern reader will recognize circumstances of deep depression and suicidal desire that feel all too familiar, there is an even darker angle in play. A given \"mental illness\" is of course a name attached to a web of symptoms that frequently travel together, manifesting slightly differently in all cases; but even the concept of *illness* is a cultural-scientific attachment. 
*Tristitia*, *acedia*, *melancholia*, and their fellows in medieval writings appear to align with different manifestations of what we call major depressive disorder today. But in the Middle Ages, they were sins. Even before one stepped onto the tower window ledge or threw the rope over the rafters, sorrow over worldly matters like *your own son leading an armed rebellion against you, nbd* was a sin that divorced you from other people and from God. It's not an accident that so many accounts of suicide attribute the act to possession by the devil or the influence of demons, and describe the victim's diabolical fear or behavior in the days or years beforehand.\n\nIt's no wonder, then, that even an anti-Henry partisan like Bernold can only bring himself to write \"As they say\" (*aiunt*). It's a common pattern. Dante Alighieri refused to identify thirteenth-century king Henry Hohenstaufen as one of the inmates of the seventh circle of hell in *Inferno*, despite rumors to the effect he was among those violent against themselves. It's not agreement or disagreement with this decision that is picked up by commentators, it's the *debate*: \"but others write,\" hedges Benvenuto da Imola, and \"if this is true.\" \n\nThere was good reason for those left behind to be cautious. As laws and legal systems coalesced over the course of the Middle Ages, death by suicide came to have extensive legal consequences for one's heirs (and whatever the grudge against the dead, it might not be good to antagonize the living). Laws permitted or mandated the \"ravage\" of the property of someone who committed suicide: that is, its seizure by the lord or city rather than passing down to one's heirs. This could extend all the way to the home that a house-owner's family was *still living in*, throwing them onto the street. \n\nA 1280 case from England illustrates these laws in action. Upon the death of one of his tenants, a lord had claimed it was suicide and thus her property reverted to him. Her heirs had sued to get the property back, claiming his \"presumptions\" were (a) wrong and (b) even if they were right, presumptions weren't strong enough to be evidence of suicide. Notably, the judge ruled in the lord's favor because one of the 'presumptions' was the dead woman's threat to do something to shame her friends. Suicide was shameful for the immediate victim, but it also made victims of the survivors who had to deal with public shame and material loss in the midst of private grief.",
"provenance": null
},
{
"answer": "It is *also* impossible to calculate a suicide rate for early modern western Europe. The difficulties with identifying modern victims of suicide come into play--people who try to cover up their own actions, families who don't report it. For the early modern era, the usual problem with surviving sources compounds these problems exponentially. \n\nBut it's also harder because of much darker cultural beliefs about suicide. It was a matter of deep social shame for the survivors and the memory of the victim. It was a legal crime that punished survivors through state seizure of the victim's property. And in Christianity, it was a sin that sent one's soul straight to hell.\n\nThis was true on all sides of the Reformation. In Catholicism, suicide offered no time for repentance between act and death. According to Protestant beliefs, suicide was an act of the reprobate. \n\nSo with so much societal push against suicide, combined with the usual narratives of the early modern era as the \"rise of social discipline,\" who would get to the point of actively trying to kill themselves? Through the difficulties in the sources, one thing has stood out in multiple studies. \n\nSuicide was often, though obviously not always, a sin and a crime of social and economic outcasts. People who perceived they had nowhere to turn or would have nowhere to turn in the future; people who faced a really awful future.\n\nLegal records are where most of our data on suicide in early modern Europe comes from--actual court cases, records of deaths in a city, investigations of violent death and accidents in general. But studies of England and northern Germany show some of the problems with using these records straightforwardly. \n\nFirst, it's generally considered fact that people sought desperately to cover up the suicidal death of a family member for three reasons: their own social shame, refusal of Christian burial rites, and seizure of property. In England, laws mandating almoners and coroners investigate *all* suspicious deaths were codified around 1500, which you would think would eliminate some of the chances of a cover-up. \n\nBut as R. A. Houston showed for 16th century England, cases taken to the courts often ended up more as a mediation in how to divide a deceased person's assets than outright forfeiture. And they might not end up in court until *decades* after a death. At the same time, there's plenty of evidence of families indeed trying to cover up someone's suicide. And people who committed suicide themselves might also have taken care. So we're definitely still dealing with very selective reporting and recording.\n\nIn northern Germany and Scandinavia, so-called \"suicidal murder\" became a major problem in the 16th-18th centuries. This involved a person who despaired to the point of suicide actually murdering someone else, a victim and in a manner that made capital punishment inevitable (usually a child not related to them). Arne Jansson traced this horror to a local folk belief that a violent death of any kind--including execution--sent one to heaven. This would presumably constitute a small number of cases of suicide overall. But it's a useful, if tragic, reminder that suicide doesn't always look like \"suicide.\"\n\nAnd of course, a major difficulty is that sources don't always agree--and that they disagree in really significant ways. Through 1646, Laura Cruz observed 38 suicides recorded in court records for Leiden; Jeffrey Watts observed 41 in Geneva through 1650. 
This seems quite ordinary until you realize that Leiden was about twice the size of Geneva.\n\nHowever, Cruz and Watt found agreement in their sources on a crucial point: suicide was overwhelmingly an act of the socially marginalized. Cruz observes a strong link between economic difficulties and suicide. Even as Leiden prospered dramatically, not everyone came along. Those excluded from guild membership as temporary workers (the adjunct professors of early modern trades, if you will) or those still trying to earn their way in as apprentices (the grad students) constituted 20% of the people \"convicted\" of committing suicide in court records. \n\nFeeling a full sense of belonging and community in a church also seems to have insulated people from actually committing suicide, although there is no information on attempts. Cruz found only 2 cases out of 38 who were full members of a Calvinist or Anabaptist church (full members made up about 40% of the population overall).\n\nFor Geneva, on the other hand, Watt identified surprisingly specific groups as those most likely to commit suicide: suspected witches, prisoners, and people previously considered violently insane. 9 out of the 41 pre-1650 victims of suicide had been accused or suspected of witchcraft.\n\nStudying England, Houston cautions that the predominance of social outcasts in statistics about people who committed suicide likely reflects source bias to some extent. MacDonald and Murphy in *Sleepless Souls: Suicide in Early Modern England* highlight the presence of nobles and wealthier burghers among the registers of suicide victims. But they still point out that, based on assets uncovered for forfeiture, more than half of the victims of suicide would qualify as poor or destitute. \n\nSharon Strocchia, meanwhile, studied suicides and suicide attempts among nuns in early modern Italy--and what community could be more tight-knit than a convent? It's impossible to reconstruct the complex social, medical, and personal reasons that any one person committed suicide. But looking at the circumstances of these nuns, she detected two patterns at work in many (not all) cases. First, some of the nuns were noted as suffering horrible verbal and even physical abuse. (And this does not seem to have been an exaggeration--one nun, who reported the suicide of her sister to local authorities, also sought permission to transfer to another convent because of the terrible environment.) Second, many nuns who attempted suicide, or had sisters desperately concerned that they would, were among those forced into monastic life by relatives. In both cases, there was a sharp displacement from these women's desired community, whether that lay within the convent or outside it.\n\nAnd Houston offers a poignant reminder that \"social outcast\" could come in many forms. In 17th century Shropshire (the exact year isn't clear), a man named John Gossage committed suicide by taking arsenic. He had spent time in jail for counterfeiting money and was accounted an alcoholic by survivors. When his body was found, the only person the town could find to deal with his burial was his landlord.\n\nAnd the nameless woman who threw herself into the Nor Loch in Edinburgh in 1665? She was buried right next to where she drowned herself--she had no family or friends to claim, move, or take care of her body. We only know her from a brief reference in the city treasurers' records of the need to supply a coffin.",
"provenance": null
},
{
"answer": "As a follow up question: *why* was the punishment for suicide so harsh and *why* did it get harsher over time? Why did it extend to living relatives? Wasn't eternity in purgatory punishment enough? Was it just about money - i.e. seizing the property of the suicidee?",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "2252808",
"title": "History of suicide",
"section": "Section::::Changes in attitude.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 736,
"text": "Attitudes towards suicide slowly began to shift during the Renaissance; Thomas More the English humanist, wrote in \"Utopia\" (1516) that a person afflicted with disease can “free himself from this bitter life…since by death he will put an end not to enjoyment but to torture...it will be a pious and holy action”. It was assisted suicide, and killing oneself for other reasons was still a crime for people in his Utopia, punished by the denial of funeral rites. John Donne's work \"Biathanatos\" contained one of the first modern defenses of suicide, bringing proof from the conduct of Biblical figures, such as Jesus, Samson and Saul, and presenting arguments on grounds of reason and nature to sanction suicide in certain circumstances.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16831059",
"title": "Suicide",
"section": "Section::::History.\n",
"start_paragraph_id": 76,
"start_character": 0,
"end_paragraph_id": 76,
"end_character": 372,
"text": "By the 19th-century, the act of suicide had shifted from being viewed as caused by sin to being caused by insanity in Europe. Although suicide remained illegal during this period, it increasingly became the target of satirical comments, such as the Gilbert and Sullivan comic opera \"The Mikado\" that satirized the idea of executing someone who had already killed himself.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16831059",
"title": "Suicide",
"section": "Section::::History.\n",
"start_paragraph_id": 74,
"start_character": 0,
"end_paragraph_id": 74,
"end_character": 347,
"text": "Attitudes towards suicide slowly began to shift during the Renaissance. John Donne's work \"Biathanatos\" contained one of the first modern defences of suicide, bringing proof from the conduct of Biblical figures, such as Jesus, Samson and Saul, and presenting arguments on grounds of reason and nature to sanction suicide in certain circumstances.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2252808",
"title": "History of suicide",
"section": "Section::::Changes in attitude.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 460,
"text": "By the 19th-century, the act of suicide had shifted from being viewed as caused by sin to being caused by insanity in Europe. Although suicide remained illegal during this period, it increasingly became the target of satirical comment, such as the spoof advertisement in the 1839 \"Bentley’s Miscellany\" for a \"London Suicide Company\" or the Gilbert and Sullivan musical \"The Mikado\" that satirized the idea of executing someone who had already killed himself.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "299487",
"title": "Non compos mentis",
"section": "Section::::History.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 394,
"text": "However, attitudes to suicide changed profoundly after 1660, following the English Revolution. After the civil war, political and social changes, judicial and ecclesiastical severity gave way to official leniency for most people who died by suicide. \"Non compos mentis\" verdicts increased greatly, and \"felo de se\" verdicts became as rare as \"non compos mentis\" had been two centuries earlier.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13690904",
"title": "Suicide in literature",
"section": "Section::::Quotations.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 367,
"text": "“Once suicide was accepted as a common fact of society- not as a noble Roman alternative, nor as the mortal sin it had been in the Middle Ages, nor as a special cause to be pleaded or warned against- but simply as something people did, often and without much hesitation, like committing adultery, then it automatically became a common property of art.\" - diaz, 1971.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14674286",
"title": "Christina Johansdotter",
"section": "Section::::Context.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 425,
"text": "These suicide-executions represent quite a peculiar historical phenomenon, which developed its own customs and culture. At the end of the 17th century, executions were given a solemn character in Stockholm; the condemned and their families bought special costumes, which were to be white or black and decorated with embroidery and ribbons, and paid for a suite to escort the condemned to the place of execution at Skanstull.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1ria70
|
why do women traditionally throw underhand while men throw overhand?
|
[
{
"answer": "At least in cricket I've always been led to believe that it was women who started bowling overarm, as at the time their dresses were to voluminous to allow underarm bowling which was the style at the time.\n\n_URL_0_",
"provenance": null
},
{
"answer": "If you are talking about softball, they are utilizing centripetal force, thus exerting less energy and requiring less strength the launch the ball. In addition with the size of a softball, it is much more difficult to effectively throw it overhand.\nThrowing and overhand ball is much more difficult as far as technique goes as well, as your arm is not moving in a straight, or even a steady curved direction. It folds back on itself and then sort of whips forward, requiring you to put spin on the ball to aim it in a certain direction. \n\nThrowing a softball underhand requires much less technique to throw in a straight line.",
"provenance": null
},
{
"answer": "What do you mean by \"traditionally\"? Do you mean as in softball vs. baseball? They're different games with different rules, and overhand pitching is illegal in softball. Women who do play baseball do throw overhand pitches.",
"provenance": null
},
{
"answer": "The underhanded throw is actually much easier on the arm (specifically the elbow), as it is a much more natural movement. That's why you hear about so many male pitchers needing Tommy John surgery. Now, I'd imagine that since girls typically have less upper-body strength, it makes more sense to use what we do have the more natural way to prevent injury and to use these muscles in the most efficient way possible. Also, getting added velocity is much easier using the pendulum motion than the slingshot motion that men typically use. Momentum!\n\nSource: I was a catcher for both baseball and fastpitch softball for 15 years and understand the mechanics that go into each style of pitching.",
"provenance": null
},
{
"answer": "The women who don't throw a ball conventionally is because nobody taught them the proper way. Most boys when growing up have someone show them how to throw a ball.",
"provenance": null
},
{
"answer": "This five-year-old's parents should stop teaching him/her such rigid gender roles. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "147733",
"title": "Slam dunk",
"section": "Section::::Dunking in women's play.\n",
"start_paragraph_id": 76,
"start_character": 0,
"end_paragraph_id": 76,
"end_character": 235,
"text": "Dunking is much less common in women's basketball than in men's play. Dunking is slightly more common during practice sessions, but many coaches advise against it in competitive play because of the risks of injury or failing to score.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "907318",
"title": "Batey (game)",
"section": "Section::::Batey origins.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 744,
"text": "\"Faltas\" (errors or faults) were made when the ball came to a halt on the ground or if it had been thrown out of bounds (outside the stone boundary markers). The ball could only be struck from the shoulder, the elbow, the head, the hips, the buttock, or the knees and never with the hands. Las Casas noted that when women played the game they did not use their hips or shoulders, but their knees. Points were earned when the ball failed to be returned from a non-faulted play (similar to the earning of points in today's volleyball). Play continued until the number of predetermined points was earned by a side. Often, players and chiefs made bets or wagers on the possible outcome of a game. These wagers were paid after a game was concluded.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30627895",
"title": "Overhand throwing motion",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 410,
"text": "The overhand throw is a complex motor skill that involves the entire body in a series of linked movements starting from the legs, progressing up through the pelvis and trunk, and culminating in a ballistic motion in the arm that propels a projectile forward. It is used almost exclusively in athletic events. The throwing motion can be broken down into three basic steps: cocking, accelerating, and releasing.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11313086",
"title": "Fastpitch softball",
"section": "Section::::History.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 455,
"text": "During those years, the women's games were popular and fun to watch but the real draws were the men's games. Pitchers that could hurl the ball in excess of 85 mph at a batter 46 feet away could strike out 15 to 20 batters a game. To make things even more difficult, the underhand delivery meant the ball was rising as it approached the plate and a talented pitcher could make the ball perform some baffling aerobatics on its journey to the batter's box. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "40913155",
"title": "Indigenous North American stickball",
"section": "Section::::The modern game.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 535,
"text": "In contemporary stickball games, it is not unusual to see women playing. Female stickball players are the only players on the field who are not required to use sticks and are allowed to pick up the ball with their hands, while men are always required to play with a pair of stickball sticks. Teams are usually split into men vs. women for social games. The men will suffer some sort of penalty or disqualification for being too aggressive towards the women players, but the women have no such restrictions on their methods of playing.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16192026",
"title": "Ernie Awards",
"section": "Section::::Winners.:Gold Ernie.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 250,
"text": "BULLET::::- 1999: \"Magistrate #1\" (in a case reviewed by the Judicial Commission): \"Women cause a lot of problems by nagging, bitching and emotionally hurting men. Men cannot bitch back for hormonal reasons, and often have no recourse but violence.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7239865",
"title": "Women in combat",
"section": "Section::::Specific countries.:United States.\n",
"start_paragraph_id": 48,
"start_character": 0,
"end_paragraph_id": 48,
"end_character": 295,
"text": "As far back as the Revolutionary War, when Molly Pitcher took over a cannon after her husband fell in the field, where she was delivering water (in pitchers), women have at times been forced into combat, though until recently they have been formally banned from choosing to do so intentionally.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
19hay6
|
Will Microscopes ever be powerful enough that we can view individual molecules?
|
[
{
"answer": "[Already exists](_URL_0_). That is from an atomic force microscope; optical microscopes are limited by the wavelength of light and laws of optics.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "102858",
"title": "Cell theory",
"section": "Section::::Microscopes.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 381,
"text": "Optical microscopes can focus on objects the size of a wavelength or larger, giving restrictions still to advancement in discoveries with objects smaller than the wavelengths of visible light. Later in the 1920s, the electron microscope was developed, making it possible to view objects that are smaller than optical wavelengths, once again, changing the possibilities in science.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3088223",
"title": "Superlens",
"section": "Section::::Development and construction.:Super-imaging in the visible frequency range.\n",
"start_paragraph_id": 112,
"start_character": 0,
"end_paragraph_id": 112,
"end_character": 693,
"text": "Continual improvements in optical microscopy are needed to keep up with the progress in nanotechnology and microbiology. Advancement in spatial resolution is key. Conventional optical microscopy is limited by a diffraction limit which is on the order of 200 nanometers (wavelength). This means that viruses, proteins, DNA molecules and many other samples are hard to observe with a regular (optical) microscope. The lens previously demonstrated with negative refractive index material, a thin planar superlens, does not provide magnification beyond the diffraction limit of conventional microscopes. Therefore, images smaller than the conventional diffraction limit will still be unavailable.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30869356",
"title": "Plasmonic metamaterial",
"section": "Section::::Materials.:Superlattice.\n",
"start_paragraph_id": 30,
"start_character": 0,
"end_paragraph_id": 30,
"end_character": 253,
"text": "Possible applications include a “planar hyperlens” that could make optical microscopes able to see objects as small as DNA, advanced sensors, more efficient solar collectors, nano-resonators, quantum computing and diffraction free focusing and imaging.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29400",
"title": "Structural biology",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 275,
"text": "Biomolecules are too small to see in detail even with the most advanced light microscopes. The methods that structural biologists use to determine their structures generally involve measurements on vast numbers of identical molecules at the same time. These methods include:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30833420",
"title": "Transmission electron microscopy DNA sequencing",
"section": "Section::::Principle.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 230,
"text": "The electron microscope has the capacity to obtain a resolution of up to 100 pm, whereby microscopic biomolecules and structures such as viruses, ribosomes, proteins, lipids, small molecules and even single atoms can be observed.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4387176",
"title": "Atomic de Broglie microscope",
"section": "Section::::History.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 549,
"text": "The idea of imaging with atoms instead of light is widely discussed in the literature since the past century. Atom optics using neutral atoms instead of light could provide resolution as good as the electron microscope and be completely non-destructive, because short wavelengths on the order of a nanometer can be realized at low energy of the probing particles. \"It follows that a helium microscope with nanometer resolution is possible. A helium atom microscope will be [a] unique non-destructive tool for reflection or transmission microscopy.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6897386",
"title": "Max Planck Institute of Neurobiology",
"section": "Section::::Scientific Focus.:Research Groups.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 343,
"text": "BULLET::::- The best microscopes are only of little aid if the cells or processes to be investigated are hardly discernible from their background. Dr. Oliver Griesbeck and his Research Group Cellular Dynamics develop biosensors, which stain specific cells or change their fluorescent hue when something goes on in the investigated nerve cell.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
7qetr2
|
i need help understanding the difference between torque and rotational inertia
|
[
{
"answer": "Ok, ELI5 attempt: you can imagine rotational inertia as a measure of the total kinetic energy in a rotating object. It's the amount of energy that you have to invest to spin the object up from a standstill to the current situation, or alternatively, the energy you need to bring it to a stop.\n\nTorque, on the other hand, is the force (not energy) you need to apply in order to affect a certain change in rotation.",
"provenance": null
},
{
"answer": "All the rotational quantities you're learning are equivalent to linear motion concepts you already know.\n\n* Torque is the rotational version of force. \n* Moment of inertia is the rotational version of mass. \n* Angular acceleration is the rotational version of acceleration.\n* Angular momentum is the rotational version of linear momentum.\n* \"sum of torques = moment of inertia * angular accel\" is the rotational version of \"sum of F = m a\".\n\n... and so on.\n\nThe key difference is that because of the [arc length formula](_URL_0_), when an object rotates by a certain angle, the parts of it that are further from the center of rotation travel farther. Thus, the \"distance from the center\", aka the \"moment arm\", appears in the definitions for many of these quantities. The torque is greater if you apply a force far from the center -- this makes it easier to turn. Moment of inertia is greater if the mass is far from the center -- this makes it harder to speed up or slow down its rotation. And so on.\n\nBe careful with the analogy though: while torque is the \"rotational equivalent\" of force, it's *not* force. They're related by the equation\n\nTorque = force * moment arm * sin(angle between them)\n\nso you can see that torque has units of newtons * meters, not newtons.",
"provenance": null
},
{
"answer": "Linear | Rotating\n---|---\nForce | Torque\nInertia | Rotational Inertia\n\n\nTorque is the force you apply in a circular fashion. The moment arm is how far away from the axis that the force is applied. If I have a wrench and I'm turning a nut with it, I'm applying torque (circular force) to that nut by pushing on the end of the wrench. The distance between the center of the nut and where my hand is is the moment arm. The further from the nut my hand is (longer the wrench) the more torque is applied as long as I'm pushing with the same amount of strength.\n\nRotational inertia is just how much inertia a spinning object has. I spin a basketball on my finger by applying torque to it. Once it's spinning, it keeps spinning because it has rotational inertia, even if I apply no additional torque.\n\n[I made you a crappy picture](_URL_0_)",
"provenance": null
},
{
"answer": "Probably the easiest way to look at rotation is to think of it as having parts that are analogous to the linear motion you're probably used to dealing with. You're used to working with position, velocity, acceleration, momentum, force, mass, etc.\n\nAll of those have angular analogues when you're dealing with rotation. Angular position, angular velocity, angular acceleration, and angular momentum are easy to identify as analogues to linear position, velocity, acceleration, and momentum. It's a bit trickier to identify the analogues for force and mass, though, which is where torque and moment of inertia come in.\n\nIn linear motion, we deal with momentum, which is calculated as mass * velocity. It's constant unless a force is applied to a system. Force is defined as the rate of change of linear momentum, which is why F = ma (as acceleration is the derivative - and therefore the rate of change of - velocity). In this case, we could think of mass as a measure of how difficult it is to accelerate an object by applying a force to it.\n\nAnd it turns out that the angular analogues work pretty much the same way! We have angular momentum, which we can (in the case of a rotating system) define as rotational inertia * angular velocity, and by taking its derivative, we can find a rate of change for it that is analogous to force. This works out to being rotational inertia * angular acceleration, and it's what we call torque. Just like how you can think of mass as a measure of how difficult it is to accelerate an object, you can think of rotational inertia as being a measure of how difficult it is to cause the object to rotate faster. \n\nA moment arm is used to easily calculate the magnitude of torque (which you can calculate as moment arm * force). Simply put, it's a line between the axis that you're rotating on and the point where the force is being applied. In the door analogy, it's the distance from the hinges to the spot where you're pushing on the door. It's the reason why door knobs are on the opposite side from the hinges - it's easier to open the door by pushing on the opposite side than if you try to push next to the hinges, because you can produce more torque with the same amount of force.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "10902",
"title": "Force",
"section": "Section::::Rotations and torque.\n",
"start_paragraph_id": 128,
"start_character": 0,
"end_paragraph_id": 128,
"end_character": 511,
"text": "Torque is the rotation equivalent of force in the same way that angle is the rotational equivalent for position, angular velocity for velocity, and angular momentum for momentum. As a consequence of Newton's First Law of Motion, there exists rotational inertia that ensures that all bodies maintain their angular momentum unless acted upon by an unbalanced torque. Likewise, Newton's Second Law of Motion can be used to derive an analogous equation for the instantaneous angular acceleration of the rigid body:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24714",
"title": "Precession",
"section": "Section::::Torque-free.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 787,
"text": "Torque-free precession implies that no external moment (torque) is applied to the body. In torque-free precession, the angular momentum is a constant, but the angular velocity vector changes orientation with time. What makes this possible is a time-varying moment of inertia, or more precisely, a time-varying inertia matrix. The inertia matrix is composed of the moments of inertia of a body calculated with respect to separate coordinate axes (e.g. , , ). If an object is asymmetric about its principal axis of rotation, the moment of inertia with respect to each coordinate direction will change with time, while preserving angular momentum. The result is that the component of the angular velocities of the body about each axis will vary inversely with each axis' moment of inertia.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3096395",
"title": "Rotation around a fixed axis",
"section": "Section::::Vector expression.\n",
"start_paragraph_id": 60,
"start_character": 0,
"end_paragraph_id": 60,
"end_character": 442,
"text": "The torque vector points along the axis around which the torque tends to cause rotation. To maintain rotation around a fixed axis, the total torque vector has to be along the axis, so that it only changes the magnitude and not the direction of the angular velocity vector. In the case of a hinge, only the component of the torque vector along the axis has an effect on the rotation, other forces and torques are compensated by the structure.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "24714",
"title": "Precession",
"section": "Section::::Torque-induced.:Classical (Newtonian).\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 362,
"text": "Due to the way the torque vectors are defined, it is a vector that is perpendicular to the plane of the forces that create it. Thus it may be seen that the angular momentum vector will change perpendicular to those forces. Depending on how the forces are created, they will often rotate with the angular momentum vector, and then circular precession is created.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12939",
"title": "Geometric algebra",
"section": "Section::::Examples and applications.:Rotating systems.\n",
"start_paragraph_id": 144,
"start_character": 0,
"end_paragraph_id": 144,
"end_character": 209,
"text": "The mathematical description of rotational forces such as torque and angular momentum often makes use of the cross product of vector calculus in three dimensions with a convention of orientation (handedness).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "42896",
"title": "Viscometer",
"section": "Section::::Rotational viscometers.\n",
"start_paragraph_id": 41,
"start_character": 0,
"end_paragraph_id": 41,
"end_character": 222,
"text": "Rotational viscometers use the idea that the torque required to turn an object in a fluid is a function of the viscosity of that fluid. They measure the torque required to rotate a disk or bob in a fluid at a known speed.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31075021",
"title": "Rotary actuator",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 364,
"text": "The motion produced by an actuator may be either continuous rotation, as for an electric motor, or movement to a fixed angular position as for servomotors and stepper motors. A further form, the torque motor, does not necessarily produce any rotation but merely generates a precise torque which then either causes rotation, or is balanced by some opposing torque.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
47jq35
|
electricity supply
|
[
{
"answer": "Houses have 3 power lines. Imagine a top, bottom and middle line. Top and bottom are 240V with respect to each other. The middle line is there to give you 120V between it and the top or bottom line.",
"provenance": null
},
{
"answer": "If you plug a 240 appliance into a 120 outlet, chances are it just won't work. Voltage is electric \"pressure\". ",
"provenance": null
},
{
"answer": "For a simple resistive load a 240V unit rated at 10 amps will draw 5 amps at 120V. Current and voltage are **not** inversely proportional in this case. I=E/R.\n\nIf you plug in a unit which can automatically work anywhere between 120 and 240 volts, then the current at 120V will be approx twice the current as at 240V. In this case they are inversely proportional. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "3270043",
"title": "Electric power",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 557,
"text": "Electric power is usually produced by electric generators, but can also be supplied by sources such as electric batteries. It is usually supplied to businesses and homes (as domestic mains electricity) by the electric power industry through an electric power grid. Electric energy is usually sold by the kilowatt hour (1 kW·h = 3.6 MJ) which is the product of the power in kilowatts multiplied by running time in hours. Electric utilities measure power using an electricity meter, which keeps a running total of the electric energy delivered to a customer.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "219042",
"title": "Power supply",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 1011,
"text": "A power supply is an electrical device that supplies electric power to an electrical load. The primary function of a power supply is to convert electric current from a source to the correct voltage, current, and frequency to power the load. As a result, power supplies are sometimes referred to as electric power converters. Some power supplies are separate standalone pieces of equipment, while others are built into the load appliances that they power. Examples of the latter include power supplies found in desktop computers and consumer electronics devices. Other functions that power supplies may perform include limiting the current drawn by the load to safe levels, shutting off the current in the event of an electrical fault, power conditioning to prevent electronic noise or voltage surges on the input from reaching the load, power-factor correction, and storing energy so it can continue to power the load in the event of a temporary interruption in the source power (uninterruptible power supply).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "175959",
"title": "Mains electricity",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 464,
"text": "Mains electricity (as it is known in the UK and some parts of Canada; US terms include grid power, wall power, and domestic power; in much of Canada it is known as hydro) is the general-purpose alternating-current (AC) electric power supply. It is the form of electrical power that is delivered to homes and businesses, and it is the form of electrical power that consumers use when they plug domestic appliances, televisions and electric lamps into wall outlets.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3270043",
"title": "Electric power",
"section": "Section::::Electric power industry.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 473,
"text": "The electric power industry provides the production and delivery of power, in sufficient quantities to areas that need electricity, through a grid connection. The grid distributes electrical energy to customers. Electric power is generated by central power stations or by distributed generation. The electric power industry has gradually been trending towards deregulation - with emerging players offering consumers competition to the traditional public utility companies.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "27642888",
"title": "Electric utility",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 283,
"text": "An electric utility is a company in the electric power industry (often a public utility) that engages in electricity generation and distribution of electricity for sale generally in a regulated market. The electrical utility industry is a major provider of energy in most countries.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9550",
"title": "Electricity",
"section": "Section::::Concepts.:Electric power.\n",
"start_paragraph_id": 61,
"start_character": 0,
"end_paragraph_id": 61,
"end_character": 710,
"text": "Electricity generation is often done with electric generators, but can also be supplied by chemical sources such as electric batteries or by other means from a wide variety of sources of energy. Electric power is generally supplied to businesses and homes by the electric power industry. Electricity is usually sold by the kilowatt hour (3.6 MJ) which is the product of power in kilowatts multiplied by running time in hours. Electric utilities measure power using electricity meters, which keep a running total of the electric energy delivered to a customer. Unlike fossil fuels, electricity is a low entropy form of energy and can be converted into motion or many other forms of energy with high efficiency.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9540",
"title": "Electricity generation",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 334,
"text": "Electricity generation is the process of generating electric power from sources of primary energy. For electric utilities in the electric power industry, it is the first stage in the delivery of electricity to end users, the other stages being transmission, distribution, energy storage and recovery, using the pumped-storage method.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
ck8cze
|
why do muscles stiffen and lose flexibility? and why does stretching sometimes feel good and sometimes hurt?
|
[
{
"answer": "Lots and lots of reasons. But ELI5. Muscles get stiff because they get used to being short and all the fibres get tighter and closer together. It can also be because of literal knots in the muscle. Imagine you cut a piece of string in half, to make it whole you have to tie a knot in it. The string is shorter but it’s whole. These are knots and there can be thousands. Thanks to healing and massage those cuts can be healed to normal. \n\nPain when stretching is normally due to excessive tearing. It’s your body screaming at you to stop. It feels good because of other reasons that I’m not clear on.",
"provenance": null
},
{
"answer": "It is rarely muscle fibers that are the issue. For most of us it is the facia that runs through the muscles that becomes inflexible and inflamed. This prevents the muscle fibers from contracting and stretching as they are designed to do. \nAs for feeling good after stretches (done properly), you have ‘freed’ those fibers and lessened the inflammation in the area.",
"provenance": null
},
{
"answer": "NASM Certified Trainer here, the ELIF simple explanation for the stretching would be: it feels good when loosening (getting the knots out) or prepping the muscles for activity. It can often hurt when you are trying to become more flexible because when you push your muscles past it's normal range your body has 2 alarms. The first will try to resist and bring the muscle back to avoid injury (this is where it hurts) but after 30 seconds some complicated process happens and the second alarm will tell the body to relax and loosen up to avoid injury. \n\nSo make sure to hold your stretches for 30second!:)",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "5300182",
"title": "Flexibility (anatomy)",
"section": "Section::::Stretching.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 460,
"text": "Flexibility is improved by stretching. Stretching should only be started when muscles are warm and the body temperature is raised. To be effective while stretching, force applied to the body must be held just beyond a feeling of pain and needs to be held for at least ten seconds. Increasing the range of motion creates good posture and develops proficient performance in everyday activities increasing the length of life and overall health of the individual.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "39595",
"title": "Human leg",
"section": "Section::::Structure.:Flexibility.:Stretching.\n",
"start_paragraph_id": 45,
"start_character": 0,
"end_paragraph_id": 45,
"end_character": 451,
"text": "Stretching prior to strenuous physical activity has been thought to increase muscular performance by extending the soft tissue past its attainable length in order to increase range of motion. Many physically active individuals practice these techniques as a “warm-up” in order to achieve a certain level of muscular preparation for specific exercise movements. When stretching, muscles should feel somewhat uncomfortable but not physically agonizing.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29930733",
"title": "Precor StretchTrainer",
"section": "Section::::Physical benefits.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 426,
"text": "An active stretching regimen can strengthen muscles because stretching affects muscles in a way similar to strength training, just on a smaller scale. A stretching regimen has been shown to increase weight-lifting abilities, improve endurance, and assist in plyometrics. Research shows that StretchTrainer users can increase their flexibility (as judged by a basic sit and reach test) after 30 days of use, regardless of age.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "868983",
"title": "Stretching",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 367,
"text": "Stretching is a form of physical exercise in which a specific muscle or tendon (or muscle group) is deliberately flexed or stretched in order to improve the muscle's felt elasticity and achieve comfortable muscle tone. The result is a feeling of increased muscle control, flexibility, and range of motion. Stretching is also used therapeutically to alleviate cramps.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "868983",
"title": "Stretching",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 469,
"text": "Stretching can be dangerous when performed incorrectly. There are many techniques for stretching in general, but depending on which muscle group is being stretched, some techniques may be ineffective or detrimental, even to the point of causing hypermobility, instability, or permanent damage to the tendons, ligaments, and muscle fiber. The physiological nature of stretching and theories about the effect of various techniques are therefore subject to heavy inquiry.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "868983",
"title": "Stretching",
"section": "Section::::Effectiveness.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 705,
"text": "There are different positives and negatives for the two main types of stretching: static and dynamic. Static stretching is better at creating a more intense stretch because it is able to isolate a muscle group better. But this intense of a stretch may hinder one's athletic performance because the muscle is being over stretched while held in this position and, once the tension is released, the muscle will tend to tighten up and may actually become weaker than it was previously . Also, the longer the duration of static stretching, the more exhausted the muscle becomes. This type of stretching has been shown to have negative results on athletic performance within the categories of power and speed .\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "828050",
"title": "Warming up",
"section": "Section::::Stretching.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 209,
"text": "Stretching is part of some warm up routines, although a study in 2013 indicates that it weakens muscles in that situation. There are 3 types of stretches: ballistic stretching, dynamic, and static stretching:\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
30pdom
|
what is a proxy war? what sets it apart from a traditional war/conflict?
|
[
{
"answer": "It's when two countries fight a war using other, smaller countries like puppets. Watch this: _URL_0_",
"provenance": null
},
{
"answer": "You have a brother called Joe, and a sister called Suzy, now you don't want to fight Joe because your parent will get mad at you, so you give Suzy some candy to pick a fight with Joe. Your parents don't get upset with you, because you're not involved.",
"provenance": null
},
{
"answer": "I'm writing my graduate thesis on this topic, so I should be able to answer it. \n\nProxy war is one of those words that gets thrown around a lot with little thought as to its meaning. It has existed for hundreds of centuries, but it is really only within the last 10 years that is receiving serious study by academics, though there is still a long way to go. \n\nProxy war is *external actor(s) seeking to indirectly influence the outcome of a conflict in pursuit of their strategic policy objectives by providing direct and intentional assistance to an existing actor in the conflict.* (I should point out that this definition of is a modified version of the definition provided by Andrew Mumford and also includes input from Geraint Hughes' description of proxy warfare and Daniel Byman’s definition of state sponsorship of terrorism). \n\nWhat does that mean exactly? It means that a state or a nonstate group (which I will call the *benefactor*) is providing assistance to one or more groups fighting in a war (which I will call *proxies*). The benefactors are providing support to their proxies with the belief that their assistance will influence the outcome of the war in a way that is beneficial to them. And the benefactor doesn't want to fight the war itself, so they are supporting someone who will fight on their behalf. The proxies want this support because they believe that it will help them win the war by giving them access to things--like weapons, money, training, intelligence, logistical support, and other fighters--that they would not be able to get normally. This relationship is often covert because neither the benefactor nor the proxy want the world to know of their relationship. \n\nIt may also help to explain what proxy war is *not.* First, it is rarely (if ever) one country fighting on behalf of another country. The most commonly cited example of this phenomenon is Cuba sending 30,000 soldiers to fight in the Angolan Civil War. Most people assume that Cuba did this on behalf of the Soviet Union, but recent studies have shown that not to be the case. Second, proxy war is not diplomatically supporting a group fighting a war. Unless there is the direct transfer of materials or support to the group to help them win the war, it is not proxy warfare. \n\nAs to your second question, what makes proxy warfare different is the additional level of support behind the parties of the war. However, in terms of how the fighting is conducted on the ground it is very similar to normal war except that it is usually bloodier and lasts several years longer. The existence of a proxy relationship complicates the war because it means that the proxy will always have its base of support outside of the conflict zone, and therefore will be much harder to defeat. \n\nEDIT: Wording",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "578611",
"title": "Proxy war",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 531,
"text": "A proxy war is an armed conflict between two states or non-state actors which act on the instigation or on behalf of other parties that are not directly involved in the hostilities. In order for a conflict to be considered a proxy war, there must be a direct, long-term relationship between external actors and the belligerents involved. The aforementioned relationship usually takes the form of funding, military training, arms, or other forms of material assistance which assist a belligerent party in sustaining its war effort.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "53451",
"title": "Hegemony",
"section": "Section::::Historical examples.:20th century.\n",
"start_paragraph_id": 33,
"start_character": 0,
"end_paragraph_id": 33,
"end_character": 284,
"text": "Proxy wars became battle grounds between forces supported either directly or indirectly by the hegemonic powers and included the Korean War, the Laotian Civil War, the Arab–Israeli conflict, the Vietnam War, the Afghan War, the Angolan Civil War, and the Central American Civil Wars.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "578611",
"title": "Proxy war",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 1758,
"text": "Since the early twentieth century, proxy wars have most commonly taken the form of states assuming the role of sponsors to non-state proxies, essentially using them as fifth columns to undermine an adversarial power. This type of proxy warfare includes external support for a faction engaged in a civil war, terrorists, national liberation movements, and insurgent groups, or assistance to a national revolt against foreign occupation. For example, the British partly organized and instigated the Arab Revolt to undermine the Ottoman Empire during World War I. Many proxy wars began assuming a distinctive ideological dimension after the Spanish Civil War, which pitted the fascist political ideology of Italy and National Socialist ideology of Nazi Germany against the communist ideology of the Soviet Union without involving these states in open warfare with each other. Sponsors of both sides also used the Spanish conflict as a proving ground for their own weapons and battlefield tactics. During the Cold War, proxy warfare was motivated by fears that a conventional war between the United States and Soviet Union would result in nuclear holocaust, rendering the use of ideological proxies a safer way of exercising hostilities. The Soviet government found that supporting parties antagonistic to the US and Western nations was a cost-effective way to combat NATO influence in lieu of direct military engagement. In addition, the proliferation of televised media and its impact on public perception made the US public especially susceptible to war-weariness and skeptical of risking American life abroad. This encouraged the American practice of arming insurgent forces, such as the funneling of supplies to the mujahideen during the Soviet–Afghan War.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "32817779",
"title": "Cold peace",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 417,
"text": "It is contrasted against a cold war, in which at least two states which are not openly pursuing a state of war against each other, openly or covertly support conflicts between each other's client states or allies. Cold peace, while marked by similar levels of mistrust and antagonistic domestic policy between the two governments and populations, do not result in proxy wars, formal incursions, or similar conflicts.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2231892",
"title": "War of succession",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 371,
"text": "A war of succession or succession war is a war prompted by a succession crisis in which two or more individuals claim the right of successor to a deceased or deposed monarch. The rivals are typically supported by factions within the royal court. Foreign powers sometimes intervene, allying themselves with a faction. This may widen the war into one between those powers.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "50650174",
"title": "Coalition Wars",
"section": "Section::::Terminology.:Compared to other terms.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 494,
"text": "Because it only pertains to wars involving any of the Coalition parties, not all wars counted amongst the French Revolutionary and Napoleonic Wars are considered \"Coalition Wars\". For example, the French invasion of Switzerland (1798, between the First and Second Coalition), the Stecklikrieg (1802, between the Second and Third Coalition) and the French invasion of Russia (1812, between the Fifth and Sixth Coalition) were not \"Coalition Wars\", since France fought against a single opponent.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18537936",
"title": "Uppsala Conflict Data Program",
"section": "Section::::UCDP's definitions of organized violence.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 485,
"text": "State-based conflict refers to what most people intuitively perceive as \"war\"; fighting either between two states, or between a state and a rebel group that challenges it. The UCDP defines an armed state-based conflict as: \"An armed conflict is a contested incompatibility that concerns government and/or territory where the use of armed force between two parties, of which at least one is the government of a state, results in at least 25 battle-related deaths in one calendar year\".\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
797zly
|
why does vision degrade when you are tired?
|
[
{
"answer": "Your eyes become fatigued, as they are working muscles. So after a long day of using your eyes, they need that rest. Usually by the time you’re tired your eyes have been strained enough to feel that fatigue. It can also sometimes make you think that you are tired when your eyes just need resting too, especially if you use bright light objects such as a computer phone or television for an extensive amount of time. ",
"provenance": null
},
{
"answer": "Eye tech here. May I also add that if you have been staring at a screen (computer, tv, phone) for any length of time you tend to blink less often. The less you blink, the more dry your eyes get. You need a complete tear film for clear vision. It's actually part of the refractive process. If you live in a dry climate, it's even more difficult. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "554130",
"title": "Adaptation (eye)",
"section": "Section::::Dark adaptation.:Measuring Dark Adaptation.:Using Dark Adaptation Measurement to Diagnose Disease.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 1159,
"text": "Numerous clinical studies have shown that dark adaptation function is dramatically impaired from the earliest stages of AMD, retinitis pigmentosa (RP), and other retinal diseases, with increasing impairment as the diseases progress. AMD is a chronic, progressive disease that causes a part of your retina, called the macula, to slowly deteriorate as you get older. It is also the leading cause of vision loss among people age 50 and older. It is characterized by a breakdown of the RPE/Bruch's membrane complex in the retina, leading to an accumulation of cholesterol deposits in the macula. Eventually, these deposits become clinically-visible drusen that affect photoreceptor health, causing inflammation and a predisposition to choroidal neovascularization (CNV). During the AMD disease course, the RPE/Bruch's function continues to deteriorate, hampering nutrient and oxygen transport to the rod and cone photoreceptors. As a side effect of this process, the photoreceptors exhibit impaired dark adaptation because they require these nutrients for replenishment of photopigments and clearance of opsin to regain scotopic sensitivity after light exposure.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "40137538",
"title": "Accommodative infacility",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 414,
"text": "Accommodative infacility is the inability to change the accommodation of the eye with enough speed and accuracy to achieve normal function. This can result in visual fatigue, headaches, and difficulty reading. The delay in accurate accommodation also makes vision blurry for a moment when switching between distant and near objects. The duration and extent of this blurriness depends on the extent of the deficit.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1070221",
"title": "Human eye",
"section": "Section::::Clinical significance.:Eye disease.\n",
"start_paragraph_id": 73,
"start_character": 0,
"end_paragraph_id": 73,
"end_character": 1127,
"text": "As the eye ages, certain changes occur that can be attributed solely to the aging process. Most of these anatomic and physiologic processes follow a gradual decline. With aging, the quality of vision worsens due to reasons independent of diseases of the aging eye. While there are many changes of significance in the non-diseased eye, the most functionally important changes seem to be a reduction in pupil size and the loss of accommodation or focusing capability (presbyopia). The area of the pupil governs the amount of light that can reach the retina. The extent to which the pupil dilates decreases with age, leading to a substantial decrease in light received at the retina. In comparison to younger people, it is as though older persons are constantly wearing medium-density sunglasses. Therefore, for any detailed visually guided tasks on which performance varies with illumination, older persons require extra lighting. Certain ocular diseases can come from sexually transmitted diseases such as herpes and genital warts. If contact between the eye and area of infection occurs, the STD can be transmitted to the eye.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3132756",
"title": "Troxler's fading",
"section": "Section::::Explanation of effect.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 544,
"text": "Troxler's fading can occur without any extraordinary stabilization of the retinal image in peripheral vision because the neurons in the visual system beyond the rods and cones have large receptive fields. This means that the small, involuntary eye movements made when fixating on something fail to move the stimulus onto a new cell's receptive field, in effect giving unvarying stimulation. Further experimentation this century by Hsieh and Tse showed that at least some portion of the perceptual fading occurred in the brain, not in the eyes.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "794008",
"title": "Dry eye syndrome",
"section": "Section::::Causes.:Additional causes.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 518,
"text": "Aging is one of the most common causes of dry eyes because tear production decreases with age. Several classes of medications (both prescription and OTC) have been hypothesized as a major cause of dry eye, especially in the elderly. Particularly, anticholinergic medications that also cause dry mouth are believed to promote dry eye. Dry eye may also be caused by thermal or chemical burns, or (in epidemic cases) by adenoviruses. A number of studies have found that diabetics are at increased risk for the disease.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16413778",
"title": "Ageing",
"section": "Section::::Effects.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 550,
"text": "Age can result in visual impairment, whereby non-verbal communication is reduced, which can lead to isolation and possible depression. Older adults, however, may not suffer depression as much as younger adults, and were paradoxically found to have improved mood despite declining physical health. Macular degeneration causes vision loss and increases with age, affecting nearly 12% of those above the age of 80. This degeneration is caused by systemic changes in the circulation of waste products and by growth of abnormal vessels around the retina.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5536415",
"title": "Retinitis",
"section": "Section::::Symptoms.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 469,
"text": "The first symptom of this disease is usually a slow loss of vision. Early signs of Retinitis include loss of night vision; making it harder to drive at night. Later signs of retinitis include loss of peripheral vision, leading to tunnel vision. In some cases, symptoms are experienced in only one of the eyes. Experiencing the vision of floaters, flashes, blurred vision and loss of side vision in just one of the eyes is an early indication of the onset of Retinitis.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
ekwx4r
|
Was there any condemnation in the 1500s towards Luther's piece on the Jewish community?
|
[
{
"answer": "I wasn't active here for a while and am looking through the backlog of Judaism related questions, and think this is a great one.\n\nLuther's views on Jews in that essay were by no means universal and perhaps more vitriolic than the average, but they did reflect his era's view on Jews in general, were not considered particularly unusual, and were very influential. \n\nHis attitude in general toward Judaism was very much shaped by the prevailing Christian beliefs in Judaism as the antithesis to Christianity, in supersessionism (the idea that with the New Testament, Christians had become the \"real Jews\"), and in Jews as a malevolent force. It's unlikely that Luther himself ever met more than a few Jews in his life, despite living among them; however, he would have had plenty of material, theological and otherwise, on which to rely in his formulations here. For centuries, Jews had been used as examples of veniality, heresy, blindness to reason and truth, falsehood, and arrogance. Martin Luther simply continued in this tradition in many ways, in that he used his perception of Judaism and Jews as a foil for how he saw Christianity to be, as in the much older comparison of the frail, blindfolded Synagoga with the youthful, forward-seeing Ecclesia. However, it is undoubted that these feelings about Judaism went beyond a rhetorical device and were instead an actual sentiment felt about actual Jews.\n\nSome draw a sharp distinction between early Luther and late Luther in terms of his antisemitism, saying that he was actually somewhat friendly to Jews in his early life and only in his later years grew virulently antisemitic. However, while there is certainly a difference in the level of the rhetoric regarding Jews, the actual feelings were essentially the same. As I mention in [this answer](_URL_0_), no matter how benevolent Christian theologians and academics ever were toward Jews, it was nearly always from a position of superiority and disdain and often with an eye on conversion. In fact, a main feature of Luther's early writing about Jews is his opinion that if Christians treat Jews badly then they won't want to convert. However, it seems that later in his life he became less tolerant, and soon raged against Jewish practice of Judaism as a heresy against Christianity but no longer believed that conversion was possible.\n\nIt was at this point that his statements about Jews became far more violent than merely advocating for conversion. He made recommendations like \"... first to set fire to their synagogues or schools ... to raze and destroy their houses ... to take all their prayer books and Talmudic writings ... that their rabbis be forbidden to teach henceforth on pain of loss of life and limb ... that safe conduct on the highways be abolished completely ... that that usury be prohibited to them, and that all cash and treasure of silver and gold be taken from them and put aside for safekeeping...\" In these recommendations he was NOT necessarily supported by other Christian scholars of his era; however, this was because of the violent nature of his statements rather than a fundamental disagreement between him and them about the role of Jews, their inferiority, and their heresy. These scholars would have preferred something of a benign disdain, enlightened curiosity, and subtle (or not so subtle!) attempts at conversion. Luther was seen as vulgar in his recommendations by other theologians, not necessarily as wrong in his opinions about Jews from a theological perspective. 
\n\nBell, \"Martin Luther and the Jews: Context and Content\"\n\nRudnick, \"Early Modern Hate Speech- Martin Luther's Anti-Semitism Responses and Reactions\"",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "14606749",
"title": "Vom Schem Hamphoras",
"section": "",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 824,
"text": "Luther argued that the Jews were no longer the chosen people but \"the devil's people\". An English translation of \"Vom Schem Hamphoras\" was first published in 1992 as part of \"The Jew In Christian Theology\" by Gerhard Falk. Historians have noted Luther's writings contributed to antisemitism within the German provinces during his era. Historical evidence shows that the Nazi Party in the 1930s and 1940s used Luther's writings to build up antisemitism under their rule, by exerting pressure on schools to incorporate it into the curriculum, and the Lutheran church to incorporate it into sermons. Whether or not Luther's writings were a leading force for antisemitism in Europe over the past 500 years is currently being debated by historians. Nevertheless, it is clear that his writings were used extensively by the Nazis.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16653517",
"title": "Martin Luther and antisemitism",
"section": "Section::::Influence on modern antisemitism.\n",
"start_paragraph_id": 47,
"start_character": 0,
"end_paragraph_id": 47,
"end_character": 965,
"text": "Other scholars assert that Luther's antisemitism as expressed in \"On the Jews and Their Lies\" is based on religion. Bainton asserts that Luther's position was \"entirely religious and in no respect racial. The supreme sin for him was the persistent rejection of God's revelation of himself in Christ. The centuries of Jewish suffering were themselves a mark of the divine displeasure. They should be compelled to leave and go to a land of their own. This was a program of enforced Zionism. But if it were not feasible, then Luther would recommend that the Jews be compelled to live from the soil. He was unwittingly proposing a return to the condition of the early Middle Ages, when the Jews had been in agriculture. Forced off the land, they had gone into commerce and, having been expelled from commerce, into money lending. Luther wished to reverse the process and thereby inadvertently would accord the Jews a more secure position than they enjoyed in his day.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16653517",
"title": "Martin Luther and antisemitism",
"section": "Section::::Influence on modern antisemitism.\n",
"start_paragraph_id": 43,
"start_character": 0,
"end_paragraph_id": 43,
"end_character": 1154,
"text": "The prevailing view among historians is that Luther's anti-Jewish rhetoric contributed significantly to the development of antisemitism in Germany, and in the 1930s and 1940s provided an ideal foundation for the Nazi Party's attacks on Jews. Reinhold Lewin writes that \"whoever wrote against the Jews for whatever reason believed he had the right to justify himself by triumphantly referring to Luther.\" According to Michael, just about every anti-Jewish book printed in the Third Reich contained references to and quotations from Luther. Diarmaid MacCulloch argues that Luther's 1543 pamphlet \"On the Jews and Their Lies\" was a \"blueprint\" for the Kristallnacht. Shortly after the Kristallnacht, Martin Sasse, Bishop of the Evangelical Lutheran Church in Thuringia, published a compendium of Martin Luther's writings; Sasse \"applauded the burning of the synagogues\" and the coincidence of the day, writing in the introduction, \"On November 10, 1938, on Luther's birthday, the synagogues are burning in Germany.\" The German people, he urged, ought to heed these words \"of the greatest anti-Semite of his time, the warner of his people against the Jews.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16653517",
"title": "Martin Luther and antisemitism",
"section": "Section::::Luther's words and scholarship.\n",
"start_paragraph_id": 81,
"start_character": 0,
"end_paragraph_id": 81,
"end_character": 532,
"text": "In 1988, theologian Stephen Westerholm argued that Luther's attacks on Jews were part and parcel of his attack on the Catholic Church—that Luther was applying a Pauline critique of Phariseism as legalistic and hypocritical to the Catholic Church. Westerholm rejects Luther's interpretation of Judaism and his apparent antisemitism but points out that whatever problems exist in Paul's and Luther's arguments against Jews, what Paul, and later, Luther, were arguing \"for\" was and continues to be an important vision of Christianity.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16653517",
"title": "Martin Luther and antisemitism",
"section": "Section::::Debate on influence on Nazis.\n",
"start_paragraph_id": 57,
"start_character": 0,
"end_paragraph_id": 57,
"end_character": 467,
"text": "Michael has argued that Luther scholars who try to tone down Luther's views on the Jews ignore the murderous implications of his antisemitism. Michael argues that there is a \"strong parallel\" between Luther's ideas and the antisemitism of most German Lutherans throughout the Holocaust. Like the Nazis, Luther mythologized the Jews as evil, he writes. They could be saved only if they converted to Christianity, but their hostility to the idea made it inconceivable.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16653517",
"title": "Martin Luther and antisemitism",
"section": "Section::::Evolution of his views.:Anti-Jewish agitation.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 648,
"text": "Josel of Rosheim, who tried to help the Jews of Saxony, wrote in his memoir that their situation was \"due to that priest whose name was Martin Luther — may his body and soul be bound up in hell!! — who wrote and issued many heretical books in which he said that whoever would help the Jews was doomed to perdition.\" Robert Michael, Professor Emeritus of European History at the University of Massachusetts Dartmouth writes that Josel asked the city of Strasbourg to forbid the sale of Luther's anti-Jewish works; they refused initially, but relented when a Lutheran pastor in Hochfelden argued in a sermon that his parishioners should murder Jews.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16653517",
"title": "Martin Luther and antisemitism",
"section": "Section::::The influence of Luther's views.\n",
"start_paragraph_id": 40,
"start_character": 0,
"end_paragraph_id": 40,
"end_character": 663,
"text": "Luther's treatises against the Jews were reprinted again early in the 17th century at Dortmund, where they were seized by the Emperor. In 1613 and 1617 they were published in Frankfurt am Main in support of the banishment of Jews from Frankfurt and Worms. Vincenz Fettmilch, a Calvinist, reprinted \"On the Jews and Their Lies\" in 1612 to stir up hatred against the Jews of Frankfurt. Two years later, riots in Frankfurt saw the deaths of 3,000 Jews and the expulsion of the rest. Fettmilch was executed by the Lutheran city authorities, but Michael writes that his execution was for attempting to overthrow the authorities, not for his offenses against the Jews.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1zbkfi
|
the median of something vs. the average.
|
[
{
"answer": "Median is just a different way to calculate an average. The three main ways to 'average' a group of numbers are: Mean, median, and mode. \nLet's say you have the following 11 speeds caught on a radar and you need to determine the average\n(56,58,62,65,65,68,69,70, 71,74, 75)\n\nMean = 66.6 (add all #s and divide by the sample size). This is the most common use when someone talks about an average. \nMedian = 68 (the middle number when #s are sorted by size. 68 in this example is the 6th sample counting up from the smallest and 6th sample counting down from the largest sample) \nMode = 65 (mode is the most frequently sampled speed as there were 2 samples at that speed whereas there is only 1 sample for all the other speeds) \n\nThe type of average used depends a lot on what the user wants to convey. Mode is often used to communicate image the most \"popular\" or most likely outcome. Median is used to identify the middle, where it may be helpful to know that half of the numbers are smaller and half the numbers are larger... You can even say that is the number where there is a 50% chance that any new sample will be larger and 50% chance that it will be smaller. ",
"provenance": null
},
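A minimal Python sketch of the same arithmetic (the `speeds` list and the expected values mirror the radar example in the answer above; `statistics` is from the standard library):

```python
from statistics import mean, median, mode

# Radar speeds from the example above (already sorted).
speeds = [56, 58, 62, 65, 65, 68, 69, 70, 71, 74, 75]

print(round(mean(speeds), 1))  # 66.6 -> "add them all and divide" average
print(median(speeds))          # 68   -> middle value: 5 samples below, 5 above
print(mode(speeds))            # 65   -> most frequent value (appears twice)
```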
{
"answer": null,
"provenance": [
{
"wikipedia_id": "18837",
"title": "Median",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 466,
"text": "The median is the value separating the higher half from the lower half of a data sample (a population or a probability distribution). For a data set, it may be thought of as the \"middle\" value. For example, in the data set {1, 3, 3, 6, 7, 8, 9}, the median is 6, the fourth largest, and also the fourth smallest, number in the sample. For a continuous probability distribution, the median is the value such that a number is equally likely to fall above or below it.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18837",
"title": "Median",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 625,
"text": "The median is a commonly used measure of the properties of a data set in statistics and probability theory. The basic advantage of the median in describing data compared to the mean (often simply described as the \"average\") is that it is not skewed so much by a small proportion of extremely large or small values, and so it may give a better idea of a \"typical\" value. For example, in understanding statistics like household income or assets, which vary greatly, the mean may be skewed by a small number of extremely high or low values. Median income, for example, may be a better way to suggest what a \"typical\" income is.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18837",
"title": "Median",
"section": "Section::::Finite set of numbers.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 361,
"text": "The median is used primarily for skewed distributions, which it summarizes differently from the arithmetic mean. Consider the multiset { 1, 2, 2, 2, 3, 14 }. The median is 2 in this case, (as is the mode), and it might be seen as a better indication of central tendency (less susceptible to the exceptionally large value in data) than the arithmetic mean of 4.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18837",
"title": "Median",
"section": "Section::::Finite set of numbers.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 483,
"text": "The median is a popular summary statistic used in descriptive statistics, since it is simple to understand and easy to calculate, while also giving a measure that is more robust in the presence of outlier values than is the mean. The widely cited empirical relationship between the relative locations of the mean and the median for skewed distributions is, however, not generally true. There are, however, various relationships for the \"absolute\" difference between them; see below.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "18837",
"title": "Median",
"section": "Section::::Finite set of numbers.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 573,
"text": "The median is one of a number of ways of summarising the typical values associated with members of a statistical population; thus, it is a possible location parameter. The median is the 2nd quartile, 5th decile, and 50th percentile. Since the median is the same as the \"second quartile\", its calculation is illustrated in the article on quartiles. A median can be worked out for ranked but not numerical classes (e.g. working out a median grade when students are graded from A to F), although the result might be halfway between grades if there is an even number of cases.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2885691",
"title": "Robust statistics",
"section": "Section::::Examples.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 403,
"text": "The median is a robust measure of central tendency. Taking the same dataset {2,3,5,6,9}, if we add another datapoint with value -1000 or +1000 then the median will change slightly, but it will still be similar to the median of the original data. If we replace one of the values with a datapoint of value -1000 or +1000 then the resulting median will still be similar to the median of the original data.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "612",
"title": "Arithmetic mean",
"section": "Section::::Contrast with median.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 777,
"text": "The arithmetic mean may be contrasted with the median. The median is defined such that no more than half the values are larger than, and no more than half are smaller than, the median. If elements in the data increase arithmetically, when placed in some order, then the median and arithmetic average are equal. For example, consider the data sample formula_15. The average is formula_16, as is the median. However, when we consider a sample that cannot be arranged so as to increase arithmetically, such as formula_17, the median and arithmetic average can differ significantly. In this case, the arithmetic average is 6.2 and the median is 4. In general, the average value can vary significantly from most values in the sample, and can be larger or smaller than most of them.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1nwurp
|
if both nuclear fission and fusion generate energy, why don't we have infinite energy?
|
[
{
"answer": "You don't use the same elements in fusion and fission. The whole reason the respective processes can generate energy is because the nuclear reaction results in a nucleus that is _more stable_ than the starting nuclei.\n\nYou use a heavy element - such as uranium - for fission, while you use a light element - such as hydrogen - for fusion. You cannot reverse those and still get energy out of the reaction.",
"provenance": null
},
{
"answer": "*Some* fission reactions **release** energy and *some* fusion reactions **release** energy. If you take any particular reaction which releases energy, reversing it requires using energy. In general, you can fuse things to release energy up to iron, but above that, fusing takes more than you get out.",
"provenance": null
},
{
"answer": "This assumes that you can create an environment in which you can create a controlled fission reaction that then feeds into a controlled fusion reaction, which then can feed back into and create a new fission reaction safely. \n\n[Fusion bombs](_URL_0_) (H bombs) work this way - they use a Fission reaction* to provide the power to create the fission. This then creates an explosion - so what you are asking is why when that H bomb goes off, it doesn't then re-power the Fusion reaction that set off the Fission reaction. It's not as simple as energy out energy in - its a ton of energy out, which is absorbed, and then released into more energy out - the original material used to create that initial reaction is gone, and isn't going to re-appear.\n\nMore importantly, to my knowledge [we haven't figured out fusion yet outside of nuclear bombs](_URL_2_). If we did, theoretically, we would still be stuck figuring out how we could get the reaction that happens in [this](_URL_1_) to play nicely with the reaction that happens in [this](_URL_3_)\n\nEdited for info on Fission bombs",
"provenance": null
},
{
"answer": "thermodynamics makes this impossible. if you could connect a fission plant to a fusion plant and run it forever you would have a perpetual motion machine, which is impossible.",
"provenance": null
},
{
"answer": "We can't capture energy with 100% effeciency. There will be loss, and eventually, you'll deplete your fuel until your energy production cycle is no longer sustainable.\n\nBut that's not the worst of it. Fusion is really hard to perform, here on Earth. We're still struggling to perform reliable fusion in the technologies we have, and some of these experiments aren't designed to caputre energy for means of energy production. We are still far away with regards to this technology.\n\nFurther, we can't reasonably split anything or fuse anything. Some elements are extremely stable, which is why fission is done with heavy and unstable elements. They break down into something stable, and that's it. Some elements are too big and heavy and whatever else that prevents fusion from happening. We use hydrogen in our fusion experiments for whatever this reason is, and not heavier elements.\n\nIf you look at nature, a recent paper suggests that almost all the heavier elements (than iron, if I recall) found in the universe are the blown off debris of neutron stars colliding, and the paper suggests stars don't get big or hot enough to produce these elements the the quantities we see in the universe.\n\nSo, we can fission already unstable elements, we can barely fuse the lightest of elements, and that's it. While nothing in between is impossible, they usually don't happen outside particle accelerators, and thus, are not a possible source of energy.",
"provenance": null
},
{
"answer": "Fusion and fission *sometimes* generate energy, and sometimes require energy.\n\nSpecifically, fusion of light elements generates energy, fusion of heavy elements requires energy.\n\nFission of heavy elements generates energy, fission of light elements requires energy.\n\nSo after all the energy generation reactions, you wind up with medium sized elements (iron and nickel to be precise) that you can't really do anything with.",
"provenance": null
},
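The point about light versus heavy nuclei comes down to binding energy per nucleon, which peaks near iron. A rough numerical sketch (the MeV-per-nucleon figures are approximate textbook values, not taken from this thread):

```python
# Approximate binding energy per nucleon (MeV) for a few nuclides.
# The curve peaks near iron, which is why fusing light nuclei and
# splitting heavy nuclei both release energy, while the reverse steps cost it.
BINDING_MEV_PER_NUCLEON = {
    "H-2": 1.1,    # deuterium
    "He-4": 7.1,
    "Fe-56": 8.8,  # near the peak of the curve
    "U-235": 7.6,
}

def energy_gain(start: str, end: str) -> float:
    """Positive = the step moves toward tighter binding and releases energy."""
    return BINDING_MEV_PER_NUCLEON[end] - BINDING_MEV_PER_NUCLEON[start]

print(round(energy_gain("H-2", "He-4"), 1))    # +6.0 per nucleon: light-element fusion pays
print(round(energy_gain("U-235", "Fe-56"), 1)) # +1.2 per nucleon: heavy-element fission pays
print(round(energy_gain("Fe-56", "U-235"), 1)) # -1.2 per nucleon: pushing past iron costs energy
```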
{
"answer": null,
"provenance": [
{
"wikipedia_id": "3043836",
"title": "Nuclear binding energy",
"section": "Section::::Nuclear binding energy curve.:Binding energy and nuclide masses.\n",
"start_paragraph_id": 85,
"start_character": 0,
"end_paragraph_id": 85,
"end_character": 396,
"text": "Nuclear fusion produces energy by combining the very lightest elements into more tightly bound elements (such as hydrogen into helium), and nuclear fission produces energy by splitting the heaviest elements (such as uranium and plutonium) into more tightly bound elements (such as barium and krypton). Both processes produce energy, because middle-sized nuclei are the most tightly bound of all.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1055890",
"title": "Sustainable energy",
"section": "Section::::Sustainable energy research.:Thorium.\n",
"start_paragraph_id": 94,
"start_character": 0,
"end_paragraph_id": 94,
"end_character": 487,
"text": "There are potentially two sources of nuclear power. Fission is used in all current nuclear power plants. Fusion is the reaction that exists in stars, including the sun, and remains impractical for use on Earth, as fusion reactors are not yet available. However nuclear power is controversial politically and scientifically due to concerns about radioactive waste disposal, safety, the risks of a severe accident, and technical and economical problems in dismantling of old power plants.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21188370",
"title": "Fuel",
"section": "Section::::Nuclear.:Fusion.\n",
"start_paragraph_id": 45,
"start_character": 0,
"end_paragraph_id": 45,
"end_character": 488,
"text": "Fuels that produce energy by the process of nuclear fusion are currently not utilized by humans but are the main source of fuel for stars. Fusion fuels tend to be light elements such as hydrogen which will combine easily. Energy is required to start fusion by raising temperature so high all materials would turn into plasma, and allow nuclei to collide and stick together with each other before repelling due to electric charge. This process is called fusion and it can give out energy.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20766780",
"title": "Nuclear fusion–fission hybrid",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 618,
"text": "Hybrid nuclear fusion–fission (hybrid nuclear power) is a proposed means of generating power by use of a combination of nuclear fusion and fission processes. The basic idea is to use high-energy fast neutrons from a fusion reactor to trigger fission in otherwise nonfissile fuels like U-238 or Th-232. Each neutron can trigger several fission events, multiplying the energy released by each fusion reaction hundreds of times. This would not only make fusion designs more economical in power terms, but also be able to burn fuels that were not suitable for use in conventional fission plants, even their nuclear waste.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "481862",
"title": "Uranium-238",
"section": "Section::::Nuclear energy applications.:Breeder reactors.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 432,
"text": "U is not usable directly as nuclear fuel, though it can produce energy via \"fast\" fission. In this process, a neutron that has a kinetic energy in excess of 1 MeV can cause the nucleus of U to split in two. Depending on design, this process can contribute some one to ten percent of all fission reactions in a reactor, but too few of the average 2.5 neutrons produced in each fission have enough speed to continue a chain reaction.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "55017",
"title": "Fusion power",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 307,
"text": "Fusion power is a proposed form of power generation that would generate electricity by using heat from nuclear fusion reactions. In a fusion process, two lighter atomic nuclei combine to form a heavier nucleus, while releasing energy. Devices designed to harness this energy are known as \"fusion reactors\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22153",
"title": "Nuclear power",
"section": "Section::::Research.:Hybrid nuclear fusion-fission.\n",
"start_paragraph_id": 317,
"start_character": 0,
"end_paragraph_id": 317,
"end_character": 674,
"text": "Hybrid nuclear power is a proposed means of generating power by use of a combination of nuclear fusion and fission processes. The concept dates to the 1950s, and was briefly advocated by Hans Bethe during the 1970s, but largely remained unexplored until a revival of interest in 2009, due to delays in the realization of pure fusion. When a sustained nuclear fusion power plant is built, it has the potential to be capable of extracting all the fission energy that remains in spent fission fuel, reducing the volume of nuclear waste by orders of magnitude, and more importantly, eliminating all actinides present in the spent fuel, substances which cause security concerns.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
20nx01
|
How did medieval leaders get their armies to fight against the pope?
|
[
{
"answer": "In some cases there were Antipopes - that is, a rival claiming to be the true pope. Of course, each declared the other to be the Antipope! This was the case during Roger II of Sicily's disagreements with the papacy. In Roger's case the hostilities were usually initiated by the pope rather than the king, which might have helped. Regardless, one of the greatest disputes Roger had with the papacy revolved around the new Pope Innocent II's refusal to acknowledge him as king of Sicily, and the equally new Pope Anacletus II's promise to support him if Roger returned the favour. At this point, Roger was one of the most powerful rulers in Italy and his backing would be invaluable to Anacletus. Roger, for his part, wanted simply to have his newly assembled kingdom recognised as such, and the anti-Norman Innocent II had no intention of doing so (Houben, *Roger II: a Ruler between East and West*). In this type of case, both sides have an equal claim to righteous conviction. \n\nAs might be expected, a papal schism is not the norm when disputes between secular rulers and the papacy arise, but I think these unusual cases serve as illustrations of a more broadly applicable principle - that clergy, including even popes, can be considered illegitimate. The claim of a pope to divine correctness isn't necessarily swallowed without question. Any pope could be painted as a fraudulent pope. Remember also that God's will is in action. If a ruler goes against the pope and wins, then it was God's will all along and the pope was being ungodly. \n\nIndeed, Helene Wieruszowski argues (in a 1963 article that makes some very valid points despite its age) that the widespread support of Sicily's social elites, magnates and so forth could be taken as evidence that God was speaking through the actions of these powerful citizens - who, naturally, had themselves risen to prominence on the back of God's good will (Wieruszowski, 'Roger II of Sicily, *Rex-Tyrannus*, in Twelfth-Century Political Thought', *Speculum* 38:1). Near-universal acclaim by the divinely appointed influential elites is a ringing endorsement of the king's legitimacy and a condemnation of the pope's ungodly error. \n\nTo further muddy the waters, the pope was also ruler of a material realm in his own right, and could muster armies and negotiate treaties like any other ruler. The 1156 Treaty of Benevento between Pope Adrian IV and William I of Sicily is an interesting example. Although the concessions that it requires from Pope Adrian are ecclesiastical - that is, it demands papal recognition of the Hauteville kingdom of Sicily in perpetuity - in most respects it is like any other treaty between two rulers. (ed. Enzensberger, *Guillelmi I Regis Diplomata* and, for an English translation, ed. Loud, *The History of the Tyrants of Sicily by 'Hugo Falcandus' 1154-69*). Clearly the popes were treated as susceptible to mundane negotiation like anyone else. \n\nMy point here is there's evidence that popes weren't thought to be unassailable and so defying them wasn't necessarily as unthinkable as we might imagine. \n\nOf course, the best we can do here is to speculate based on limited evidence. There's very little record of what an 'average' citizen, soldier or otherwise, thought about anything during the middle ages. They didn't govern and they weren't literate, so couldn't write their own letters or diaries. The lesser nobility who followed the kings most likely did so out of self interest, which brings me to my final point. 
\n\nI want to throw in a thought that one of my undergraduate professors always reminded us of: people are always people. In the present day, many of us would be more likely to follow our immediate ruler over a lofty figure from hundreds or thousands of miles away, someone who is more an abstract idea than a tangible, real person. If we stand to gain (or simply to avoid hardship) by following our king or government to war against a faceless abstract concept who we have never seen and who doesn't even know we exist, many of us will go with the tangible, the things that are real to us. People are always people; follow your king against your pope because he's here and his best interests probably overlap with your own. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "4412145",
"title": "Crusades",
"section": "Section::::In the eastern Mediterranean.:13th century.\n",
"start_paragraph_id": 38,
"start_character": 0,
"end_paragraph_id": 38,
"end_character": 1163,
"text": "While the Holy Roman Empire and the Papacy were in conflict, it often fell to secular leaders to campaign. What is sometimes known as the Barons' Crusade was first led by Count Theobald I of Navarre and when he returned to his lands, by the king of England's brother, the newly arrived, Richard of Cornwall. Sultan al-Kamil had died and his family were battling for the succession in Egypt and Syria. This allowed the crusaders to follow Frederick's tactics of combined forceful diplomacy and the playing of rival factions off against each other. The sparsely populated Jerusalem was in Christian hands and the territorial reach was that of the Kingdom prior to the disaster at Hattin in 1187. This brief renaissance for Frankish Jerusalem was illusory. The nobility rejected the Emperor's son in the succession to the throne which left the Kingdom dependent on Ayyubid division, the Crusading Orders and Western Aid. In 1244 a band of Khwarazmian mercenaries travelling to Egypt to serve As-Salih Ismail, Emir of Damascus, seemingly of their own volition, captured Jerusalem en route and defeated a combined Christian and Syrian army at the Battle of La Forbie.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1425816",
"title": "Wars of Castro",
"section": "Section::::First War of Castro.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 557,
"text": "At first, Pope Urban threatened to excommunicate anyone who helped Odoardo, but Odoardo's allies insisted their conflict was not with the papacy, but rather with the Barberini family (of which the Pope happened to be a member). When this failed, the Pope attempted to call on old alliances of his own and turned to Spain for assistance. But he received little help as Spanish forces were fully occupied by the Thirty Years' War. As it was, most of the troops fighting on the side of the papacy were French, most of those fighting for the Dukes were German.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26582626",
"title": "History of the papacy (1048–1257)",
"section": "Section::::History.:Investiture Controversy.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 884,
"text": "The struggle between the temporal power of the emperors and the spiritual influence of the popes came to a head in the reigns of Pope Nicholas II (1059–1061) and Pope Gregory VII (1073–1085). The popes fought to free the appointment of bishops, abbots and other prelates from the power of secular lords and monarchs into which it had fallen. This would prevent venial men being appointed to vital church positions because it benefited political rulers. Henry IV was ultimately driven by a revolt among the German nobles to make peace with the pope and appeared before Gregory in January 1077 at Canossa. Dressed as a penitent, the emperor is said to have stood barefoot in the snow for three days and begged forgiveness until, in Gregory's words: \"We loosed the chain of the anathema and at length received him into the favor of communion and into the lap of the Holy Mother Church\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "310919",
"title": "Fifth Crusade",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 584,
"text": "Pope Innocent III and his successor Pope Honorius III organized crusading armies led by King Andrew II of Hungary and Leopold VI, Duke of Austria, and an attack against Jerusalem ultimately left the city in Muslim hands. Later in 1218, a German army led by Oliver of Cologne, and a mixed army of Dutch, Flemish and Frisian soldiers led by William I, Count of Holland joined the crusade. In order to attack Damietta in Egypt, they allied in Anatolia with the Seljuk Sultanate of Rûm which attacked the Ayyubids in Syria in an attempt to free the Crusaders from fighting on two fronts.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1569009",
"title": "Mongol invasion of Europe",
"section": "Section::::Europe at the time of the Mongol Invasion.\n",
"start_paragraph_id": 59,
"start_character": 0,
"end_paragraph_id": 59,
"end_character": 726,
"text": "In the 1240s the efforts of Christendom were already divided between five Crusades, only one of which was aimed against the Mongols. Initially when Bela sent messengers to the Pope to request a Crusade against the Mongols, the Pope tried to convince them to instead join his Crusade against the Holy Roman Emperor. Eventually Pope Gregory did promise a Crusade and the Church finally helped sanction a small Crusade against the Mongols in mid-1241, but it was diverted when he died in August 1241. Instead of fighting the Mongols, the resources gathered by the Crusade was used to fight a Crusade against the Hohenstaufen Dynasty after the German barons revolted against the Holy Roman Emperor's son Conrad in September 1241.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "44500703",
"title": "Knightly Piety",
"section": "Section::::Origins.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 663,
"text": "In 1095, Pope Urban II preached the First Crusade at Clermont. Here, the Church officially sanctioned lay knights fighting for the Faith when Urban said that any who fought would be absolved of their sins rather than tarnish their soul for killing. By this time knights were already concerned with their immortal soul enough to fight for the Church. By the time the Church began to accept warfare and create the idea of a holy war, piety had already become entrenched in the warfare of the lay knight. However, as the time of increasing church involvement was the formative period of the Chivalric Codes, it helped add another dynamic to the \"Ritterfrömmigkeit\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13271189",
"title": "Byzantine Empire under the Angelos dynasty",
"section": "Section::::Isaac II Angelos.:Fourth Crusade.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 563,
"text": "In 1198, Pope Innocent III broached the subject of a new crusade through legates and encyclical letters. There were few monarchs willing to lead the Crusade; Richard I of England was battling his former Crusader ally Phillip II Augustus – both had their fill from the Third Crusade. The Holy Roman Empire meanwhile was ravaged by civil war, as Philip of Swabia and Otto of Brunswick had both been elected Kings of Germany by rival factions. The divided Holy Roman Empire was in no position to assist her rival-in-religious authority in any military undertakings.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
5hss9g
|
if the edge of the universe to us is 45 billion light years away, could it have already stopped expanding?
|
[
{
"answer": "When we talk about the \"edge\" of the universe, we are referring to the extent of the observable universe. Space is expanding, and there is no centre to the expansion. Every point is moving away from every other point, equally in all directions. The further away you look, the more space exists between you and the point you are observing, so the faster that point appears to be moving away from you. If you look far enough, space is expanding away from you at the speed of light. This is the limit of observability, because no information about events beyond that distance will ever reach you. That occurs at a finite distance, so the observable universe is finite in extent, but we can't ever know what lies beyond. The true universe could be infinite or not. At present, all indications are that the expansion will continue, so more and more objects will continue to expand out past the observability limit and become invisible to us, until eventually our own galaxy will be the only thing in the night sky.",
"provenance": null
},
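The "expanding away at the speed of light" distance mentioned in the answer follows from Hubble's law, v = H0 * d. A small sketch assuming a round H0 of 70 km/s/Mpc (the resulting ~14 billion light-years is the Hubble radius, which is close to but not identical to the horizon figures quoted in the passages below):

```python
# Hubble's law: recession speed v = H0 * d.
# Setting v equal to the speed of light gives the distance at which
# the expansion carries a point away from us at light speed.
C_KM_PER_S = 299_792.458   # speed of light
H0 = 70.0                  # assumed Hubble constant, km/s per megaparsec
LY_PER_MPC = 3.262e6       # light-years in one megaparsec (approx.)

hubble_radius_mpc = C_KM_PER_S / H0
hubble_radius_gly = hubble_radius_mpc * LY_PER_MPC / 1e9

print(round(hubble_radius_mpc))     # ~4283 megaparsecs
print(round(hubble_radius_gly, 1))  # ~14.0 billion light-years
```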
{
"answer": null,
"provenance": [
{
"wikipedia_id": "31880",
"title": "Universe",
"section": "Section::::Physical properties.:Size and regions.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 984,
"text": "The proper distance—the distance as would be measured at a specific time, including the present—between Earth and the edge of the observable universe is 46 billion light-years (14 billion parsecs), making the diameter of the observable universe about 93 billion light-years (28 billion parsecs). The distance the light from the edge of the observable universe has travelled is very close to the age of the Universe times the speed of light, , but this does not represent the distance at any given time because the edge of the observable universe and the Earth have since moved further apart. For comparison, the diameter of a typical galaxy is 30,000 light-years (9,198 parsecs), and the typical distance between two neighboring galaxies is 3 million light-years (919.8 kiloparsecs). As an example, the Milky Way is roughly 100,000–180,000 light-years in diameter, and the nearest sister galaxy to the Milky Way, the Andromeda Galaxy, is located roughly 2.5 million light-years away.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4035",
"title": "Black",
"section": "Section::::Science.:Astronomy.:Why the night sky and space are black – Olbers' paradox.\n",
"start_paragraph_id": 86,
"start_character": 0,
"end_paragraph_id": 86,
"end_character": 672,
"text": "The current accepted answer is that, although the universe is infinitely large, it is not infinitely old. It is thought to be about 13.8 billion years old, so we can only see objects as far away as the distance light can travel in 13.8 billion years. Light from stars farther away has not reached Earth, and cannot contribute to making the sky bright. Furthermore, as the universe is expanding, many stars are moving away from Earth. As they move, the wavelength of their light becomes longer, through the Doppler effect, and shifts toward red, or even becomes invisible. As a result of these two phenomena, there is not enough starlight to make space anything but black.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5985207",
"title": "Expansion of the universe",
"section": "Section::::Theoretical basis and first evidence.:Hubble's concerns over the rate of expansion.\n",
"start_paragraph_id": 73,
"start_character": 0,
"end_paragraph_id": 73,
"end_character": 819,
"text": "However, recent measurements of the distances and velocities of faraway galaxies revealed a 9 percent discrepancy in the value of the Hubble constant, implying a universe that seems expanding too fast compared to previous measurements. In 2001, Dr. Wendy Freedman determined space to expand at 72 kilometers per second per megaparsec - roughly 3.3 million light years - meaning that for every 3.3 million light years further away from the earth you are, the matter where you are, is moving away from earth 72 kilometers a second faster. In the summer of 2016, another measurement reported a value of 73 for the constant, thereby contradicting 2013 measurements from the European Planck mission of slower expansion value of 67. The discrepancy opened new questions concerning the nature of dark energy, or of neutrinos.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "251399",
"title": "Observable universe",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 953,
"text": "According to calculations, the current \"comoving distance\"—proper distance, which takes into account that the universe has expanded since the light was emitted—to particles from which the cosmic microwave background radiation (CMBR) was emitted, which represent the radius of the visible universe, is about 14.0 billion parsecs (about 45.7 billion light-years), while the comoving distance to the edge of the observable universe is about 14.3 billion parsecs (about 46.6 billion light-years), about 2% larger. The radius of the observable universe is therefore estimated to be about 46.5 billion light-years and its diameter about 28.5 gigaparsecs (93 billion light-years, ). The total mass of ordinary matter in the universe can be calculated using the critical density and the diameter of the observable universe to be about 1.5 × 10 kg. In November 2018, astronomers reported that the extragalactic background light (EBL) amounted to 4 × 10 photons.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34579",
"title": "2000s (decade)",
"section": "Section::::Science and technology.:Science.:Scientific Marks by Field.:Space.\n",
"start_paragraph_id": 279,
"start_character": 0,
"end_paragraph_id": 279,
"end_character": 313,
"text": "BULLET::::- 2009 – Astrophysicists studying the universe confirm its age at 13.7 billion years, discover that it will most likely expand forever without limit, and conclude that only 4% of the universe's contents are ordinary matter (the other 96% being still-mysterious dark matter, dark energy, and dark flow).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11439",
"title": "Faster-than-light",
"section": "Section::::Superluminal travel of non-information.:Universal expansion.\n",
"start_paragraph_id": 33,
"start_character": 0,
"end_paragraph_id": 33,
"end_character": 850,
"text": "However, because the expansion of the universe is accelerating, it is projected that most galaxies will eventually cross a type of cosmological event horizon where any light they emit past that point will never be able to reach us at any time in the infinite future, because the light never reaches a point where its \"peculiar velocity\" towards us exceeds the expansion velocity away from us (these two notions of velocity are also discussed in Comoving and proper distances#Uses of the proper distance). The current distance to this cosmological event horizon is about 16 billion light-years, meaning that a signal from an event happening at present would eventually be able to reach us in the future if the event was less than 16 billion light-years away, but the signal would never reach us if the event was more than 16 billion light-years away.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "48916027",
"title": "2016 in science",
"section": "Section::::Events.:June.\n",
"start_paragraph_id": 157,
"start_character": 0,
"end_paragraph_id": 157,
"end_character": 223,
"text": "BULLET::::- NASA and ESA jointly announce that the Universe is expanding 5% to 9% faster than previously thought, after using the Hubble Space Telescope to measure the distance to stars in 19 galaxies beyond the Milky Way.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
5cotbn
|
why do we tend to get small violent tendencies when we get angry or have a heated argument with someone else? [biology]
|
[
{
"answer": "The \"Fight or Flight\" response to the confrontation. Anticipating a fight, a cascade of things happen to your physiology..adrenaline production, flushing, heat, respiration increasing, muscles tensing..the whole brain is prepped to go to battle. This also suppresses normal functions, like situational awareness giving way to tunnel vision, reduced perception of pain and fatigue, and most noteworthy: rapidly reduced impulse control. Impulse control in a potentially fatal situation can be deadly, and we have evolved a way of shutting it down in the face of danger: Don't *think* about the tiger in the bushes, just run. Baser impulses become difficult if not impossible to suppress, as seen when someone \"rages\". Those violent tendencies rush to the surface and find expression. \n\nWe don't even have to full on rage for this. It varies from person to person but when there is moderate stimulation of a fight-or-flight response we can observe expressions of anxiety and/or aggression. It's why people yell during sports matches. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "44494266",
"title": "Confrontation",
"section": "Section::::Confrontation between groups.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 563,
"text": "Confrontation may occur between individuals, or between larger groups. Because groups are composed of multiple individuals, with each member having their own specific triggers for a violent response to a perceived provocation, risk factors which \"may not be sufficient individually to explain collective violence, in combination [can] create conditions that may precipitate aggressive confrontations between groups\". Thus provocation of a single member of one group by a single member of the other group can lead to a confrontation between the groups as a whole.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33928201",
"title": "Behavior mutation",
"section": "Section::::Mutations affecting passive/aggressive characteristics.:Testosterone.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 381,
"text": "Other evolutionary and genetic explanations of violent behaviour include: dopamine receptors mutations, DRD2 and DRD4, that, when mutate simultaneously, are hypothesized to cause personality disorders, low serotonin levels increasing irritability and gloom and the effects of testosterone on neurotransmitter functioning to explain the increased occurrence of aggression in males.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "68672",
"title": "Anger",
"section": "Section::::Cognitive effects.\n",
"start_paragraph_id": 54,
"start_character": 0,
"end_paragraph_id": 54,
"end_character": 721,
"text": "Anger causes a reduction in cognitive ability and the accurate processing of external stimuli. Dangers seem smaller, actions seem less risky, ventures seem more likely to succeed, and unfortunate events seem less likely. Angry people are more likely to make risky decisions, and make less realistic risk assessments. In one study, test subjects primed to feel angry felt less likely to suffer heart disease, and more likely to receive a pay raise, compared to fearful people. This tendency can manifest in retrospective thinking as well: in a 2005 study, angry subjects said they thought the risks of terrorism in the year following 9/11 in retrospect were low, compared to what the fearful and neutral subjects thought.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2923321",
"title": "Emergency psychiatry",
"section": "Section::::Scope.:Violent behavior.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 1066,
"text": "Aggression can be the result of both internal and external factors that create a measurable activation in the autonomic nervous system. This activation can become evident through symptoms such as the clenching of fists or jaw, pacing, slamming doors, hitting palms of hands with fists, or being easily startled. It is estimated that 17% of visits to psychiatric emergency service settings are homicidal in origin and an additional 5% involve both suicide and homicide. Violence is also associated with many conditions such as acute intoxication, acute psychosis, paranoid personality disorder, antisocial personality disorder, narcissistic personality disorder and borderline personality disorder. Additional risk factors have also been identified which may lead to violent behavior. Such risk factors may include prior arrests, presence of hallucinations, delusions or other neurological impairment, being uneducated, unmarried, etc. Mental health professionals complete violence risk assessments to determine both security measures and treatments for the patient.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3998822",
"title": "Strain theory (sociology)",
"section": "Section::::Other strain theorists.:Robert Agnew.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 1068,
"text": "Anger and frustration confirm negative relationships. The resulting behavior patterns will often be characterized by more than their share of unilateral action because an individual will have a natural desire to avoid unpleasant rejections, and these unilateral actions (especially when antisocial) will further contribute to an individual's alienation from society. If particular rejections are generalized into feelings that the environment is unsupportive,more strongly negative emotions may motivate the individual to engage in crime. This is most likely to be true for younger individuals, and Agnew suggested that research focus on the magnitude, recency, duration, and clustering of such strain-related events to determine whether a person copes with strain in a criminal or conforming manner. Temperament, intelligence, interpersonal skills, self-efficacy, the presence of conventional social support, and the absence of association with antisocial (\"e.g.\", criminally inclined) age and status peers are chief among the factors Agnew identified as beneficial.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "46112",
"title": "Violence",
"section": "Section::::Factors.:Psychology.\n",
"start_paragraph_id": 70,
"start_character": 0,
"end_paragraph_id": 70,
"end_character": 258,
"text": "The causes of violent behavior in people are often a topic of research in psychology. Neurobiologist Jan Vodka emphasizes that, for those purposes, \"violent behavior is defined as overt and intentional physically aggressive behavior against another person.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "58687",
"title": "Aggression",
"section": "Section::::Society and culture.:Situational factors.\n",
"start_paragraph_id": 115,
"start_character": 0,
"end_paragraph_id": 115,
"end_character": 1139,
"text": "Frustration is another major cause of aggression. The Frustration aggression theory states that aggression increases if a person feels that he or she is being blocked from achieving a goal (Aronson et al. 2005). One study found that the closeness to the goal makes a difference. The study examined people waiting in line and concluded that the 2nd person was more aggressive than the 12th one when someone cut in line (Harris 1974). Unexpected frustration may be another factor. In a separate study to demonstrate how unexpected frustration leads to increased aggression, Kulik & Brown (1979) selected a group of students as volunteers to make calls for charity donations. One group was told that the people they would call would be generous and the collection would be very successful. The other group was given no expectations. The group that expected success was more upset when no one was pledging than the group who did not expect success (everyone actually had horrible success). This research suggests that when an expectation does not materialize (successful collections), unexpected frustration arises which increases aggression.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2c5no9
|
I'm watching English period pieces like The Tudors and Elizabeth. Did monarchs have titles like Lord Burleigh to give out? What did that entail?
|
[
{
"answer": "If you're just talking about titles rather than estates and incomes, yes. The sovereign is the [fount of honour](_URL_0_), i.e. has the exclusive right to confer titles of nobility and orders of chivalry.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "10111",
"title": "Emperor",
"section": "Section::::Emperors of Europe.:Great Britain.:England.\n",
"start_paragraph_id": 73,
"start_character": 0,
"end_paragraph_id": 73,
"end_character": 428,
"text": "There was no consistent title for the king of England before 1066, and monarchs chose to style themselves as they pleased. Imperial titles were used inconsistently, beginning with Athelstan in 930 and ended with the Norman conquest of England. Empress Matilda (1102–1167) is the only English monarch commonly referred to as \"emperor\" or \"empress\", but she acquired her title through her marriage to Henry V, Holy Roman Emperor.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "45811",
"title": "Eponym",
"section": "Section::::History.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 201,
"text": "BULLET::::- British monarchs have become eponymous throughout the English-speaking world for time periods, fashions, etc. \"Elizabethan\", \"Georgian\", \"Victorian\", and \"Edwardian\" are examples of these.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14909875",
"title": "List of English royal consorts",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 367,
"text": "The English royal consorts were the spouses of the reigning monarchs of the Kingdom of England who were not themselves monarchs of England: spouses of some English monarchs who were themselves English monarchs are not listed, comprising Mary I and Philip who reigned together in the 16th century, and William III and Mary II who reigned together in the 17th century.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "40011858",
"title": "Tudor Crown",
"section": "Section::::Fate.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 401,
"text": "After the death of Elizabeth I and the end of the Tudor dynasty, the Stuarts came to power in England. Both James I and Charles I are known to have worn the crown. Following the abolition of the monarchy and the execution of Charles I in 1649, the Tudor Crown was broken up and its valuable components sold for £1,100. According to an inventory drawn up for the sale of the king's goods, it weighed .\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "407950",
"title": "Kingdom of England",
"section": "Section::::Name.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 608,
"text": "The standard title for monarchs from Æthelstan until John was ' (\"King of the English\"). Canute the Great, a Dane, was the first to call himself \"King of England\". In the Norman period ' remained standard, with occasional use of ' (\"King of England\"). From John's reign onwards all other titles were eschewed in favour of ' or \"\". In 1604 James I, who had inherited the English throne the previous year, adopted the title (now usually rendered in English rather than Latin) \"King of Great Britain\". The English and Scottish parliaments, however, did not recognise this title until the Acts of Union of 1707.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "755195",
"title": "The Vicar of Bray (song)",
"section": "Section::::Historical basis of the character.\n",
"start_paragraph_id": 128,
"start_character": 0,
"end_paragraph_id": 128,
"end_character": 304,
"text": "BULLET::::- The most frequently sung words refer to 17th-century monarchs. Therefore, a later proposed model is Simon Symonds, who was an Independent in the Protectorate, a Church of England cleric under Charles II, a Roman Catholic under James II, and a moderate Anglican under William III and Mary II.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "59850289",
"title": "Henry Vernon (died 1515)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 255,
"text": "Sir Henry Vernon, KB, (1441–13 April 1515) was a Tudor-era English landowner, politician, and courtier. He was the Controller of the household of Arthur, Prince of Wales, eldest son of Henry VII of England and heir to the throne until his untimely death.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1mftka
|
Why do atoms "want" to get full outer electron shells when bonding?
|
[
{
"answer": "Good question! Understanding this behaviour requires that you first understand that a reaction can be considered as a number of individual processes. When you combine chlorine and sodium to make table salt, the sodium atom loses an electron, the chlorine atom gains one and the resultant ions bond due to their charge (this description is accurate to a first approximation).\n\nRemoving electrons from atoms takes energy. Adding electrons to neutral atoms releases energy, but adding electrons to already-negative ions typically takes energy. Forming the resultant bond between the atoms releases energy. So, let's add this all up.\n\nSodium has a single electron in its outer shell. The energy cost of removing this, combined with the energy gain of creating Cl^- and then the salt NaCl yields a large negative number for the overall energy change because the bond that's formed is quite strong. That is, the overall process is favoured and the reaction happens. You may ask: if the bond is quite strong, why not form two of them (i.e. NaCl*_2_*) and get twice the energy out from bonding?\n\nRemoving a second electron from sodium is much tougher. It's at a lower energy level, much closer to the nucleus and more tightly bound. It turns out that the energy gain from a second bond doesn't make up for the extra energy required to remove a second electron. Similarly, for magnesium, which does form two bonds, this is because magnesium has two electrons in its outer shell which are comparatively easy to remove.\n\nSo, it's not so much that an element wants to form a particular number of bonds. Elements will form as many bonds as they can (because bond formation releases energy) until the energetics become unfavourable. Sometimes, if you have particularly reactive compounds, you can exceed the traditional \"correct\" number of bonds because the reactive compound releases sufficient energy in the reaction.\n\nBond angles are caused by the electrons in bonds repelling each other. If you have methane, a tetrahedral molecule, you end up with the bonds at an angle of about 109 degrees because that maximises the distance between bonds. However, lone pairs that aren't involved in bonding exert higher repulsion than bond pairs. So, in water, oxygen has two lone pairs which squeeze the bond pairs closer together to an angle of about 104 degrees. Read up on \"VSEPR\" for more about this.",
"provenance": null
},
{
"answer": "I remember one of my lecturers telling me there was nothing special about a full shell, and that it is just where energy minima tend to be.\n\nOne of the reasons for this is shielding of the nuclear charge:\n_URL_0_\nElectrons in inner shells shield the outer electrons from the nucleus's charge I.e some of the protons' attraction to the outer electrons is blocked by the inner electrons. As you go across a row of the periodic table, all elements have the same number of inner electrons, and therefore the same amount of shielding, but more protons and therefore a higher nuclear charge. This means the outer electrons of the elements on the right hand of the table experience a larger \"effective nuclear charge\" than the ones on the left. \nWhen a left and a right element are mixed together, for example lithium and fluorine, one of lithium's electrons will be pulled off and into fluorine (who has a much higher effective nuclear charge)'s last remaining space it its second shell. \nWith a now complete second shell, any more electrons pulled off by fluorine would have to sit in the third shell, which has two shells of shielding in between it and the nucleus, causing it to experience a much lower effective nuclear charge - lower than it would experience staying in orbit around the lithium atom meaning fluorine stays as F-, and the second Li goes off to find an F.\n\n",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "59444",
"title": "Energy level",
"section": "Section::::Molecules.\n",
"start_paragraph_id": 36,
"start_character": 0,
"end_paragraph_id": 36,
"end_character": 1094,
"text": "Chemical bonds between atoms in a molecule form because they make the situation more stable for the involved atoms, which generally means the sum energy level for the involved atoms in the molecule is lower than if the atoms were not so bonded. As separate atoms approach each other to covalently bond, their orbitals affect each other's energy levels to form bonding and antibonding molecular orbitals. The energy level of the bonding orbitals is lower, and the energy level of the antibonding orbitals is higher. For the bond in the molecule to be stable, the covalent bonding electrons occupy the lower energy bonding orbital, which may be signified by such symbols as σ or π depending on the situation. Corresponding anti-bonding orbitals can be signified by adding an asterisk to get σ* or π* orbitals. A non-bonding orbital in a molecule is an orbital with electrons in outer shells which do not participate in bonding and its energy level is the same as that of the constituent atom. Such orbitals can be designated as n orbitals. The electrons in an n orbital are typically lone pairs.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6176600",
"title": "Intramolecular force",
"section": "Section::::Bond formation.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 754,
"text": "Bonds are formed by atoms so that they are able to achieve a lower energy state. Free atoms will have more energy than a bonded atom. This is because some energy is released during bond formation, allowing the entire system to achieve a lower energy state. The bond length, or the minimum separating distance between two atoms participating in bond formation, is determined by their repulsive and attractive forces along the internuclear direction. As the two atoms get closer and closer, the positively charged nuclei repel, creating a force that attempts to push the atoms apart. As the two atoms get further apart, attractive forces work to pull them back together. Thus an equilibrium bond length is achieved and is a good measure of bond stability.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1191172",
"title": "Dangling bond",
"section": "Section::::Definition and properties.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 632,
"text": "In order to gain enough electrons to fill their valence shells (see also octet rule), many atoms will form covalent bonds with other atoms. In the simplest case, that of a single bond, two atoms each contribute one unpaired electron, and the resulting pair of electrons is shared between them. Atoms which possess too few bonding partners to satisfy their valences and which possess unpaired electrons are termed \"free radicals\"; so, often, are molecules containing such atoms. When a free radical exists in an immobilized environment (for example, a solid), it is referred to as an \"immobilized free radical\" or a \"dangling bond\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "542915",
"title": "Open shell",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 401,
"text": "In the context of atomic orbitals, an open shell is a valence shell which is not completely filled with electrons or that has not given all of its valence electrons through chemical bonds with other atoms or molecules during a chemical reaction. Atoms generally reach a noble gas configuration in a molecule. The noble gases (He, Ne, Ar, Kr, Xe, Rn) are less reactive and have configurations 1s (He),\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4831",
"title": "Bohr model",
"section": "Section::::Shell model of heavier atoms.\n",
"start_paragraph_id": 62,
"start_character": 0,
"end_paragraph_id": 62,
"end_character": 1051,
"text": "In the shell model, this phenomenon is explained by shell-filling. Successive atoms become smaller because they are filling orbits of the same size, until the orbit is full, at which point the next atom in the table has a loosely bound outer electron, causing it to expand. The first Bohr orbit is filled when it has two electrons, which explains why helium is inert. The second orbit allows eight electrons, and when it is full the atom is neon, again inert. The third orbital contains eight again, except that in the more correct Sommerfeld treatment (reproduced in modern quantum mechanics) there are extra \"d\" electrons. The third orbit may hold an extra 10 d electrons, but these positions are not filled until a few more orbitals from the next level are filled (filling the n=3 d orbitals produces the 10 transition elements). The irregular filling pattern is an effect of interactions between electrons, which are not taken into account in either the Bohr or Sommerfeld models and which are difficult to calculate even in the modern treatment.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7337217",
"title": "Molecular orbital diagram",
"section": "Section::::Diatomic MO diagrams.:Dihydrogen.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 540,
"text": "The superposition of the two 1s atomic orbitals leads to the formation of the σ and σ* molecular orbitals. Two atomic orbitals in phase create a larger electron density, which leads to the σ orbital. If the two 1s orbitals are not in phase, a node between them causes a jump in energy, the σ* orbital. From the diagram you can deduce the bond order, how many bonds are formed between the two atoms. For this molecule it is equal to one. Bond order can also give insight to how close or stretched a bond has become if a molecule is ionized.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "266466",
"title": "Octet rule",
"section": "Section::::Explanation in quantum theory.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 522,
"text": "The quantum theory of the atom explains the eight electrons as a closed shell with an sp electron configuration. A closed-shell configuration is one in which low-lying energy levels are full and higher energy levels are empty. For example, the neon atom ground state has a full shell (2s 2p) and an empty shell. According to the octet rule, the atoms immediately before and after neon in the periodic table (i.e. C, N, O, F, Na, Mg and Al), tend to attain a similar configuration by gaining, losing, or sharing electrons.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1n40e9
|
What is the best referencing/organizing software you use to write history and why?
|
[
{
"answer": "Not history specific, but OneNote works REALLY well for writing initial drafts. You can organize your work far beyond anything else I've used, and can include text clippings, photos, videos, and links out to the side. It eschews the \"page\" construct and is more like a whiteboard.\n\nOnce you get down to doing a final draft, you'd be better to switch to something designed to nicely handle notations, footnotes, and all that.",
"provenance": null
},
{
"answer": "If your research project includes lots of references and if you're looking for an advanced reference manager, I would recommend [Citavi](_URL_0_). I have yet to see a reference manager that comes close to it. I can't really name that *one* feature that makes Citavi special, it's the overall product and attention to detail that makes it worth the money (~$140; it's free if your university has a licence!). There is a [trial version](_URL_1_) which works for up to 100 references if you want to have a look at it.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "23952830",
"title": "A Manual for Writers of Research Papers, Theses, and Dissertations",
"section": "Section::::Structure and content of the manual.:Part 2: Source Citation.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 416,
"text": "The more-concise author-date style (sometimes referred to as the \"reference list style\") is more common in the physical, natural, and social sciences. This style involves sources being \"briefly cited in the text, usually in parentheses, by author’s last name and year of publication\" with the parenthetical citations corresponding to \"an entry in a reference list, where full bibliographic information is provided.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1219401",
"title": "Technical communication",
"section": "Section::::Content creation.:Revising and editing.:Editing for style.\n",
"start_paragraph_id": 39,
"start_character": 0,
"end_paragraph_id": 39,
"end_character": 565,
"text": "Technical writing as a discipline usually requires that a technical writer use a style guide. These guides may relate to a specific project, product, company, or brand. They ensure that technical writing reflects formatting, punctuation, and general stylistic standards that the audience expects. In the United States, many consider the \"Chicago Manual of Style\" the bible for general technical communication. Other style guides have their adherents, particularly for specific industries—such as the \"Microsoft Style Guide\" in some information technology settings.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3453719",
"title": "ASA style",
"section": "Section::::Software support.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 204,
"text": "ASA style is supported by most major reference management software programs, including Endnote, Procite, Zotero, Refworks, and so forth, making the formatting of references a fairly straightforward task.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4696265",
"title": "Reference model",
"section": "Section::::The uses of a reference model.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 564,
"text": "Another use of a reference model is to educate. Using a reference model, leaders in software development can help break down a large problem space into smaller problems that can be understood, tackled, and refined. Developers who are new to a particular set of problems can quickly learn what the different problems are, and can focus on the problems that they are being asked to solve, while trusting that other areas are well understood and rigorously constructed. The level of trust is important to allow software developers to efficiently focus on their work.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9073648",
"title": "Reference software",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 611,
"text": "Reference software is software which emulates and expands upon print reference forms including the dictionary, translation dictionary, encyclopaedia, thesaurus, and atlas. Like print references, reference software can either be general or specific to a domain, and often includes maps and illustrations, as well as bibliography and statistics. Reference software may include multimedia content including animations, audio, and video, which further illustrate a concept. Well designed reference software improves upon the navigability of print references, through the use of search functionality and hyperlinks.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "454843",
"title": "Reference management software",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 598,
"text": "Reference management software, citation management software, company reference software or personal bibliographic management software is software for scholars and authors to use for recording and utilising bibliographic citations (references) as well as managing project references either as a company or an individual. Once a citation has been recorded, it can be used time and again in generating bibliographies, such as lists of references in scholarly books, articles and essays. The development of reference management packages has been driven by the rapid expansion of scientific literature.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "27031910",
"title": "Sentence spacing in language and style guides",
"section": "Section::::Background.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 553,
"text": "Style guides are important to writers since \"virtually all professional editors work closely with one of them in editing a manuscript for publication.\" Comprehensive style guides, such as the \"Oxford Style Manual\" in the United Kingdom and style guides developed by the American Psychological Association, and the Modern Language Association in the United States, provide standards for a wide variety of writing, design, and English language topics—such as grammar, punctuation, and typographic conventions—and are widely used regardless of profession.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
20cna3
|
what is the purpose of checking in for a flight, if you can check in online?
|
[
{
"answer": "The online checkin process is mainly to get the customer to complete as much of the administration and data entry as possible before hand - rather than have staff do it whilst a queue waits.",
"provenance": null
},
{
"answer": "You still need a boarding pass. \nYou still need to weigh your checked bags. ",
"provenance": null
},
{
"answer": "In the US, you can check in online and get your boarding pass.\n\nThen you can go directly to the gate as long as you do not have to check any luggage.",
"provenance": null
},
{
"answer": "I'm not familiar with all the airlines in the industry, but one of my parents works for a US airline so I grew up flying non-revenue stand-by most of my life, and this is how I understand it.\n\nBooking agents for airlines, which have since evolved into computer systems in most cases, are allowed to oversell flights based on historic records of the same and similar flights. For example, imagine a flight from Atlanta to New York City, that leaves at noon, and has 200 seats available. Based on records of that flight in previous days, 20% of the people that purchased a ticket for the flight did not actually show up to take the flight. Therefore, the booking agents (or computers) are able to sell 15% more tickets for the flight than actually exist, or in this case, 230 tickets can be sold for a flight with only 200 seats. This information becomes more relevant towards the end of the check in process, as I will explain.\n\nThe check in period currently exists (at least on the carrier I'm familiar with) from 24 hours before the flight departs until 30 minutes before the flight departs. This period exists so that passengers that have purchased a ticket can check in with the airline and say they still plan on taking the flight for which they bought a ticket, and the airline marks that passenger's seat as claimed. Once there are only 30 minutes until the flight is going to depart, the gate attendant (the people physically at the gate that the plane is departing from) will open up the seats of all the passengers that haven't checked in, and will then give those seats to various forms of stand-by passengers in a seniority order - which include displaced passengers from other flights, passengers that bought stand-by tickets, or employees and their relatives flying non-revenue stand-by. Once there are only 10 minutes left until the flight departs, even checked-in passengers that haven't boarded the plane may lose their seats if there are more stand-by passengers available at the gate, on a full flight.\n\nIn summary, it's my understanding that the check in process is something of a soft confirmation that can only occur in the 24 hours prior to the flight departing, and allows the airline to better streamline its boarding process in regards to ticketed and stand-by passengers.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "10321936",
"title": "Airport check-in",
"section": "Section::::Online check-in.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 314,
"text": "Online check-in is the process in which passengers confirm their presence on a flight via the Internet and typically print their own boarding passes. Depending on the carrier and the specific flight, passengers may also enter details such as meal options and baggage quantities and select their preferred seating.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10321936",
"title": "Airport check-in",
"section": "Section::::Online check-in.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 1205,
"text": "Typically, web-based check-in for airline travel is offered on the airline's website not earlier than 24 hours before a flight's scheduled departure or seven days for Internet Check-In Assistant. However, some airlines allow a longer time, such as Ryanair, which opens online check-in 30 and 4 days beforehand (depending on whether the passenger paid for a seat reservation), AirAsia, which opens it 14 days prior to departure, and easyJet, which opens as soon as a passenger is ticketed (however for easyJet, passengers are not checked-in automatically after ticketing, the passenger must click the relevant button). Depending on the airline, there can be benefits of better seating or upgrades to first class or business class offered to the first people to check in for a flight. In order to meet this demand, some sites have offered travelers the ability to request an airline check-in prior to the 24-hour window and receive airline boarding passes by email when available from the airline. Some airlines charge for the privilege of early check-in before the 24-hour window opens, thus capitalising on the demand for desirable seats such as those immediately behind a bulkhead or emergency exit row.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "10321936",
"title": "Airport check-in",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 785,
"text": "Check-in is usually the first procedure for a passenger when arriving at an airport, as airline regulations require passengers to check in by certain times prior to the departure of a flight. This duration spans from 15 minutes to 4 hours depending on the destination and airline (with self check in, this can be expanded to 24 hours, if checking in by online processes). During this process, the passenger has the ability to ask for special accommodations such as seating preferences, inquire about flight or destination information, accumulate frequent flyer program miles, or pay for upgrades. The required time is sometimes written in the reservation, sometimes written somewhere in websites, and sometimes only referred as \"passengers should allow sufficient time for check-in\". \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1591958",
"title": "Electronic ticket",
"section": "Section::::Airline ticket.:Checking in with an e-ticket.:Self-service and remote check-in.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 494,
"text": "Several websites assist people holding e-tickets to check in online in advance of the twenty-four-hour airline restriction. These sites store a passenger's flight information and then when the airline opens up for online check-in the data is transferred to the airline and the boarding pass is emailed back to the customer. With this e-ticket technology, if a passenger receives his boarding pass remotely and is travelling without check-in luggage, he may bypass traditional counter check-in.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3206290",
"title": "Check-in",
"section": "Section::::Airport check-in.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 615,
"text": "Many airlines have a deadline for passengers to check in before each flight. This is to allow the airline to offer unclaimed seats to stand-by passengers, to load luggage onto the plane and to finalize documentation for take-off. The passenger must also take into account the time that may be needed for them to clear the check-in line, to pass security and then to walk (sometimes also to ride) from the check-in area to the boarding area. This may take several hours at some airports or at some times of the year. On international flights, additional time would be required for immigration and customs clearance.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3206290",
"title": "Check-in",
"section": "Section::::Airport check-in.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 764,
"text": "The check-in process at airports enables passengers to check in luggage onto a plane and to obtain a boarding pass. When presenting at the check-in counter, a passenger will provide evidence of the right to travel, such as a ticket, visa or electronic means. Each airline provides facilities for passengers to check in their luggage, except for their carry-on bags. This may be by way of airline-employed staff at check-in counters at airports or through an agency arrangement or by way of a self-service kiosk. The luggage is weighed and tagged, and then placed on a conveyor that usually feeds the luggage into the main baggage handling system. The luggage goes into the aircraft's cargo hold. The check-in staff then issues each passenger with a boarding pass.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1585406",
"title": "Boarding pass",
"section": "Section::::Print-at-home boarding passes.\n",
"start_paragraph_id": 45,
"start_character": 0,
"end_paragraph_id": 45,
"end_character": 368,
"text": "Many airlines encourage travellers to check in online up to a month before their flight and obtain their boarding pass before arriving at the airport. Some carriers offer incentives for doing so (e.g., in 2015, US Airways offered 1000 bonus miles to anyone checking in online,), while others charge fees for checking in or printing one's boarding pass at the airport.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3da45g
|
Why do the edges of certain materials dry faster than the middles?
|
[
{
"answer": "The edges are more likely than not touching more air than concrete found in the center of a section of sidewalk. Whereas the center only has exposure to air above it (along with a very small amount found in small cracks, etc), the edges are exposed to air on two fronts, the top and the side that goes into the ground. More air allows for more heat to transfer to the edges of the sidewalk, causing more water to evaporate. As the center is only exposed to air on the top, the water found in this section takes longer to acquire enough energy to evaporate, thus remaining wet for longer. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "58984",
"title": "Reed (mouthpiece)",
"section": "Section::::Care and maintenance.\n",
"start_paragraph_id": 28,
"start_character": 0,
"end_paragraph_id": 28,
"end_character": 682,
"text": "Because reeds change with climate, reeds that are too soft can be kept in the hopes that they eventually thicken, but there is nothing else that can be done. If a reed is too stiff, however, there are solutions. The most simple solution is to turn a piece of paper over so there is no ink and gently rotate the reed around it while gently placing the fingers at the tip and the butt to ensure even distribution on the paper. This works if the reed is just barely too stiff or warped (the tip is not flat). If the reed is more than a little too stiff, sandpaper can be used (preferably 300–500 grain) to repeat the process as described. Be careful not to damage the tip of the reed.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9396111",
"title": "Preservation (library and archival science)",
"section": "Section::::Practices.:Storage environment.\n",
"start_paragraph_id": 39,
"start_character": 0,
"end_paragraph_id": 39,
"end_character": 542,
"text": "Bound materials are sensitive to rapid temperature or humidity cycling due to differential expansion of the binding and pages, which may cause the binding to crack and/or the pages to warp. Changes in temperature and humidity should be done slowly so as to minimize the difference in expansion rates. However, an accelerated aging study on the effects of fluctuating temperature and humidity on paper color and strength showed no evidence that cycling of one temperature to another or one RH to another caused a different mechanism of decay.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1409530",
"title": "Equilibrium moisture content",
"section": "Section::::Equilibrium moisture content of sands, soils and building materials.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 389,
"text": "Materials such as stones, sand and ceramics are considered 'dry' and have much lower equilibrium moisture content than organic material like wood and leather. typically a fraction of a percent by weight when in equilibrium of air of Relative humidity 10% to 90%. This affects the rate that buildings need to dry out after construction, typical cements starting with 40-60% water content. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1418996",
"title": "Grog (clay)",
"section": "Section::::Applications.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 256,
"text": "The finer the particles, the closer the clay bond, and the denser and stronger the fired product. \"The strength in the dry state increases with grog down as fine as that passing the 100-mesh sieve, but decreases with material passing the 200-mesh sieve.\" \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "233317",
"title": "Austenite",
"section": "Section::::Behavior in plain carbon-steel.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 1182,
"text": "A high cooling rate of thick sections will cause a steep thermal gradient in the material. The outer layers of the heat treated part will cool faster and shrink more, causing it to be under tension and thermal staining. At high cooling rates, the material will transform from austenite to martensite which is much harder and will generate cracks at much lower strains. The volume change (martensite is less dense than austenite) can generate stresses as well. The difference in strain rates of the inner and outer portion of the part may cause cracks to develop in the outer portion, compelling the use of slower quenching rates to avoid this. By alloying the steel with tungsten, the carbon diffusion is slowed and the transformation to BCT allotrope occurs at lower temperatures, thereby avoiding the cracking. Such a material is said to have its hardenability increased. Tempering following quenching will transform some of the brittle martensite into tempered martensite. If a low-hardenability steel is quenched, a significant amount of austenite will be retained in the microstructure, leaving the steel with internal stresses that leave the product prone to sudden fracture.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "317900",
"title": "Clothes dryer",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 446,
"text": "Many dryers consist of a rotating drum called a \"tumbler\" through which heated air is circulated to evaporate the moisture, while the tumbler is rotated to maintain air space between the articles. Using these machines may cause clothes to shrink or become less soft (due to loss of short soft fibers/lint). A simpler non-rotating machine called a \"drying cabinet\" may be used for delicate fabrics and other items not suitable for a tumble dryer.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1076403",
"title": "Woodturning",
"section": "Section::::Overview.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 508,
"text": "Moisture content affects both the ease of cutting wood and the final shape of the work when it dries. Wetter wood cuts easily with a continuous ribbon of shavings that are relatively dust-free. However, the wet wood moves as it dries. shrinking less along the grain. These variable changes may add the illusion of an oval bowl, or draw attention to features of the wood. Dry wood is necessary for turnings that require precision, as in the fit of a lid to a box, or in forms where pieces are glued together.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1s2o0m
|
Why can I hear a transmitted radio signal on several frequencies?
|
[
{
"answer": "Its called harmonics, the idea comes from that all waveforms comes from the sum of a series of increasing sinusodial waves. Meaing a 11MHz transducer is transducing a sum of 11Mhz, 22Mhz, 44Mhz, 88Mhz etc frequencies, but in decreasing amplitudes. But your reciever is sensitive enough to pick those harmonics up. \n\nI need to stress that this is not a problem with your transducer, but a fundamental property of maths and physics.",
"provenance": null
},
{
"answer": "/u/zalaesseo said it perfect. I'll add that fast food restaurant wireless headsets operate in a frequency range you can access. They use two frequencies, one to talk to the drive through speaker and one to talk back inside. Electronics are fun. Source: USCG electronics school and way too much free time.\n\nI'm not condoning sending signals to a drive through speaker, but I'm not saying it isn't funny either. The people inside can't hear you and will have no idea what's going on.",
"provenance": null
},
{
"answer": "Despite the excellent answer, I'm going to contribute here because I used to be a serious RF hobbyist and I still have a great deal of enthusiasm for the field (pun semi-intentional) and specifically for harmonics.\n\nHarmonics are harder to come by nowadays when DXing (monitoring, analyzing and identifying radio signals). The reason is that much of the newer digitally-augmented electronics are designed to detect and filter out harmonics (called harmonic rejection). Or digital tuners may not allow you to lock onto a harmonic, because harmonics tend to run substantially lower power and the tuner will simply interpret it as a \"weak\" station.\n\nSo if you have an older dial tuner receiver, you can more easily pick up these little treasures. You can also get specialized equipment.\n\nThe other reason for fewer observed RF harmonics these days is that the nicer, commercial transmitters (FM, UHF, VHF, SW, etc.) have gotten pretty good at minimizing harmonics. Harmonics are a huge indicator to the regulators that you have an out-of-compliance transmitter... and that can get you into big trouble, especially if the harmonic \"steps\" over another licensed station. \n\nI might also add that RF engineers have gotten very proficient at dealing with terrestrial contours that can induce a harmonic skip... a fairly rare phenomenon that's really fun to discover. Basically it's a reflection of an RF signal that shifts frequency... sometimes to a harmonic of the original signal. So essentially, you are listening to an \"accidental\" radar.\n\nA loosely associated phenomenon called Phase Shift can also be observed, that's kind of like this, but that's another topic entirely.",
"provenance": null
},
{
"answer": " > 44MHz, 22MHz or 10.5MHz transmission. \n\nHearing 44 on 88 is probably via the second harmonic, as has already been said. 22 Mhz is likely the 4th harmonic. Odds are you will get a stronger signal if you transmit on something like 29.3 instead, as the third harmonic is typically stronger than the 2nd or 4th.\n\nHowever with the 10.5MHz, I believe you are probably overloading the \"front end\" of the receiver and getting the signal past the filters. 10.5 Mhz (or frequently 10.4 Mhz) is frequently used in superheterodyne receiver (the predominate type nowadays) as an I.F. (intermediate frequency) \n\n_URL_0_\n\nIt's too bad that your radio probably doesn't have a signal strength meter. You could do some simple experiments to verify",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "15368428",
"title": "Radio",
"section": "Section::::Radio technology.:Radio communication.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 1112,
"text": "The radio waves from many transmitters pass through the air simultaneously without interfering with each other because each transmitter's radio waves oscillate at a different rate, in other words each transmitter has a different frequency, measured in kilohertz (kHz), megahertz (MHz) or gigahertz (GHz). The receiving antenna typically picks up the radio signals of many transmitters. The receiver uses \"tuned circuits\" to select the radio signal desired out of all the signals picked up by the antenna, and reject the others. A tuned circuit (also called resonant circuit or tank circuit) acts like a resonator, similarly to a tuning fork. It has a natural resonant frequency at which it oscillates. The resonant frequency of the receiver's tuned circuit is adjusted by the user to the frequency of the desired radio station; this is called \"tuning\". The oscillating radio signal from the desired station causes the tuned circuit to resonate, oscillate in sympathy, and it passes the signal on to the rest of the receiver. Radio signals at other frequencies are blocked by the tuned circuit and not passed on.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "98132",
"title": "Radio wave",
"section": "Section::::Radio communication.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 857,
"text": "The radio waves from many transmitters pass through the air simultaneously without interfering with each other. They can be separated in the receiver because each transmitter's radio waves oscillate at a different rate, in other words each transmitter has a different frequency, measured in kilohertz (kHz), megahertz (MHz) or gigahertz (GHz). The bandpass filter in the receiver consists of a tuned circuit which acts like a resonator, similarly to a tuning fork. It has a natural resonant frequency at which it oscillates. The resonant frequency is set equal to the frequency of the desired radio station. The oscillating radio signal from the desired station causes the tuned circuit to oscillate in sympathy, and it passes the signal on to the rest of the receiver. Radio signals at other frequencies are blocked by the tuned circuit and not passed on.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3339042",
"title": "Radio noise",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 967,
"text": "In radio reception, noise is unwanted random electrical signals always present in a radio receiver in addition to the desired radio signal. Radio noise is a combination of natural electromagnetic atmospheric noise (\"spherics\", static) created by electrical processes in the atmosphere like lightning, manmade radio frequency interference (RFI) from other electrical devices picked up by the receiver's antenna, and thermal noise present in the receiver input circuits, caused by the random thermal motion of molecules. The level of noise determines the maximum sensitivity and reception range of a radio receiver; if no noise were picked up with radio signals, even weak transmissions could be received at virtually any distance by making a radio receiver that was sensitive enough. With noise present, if a radio source is so weak and far away that the radio signal in the receiver has a lower amplitude than the average noise, the noise will drown out the signal. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "15368428",
"title": "Radio",
"section": "Section::::Radio technology.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 644,
"text": "As they travel farther from the transmitting antenna, radio waves spread out so their signal strength (intensity in watts per square meter) decreases, so radio transmissions can only be received within a limited range of the transmitter, the distance depending on the transmitter power, antenna radiation pattern, receiver sensitivity, noise level, and presence of obstructions between transmitter and receiver. An omnidirectional antenna transmits or receives radio waves in all directions, while a directional antenna or high gain antenna transmits radio waves in a beam in a particular direction, or receives waves from only one direction. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "98132",
"title": "Radio wave",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 562,
"text": "Radio waves are a type of electromagnetic radiation with wavelengths in the electromagnetic spectrum longer than infrared light. Radio waves have frequencies as high as 300 gigahertz (GHz) to as low as 30 hertz (Hz). At 300 GHz, the corresponding wavelength is 1 mm, and at 30 Hz is 10,000 km. Like all other electromagnetic waves, radio waves travel at the speed of light. They are generated by electric charges undergoing acceleration, such as time varying electric currents. Naturally occurring radio waves are emitted by lightning and astronomical objects. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "15368428",
"title": "Radio",
"section": "Section::::Radio technology.:Radio communication.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 671,
"text": "At the receiver, the radio wave induces a tiny oscillating voltage in the receiving antenna which is a weaker replica of the current in the transmitting antenna. This voltage is applied to the radio receiver, which amplifies the weak radio signal so it is stronger, then demodulates it, extracting the original modulation signal from the modulated carrier wave. The modulation signal is converted by a transducer back to a human-usable form: an audio signal is converted to sound waves by a loudspeaker or earphones, a video signal is converted to images by a display, while a digital signal is applied to a computer or microprocessor, which interacts with human users. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29048",
"title": "Single-sideband modulation",
"section": "Section::::Basic concept.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 711,
"text": "Radio transmitters work by mixing a radio frequency (RF) signal of a specific frequency, the carrier wave, with the audio signal to be broadcast. In AM transmitters this mixing usually takes place in the final RF amplifier (high level modulation). It is less common and much less efficient to do the mixing at low power and then amplify it in a linear amplifier. Either method produces a set of frequencies with a strong signal at the carrier frequency and with weaker signals at frequencies extending above and below the carrier frequency by the maximum frequency of the input signal. Thus the resulting signal has a spectrum whose bandwidth is twice the maximum frequency of the original input audio signal. \n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
88yq2i
|
why is it easier to shoot at people under you in games?
|
[
{
"answer": "Insert \"i have the high ground\" meme.\n\nIts not only in games, highround has the vision advantage. ",
"provenance": null
},
{
"answer": "Mostly you look more toward ground then toward sky. So usually u see tinks blow you faster.",
"provenance": null
},
{
"answer": "When in high ground, you just need to poke your head out to see entire body of enemies, meanwhile they can only see your head which is much smaller than whole body so it's a lot harder to hit you. In addition, if you need to take cover for reload or heal, you can just take a step back or crouch, while in low ground you must find a proper cover",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "25851669",
"title": "Strafing (gaming)",
"section": "Section::::Techniques.:Circle strafing.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 432,
"text": "Many shooters will allow players to zoom down the sights of a gun or use a scope, usually exchanging movement speed, field of vision, and the speed of their traverse for greater accuracy. This can make a player considerably more vulnerable to circle-strafing, as objects will pass through their field of vision more quickly, they are less capable of keeping up with the target, and their lowered speed makes dodging more difficult.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "731791",
"title": "Gamesmanship",
"section": "Section::::Techniques.:Causing the opponent to overthink.\n",
"start_paragraph_id": 33,
"start_character": 0,
"end_paragraph_id": 33,
"end_character": 407,
"text": "BULLET::::- The converse approach, suggesting a level of expertise far higher than the player actually possesses, can also be effective. For example, although gamesmanship frowns on simple distractions like whistling loudly while an opponent takes a shot, it is good gamesmanship to do so when taking a shot oneself, suggesting as it does a level of carefree detachment which the opponent does not possess.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "133295",
"title": "Tactical shooter",
"section": "Section::::Game design.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 399,
"text": "Tactical shooters are designed for realism. It is not unusual for players to be killed with a single bullet, and thus players must be more cautious than in other shooter games. The emphasis is on realistic modeling of weapons, and power-ups are often more limited than in other action games. This restrains the individual heroism seen in other shooter games, and thus tactics become more important.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1937745",
"title": "Critical hit",
"section": "Section::::Types.:Headshot.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 621,
"text": "In first person shooter games such as \"\", \"Tactical Ops\", and \"Unreal Tournament\", the concept of a critical hit is often substituted by the headshot, where a player attempts to place a shot on an opposed player or non-player character's head area or other weak-spot, which is generally fatal, or otherwise devastating, when successfully placed. Headshots require considerable accuracy as players often have to compensate for target movement and a very specific area of the enemy's body. In some games, even when the target is stationary, the player may have to compensate for movement generated by the telescopic sight.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21716359",
"title": "Insurgency: Modern Infantry Combat",
"section": "Section::::Gameplay.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 539,
"text": "The game has a pseudo-realistic portrayal of the weaponry used. There is no on-screen crosshair and the players must use the iron sights of the game's weapon model to accurately aim the weapon. Shooting \"from the hip\" is still possible; however, the free-aim system makes this difficult. Weapons are also more deadly than in most first-person shooter titles, with most rifles capable of taking out players with one or two shots to the torso. According to their class, players can also use fragmentation grenades, smoke grenades, and RPGs.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3205715",
"title": "Jet Li: Rise to Honor",
"section": "Section::::Gameplay.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 608,
"text": "Played from a third-person perspective, the majority of the game is a beat 'em up, with the player using the right analog stick to direct blows at enemies. The game also features a number of levels where the player uses firearms with unlimited ammunition. During levels, the player constantly builds up a store of adrenaline, which the player can unleash to perform powerful hand-to-hand combat strikes. An alternative is, when using firearms, the player initiates a temporary slow motion bullet time mode similar to the video game \"Max Payne\". During the firearm scenes, you can hide behind various objects\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "378915",
"title": "Simulation video game",
"section": "Section::::Subgenres.:Other types.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 219,
"text": "BULLET::::- Certain tactical shooters have higher degrees of realism than other shooters. Sometimes called \"soldier sims\", these games try to simulate the feeling of being in combat. This includes games such as \"Arma\".\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3tn3jx
|
why doesn't the us place a price ceiling on medical equipment ?
|
[
{
"answer": "The issue is that developing new medicines or medical devices is somewhat of a gamble. It costs a lot of money and the project you're working on might turn out to be ineffective or not be approved by the FDA or whatever. The prospect of being able to make a lot of money encourages companies to go out there and spend a lot of money developing new medicines.\n\nThere are certain things that could be done to make this system work better, either by identifying situations where the market is failing and then trying to fix them with new rules, or by subsidizing poor people's drug bills more, or by having the government take a more active role in research and development of new medical technology. A blanket law that said something like \"no pill shall cost more than $10\" would not be a good idea though.",
"provenance": null
},
{
"answer": "The primary purpose of manufacturing medical products is for the company to sell it at a profit in order to make money. If the government were to institute a price cap that would negatively impact their profits and reduce their incentive to release new products. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "27553159",
"title": "Health care in the United States",
"section": "Section::::Spending.\n",
"start_paragraph_id": 47,
"start_character": 0,
"end_paragraph_id": 47,
"end_character": 443,
"text": "In 2018, an analysis concluded that prices and administrative costs were largely the cause of the high costs, including prices for labor, pharmaceuticals, and diagnostics. The combination of high prices and high volume can cause particular expense; in the U.S., high-margin high-volume procedures include angioplasties, c-sections, knee replacements, and CT and MRI scans; CT and MRI scans also showed higher utilization in the United States.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1434699",
"title": "Price ceiling",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 482,
"text": "While price ceilings are often imposed by governments, there are also price ceilings which are implemented by non-governmental organizations such as companies, such as the practice of resale price maintenance. With resale price maintenance, a manufacturer and its distributors agree that the distributors will sell the manufacturer's product at certain prices (resale price maintenance), at or below a price ceiling (maximum resale price maintenance) or at or above a price floor. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1185519",
"title": "Price controls",
"section": "Section::::Criticism.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 678,
"text": "The primary criticism leveled against the price ceiling type of price controls is that by keeping prices artificially low, demand is increased to the point where supply can not keep up, leading to shortages in the price-controlled product. For example, Lactantius wrote that Diocletian \"by various taxes he had made all things exceedingly expensive, attempted by a law to limit their prices. Then much blood [of merchants] was shed for trifles, men were afraid to offer anything for sale, and the scarcity became more excessive and grievous than ever. Until, in the end, the [price limit] law, after having proved destructive to many people, was from mere necessity abolished.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4085676",
"title": "Health care prices in the United States",
"section": "Section::::Nature of the healthcare markets.:Price transparency issues.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 725,
"text": "Organizations such as the American Medical Association (AMA) and AARP support a \"fair and accurate valuation for all physician services\". Very few resources exist, however, that allow consumers to compare physician prices (one exception is CostOfDoctors.com) The AMA sponsors the Specialty Society Relative Value Scale Update Committee, a private group of physicians which largely determine how to value physician labor in Medicare prices. Among politicians, former House Speaker Newt Gingrich has called for transparency in the prices of medical devices, noting it is one of the few aspects or U.S. health care where consumers and federal health officials are \"barred from comparing the quality, medical outcomes or price\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4085676",
"title": "Health care prices in the United States",
"section": "Section::::Nature of the healthcare markets.:Price transparency issues.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 673,
"text": "Unlike most markets for consumer services in the United States, the health care market generally lacks transparent market-based pricing. Patients are typically not able to comparison shop for medical services based on price, as medical service providers do not typically disclose prices prior to service. Government mandated critical care and government insurance programs like Medicare also impact market pricing of U.S. health care. According to the New York Times in 2011, \"the United States is far and away the world leader in medical spending, even though numerous studies have concluded that Americans do not get better care\" and prices are the highest in the world.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "176737",
"title": "Private finance initiative",
"section": "Section::::United Kingdom.:Impact.:Waste.\n",
"start_paragraph_id": 108,
"start_character": 0,
"end_paragraph_id": 108,
"end_character": 264,
"text": "With regard to hospitals, Prof. Nick Bosanquet of Imperial College London has argued that the government commissioned some PFI hospitals without a proper understanding of their costs, resulting in a number of hospitals which are too expensive to be used. He said:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1434699",
"title": "Price ceiling",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 734,
"text": "A price ceiling is a government- or group-imposed price control, or limit, on how high a price is charged for a product, commodity, or service. Governments use price ceilings to protect consumers from conditions that could make commodities prohibitively expensive. Such conditions can occur during periods of high inflation, in the event of an investment bubble, or in the event of monopoly ownership of a product, all of which can cause problems if imposed for a long period without controlled rationing, leading to shortages. Further problems can occur if a government sets unrealistic price ceilings, causing business failures, stock crashes, or even economic crises. In unregulated market economies, price ceilings do not exist. \n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
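The price-ceiling record above repeatedly makes the same argument: cap the price below the market-clearing level and quantity demanded outruns quantity supplied. A minimal sketch of that mechanism, using made-up linear demand and supply curves and an arbitrary cap of 30 (none of these numbers come from the quoted sources):

```python
# Toy model of a binding price ceiling (illustrative numbers only).
# Demand: Qd = 100 - P      Supply: Qs = -20 + 2P

def demand(price):
    """Quantity buyers want at a given price (assumed linear curve)."""
    return max(0.0, 100 - price)

def supply(price):
    """Quantity sellers offer at a given price (assumed linear curve)."""
    return max(0.0, -20 + 2 * price)

equilibrium_price = 40.0   # solves 100 - P = -20 + 2P
price_ceiling = 30.0       # hypothetical cap set below the equilibrium

for label, price in [("free market", equilibrium_price), ("with ceiling", price_ceiling)]:
    qd, qs = demand(price), supply(price)
    traded = min(qd, qs)            # only the short side of the market trades
    shortage = max(0.0, qd - qs)
    print(f"{label:12s} price={price:5.1f} demanded={qd:5.1f} supplied={qs:5.1f} "
          f"traded={traded:5.1f} shortage={shortage:5.1f}")
```

With the cap at 30 the sketch reports a 30-unit shortage, which is the outcome the price-controls excerpt illustrates with Diocletian; it says nothing about the innovation-incentive argument in the answers, which is a dynamic effect a one-period model cannot show.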
95znti
|
What is happening at the molecular level when water is being squeezed or wrung out of something?
|
[
{
"answer": "Water goes into fabric and is held in small spaces where its cohesive and adhesive properties hold it to the fibers in the cloth. Tighten up the fibers (by wringing) and you restrict the space the water can be in squeezing it out of all its little spaces in the cloth.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "41351364",
"title": "Flotation of flexible objects",
"section": "Section::::Physical explanation of phenomena.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 433,
"text": "In a liquid solution, any given liquid molecule experience strong cohesive forces from neighboring molecules. While these forces are balanced in the bulk, molecules at the surface of the solution are surrounded on one side by water molecules and on the other side by gas molecules. The resulting imbalance of cohesive forces along the surface results in a net \"pull\" toward the bulk, giving rise to the phenomena of surface tension.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "527232",
"title": "Hydrostatics",
"section": "Section::::Liquids (fluids with free surfaces).:Capillary action.\n",
"start_paragraph_id": 60,
"start_character": 0,
"end_paragraph_id": 60,
"end_character": 399,
"text": "When liquids are constrained in vessels whose dimensions are small, compared to the relevant length scales, surface tension effects become important leading to the formation of a meniscus through capillary action. This capillary action has profound consequences for biological systems as it is part of one of the two driving mechanisms of the flow of water in plant xylem, the transpirational pull.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "113302",
"title": "Surface tension",
"section": "Section::::Physics.:Floating objects.\n",
"start_paragraph_id": 40,
"start_character": 0,
"end_paragraph_id": 40,
"end_character": 738,
"text": "When an object is placed on a liquid, its weight depresses the surface, and if surface tension and downward force becomes equal than is balanced by the surface tension forces on either side , which are each parallel to the water's surface at the points where it contacts the object. Notice that small movement in the body may cause the object to sink. As the angle of contact decreases surface tension decreases the horizontal components of the two arrows point in opposite directions, so they cancel each other, but the vertical components point in the same direction and therefore add up to balance . The object's surface must not be wettable for this to happen, and its weight must be low enough for the surface tension to support it.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "63337",
"title": "Supersaturation",
"section": "Section::::Preparation.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 561,
"text": "suggests that pressure and volume can also be changed to force a system into a supersaturated state. If the volume of solvent is decreased, the concentration of the solute can be above the saturation point and thus create a supersaturated solution. The decrease in volume is most commonly generated through evaporation. Similarly, an increase in pressure can drive a solution to a supersaturated state. All three of these mechanisms rely on the fact that the conditions of the solution can be changed quicker than the solute can precipitate or crystallize out.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23619",
"title": "Pressure",
"section": "Section::::Types.:Direction of liquid pressure.\n",
"start_paragraph_id": 109,
"start_character": 0,
"end_paragraph_id": 109,
"end_character": 1175,
"text": "When a liquid presses against a surface, there is a net force that is perpendicular to the surface. Although pressure doesn't have a specific direction, force does. A submerged triangular block has water forced against each point from many directions, but components of the force that are not perpendicular to the surface cancel each other out, leaving only a net perpendicular point. This is why water spurting from a hole in a bucket initially exits the bucket in a direction at right angles to the surface of the bucket in which the hole is located. Then it curves downward due to gravity. If there are three holes in a bucket (top, bottom, and middle), then the force vectors perpendicular to the inner container surface will increase with increasing depth – that is, a greater pressure at the bottom makes it so that the bottom hole will shoot water out the farthest. The force exerted by a fluid on a smooth surface is always at right angles to the surface. The speed of liquid out of the hole is formula_23, where \"h\" is the depth below the free surface. This is the same speed the water (or anything else) would have if freely falling the same vertical distance \"h\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1183122",
"title": "Water potential",
"section": "Section::::Components of water potential.:Matrix potential (Matric potential).\n",
"start_paragraph_id": 26,
"start_character": 0,
"end_paragraph_id": 26,
"end_character": 697,
"text": "When water is in contact with solid particles (e.g., clay or sand particles within soil), adhesive intermolecular forces between the water and the solid can be large and important. The forces between the water molecules and the solid particles in combination with attraction among water molecules promote surface tension and the formation of menisci within the solid matrix. Force is then required to break these menisci. The magnitude of matrix potential depends on the distances between solid particles—the width of the menisci (also capillary action and differing Pa at ends of capillary)—and the chemical composition of the solid matrix (meniscus, macroscopic motion due to ionic attraction).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33306",
"title": "Water",
"section": "Section::::Chemical and physical properties.:Mechanical properties.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 238,
"text": "Liquid water can be assumed to be incompressible for most purposes: its compressibility ranges from 4.4 to in ordinary conditions. Even in oceans at 4 km depth, where the pressure is 400 atm, water suffers only a 1.8% decrease in volume.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
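The pressure excerpt in the record above ends by noting that liquid leaves a hole at the same speed it would reach in free fall from the depth of the hole, i.e. Torricelli's result. A quick worked number with assumed values (h = 0.45 m, g = 9.81 m/s², neither taken from the sources):

```latex
% Torricelli's law with assumed values h = 0.45 m, g = 9.81 m/s^2.
\[
  v = \sqrt{2gh} = \sqrt{2 \times 9.81 \times 0.45} \approx 3.0\ \mathrm{m/s}
\]
```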
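The matric-potential excerpt above attributes water retention in fibrous matrices to menisci in narrow gaps. As a rough, standard capillary relation that is not quoted from the passages (and with a pore radius chosen purely for illustration), the suction holding water in a pore scales inversely with the pore radius, so wringing has to generate at least this much pressure in the squeezed fabric before water leaves pores of that size:

```latex
% Young--Laplace capillary pressure; gamma = surface tension, theta = contact
% angle, r = effective pore radius (all values below are assumptions).
\[
  \Delta P = \frac{2\gamma\cos\theta}{r}
  \approx \frac{2 \times 0.072\ \mathrm{N/m} \times 1}{10^{-5}\ \mathrm{m}}
  \approx 1.4 \times 10^{4}\ \mathrm{Pa}
\]
```

Finer pores hold on proportionally harder, which is consistent with the "cohesive and adhesive" picture in the answer above.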
5g7taa
|
why do people hide their license plate when selling their vehicle but not always their vin?
|
[
{
"answer": "The VIN number only shows who manufactured the car and what model it is. The licence plate tells you what state the car is in and, if someone is thorough, it can reveal the location. You don't want randoms knowing where you live.",
"provenance": null
},
{
"answer": "Somewhat easy, there is belief that car thieves will look on craigslist for cars with high $$$ upgrades on the inside, than use the license plate to gain a location. If you talk all this and that about upgrades like speakers, headunit, and tuners but don't mention anything about an alarm system that you've basically told everyone I have basic security protecting all these upgrades. ",
"provenance": null
},
{
"answer": "If someone get someone else's license plate number they could use it in a false 911 phone call and cause major inconvenience to someone (basically a softer version of SWATTING). This happened to a friend of mine a few years back while selling his car, he forgot to blank the liscense plate and someone 400km away saw the add and called his plate in for speeding or something and the police showed up at his door and questioned him, however they quickly realized it would be impossible for my friend to drive 400km in just over an hour. VIN numbers show manufacturer information, and the VIN number is often used to search for the value of the vehicle on programs such as \"blue-book\".",
"provenance": null
},
{
"answer": " Besides someone noticing your car in a parking lot that was on craigslist, \nThere's an actual Federal law for this called the Driver's Privacy Protection Act. \nBack before there were many privacy laws someone such as a licensed private investigator could put in a request and for a fee find this information, well this led to a murder so since then its very limited. You have to go to great lengths, lie and falsify requests in order to find this out today and that owners plate has to give written consent of their information\n\nThen you have each States motor vehicle offices that have their own rules\n\nBasically only police and official business can search plates for driver information. For such instance a tow truck company can request info to a vehicle they towed, only official stuff like this",
"provenance": null
},
{
"answer": "One reason that is not mentioned yet is that if you put your car up for sale and show the plates someone else with a very similar or identical car know enough of what your plates look like to effectively give you blame for everything they do for weeks or even months before getting caught.\n\nParking tickets, automated speeding cameras, surveillance reviews after break ins, automated toll roads and all kinds of stuff can make you spend all your free time on the phone for quite a while to have it sorted out.\n\nBlock your plates.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "1038873",
"title": "Vehicle registration plates of Germany",
"section": "Section::::Emission, safety test and registration sticker.\n",
"start_paragraph_id": 85,
"start_character": 0,
"end_paragraph_id": 85,
"end_character": 475,
"text": "This is the recommended procedure for selling a car. Alternatively the seller may hand out their car with valid licence plates and papers still in their name to the new owner thus giving them the responsibility to register the car in their name shortly. In a scenario without a proper sales contract the seller may become liable when the buyer commits criminal acts related to the car or plates and thus it is generally not recommended to sell used cars with licence plates.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "15429177",
"title": "Driver's licenses in the United States",
"section": "Section::::Use as identification and proof of age.\n",
"start_paragraph_id": 60,
"start_character": 0,
"end_paragraph_id": 60,
"end_character": 1439,
"text": "Because there is no national identity card in the United States, the driver's license is often used as the \"de facto\" equivalent for completion of many common business and governmental transactions. As a result, driver's licenses are the focus of many kinds of identity theft. Driver's licenses were not always identification cards. In many states, driver's licenses did not even have a photograph well into the 1980s. Activism by the Mothers Against Drunk Driving organization for the use of photo ID age verification in conjunction with increasing the drinking age to 21 in order to reduce underage drinking led to photographs being added to all state licenses. New York and Tennessee were the last states to add photos in 1986. However, New Jersey later allowed drivers to get non-photo licenses; this was later revoked. Vermont license holders have the option of receiving a non-photo license. All Tennessee drivers aged 60 years of age or older had the option of a non-photo driver's license prior to January 2013, when photo licenses were required for voting identification. All people with valid non-photo licenses will be allowed to get a photo license when their current license expires. Thirteen states allow the option of a non-photo driver's license for reasons of religious belief: Arkansas, Indiana, Kansas, Minnesota, Missouri, Nebraska, New Jersey, North Dakota, Oregon, Pennsylvania, Tennessee, Washington, and Wisconsin.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "7106878",
"title": "Vehicle registration plates of Switzerland",
"section": "Section::::Special uses.\n",
"start_paragraph_id": 20,
"start_character": 0,
"end_paragraph_id": 20,
"end_character": 338,
"text": "Because of a European regulation that identification as a rental car should not be possible, the plates with \"V\" are no longer in use. Today, rental cars usually have common car plates with the canton codes VD or AI. Temporary duty unpaid vehicles use \"Z\" plates and year band while Temporary duty paid plate have year band on the right.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16516519",
"title": "Vehicle registration plates of Canada",
"section": "Section::::Showing current registration on plates.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 808,
"text": "Plates that are not up to date quickly attract the attention of law enforcement, because registration \"renewal\" is a transaction that can usually be undertaken only by the car's registered owner, once certain requirements have been met, and because registration fees are a source of government revenue. A delinquent registration sticker is often an indicator that the vehicle may be stolen, that the vehicle's owner has failed to comply with the applicable law regarding emission inspection or insurance, or that the vehicle's owner has unpaid traffic or parking tickets. Even with the stickers, most provinces previously required that all licence plates be replaced every few years; that practice is being abandoned by many provinces because of the expense of continually producing large numbers of plates.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3482737",
"title": "Vehicle registration plates of Sweden",
"section": "Section::::Special plates.:Dealer plates.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 715,
"text": "Dealer plates have black text on a green background. These plates are used on vehicles without registration, insurance and vehicles which have failed inspection. The dealers have reported their car not to be driven, meaning they don't have to pay road tax. Cars can be parked for months awaiting sale. The cars can be used for short test drives with one of these licence plates. Unlike normal Swedish license plates, the dealer plate is not tied to any vehicle but to the plate owner. These plates can also be used by car manufacturers to test vehicles. The plate has a sticker indicating if the plate is for cars, trucks or trailers. The plate shows that the owner has a special insurance that covers test drives.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1861994",
"title": "Real ID Act",
"section": "Section::::Analysis of the law.:IDs and driver's licenses as identification.\n",
"start_paragraph_id": 47,
"start_character": 0,
"end_paragraph_id": 47,
"end_character": 773,
"text": "In the United States, driver's licenses are issued by the states, not by the federal government. Additionally, because the United States has no national identification card and because of the widespread use of cars, driver's licenses have been used as a \"de facto\" standard form of identification within the country. For non-drivers, states also issue voluntary identification cards which do not grant driving privileges. Prior to the Real ID Act, each state set its own rules and criteria regarding the issuance of a driver's license or identification card, including the look of the card, what data is on the card, what documents must be provided to obtain one, and what information is stored in each state's database of licensed drivers and identification card holders.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16516519",
"title": "Vehicle registration plates of Canada",
"section": "Section::::Showing current registration on plates.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 456,
"text": "Currently, Québec, Saskatchewan and Manitoba are the only provinces in which decals are not used. Instead, the police rely on the use of cameras and computers that automatically report any plates for which the registration is expired (making the use of fake stickers obsolete), the car has been reported as stolen and/or similar reasons. That said, the Registration Certificate is the only way for the owner to prove that a vehicle has valid registration.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
qz7e0
|
How often does a comet crash into the Sun?
|
[
{
"answer": "Only one time per comet.",
"provenance": null
},
{
"answer": "The current generation of solar satellites have revolutionized our understanding of this. As of 2011 they've spotted something in the neighborhood of 2024 comets since 1995 (couldn't find the current count). That's a simple average of 126 per year, though the pace of discovery hasn't been constant; [it took 10 years to find the first 1000, but only 5 to double it.](_URL_0_) The really neat thing is that the vast majority are discovered by regular folks who are interested in the Sun and look at the data themselves!\n\nThe majority are from a large comet that broke up into many, many, many pieces, called [Kreutz Sungrazers](_URL_1_). They pass within 1-2 times the radius of the sun, and most evaporate and do not make it around for a second pass. \n\nSometimes, however, they do! [Here you can see Comet Lovejoy on approach and then leaving.](_URL_2_)",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "63794",
"title": "Impact event",
"section": "Section::::Elsewhere in the Solar System.:Observed events.:Other impacts.\n",
"start_paragraph_id": 87,
"start_character": 0,
"end_paragraph_id": 87,
"end_character": 579,
"text": "In 1998, two comets were observed plunging toward the Sun in close succession. The first of these was on June 1 and the second the next day. A video of this, followed by a dramatic ejection of solar gas (unrelated to the impacts), can be found at the NASA website. Both of these comets evaporated before coming into contact with the surface of the Sun. According to a theory by NASA Jet Propulsion Laboratory scientist Zdeněk Sekanina, the latest impactor to actually make contact with the Sun was the \"supercomet\" Howard-Koomen-Michels on August 30, 1979. (See also sungrazer.)\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5962",
"title": "Comet",
"section": "Section::::Fate of comets.:Breakup and collisions.\n",
"start_paragraph_id": 68,
"start_character": 0,
"end_paragraph_id": 68,
"end_character": 422,
"text": "Some comets meet a more spectacular end – either falling into the Sun or smashing into a planet or other body. Collisions between comets and planets or moons were common in the early Solar System: some of the many craters on the Moon, for example, may have been caused by comets. A recent collision of a comet with a planet occurred in July 1994 when Comet Shoemaker–Levy 9 broke up into pieces and collided with Jupiter.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8721706",
"title": "C/2002 V1 (NEAT)",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 822,
"text": "The comet was hit by a coronal mass ejection during its pass near the Sun; some rumoured it had \"disturbed\" the Sun, but scientists dismissed this notion. The scientific consensus is that there is no link between comets and CMEs that can not be explained through simple coincidence, and there were 56 CMEs in February 2003. On February 18, 2003, comet C/2002 V1 (NEAT) passed 5.7 degrees from the Sun. C/2002 V1 (NEAT) appeared impressive as viewed by the Solar and Heliospheric Observatory (SOHO) as a result of the forward scattering of light off of the dust in the coma and tail. After the comet left LASCO's field of view, on February 20, 2003, an object was seen at the bottom of a single frame. Although technicians dismissed this as a software bug, rumours persisted that the object had been expelled from the Sun.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "23159713",
"title": "Ryves Comet",
"section": "Section::::Very near Sun.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 241,
"text": "It appeared as a ball of hot gas traveling at one hundred miles per second from the Naval Observatory. The comet passed within 7,000,000 miles of the Sun on August 26. A wanderer in the solar system, it is considered unlikely to return from\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "157819",
"title": "Meteor shower",
"section": "Section::::Origin of meteoroid streams.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 373,
"text": "Each time a comet swings by the Sun in its orbit, some of its ice vaporizes and a certain amount of meteoroids will be shed. The meteoroids spread out along the entire orbit of the comet to form a meteoroid stream, also known as a \"dust trail\" (as opposed to a comet's \"gas tail\" caused by the very small particles that are quickly blown away by solar radiation pressure).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "930437",
"title": "Sungrazing comet",
"section": "Section::::History of Sungrazers.:20th Century.:Coronagraphic Observations.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 627,
"text": "In 1987 and 1988 it was first observed by SMM that there could be pairs of sungrazing comets that can appear within very short time periods ranging from a half of a day up to about two weeks. Calculations were made to determine that the pairs were part of the same parent body but broke apart at tens of AU from the Sun. The breakup velocities were only on the order of a few meters per second which is comparable to the speed of rotation for these comets. This led to the conclusion that these comets break from tidal forces and that comets C/1882 R1, C/1965 S1, and C/1963 R1 probably broke off from the Great Comet of 1106.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "37615833",
"title": "2013 in science",
"section": "Section::::Events, discoveries and inventions.:November.\n",
"start_paragraph_id": 695,
"start_character": 0,
"end_paragraph_id": 695,
"end_character": 428,
"text": "BULLET::::- 28 November – The comet C/2012 S1 (ISON) passed roughly above the Sun's surface. Although it was highly anticipated that the comet would be visible to the naked eye on Earth once it orbited the sun, it became increasingly evident that it had vaporized as it made its approach. Hours after it passed behind the sun, a part of the comet re-emerged, though significantly smaller. Over the next 24 hours, it too, faded.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
4g84lf
|
what is with the sometime hours and hours or delay in having sore/dead/tired legs after over doing and pushing yourself with leg exercise/walking/running?
|
[
{
"answer": "I believe it's somewhat to do with DOMS (Delayed Onset Muscle Soreness), in which your body begins to repair tiny microtears in the muscle fibres of your legs after long periods of exertion.\n\nDoing this makes your legs stronger and able to endure more physical activity. It's like how people build muscle in the gym.\n\nThe delay must be due to inflammation occurring in your leg tissue hours later as the body begins to repair them. Not inflammation in the sense that your legs are gonna swell up and become red and very hot, but bits of inflammation in the muscle fibres which add up to aching. There'd probably be lactic acid present as well which contributes to the aches, like how a stitch aches.\n\nExplained this out of my own idea of it, so if I'm a bit off what actually happens, I apologise.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "1298492",
"title": "Intermittent claudication",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 448,
"text": "Intermittent claudication (Latin: \"claudicatio intermittens\"), is a symptom that describes muscle pain on mild exertion (ache, cramp, numbness or sense of fatigue), classically in the calf muscle, which occurs during exercise, such as walking, and is relieved by a short period of rest. It is classically associated with early-stage peripheral artery disease, and can progress to critical limb ischemia unless treated or risk factors are modified.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "240832",
"title": "Restless legs syndrome",
"section": "Section::::Treatment.:Physical measures.\n",
"start_paragraph_id": 72,
"start_character": 0,
"end_paragraph_id": 72,
"end_character": 484,
"text": "Stretching the leg muscles can bring temporary relief. Walking and moving the legs, as the name \"restless legs\" implies, brings temporary relief. In fact, those with RLS often have an almost uncontrollable need to walk and therefore relieve the symptoms while they are moving. Unfortunately, the symptoms usually return immediately after the moving and walking ceases. A vibratory counter-stimulation device has been found to help some people with primary RLS to improve their sleep.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "44887063",
"title": "Chronic limb threatening ischemia",
"section": "Section::::Types.:Rest pain.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 284,
"text": "Rest pain is a continuous burning pain of the lower leg or feet. It begins, or is aggravated, after reclining or elevating the limb and is relieved by sitting or standing. It is more severe than intermittent claudication, which is also a pain in the legs from arterial insufficiency.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19723734",
"title": "Muscle",
"section": "Section::::Exercise.\n",
"start_paragraph_id": 65,
"start_character": 0,
"end_paragraph_id": 65,
"end_character": 449,
"text": "Delayed onset muscle soreness is pain or discomfort that may be felt one to three days after exercising and generally subsides two to three days later. Once thought to be caused by lactic acid build-up, a more recent theory is that it is caused by tiny tears in the muscle fibers caused by eccentric contraction, or unaccustomed training levels. Since lactic acid disperses fairly rapidly, it could not explain pain experienced days after exercise.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "218580",
"title": "Myelitis",
"section": "Section::::Symptoms.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 924,
"text": "Depending on the cause of the disease, such clinical conditions manifest different speed in progression of symptoms in a matter of hours to days. Most myelitis manifests fast progression in muscle weakness or paralysis starting with the legs and then arms with varying degrees of severity. Sometimes the dysfunction of arms or legs cause instability of posture and difficulty in walking or any movement. Also symptoms generally include paresthesia which is a sensation of tickling, tingling, burning, pricking, or numbness of a person's skin with no apparent long-term physical effect. Adult patients often report pain in the back, extremities, or abdomen. Patients also present increased urinary urgency, bowel or bladder dysfunctions such as bladder incontinence, difficulty or inability to void, and incomplete evacuation of bowel or constipation. Others also report fever, respiratory problems and intractable vomiting.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3917373",
"title": "Snapping hip syndrome",
"section": "Section::::Treatment.:Self-treatment.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 370,
"text": "Stretching of the tight structures (piriformis, hip abductor, and hip flexor muscle) may alleviate the symptoms. The involved muscle is stretched (for 30 seconds), repeated three times separated by 30 second to 1 minute rest periods, in sets performed two times daily for six to eight weeks. This should allow one to progress back into jogging until symptoms disappear.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2088693",
"title": "Floyd Landis",
"section": "Section::::Career.:Hip ailment.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 727,
"text": "Landis rode the 2006 Tour with the constant pain from the injury, which he described: \"It's bad, it's grinding, it's bone rubbing on bone. Sometimes it's a sharp pain. When I pedal and walk, it comes and goes, but mostly it's an ache, like an arthritis pain. It aches down my leg into my knee. The morning is the best time, it doesn't hurt too much. But when I walk it hurts, when I ride it hurts. Most of the time it doesn't keep me awake, but there are nights that it does.\" During the Tour, Landis was medically approved to take cortisone for this injury, a medication otherwise prohibited in professional cycling for its known potential for abuse. Landis himself called his win \"a triumph of persistence\" despite the pain.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
38q2uq
|
why is a revealing outfit that doesn't quite bare all often so much more attractive than a completely nude body?
|
[
{
"answer": "For the same reason that things like burlesque and stripper shows are popular. It's about anticipation and tantalisation. While the body is covered up, your imagination is running wild. Even the most flawless body is still just a body. Your imagination is always more powerful.\n\nEDIT: general useless spelling.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "1243208",
"title": "Toplessness",
"section": "Section::::Usage and connotations.:Barechestedness.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 449,
"text": "Barechestedness is the state of a man wearing no clothes above the waist, exposing the upper torso. Bare male chests are generally considered acceptable at beaches, swimming pools and sunbathing areas. However, some stores and restaurants have a \"no shirt, no service\" rule to prevent barechested men from coming inside. While going barechested at outdoor activities may be acceptable, it is taboo at office workplaces, churches and other settings.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1243208",
"title": "Toplessness",
"section": "Section::::Usage and connotations.:Barechestedness.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 603,
"text": "In most societies, barechestedness is much more common than toplessness, as exposure of the male pectoral muscles is often considered to be far less taboo than of the female breasts, despite some considering them equally erogenous. Male barechestedness is often due to practical reasons such as heat, or the ability to move the body without being restricted by an upper body garment. In several sports it is encouraged or even obligatory to be barechested. Barechestedness may also be used as a display of power, or to draw attention to oneself, especially if the upper body muscles are well-developed.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34069805",
"title": "Carolyn Cowan",
"section": "Section::::Photography, yoga and meditation.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 416,
"text": "“Today we are so defined by the exterior, labels and what we wear. Clothes hide and mask who we really are. But if you take it away we are nothing else but ourselves. I am fascinated by bodies, regardless one being skinny, not skinny, fat, obese, wrinkled, aged or young. There is beauty in absolutely everything, even in a nude body, which is not perfect as none of us are. There is beauty in human vulnerability.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "14336171",
"title": "Plain dress",
"section": "Section::::Practices.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 784,
"text": "Plain dress is attributed to reasons of theology and sociology. In general, plain dress involves the covering of much of the body (often including the head, forearms and calves), with minimal ornamentation, rejecting print fabrics, trims, fasteners, and jewelry. Non-essential elements of garments such as neckties, collars, and lapels may be minimized or omitted. Practical garments such as aprons and shawls may be layered over the basic ensemble. Plain dress garments are often handmade and may be produced by groups of women in the community for efficiency and to ensure uniformity of style. Plain dress practices can extend to the grooming of hair and beards and may vary somewhat to accommodate stages in the life cycle such as allowing children and older people more latitude.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11979074",
"title": "Backless dress",
"section": "Section::::Evolution.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 660,
"text": "Backless dresses first appeared in the 1920s. In the 1930s, the style became associated with the sun tanning fashions of the time, and the backless dress was a way of showing off a tan, usually without tan lines. The wearer usually had to be slim to be able to pull off the effect. In December 1937, the actress Micheline Patton was controversially filmed from behind while wearing a backless dress in the final episode of the early BBC fashion documentary \"Clothes-Line\". The illusion of nudity led to outraged viewers writing in to complain, and Pearl Binder, who co-presented the show, quipped, \"Grandmamma looks back but Micheline has no back to be seen.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "106121",
"title": "Modesty",
"section": "Section::::In dress.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 545,
"text": "Most discussion of modesty involves clothing. The criteria for acceptable modesty and decency have relaxed continuously in much of the world since the nineteenth century, with shorter, form-fitting, and more revealing clothing and swimsuits, more for women than men. Most people wear clothes that they consider not to be unacceptably immodest for their religion, culture, generation, occasion, and the people present. Some wear clothes which they consider immodest, due to exhibitionism, the desire to create an erotic impact, or for publicity.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "60884944",
"title": "Skin gap",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 473,
"text": "Because women in some countries are forced to cover their bodies and faces, modest dress is often perceived as a symbol of oppression in Western culture even when a woman freely chooses to dress that way. Josephs wrote that when she became an Orthodox Jew and began dressing modestly, she found that covering up made her feel empowered. Her article and short video prompted online discussions and were featured on websites such as \"Glossy\" and the Nachum Segal radio show.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2ok5xw
|
why does food dye in milk react in such a way when soap is added?
|
[
{
"answer": "The soap breaks the surface tension of the milk. The food coloring rests on top because it has a lower density than milk. When you drop soap in the middle, the surface tension drops. But it takes time for that effect to reach the edge of the container. So the edge of milk still has all it's surface tension while the middle doesn't. This makes the food coloring move toward the edges. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "3572030",
"title": "Methyl cellulose",
"section": "Section::::Uses.:Consumer products.:Thickener and emulsifier.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 345,
"text": "Methyl cellulose is very occasionally added to hair shampoos, tooth pastes and liquid soaps, to generate their characteristic thick consistency. This is also done for foods, for example ice cream or croquette. Methyl cellulose is also an important emulsifier, preventing the separation of two mixed liquids because it is an emulsion stabilizer.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "233884",
"title": "Ultra-high-temperature processing",
"section": "Section::::Burnt flavor.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 621,
"text": "Two studies published in the late 20th century showed that UHT treatment causes proteins contained in the milk to unfold and flatten, and the formerly \"buried\" sulfhydryl (SH) groups, which are normally masked in the natural protein, cause extremely-cooked or burnt flavors to appear to the human palate. One study reduced the thiol content by immobilizing sulfhydryl oxidase in UHT-heated skim milk and reported, after enzymatic oxidation, an improved flavor. Two Pennsylvania authors prior to heating added the flavonoid compound epicatechin to the milk, and reported a partial reduction of thermally generated aromas.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "408443",
"title": "Baileys Irish Cream",
"section": "Section::::Drinking.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 437,
"text": "As is the case with milk, cream will curdle whenever it comes into contact with a weak acid. Milk and cream contain casein, which coagulates, when mixed with weak acids such as lemon, tonic water, or traces of wine. While this outcome is undesirable in most situations, some cocktails (such as the cement mixer, which consists of a shot of Bailey's mixed with the squeezed juice from a slice of lime) specifically encourage coagulation.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "167612",
"title": "Jerky",
"section": "",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 268,
"text": "Chemical preservatives can prevent oxidative spoilage, but the moisture-to-protein ratio prevents microbial spoilage by low water activity. Some jerky products are very high in sugar and therefore taste very sweet - unlike biltong, which rarely contains added sugars.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "52994500",
"title": "Creolin",
"section": "Section::::Uses.\n",
"start_paragraph_id": 27,
"start_character": 0,
"end_paragraph_id": 27,
"end_character": 416,
"text": "For the preparation of phenol disinfectants, liquid soaps of different types are used which aid in cleaning and, mainly, the solubility of the active substance (phenols or cresols). It has been standard practice to use soaps which, upon dissolving the finished product in water, give a white, milk-like emulsion. This emulsion contains, dissolved in small particles, the active material, whether phenols or cresols.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "338192",
"title": "Common-ion effect",
"section": "Section::::Solubility effects.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 297,
"text": "The salting-out process used in the manufacture of soaps benefits from the common-ion effect. Soaps are sodium salts of fatty acids. Addition of sodium chloride reduces the solubility of the soap salts. The soaps precipitate due to a combination of common-ion effect and increased ionic strength.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13575891",
"title": "Extracellular polymeric substance",
"section": "Section::::Function.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 363,
"text": "The exopolysaccharides of some strains of lactic acid bacteria, e.g., Lactococcus lactis subsp. cremoris, contribute a gelatinous texture to fermented milk products (e.g., Viili), and these polysaccharides are also digestible. An example for industrial use of exopolysaccharides is the application of dextran in panettone and other breads in the bakery industry.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
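The answer in the record above explains the dye movement through a surface-tension difference between the soapy centre and the still-clean edge. That is the standard description of a Marangoni flow; as an illustrative relation not taken from the quoted excerpts, the gradient of surface tension along the surface acts as a shear stress that drags the surface, and the dye riding on it, toward the region of higher tension, i.e. away from the soap:

```latex
% Marangoni stress: a surface-tension gradient along the surface exerts a
% tangential stress on the liquid beneath it (illustrative, standard form).
\[
  \tau = \frac{\partial \gamma}{\partial x},
  \qquad \text{flow is driven from low } \gamma \text{ (soapy milk) toward high } \gamma \text{ (clean milk)}
\]
```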
1k3ua6
|
how are government subsidies for food producers different from an indirect food tax?
|
[
{
"answer": "Do you read The Week (the magazine)? They just had a good feature on US food subsidies.\n\nAnyways, there is a very simple difference between an indirect tax and a subsidy. \n\nA tax generates revenue for the government. A subsidy is *paid for by the government*, meaning they lose money on it.\n\nIn terms of effects, (changes to price and quantity), they are generally the same.\n\nFor your other questions:\n\n- I don't know about you, but I heard a lot of debate when the new US Farm Bill was being proposed. The main problem is that it is *so* complex that many common people do not understand most of it. However, there have been a lot of controversies in recent years surrounding it, so I think we can expect reform in the next decade or two.\n\n- They can't, really. The large producers expend a lot of resources on how to get the best deal from the government, from lobbying to expansion/contraction of business. The government does not know the *exact* cost structures or production capabilities of the firms, so have to approximate, which leads to inefficiencies.\n\n- ...\n\nHope this helped!\n\n",
"provenance": null
},
{
"answer": "The difference is that tax money is fungible. Yes, the government derives the money for subsidies from taxes, but it can tax anything to subsidize anything else, so you can pay for food subsidies with fuel or luxury or anything else taxes, thus displacing the costs. A direct tax effects the cost of the specific item that it is placed upon.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "5892435",
"title": "Subsidies in India",
"section": "Section::::Introduction.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 842,
"text": "Like indirect taxes, they can alter relative prices and budget constraints and thereby affect decisions concerning production, consumption and allocation of resources. Subsidies in areas such as education, health and environment at times merit justification on grounds that their benefits are spread well beyond the immediate recipients, and are shared by the population at large, present and future. For many other subsidies, however the case is not so clear-cut. Arising due to extensive governmental participation in a variety of economic activities, there are many subsidies that shelter inefficiencies or are of doubtful distributional credentials. Subsidies that are ineffective or distortionary need to be weaned out, for an undiscerning, uncontrolled and opaque growth of subsidies can be deleterious for a country's public finances.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "57392260",
"title": "Taxation in Ukraine",
"section": "Section::::Indirect taxes.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 286,
"text": "An indirect tax is collected by one entity in the supply chain (usually a producer or retailer) and paid to the government, but it is passed on to the consumer as part of the purchase price of a good or service. The consumer is ultimately paying the tax by paying more for the product.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5892435",
"title": "Subsidies in India",
"section": "Section::::Introduction.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 648,
"text": "A subsidy, often viewed as the converse of a tax, is an instrument of fiscal policy. Derived from the Latin word 'subsidium', a subsidy literally implies coming to assistance from behind. However, their beneficial potential is at its best when they are transparent, well targeted, and suitably designed for practical implementation. Subsidies are helpful for both economy and people as well. Subsidies have a long-term impact on the economy; the Green Revolution being one example. Farmers were given good quality grain for subsidised prices. Likewise, we can see that how the government of India is trying to reduce air pollution to subsidies lpg\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "171866",
"title": "Agricultural subsidy",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 541,
"text": "An agricultural subsidy (also called an agricultural incentive) is a government incentive paid to agribusinesses, agricultural organizations and farms to supplement their income, manage the supply of agricultural commodities, and influence the cost and supply of such commodities. Examples of such commodities include: wheat, feed grains (grain used as fodder, such as maize or corn, sorghum, barley and oats), cotton, milk, rice, peanuts, sugar, tobacco, oilseeds such as soybeans and meat products such as beef, pork, and lamb and mutton.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26731463",
"title": "Nōgyōsha kobetsu shotoku hoshō seido",
"section": "Section::::Description.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 539,
"text": "The subsidies are calculated based on the difference between the nationwide average production cost and the nationwide average retail price. The payment has several additional components including a reward for quality, distribution method (e.g. selling in a direct marketing shop), effort of manufacturing (e.g. promotion of rice flour), expansion of management level, environmental conservation measures such as creation diversity, production of cereals that substitute for rice (includes rice for ground rice and animal feed rice), etc.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "216361",
"title": "Food security",
"section": "Section::::Risks to food security.:Agricultural subsidies in the United States.\n",
"start_paragraph_id": 120,
"start_character": 0,
"end_paragraph_id": 120,
"end_character": 362,
"text": "Agricultural subsidies are paid to farmers and agribusinesses to supplement their income, manage the supply of their commodities and influence the cost and supply of those commodities. In the United States, the main crops the government subsidizes contribute to the obesity problem; since 1995, $300 billion have gone to crops that are used to create junk food.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "146719",
"title": "Subsidy",
"section": "Section::::Types.:Consumer/consumption subsidy.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 1283,
"text": "A consumption subsidy is one that subsidises the behavior of consumers. This type of subsidies are most common in developing countries where governments subsidise such things as food, water, electricity and education on the basis that no matter how impoverished, all should be allowed those most basic requirements. For example, some governments offer 'lifeline' rates for electricity, that is, the first increment of electricity each month is subsidised. This paper addresses the problems of defining and measuring government subsidies, examines why and how government subsidies are used as a fiscal policy tool, discusses their economic effects, appraises international empirical evidence on government subsidies, and offers options for their reform. Evidence from recent studies suggests that government expenditures on subsidies remain high in many countries, often amounting to several percentage points of GDP. Subsidization on such a scale implies substantial opportunity costs. There are at least three compelling reasons for studying government subsidy behavior. First, subsidies are a major instrument of government expenditure policy. Second, on a domestic level, subsidies affect domestic resource allocation decisions, income distribution, and expenditure productivity.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
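The answers in the record above make two claims: a subsidy is government money out while a tax is government money in, and both interventions move price and quantity. A toy sketch under assumed linear curves (the curve parameters and the 6-per-unit figures are invented for illustration, not drawn from the sources) makes both points concrete:

```python
# Toy incidence comparison (assumed linear curves, illustrative only).
# Demand: Qd = 100 - P (P = price consumers pay)
# Supply: Qs = -20 + 2 * Pr (Pr = price producers receive)
# Per-unit subsidy s: Pr = P + s; per-unit tax t: Pr = P - t.

def equilibrium(wedge):
    """Solve 100 - P = -20 + 2*(P + wedge); wedge > 0 is a subsidy, < 0 a tax."""
    consumer_price = (120 - 2 * wedge) / 3
    quantity = 100 - consumer_price
    producer_price = consumer_price + wedge
    return consumer_price, producer_price, quantity

cases = [("no intervention", 0.0), ("6/unit subsidy", 6.0), ("6/unit tax", -6.0)]
for label, wedge in cases:
    p_c, p_p, q = equilibrium(wedge)
    budget_effect = -wedge * q   # negative = government pays out, positive = government collects
    print(f"{label:16s} consumer price={p_c:5.1f} producer price={p_p:5.1f} "
          f"quantity={q:5.1f} government budget effect={budget_effect:+7.1f}")
```

Both interventions move the price-quantity point away from the no-intervention outcome; the sign of the government's cash flow, an outlay funded out of general (fungible) tax revenue versus revenue collected, is the difference the answers emphasise.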
1ryihz
|
i know it's not quite scientific, but what are elementary particles(e.g. leptons, bosons) "made of"?
|
[
{
"answer": "Thank you all for your answers! I suppose the question was easier to answer than I thought, though as I would hope, I'm still left wanting more answers to the universes mysteries. I imagine my talk with a physics professor would go something like:\nMe: \"Where did that come from?\"\nProfessor: *Explanation given*\nMe: \"But where did THAT come from?\"\nProfessor: *Explanation given*\nMe: \"But where did THAT come from?\"\nProfessor: \"We don't quite know\"\nMe: ...... \"Amazing\".",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "11274",
"title": "Elementary particle",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 559,
"text": "In particle physics, an elementary particle or fundamental particle is a with no sub structure, thus not composed of other particles. Particles currently thought to be elementary include the fundamental fermions (quarks, leptons, antiquarks, and antileptons), which generally are \"matter particles\" and \"antimatter particles\", as well as the fundamental bosons (gauge bosons and the Higgs boson), which generally are \"force particles\" that mediate interactions among fermions. A particle containing two or more elementary particles is a \"composite particle\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "385334",
"title": "List of particles",
"section": "Section::::Elementary particles.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 628,
"text": "Elementary particles are particles with no measurable internal structure; that is, it is unknown whether they are composed of other particles. They are the fundamental objects of quantum field theory. Many families and sub-families of elementary particles exist. Elementary particles are classified according to their spin. Fermions have half-integer spin while bosons have integer spin. All the particles of the Standard Model have been experimentally observed, recently including the Higgs boson in 2012. Many other hypothetical elementary particles, such as the graviton, have been proposed, but not observed experimentally.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "20556915",
"title": "Boson",
"section": "Section::::Properties.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 445,
"text": "All known elementary and composite particles are bosons or fermions, depending on their spin: Particles with half-integer spin are fermions; particles with integer spin are bosons. In the framework of nonrelativistic quantum mechanics, this is a purely empirical observation. In relativistic quantum field theory, the spin–statistics theorem shows that half-integer spin particles cannot be bosons and integer spin particles cannot be fermions.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30778041",
"title": "Particle",
"section": "Section::::Conceptual properties.:Composition.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 784,
"text": "Particles can also be classified according to composition. \"Composite particles\" refer to particles that have composition – that is particles which are made of other particles. For example, a carbon-14 atom is made of six protons, eight neutrons, and six electrons. By contrast, \"elementary particles\" (also called \"fundamental particles\") refer to particles that are not made of other particles. According to our current understanding of the world, only a very small number of these exist, such as leptons, quarks, and gluons. However it is possible that some of these might turn up to be composite particles after all, and merely appear to be elementary for the moment. While composite particles can very often be considered \"point-like\", elementary particles are truly \"punctual\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "11274",
"title": "Elementary particle",
"section": "Section::::Overview.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 315,
"text": "All elementary particles are either bosons or fermions. These classes are distinguished by their quantum statistics: fermions obey Fermi–Dirac statistics and bosons obey Bose–Einstein statistics. Their spin is differentiated via the spin–statistics theorem: it is half-integer for fermions, and integer for bosons.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "31880",
"title": "Universe",
"section": "Section::::Composition.:Ordinary matter.\n",
"start_paragraph_id": 60,
"start_character": 0,
"end_paragraph_id": 60,
"end_character": 605,
"text": "Ordinary matter is composed of two types of elementary particles: quarks and leptons. For example, the proton is formed of two up quarks and one down quark; the neutron is formed of two down quarks and one up quark; and the electron is a kind of lepton. An atom consists of an atomic nucleus, made up of protons and neutrons, and electrons that orbit the nucleus. Because most of the mass of an atom is concentrated in its nucleus, which is made up of baryons, astronomers often use the term \"baryonic matter\" to describe ordinary matter, although a small fraction of this \"baryonic matter\" is electrons.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2616739",
"title": "Particle zoo",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 488,
"text": "In the history of particle physics, the situation was particularly confusing in the late 1960s. Before the discovery of quarks, hundreds of strongly interacting particles (hadrons) were known and believed to be distinct elementary particles in their own right. It was later discovered that they were not elementary particles, but rather composites of the quarks. The set of particles believed today to be elementary is known as the Standard Model and includes quarks, bosons and leptons.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
5h1l0a
|
How does the electrolyte function in a dry cell battery?
|
[
{
"answer": "I can't tell you about the specifics of dry batteries but I can try to address your two main questions.\n\n1) The plates have positive or negative charges like a capacitor. However, unlike a capacitor, there are reactions at the electrodes that facilitate the replenishment of the charges and so a constant (-ish) voltage is maintained (a capacitor's voltage depletes overtime). Sometimes the electrodes themselves are involved in these reactions. But I think the your confusion follows from the fact that the electrons don't flow through the electrolyte. Charged ions carry the charge in the electrolyte. At least this is the case for liquid electrolytes. I can only assume that this is also the case with dry, solid-state batteries and I think it's the defects within the material that facilitate movement of charge.\n\n2) Imagine two half cells with two different concentrations of copper metal ions and each with a copper electrode. If you put an ionic bridge between the two beakers and connect a circuit to them, electrons will flow from the beaker with the lower Cu concentration to the beaker with the higher Cu concentration. The electrons will flow until the concentrations become equal and the cells are at equilibrium. This is why batteries 'run out' of energy; for the cell reaction to preceded any further and for one the concentrations to increase, energy must be supplied. Put another way, we've reach the bottom of a energy well and trying to go up either side of this well requires energy. However, instead of a difference in concentration, batteries typically utilise a difference in reactivity of two metals (you get a lot more energy compared with just a difference in concentration). This follows the same basic idea; the reaction or cell wants to move towards equilibrium. \n\nBut what happens to the electrons if there's a component in the circuit? They don't get used up. It's the energy they 'carry' that is used up, not the electrons. The current or flow of electrons is constant throughout the circuit.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "4033683",
"title": "Battery (vacuum tube)",
"section": "Section::::A battery.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 840,
"text": "An A battery is any battery used to provide power to the filament of a vacuum tube. It is sometimes colloquially referred to as a \"wet battery\". (A dry cell could be used for the purpose, but the ampere-hour capacity of dry cells was too low at the time to be of practical use in this service). The term comes from the days of valve (tube) radios when it was common practice to use a dry battery for the plate (anode) voltage and a rechargeable lead/acid \"wet\" battery for the filament voltage. (The filaments in vacuum tubes consumed much more current than the anodes, and so the \"A\" battery would drain much more rapidly than the \"B\" battery; therefore, using a rechargeable \"A\" battery in this role reduced the need for battery replacement. In contrast, a non-rechargeable \"B\" battery would need to be replaced relatively infrequently.)\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1178756",
"title": "Dry cell",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 717,
"text": "A dry cell uses a paste electrolyte, with only enough moisture to allow current to flow. Unlike a wet cell, a dry cell can operate in any orientation without spilling, as it contains no free liquid, making it suitable for portable equipment. By comparison, the first wet cells were typically fragile glass containers with lead rods hanging from the open top and needed careful handling to avoid spillage. Lead–acid batteries did not achieve the safety and portability of the dry cell until the development of the gel battery. Wet cells have continued to be used for high-drain applications, such as starting internal combustion engines, because inhibiting the electrolyte flow tends to reduce the current capability.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "254510",
"title": "Galvanic cell",
"section": "Section::::Electrochemical thermodynamics of galvanic cell reactions.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 279,
"text": "Galvanic cells and batteries are typically used as a source of electrical power. The energy derives from a high-cohesive-energy metal dissolving while to a lower-energy metal is deposited, and/or from high-energy metal ions plating out while lower-energy ions go into solution. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19174720",
"title": "Electric battery",
"section": "Section::::Categories and types of batteries.:Cell types.:Dry cell.\n",
"start_paragraph_id": 38,
"start_character": 0,
"end_paragraph_id": 38,
"end_character": 694,
"text": "A common dry cell is the zinc–carbon battery, sometimes called the dry Leclanché cell, with a nominal voltage of 1.5 volts, the same as the alkaline battery (since both use the same zinc–manganese dioxide combination). A standard dry cell comprises a zinc anode, usually in the form of a cylindrical pot, with a carbon cathode in the form of a central rod. The electrolyte is ammonium chloride in the form of a paste next to the zinc anode. The remaining space between the electrolyte and carbon cathode is taken up by a second paste consisting of ammonium chloride and manganese dioxide, the latter acting as a depolariser. In some designs, the ammonium chloride is replaced by zinc chloride.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19174720",
"title": "Electric battery",
"section": "Section::::Categories and types of batteries.:Cell types.:Wet cell.\n",
"start_paragraph_id": 35,
"start_character": 0,
"end_paragraph_id": 35,
"end_character": 1205,
"text": "A \"wet cell\" battery has a liquid electrolyte. Other names are \"flooded cell\", since the liquid covers all internal parts, or \"vented cell\", since gases produced during operation can escape to the air. Wet cells were a precursor to dry cells and are commonly used as a learning tool for electrochemistry. They can be built with common laboratory supplies, such as beakers, for demonstrations of how electrochemical cells work. A particular type of wet cell known as a concentration cell is important in understanding corrosion. Wet cells may be primary cells (non-rechargeable) or secondary cells (rechargeable). Originally, all practical primary batteries such as the Daniell cell were built as open-top glass jar wet cells. Other primary wet cells are the Leclanche cell, Grove cell, Bunsen cell, Chromic acid cell, Clark cell, and Weston cell. The Leclanche cell chemistry was adapted to the first dry cells. Wet cells are still used in automobile batteries and in industry for standby power for switchgear, telecommunication or large uninterruptible power supplies, but in many places batteries with gel cells have been used instead. These applications commonly use lead–acid or nickel–cadmium cells.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19174720",
"title": "Electric battery",
"section": "Section::::Categories and types of batteries.:Cell types.:Dry cell.\n",
"start_paragraph_id": 37,
"start_character": 0,
"end_paragraph_id": 37,
"end_character": 528,
"text": "A \"dry cell\" uses a paste electrolyte, with only enough moisture to allow current to flow. Unlike a wet cell, a dry cell can operate in any orientation without spilling, as it contains no free liquid, making it suitable for portable equipment. By comparison, the first wet cells were typically fragile glass containers with lead rods hanging from the open top and needed careful handling to avoid spillage. Lead–acid batteries did not achieve the safety and portability of the dry cell until the development of the gel battery.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3133405",
"title": "Flow battery",
"section": "Section::::Types.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 295,
"text": "Various types of flow cells (batteries) have been developed, including redox, hybrid and membraneless. The fundamental difference between conventional batteries and flow cells is that energy is stored not as the electrode material in conventional batteries but as the electrolyte in flow cells.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
ce9oxv
|
what makes a good haircut?
|
[
{
"answer": "Asking what a good haircut looks like is like asking what a pretty flower looks like. You kinda just know it when you see it. Browse around on some “popular” haircuts and just tell your barber what you want. Seems like your gf already might have some ideas.",
"provenance": null
},
{
"answer": "Go to a licensed barbershop. Not a salon or discount place. Ask for a gentlemen’s cut. Haircut should run you $30-40.\n\nAsk for a scissor cut. Long on top, taper fade on the sides.\n\nThis is pretty much the traditional ww2 style haircut everyone has. If you want to have a more extreme fade, you can ask them to use clippers down to a 1.\n\nI personally vary my fade length down to a 0, up to scissor length of a half inch depending on the season. \n\nBuy some “American crew” pomade off amazon for $10. Style your hair with that and a **wide toothed** comb. You can easily find one off amazon for < 5.",
"provenance": null
},
{
"answer": "r/malegrooming is that way.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "2460120",
"title": "Artificial hair integrations",
"section": "Section::::Types of hair.:Human hair.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 577,
"text": "The cuticle is responsible for much of the mechanical strength of the hair fiber. A healthy cuticle is more than just a protective layer, as the cuticle also controls the water content of the fiber. Much of the shine that makes healthy hair so attractive is due to the cuticle. In the hair industry, the only way to obtain the very best hair (with cuticle intact and facing the same direction) is to use the services of \"hair collectors,\" who cut the hair directly from people's heads, and bundle it as ponytails. This hair is called virgin cuticle hair, or just cuticle hair.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "35938564",
"title": "Undercut (hairstyle)",
"section": "Section::::Origins.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 709,
"text": "Historically, the undercut has been associated with poverty and inability to afford a barber competent enough to blend in the sides, as on a short back and sides haircut. From the turn of the 20th century until the 1920s, the undercut was popular among young working class men, especially members of street gangs. In interwar Glasgow, the Neds (precursors to the Teddy Boys) favored a haircut that was long on top and cropped at the back and sides. Despite the fire risk, lots of paraffin wax was used to keep the hair in place. Other gangs who favored this haircut were the Scuttlers of Manchester and the Peaky Blinders of Birmingham, because longer hair put the wearer at a disadvantage in a street fight.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "38633501",
"title": "Regular haircut",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 492,
"text": "A regular haircut is a men's and boys' hairstyle that has hair long enough to comb on top, a defined or deconstructed side part, and a short, semi-short, medium, long, or extra long back and sides. The style is also known by other names including taper cut, regular taper cut, side-part and standard haircut; as well as short back and sides, business-man cut and professional cut, subject to varying national, regional, and local interpretations of the specific taper for the back and sides.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "52401",
"title": "Hairstyle",
"section": "Section::::Process.:Cutting.\n",
"start_paragraph_id": 48,
"start_character": 0,
"end_paragraph_id": 48,
"end_character": 278,
"text": "Hair cutting or hair trimming is intended to create or maintain a specific shape and form. There are ways to trim one's own hair but usually another person is enlisted to perform the process, as it is difficult to maintain symmetry while cutting hair at the back of one's head.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2057086",
"title": "Flattop",
"section": "Section::::Haircutting methods.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 333,
"text": "The haircut is usually done with electric clippers utilizing the clipper over comb technique, though it can also be cut shears over comb or freehand with a clipper. Some barbers utilize large combs designed for cutting flattops. Others use wide rotary clipper blades specifically designed for freehand cutting the top of a flattop. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "35938564",
"title": "Undercut (hairstyle)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 511,
"text": "The undercut is a hairstyle that was fashionable from the 1910s to the 1940s, predominantly among men, and saw a steadily growing revival in the 1980s before becoming fully fashionable again in the 2010s. Typically, the hair on the top of the head is long and parted on either the side or center, while the back and sides are buzzed very short. It is closely related to the curtained hair of the mid-to-late 1990s, although those with undercuts during the 2010s tend to slick back the bangs away from the face.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6615665",
"title": "Hime cut",
"section": "Section::::Care and maintenance.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 552,
"text": "The hime cut is high-maintenance for those without naturally straight hair, and requires frequent touch-ups on the sidelocks and front bangs in order to maintain its shape. Hair straightening is sometimes used to help with these problems as well as straightening irons and specially formulated shampoos for straight hair. Humidity is also cited as a problem with certain hair types, as the curling caused by excess humidity can change the shape of the hair. Occasionally hair extensions and weaves are used for the side locks in order to prevent this.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
8jdxce
|
In the nervous system, how exactly does a stimulus cause the initial depolarization of the membrane that will then open the sodium voltage-dependent channels once the threshold value is reached?
|
[
{
"answer": "At the synapse, the presynaptic terminal releases neurotransmitter that causes receptors on the postsynaptic cell to respond. Typically this can be ligand gated ion channels, G\\-protein coupled receptors, or receptor tyrosine kinases. If the input is stimulatory, that is it triggers an action potential, the ligand gated channels allow sodium and/or calcium to enter the cell causing depolarization. If enough of those channels open to allow for above threshold level depolarization the voltage gated channels open. There are pretty complete explanations [here](_URL_0_).\n\nI tried to keep this explanation broad enough to be true of most types of action potential, but there are many specific types of nerves and synapses so if I didn't answer your question or if I missed your point let me know. ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "598586",
"title": "Threshold potential",
"section": "Section::::Physiological function and characteristics.:Depolarization.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 1347,
"text": "However, once a stimulus activates the voltage-gated sodium channels to open, positive sodium ions flood into the cell and the voltage increases. This process can also be initiated by ligand or neurotransmitter binding to a ligand-gated channel. More sodium is outside the cell relative to the inside, and the positive charge within the cell propels the outflow of potassium ions through delayed-rectifier voltage-gated potassium channels. Since the potassium channels within the cell membrane are delayed, any further entrance of sodium activates more and more voltage-gated sodium channels. Depolarization above threshold results in an increase in the conductance of Na sufficient for inward sodium movement to swamp outward potassium movement immediately. If the influx of sodium ions fails to reach threshold, then sodium conductance does not increase a sufficient amount to override the resting potassium conductance. In that case, subthreshold membrane potential oscillations are observed in some type of neurons. If successful, the sudden influx of positive charge depolarizes the membrane, and potassium is delayed in re-establishing, or hyperpolarizing, the cell. Sodium influx depolarizes the cell in attempt to establish its own equilibrium potential (about +52 mV) to make the inside of the cell more positive relative to the outside.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "248384",
"title": "Refractory period (physiology)",
"section": "Section::::Neuronal refractory period.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 496,
"text": "Phase one is depolarization. During depolarization, voltage-gated sodium ion channels open, increasing the neuron's membrane conductance for sodium ions and depolarizing the cell's membrane potential (from typically -70 mV toward a positive potential). In other words, the membrane is made less negative. After the potential reaches the activation threshold (-55 mV), the depolarization is actively driven by the neuron and overshoots the equilibrium potential of an activated membrane (+30 mV).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1351369",
"title": "Ligand-gated ion channel",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 493,
"text": "When a presynaptic neuron is excited, it releases a neurotransmitter from vesicles into the synaptic cleft. The neurotransmitter then binds to receptors located on the postsynaptic neuron. If these receptors are ligand-gated ion channels, a resulting conformational change opens the ion channels, which leads to a flow of ions across the cell membrane. This, in turn, results in either a depolarization, for an excitatory receptor response, or a hyperpolarization, for an inhibitory response.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "563161",
"title": "Membrane potential",
"section": "Section::::Effects and implications.\n",
"start_paragraph_id": 87,
"start_character": 0,
"end_paragraph_id": 87,
"end_character": 271,
"text": "In neuronal cells, an action potential begins with a rush of sodium ions into the cell through sodium channels, resulting in depolarization, while recovery involves an outward rush of potassium through potassium channels. Both of these fluxes occur by passive diffusion.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2879958",
"title": "Sodium channel",
"section": "Section::::Voltage-gated sodium channels.:Gating.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 661,
"text": "Before an action potential occurs, the axonal membrane is at its normal resting potential, and Na channels are in their deactivated state, blocked on the extracellular side by their activation gates. In response to an electric current (in this case, an action potential), the activation gates open, allowing positively charged Na ions to flow into the neuron through the channels, and causing the voltage across the neuronal membrane to increase. Because the voltage across the membrane is initially negative, as its voltage increases \"to\" and \"past\" zero, it is said to depolarize. This increase in voltage constitutes the rising phase of an action potential.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "156998",
"title": "Action potential",
"section": "Section::::Biophysical basis.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 1228,
"text": "As the membrane potential is increased, sodium ion channels open, allowing the entry of sodium ions into the cell. This is followed by the opening of potassium ion channels that permit the exit of potassium ions from the cell. The inward flow of sodium ions increases the concentration of positively charged cations in the cell and causes depolarization, where the potential of the cell is higher than the cell's resting potential. The sodium channels close at the peak of the action potential, while potassium continues to leave the cell. The efflux of potassium ions decreases the membrane potential or hyperpolarizes the cell. For small voltage increases from rest, the potassium current exceeds the sodium current and the voltage returns to its normal resting value, typically −70 mV. However, if the voltage increases past a critical threshold, typically 15 mV higher than the resting value, the sodium current dominates. This results in a runaway condition whereby the positive feedback from the sodium current activates even more sodium channels. Thus, the cell \"fires\", producing an action potential. The frequency at which a neuron elicits action potentials is often referred to as a firing rate or neural firing rate.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "156998",
"title": "Action potential",
"section": "Section::::Phases.:Afterhyperpolarization.\n",
"start_paragraph_id": 56,
"start_character": 0,
"end_paragraph_id": 56,
"end_character": 751,
"text": "The depolarized voltage opens additional voltage-dependent potassium channels, and some of these do not close right away when the membrane returns to its normal resting voltage. In addition, further potassium channels open in response to the influx of calcium ions during the action potential. The intracellular concentration of potassium ions is transiently unusually low, making the membrane voltage \"V\" even closer to the potassium equilibrium voltage \"E\". The membrane potential goes below the resting membrane potential. Hence, there is an undershoot or hyperpolarization, termed an afterhyperpolarization, that persists until the membrane potassium permeability returns to its usual value, restoring the membrane potential to the resting state.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
2cvkve
|
where the phrase 'second nature' comes from
|
[
{
"answer": "It's a corruption of the Latin phrase *secundum naturam*, which means 'according to one's nature'. Basically, whatever you're referring to meshes well with your natural abilities or tendencies, as opposed to something that was *contra naturam* (against one's nature), or *super naturam* (above nature, or Godlike). ",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "5938176",
"title": "Nature (philosophy)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 262,
"text": "Nature has two inter-related meanings in philosophy. On the one hand, it means the set of all things which are natural, or subject to the normal working of the laws of nature. On the other hand, it means the essential properties and causes of individual things.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5938176",
"title": "Nature (philosophy)",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 227,
"text": "The word \"nature\" derives from Latin \"nātūra\", a philosophical term derived from the verb for birth, which was used as a translation for the earlier (pre-Socratic) Greek term \"phusis\", derived from the verb for natural growth.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "42140963",
"title": "Second Nature (Dan Hartman song)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 202,
"text": "\"Second Nature\" is a song by American musician-singer-songwriter Dan Hartman, released as the fourth and final single from his 1984 album \"I Can Dream About You\". The single was released in early 1985.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "163901",
"title": "Information society",
"section": "Section::::Second and third nature.\n",
"start_paragraph_id": 46,
"start_character": 0,
"end_paragraph_id": 46,
"end_character": 616,
"text": "\"Second nature\" refers a group of experiences that get made over by culture. They then get remade into something else that can then take on a new meaning. As a society we transform this process so it becomes something natural to us, i.e. second nature. So, by following a particular pattern created by culture we are able to recognise how we use and move information in different ways. From sharing information via different time zones (such as talking online) to information ending up in a different location (sending a letter overseas) this has all become a habitual process that we as a society take for granted.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21830",
"title": "Nature",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 702,
"text": "The word \"nature\" is derived from the Latin word \"natura\", or \"essential qualities, innate disposition\", and in ancient times, literally meant \"birth\". \"Natura\" is a Latin translation of the Greek word \"physis\" (φύσις), which originally related to the intrinsic characteristics that plants, animals, and other features of the world develop of their own accord. The concept of nature as a whole, the physical universe, is one of several expansions of the original notion; it began with certain core applications of the word φύσις by pre-Socratic philosophers, and has steadily gained currency ever since. This usage continued during the advent of modern scientific method in the last several centuries.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4791872",
"title": "Appeal to nature",
"section": "Section::::Forms.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 371,
"text": "In some contexts, the use of the terms of \"nature\" and \"natural\" can be vague, leading to unintended associations with other concepts. The word \"natural\" can also be a loaded term – much like the word \"normal\", in some contexts, it can carry an implicit value judgement. An appeal to nature would thus beg the question, because the conclusion is entailed by the premise.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3248457",
"title": "The Constitution of Man",
"section": "Section::::Summary/Content.:Chapter I: On Natural Laws.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 1566,
"text": "For Combe, \"A \"law...\"denotes a rule of action; its existence indicates an established and constant mode, or process, according to which phenomena take place.\" Natural Laws refer to \"the rules of action impressed on objects and beings by their natural constitution\" Combe presents the relationship between God, Nature, and the Natural Laws: \"If, then, the reader keep in view that God is the creator; that Nature, in the general sense, means the world which He has made; and, in a more limited sense, the particular constitution which he has bestowed on any special object...and that a Law of Nature means the established mode in which that constitution acts, and the obligation thereby imposed on intelligent beings to attend to it, he will be in no danger of misunderstanding my meaning\" Combe identifies three categories for the Natural Laws: Physical, Organic, and Intelligent. The Physical Laws \"embrace all the phenomena of mere matter,\" the Organic Laws [indicate] that \"every phenomenon connected with the production, health, growth, decay, and death of vegetables and animals, takes place with undeviating regularity.\" Combe defines Intelligent beings as \"all animals that have a distinct consciousness,\" and the Intelligent Laws concern the makeup of the mental capacities of Intelligent beings. He then identifies four principles concerning the Natural Laws: 1) the Laws are independent 2) obeying the Laws brings rewards and disobedience brings punishment 3) the Laws are fixed and universal, and 4) the laws are harmonious with the constitution of man.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
920t84
|
Wool as a material for uniforms: why? And why was it still used into WW2?
|
[
{
"answer": "Delving at little further into OPs question I'm curious if there was a transition period at all where there were wool uniforms for some environments and cotton uniforms for others? Did the layering of uniforms change to mitigate the loss of advantages wool offers, such as wool or (eventually) synthetic undergarments?\n\n \n\nI'm basing this follow up question on the belief that is common within the outdoors community that wool offers a lot of advantages over cotton.",
"provenance": null
},
{
"answer": "Alright, since no good answer has come up yet, I'll field this one. \n\nThere are a lot of misconceptions about wool, and they mostly stem from the horrible synthetic blends that parade around as \"wool\" in cheap suits, cheap coats, cheap socks, and cheap blankets. If your only personal experience with wool is with something like that, of course it seems like a horrifying material to wear, especially in a military uniform. \n\nBut wool was used for hundreds of years, if not thousands, because it was one of the most widely available, inexpensive, and versatile fabrics available to humanity until the last 40 years or so. Depending on the weave and thickness, wool could make quite comfortable blankets, coats, or cloaks, and was even used for athletic clothing and swimwear. Wool wicks moisture away from the body, making it an ideal fabric for cold weather wear, since the material will keep sweat from staying on the body and will prevent outer moisture from penetrating. Outdoor wear, even today, is highly dependent on wool as an inner layer, especially in socks, because other fabrics (like cotton) will swell with moisture and, once wet, will get extremely cold and make the outer layer essentially useless in preventing cold-weather fatigue and injuries.\n\nWool also stretches, and is extremely pliable and durable. Medieval hose, which were often meant to be skin-tight, were generally made of wool and were expected to last a year or more of continuous use. Heavier upper-body clothing, like doublets and coats and overcoats, et al, were often made from a heavier weave and were similarly durable and long-lasting.\n\nSo for military uniforms, wool is beneficial in cold climates because its moisture-wicking properties, and is also not as uncomfortable as you might imagine in hot climates. Wool breathes, it's part of the same microscopic geometry that makes it wick moisture, and so in even direct sunlight it does a good job of blocking sunlight while allowing fresh air to penetrate. An undershirt of cotton with an overshirt or blouse of wool was a standard of the US army, even in the west, where temperatures could fluctuate wildly.\n\nSo there's no reason that wool was holding anything back or was somehow sub-standard. Cotton blends became less expensive and as uniforms changed it was simply a better option for hard-use field uniforms, but wool remained the main component of dress uniforms for a very long time.\n\nIt's honeslty hard to find sources that argue all of this of any quality, partly because wool's ubiquity is itself a weird paradox; when it was the only choice no one wrote about it because it was obvious. Nowadays the assumption is that wool is terrible and scratchy and bad, thanks to synthetic blends. And now, of course, there *are* better options for many things. Performance fabrics and the like. Much of my familiarity with it is wearing wool uniforms at a working historic site, and just realizing that I was fairly comfortable even when the temperatures reached the 90s F, and when I changed back into modern t-shirt and shorts it made very little difference to my personal comfort.\n\ntl;dr, though: wool is extremely versatile, and until the mid 20th century or so was abundant and inexpensive, and made an ideal choice for all sorts of uses.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "172336",
"title": "Mohair",
"section": "Section::::US subsidies for mohair production.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 1318,
"text": "During World War II, U.S. soldiers wore uniforms made of wool. Worried that domestic producers could not supply enough for future wars, Congress enacted loan and price support programs for wool and mohair in the National Wool Act of 1954 as part of the 1954 Farm Bill. Despite these subsidies, wool and mohair production declined. The strategic importance declined as well; the US military adopted uniforms made of synthetic fibers, such as dacron, and officially removed wool from the list of strategic materials in 1960. Nevertheless, the U.S. government continued to provide subsidies to mohair producers until 1995, when the subsidies were \"eliminated effective with the marketing year ending December 31, 1995\". In \"The Future of Freedom: Illiberal Democracy at Home and Abroad\" Fareed Zakaria points out that the subsidies were reinstated a few years later, due in large part to the lobbying on behalf of the special interests of the subsidy recipients. By 2000, Congress had appropriated US$20 million for goat and sheep producers. As of 2002, mohair producers were still able to receive special assistance loans from the U.S. government, after an amendment to eliminate the subsidy was defeated. The U.S. currently subsidizes mohair production under the Marketing Assistance Loan Program of the 2014 Farm Act.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "43595963",
"title": "Hardwick Clothes",
"section": "Section::::History.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 297,
"text": "For the first half of the 20th century, Hardwick Mills was one of the largest manufacturers of wool fabric and men's clothing in the world. During WWII, Hardwick Mills manufactured uniforms for the military. After the war, the demand for wool decreased with the introduction of synthetic fabrics.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "44614637",
"title": "Queensland Woollen Manufacturing Company mill",
"section": "Section::::Heritage listing.\n",
"start_paragraph_id": 46,
"start_character": 0,
"end_paragraph_id": 46,
"end_character": 283,
"text": "As a producer of uniform fabric and blankets for the armed forces in both the First and Second World Wars, The Queensland Woollen Manufacturing Company made an important contribution to Australia's war effort. It has also been important as a supplier of Railway and Police uniforms.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41788249",
"title": "Joseph Gray (painter)",
"section": "Section::::Artwork.:Art of camouflage.\n",
"start_paragraph_id": 24,
"start_character": 0,
"end_paragraph_id": 24,
"end_character": 392,
"text": "During the early years of World War Two he also devised a kind of steel wool camouflage which was used to conceal large military bases and factories from air attack. Gray's notes from his time as a camouflage officer and his research and experiments into steel wool are now kept in the Imperial War Museum Archive. There are photographs, drawings, samples of material, reports and memoranda.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "65955",
"title": "Denim",
"section": "Section::::Etymology and origin.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 336,
"text": "Throughout the 20th century denim was used for cheap durable uniforms like those issued to staff of the French national railways. In the postwar years, Royal Air Force overalls for dirty work were named \"denims.\" These were a one-piece garment, with long legs and sleeves, buttoned from throat to crotch, in an olive drab denim fabric.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12452102",
"title": "Bernat Mill",
"section": "Section::::The Bachman Uxbridge Worsted Company.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 292,
"text": "American Civil War uniforms, World War I Khaki overcoats, and World War II U.S. Army uniforms have all been manufactured in this mill. Latch hook yarn kits were developed by \"Bernat\", here circa 1968 and the name of the mill changes to the Bernat Mill, then the third largest U.S. yarn mill.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "57110237",
"title": "Charlottesville Woolen Mills",
"section": "Section::::Twentieth Century.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 247,
"text": "The Woolen Mills thrived, proving woolen textiles to a variety of businesses, with a particularly strong base in the uniform trade. Military schools, city police departments, and the United States military all ordered uniform cloth from the mill.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
272v5r
|
what's wrong with dumping radioactive waste in the bottom of the ocean?
|
[
{
"answer": "Kaiju \nGojira \nHave you no cinema history? \n/s",
"provenance": null
},
{
"answer": "Offhand, I would consider deep ocean circulation. Water in the ocean can be separated into several parts but basically consider top and bottom. Top is moved by wind but bottom is moved by slight differences in temperature and salinity. Eventually though, due to the movements of the top, the water from the bottom will move up (like peru). This means that nutrients and whatever is in the bottom water ends up going up and mixing with the water on top. Peru is a good example for this, where the upwelling allows for good fishing conditions. \n\nIf we then consider how certain things move up the food chain, it would be a bad idea. Short term there shouldn't be too much problem but in the long run. It could be worse.",
"provenance": null
},
{
"answer": "Godzillas Godzillas everywhere",
"provenance": null
},
{
"answer": "Yes! Because the solution to pollution is dilution!",
"provenance": null
},
{
"answer": "It is because of how sensitive ecological systems can be. \n\n Adding or subtracting away elements of the environment could potentially mess up other important processes that is crucial for all kinds of lifeforms to live. \nIf we introduce radioactive material maybe it'll deter certain type of animals away that help regulate blooms in algae. Maybe it'll get rid of viruses that help liberate some of the carbon from single-celled organisms. With the ocean it is really hard to determine what might happen, because oceans have currents which directly connects other ecosystems together and so we don't want to mess with that until we know the full picture.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "37257",
"title": "Radioactive waste",
"section": "Section::::Management.:Long term management.:Geologic disposal.\n",
"start_paragraph_id": 109,
"start_character": 0,
"end_paragraph_id": 109,
"end_character": 648,
"text": "Ocean floor disposal of radioactive waste has been suggested by the finding that deep waters in the North Atlantic Ocean do not present an exchange with shallow waters for about 140 years based on oxygen content data recorded over a period of 25 years. They include burial beneath a stable abyssal plain, burial in a subduction zone that would slowly carry the waste downward into the Earth's mantle, and burial beneath a remote natural or human-made island. While these approaches all have merit and would facilitate an international solution to the problem of disposal of radioactive waste, they would require an amendment of the Law of the Sea.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34015321",
"title": "Ocean disposal of radioactive waste",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 215,
"text": "\"Ocean floor disposal\" (or sub-seabed disposal)—a more deliberate method of delivering radioactive waste to the ocean floor and depositing it into the seabed—was studied by the UK and Sweden, but never implemented.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "820248",
"title": "Ocean floor disposal",
"section": "",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 381,
"text": "Beyond technical and political considerations, the London Convention places prohibitions on disposing of radioactive materials at sea and does not make a distinction between waste dumped directly into the water and waste that is buried underneath the ocean's floor. It remained in force until 2018, after which the sub-seabed disposal option can be revisited at 25-year intervals.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "29091",
"title": "Ship-Submarine Recycling Program",
"section": "Section::::Reactor vessel disposal.:Prior disposal methods.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 319,
"text": "In 1972, the London Dumping Convention restricted ocean disposal of radioactive waste and in 1993, ocean disposal of radioactive waste was completely banned. The US Navy began a study on scrapping nuclear submarines; two years later shallow land burial of reactor compartments was selected as the most suitable option.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34015321",
"title": "Ocean disposal of radioactive waste",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 482,
"text": "From 1946 through 1993, thirteen countries (fourteen, if the USSR and Russia are considered separately) used ocean disposal or ocean dumping as a method to dispose of nuclear/radioactive waste. The waste materials included both liquids and solids housed in various containers, as well as reactor vessels, with and without spent or damaged nuclear fuel. Since 1993, ocean disposal has been banned by international treaties. (London Convention (1972), Basel Convention, MARPOL 73/78)\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16898",
"title": "Kara Sea",
"section": "Section::::Nuclear dumping.\n",
"start_paragraph_id": 23,
"start_character": 0,
"end_paragraph_id": 23,
"end_character": 1104,
"text": "There is concern about radioactive contamination from nuclear waste the former Soviet Union dumped in the sea and the effect this will have on the marine environment. According to an official \"White Paper\" report compiled and released by the Russian government in March 1993, the Soviet Union dumped six nuclear submarine reactors and ten nuclear reactors into the Kara Sea between 1965–1988. Solid high and low-level wastes unloaded from Northern Fleet nuclear submarines during reactor refuelings, were dumped in the Kara Sea, mainly in the shallow fjords of Novaya Zemlya, where the depths of the dumping sites range from 12 to 135 meters, and in the Novaya Zemlya Trough at depths of up to 380 meters. Liquid low-level wastes were released in the open Barents and Kara Seas. A subsequent appraisal by the International Atomic Energy Agency showed that releases are low and localized from the 16 naval reactors (reported by the IAEA as having come from seven submarines and the icebreaker \"Lenin\") which were dumped at five sites in the Kara Sea. Most of the dumped reactors had suffered an accident.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12089912",
"title": "Power Reactor and Nuclear Fuel Development Corporation",
"section": "Section::::Puruto-kun video.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 724,
"text": "The question of the degree of harm which is caused by \"bad guys\" dropping plutonium into the sea is not a simple question; the radioactive power pack containing plutonium-238 which was intended for use in space for the Apollo 13 moon mission was wrapped in a heat-resistant package which is likely to prevent leaking of plutonium for a very long time. However, plutonium released in the form of the nitrate or fine powder is likely to absorb onto mineral particles such as silt. Depending on the exact conditions this absorption onto silt could either tend to fix the plutonium in soil or the silt at the bottom of a lake (or sea), or it could enable the plutonium to migrate from one location to another with greater ease.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
a7q611
|
Were Henrietta Lacks' cells special?
|
[
{
"answer": "This is a great question! By today's standards, there is nothing inherently special about HeLa cells. Not only do we have countless \"immortal\" cell lines from other people, we have very well established protocols for immortalizing cell lines ourselves.\n\nHowever, at the time Henrietta Lacks' cells were isolated, this was definitely not the case. These were the first human cells which were found to be able to divide indefinitely. Prior to this, cells would last only a few divisions before either dying or changing dramatically. The use of HeLa cells allowed for: 1.) More convenient cell culturing; and 2.) More importantly, it allowed the scientific field to \"normalize\" their *in vitro* research in a profound way.\n\nAll that said, HeLa cells were special because they were the *first* of their kind isolated, not because they were inherently special. In theory, cells of equivalent value could have been isolated from any cancer patient. ",
"provenance": null
},
{
"answer": "At the time yes, they were the first immortalized human cell line which made it much easier to perform experiments in human cells since you could start with a single cell grow that out into many cells that are more or less identical. But these days there are many such cell lines. They are all unique though, having different mutations/coming from different tissues and HeLa cells are still a go-to cell line for many researchers.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "55251911",
"title": "Post-mortem privacy",
"section": "Section::::Medical confidentiality.:Case of Henrietta Lacks.\n",
"start_paragraph_id": 12,
"start_character": 0,
"end_paragraph_id": 12,
"end_character": 755,
"text": "Henrietta Lacks was an African American woman whose cells were removed without consent while receiving cancer treatment. Her cells became the source of the foundational HeLa cell line in the scientific world today. Lacks and her family were neither informed nor asked for consent to the use of her cells for this research. It was not until the 1980s when Lacks's medical records were made public, exposing the rest of her family's medical information as well as the fact that her family was never informed of this. The major issue surrounding the Lacks case is twofold. Firstly, at no point was consent sought for the extraction and research on Lacks's cells. Secondly, her family never received compensation for the commercial use of the HeLa cell line.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4336748",
"title": "George Otto Gey",
"section": "Section::::Career.:HeLa Cell Line.\n",
"start_paragraph_id": 7,
"start_character": 0,
"end_paragraph_id": 7,
"end_character": 896,
"text": "Gey isolated the cells taken from a cervical tumor found in a woman named Henrietta Lacks in 1951. These cells proved to be very unusual in that they could grow in culture medium that was constantly stirred using the roller drum (a technique developed by Gey), and they did not need a glass surface to grow, and therefore they had no space limit. Once Gey realized the longevity and hardiness of the HeLa cells, he began sharing them with scientists all over the world, and the use of the HeLa cell line became widespread. The cells were used in the development of the polio vaccine, lead to the first clone of a human cell, helped in the discovery that humans have 46 chromosomes, and were used to develop in vitro fertilization. By the time Gey published a short abstract claiming some credit for the development of the line, the cells were already being used by scientists all over the world.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "324812",
"title": "Henrietta Lacks",
"section": "Section::::Medical and scientific research.\n",
"start_paragraph_id": 22,
"start_character": 0,
"end_paragraph_id": 22,
"end_character": 312,
"text": "In the early 1970s, a large portion of other cell cultures became contaminated by HeLa cells. As a result, members of Henrietta Lacks's family received solicitations for blood samples from researchers hoping to learn about the family's genetics in order to differentiate between HeLa cells and other cell lines.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "324812",
"title": "Henrietta Lacks",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 434,
"text": "Henrietta Lacks (born Loretta Pleasant; August 1, 1920 – October 4, 1951) was an African-American woman whose cancer cells are the source of the HeLa cell line, the first immortalized human cell line and one of the most important cell lines in medical research. An immortalized cell line reproduces indefinitely under specific conditions, and the HeLa cell line continues to be a source of invaluable medical data to the present day.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "324812",
"title": "Henrietta Lacks",
"section": "Section::::Recognition.:In popular culture.\n",
"start_paragraph_id": 37,
"start_character": 0,
"end_paragraph_id": 37,
"end_character": 313,
"text": "The HeLa cell line's connection to Henrietta Lacks was first brought to popular attention in March 1976 with a pair of articles in the \"Detroit Free Press\" and \"Rolling Stone\" written by reporter Michael Rogers. In 1998, Adam Curtis directed a BBC documentary about Henrietta Lacks called \"The Way of All Flesh\".\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3071326",
"title": "Giant cell",
"section": "Section::::History.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 449,
"text": "Multinucleated giant cell formations can arise from numerous types of bacteria, diseases, and cell formations. Giant cells are known to develop when infections are also present. They were first noticed as early as the middle of the last century, but still it is not fully understood why these reactions occur. In the process of giant cell formation, monocytes or macrophages fuse together, which could cause multiple problems for the immune system.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4336748",
"title": "George Otto Gey",
"section": "Section::::Career.:Controversies.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 364,
"text": "There was also the controversy surrounding how the cells were retrieved, as made famous by the book, The Immortal Life of Henrietta Lacks. The cells were taken from Henrietta Lacks without her knowledge or permission, and her family remained unaware until the 1970s. He was careful to keep her actual name secret, and it was not made public until after his death.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
70noau
|
why does the skin on our hands & feet have so many lines (i.e. fingerprints)?
|
[
{
"answer": "They increase the grip and durability of that surface, the rest of your skin is pretty slick.\n\nThe process that your body uses to create that type of skin also blocks hair growth and disables melanin production though, so it's only done on the palms and bottoms of your feet.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "9040547",
"title": "Human skin",
"section": "Section::::Structure.:Dermis.:Papillary region.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 378,
"text": "In the palms, fingers, soles, and toes, the influence of the papillae projecting into the epidermis forms contours in the skin's surface. These epidermal ridges occur in patterns (\"see:\" fingerprint) that are genetically and epigenetically determined and are therefore unique to the individual, making it possible to use fingerprints or footprints as a means of identification.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "740480",
"title": "Dermis",
"section": "Section::::Layers.:Dermal papillae.\n",
"start_paragraph_id": 14,
"start_character": 0,
"end_paragraph_id": 14,
"end_character": 450,
"text": "Blood vessels in the dermal papillae nourish all hair follicles and bring nutrients and oxygen to the lower layers of epidermal cells. The pattern of ridges they produce in hands and feet are partly genetically determined features that develop before birth. They remain substantially unaltered (except in size) throughout life, and therefore determine the patterns of fingerprints, making them useful in certain functions of personal identification.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "84777",
"title": "Fingerprint",
"section": "Section::::Dactyloscopy.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 842,
"text": "Fingerprint identification, known as dactyloscopy, or hand print identification, is the process of comparing two instances of friction ridge skin impressions (see Minutiae), from human fingers or toes, or even the palm of the hand or sole of the foot, to determine whether these impressions could have come from the same individual. The flexibility of friction ridge skin means that no two finger or palm prints are ever exactly alike in every detail; even two impressions recorded immediately after each other from the same hand may be slightly different. Fingerprint identification, also referred to as individualization, involves an expert, or an expert computer system operating under threshold scoring rules, determining whether two friction ridge impressions are likely to have originated from the same finger or palm (or toe or sole).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "84777",
"title": "Fingerprint",
"section": "Section::::Capture and detection.:Latent fingerprint detection.\n",
"start_paragraph_id": 35,
"start_character": 0,
"end_paragraph_id": 35,
"end_character": 907,
"text": "Since the late nineteenth century, fingerprint identification methods have been used by police agencies around the world to identify suspected criminals as well as the victims of crime. The basis of the traditional fingerprinting technique is simple. The skin on the palmar surface of the hands and feet forms ridges, so-called papillary ridges, in patterns that are unique to each individual and which do not change over time. Even identical twins (who share their DNA) do not have identical fingerprints. The best way to render latent fingerprints visible, so that they can be photographed, can be complex and may depend, for example, on the type of surfaces on which they have been left. It is generally necessary to use a ‘developer’, usually a powder or chemical reagent, to produce a high degree of visual contrast between the ridge patterns and the surface on which a fingerprint has been deposited.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "439075",
"title": "Hair coloring",
"section": "Section::::Adverse effects.:Skin discoloration.\n",
"start_paragraph_id": 66,
"start_character": 0,
"end_paragraph_id": 66,
"end_character": 361,
"text": "Skin and fingernails are made of a similar type of keratinized protein as hair. That means that drips, slips and extra hair tint around the hairline can result in patches of discolored skin. This is more common with darker hair colors and persons with dry absorbent skin. That is why it is recommended that latex or nitrile gloves be worn to protect the hands.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "15153354",
"title": "Intertriginous",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 364,
"text": "In medicine, an intertriginous area is where two skin areas may touch or rub together. Examples of intertriginous areas are the axilla of the arm, the anogenital region, skin folds of the breasts and between digits. Intertriginous areas are known to harbor large amounts of aerobic cocci and aerobic coryneform bacteria, which are both parts of normal skin flora.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "57111064",
"title": "Electronic fingerprint recognition",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 318,
"text": "Everyone has marks on their fingers. They can not be removed or changed. These marks have a pattern and this pattern is called the fingerprint. Every fingerprint is special, and different from any other in the world. Because there are countless combinations, fingerprints have become an ideal means of identification.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
kute0
|
If neutrinos turn out to be faster than light, are they useful in any way? Communication?
|
[
{
"answer": "It is possible, but neutrinos are very hard to detect for the same reason that allows them to travel through the earth.",
"provenance": null
},
{
"answer": "The recent measurement has no practical implications for communication. Neutrinos make inefficient signals because they are almost indetectable. One application I've seen proposed is for messaging submarines, because radio can't penetrate and sound is slow. ",
"provenance": null
},
{
"answer": "Well, if the experiment holds (big if), the neutrinos were only travelling a few parts in a million faster...not much of a gain there.\n\nSecond, since neutrinos are so hard to detect, the bandwidth would be terrible...you'd basically be making a billion dollar telegraph machine. \n\nFinally, since neutrinos pass through everything, you couldn't shield your device...essentially, you could only have one of them running on the entire planet at a time. ",
"provenance": null
},
{
"answer": "Basically, if the neutrinos are really moving faster than light, then the universe is operating according to rules so new and different that it's hard to say for certain what would happen if it were true. \n\nI'm gonna go out on a limb and say that it would \"change everything\" if we discovered this were true. Communication via neutrinos is very difficult, but if we found out about whole new laws of physics, who knows what ramifications that would ultimately have. \n\nBasically, there's no clear line from this to a practical application, but if it were true, it would change everything, and changing everything almost always leads to practical applications down the road in ways that are unforeseeable. \n\nThat's why our species is working so hard at CERN. There are huge practical applications-- they're just so far away, we can't say for sure what those applications are. ",
"provenance": null
},
{
"answer": "[Very speculative] Is it not possible that there is an entire branch of super-luminal physics where our sub-luminal laws don’t fully apply?\n\nFor the sake of speculation let’s call it super-luminal mechanics. This could be something like time slowing down when approaching c, but remains at zero once reached/crossed – thus separating space and time in super-luminal mechanics. \n",
"provenance": null
},
{
"answer": " > If the experiment proves true, is it theoretically possible to use neutrinos for communications?\n\nEven if it proves false you can communicate with neutrinos. Guy at CERN says to guy at OPERA: \"If you receive some neutrinos from me, that means 'yes', otherwise 'no'\".\n\nThe question is why would you want to? You'd be trading a 0.0000248% improvement in latency for an epically, massively, absurdly bad decrease in bandwidth.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "33338392",
"title": "Faster-than-light neutrino anomaly",
"section": "",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 453,
"text": "Neutrino speeds \"consistent\" with the speed of light are expected given the limited accuracy of experiments to date. Neutrinos have small but nonzero mass, and so special relativity predicts that they must propagate at speeds slower than light. Nonetheless, known neutrino production processes impart energies far higher than the neutrino mass scale, and so almost all neutrinos are ultrarelativistic, propagating at speeds very close to that of light.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33338392",
"title": "Faster-than-light neutrino anomaly",
"section": "Section::::Detection.:First results.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 1391,
"text": "In a analysis of their data, scientists of the OPERA collaboration reported evidence that neutrinos they produced at CERN in Geneva and recorded at the OPERA detector at Gran Sasso, Italy, had traveled faster than light. The neutrinos were calculated to have arrived approximately 60.7 nanoseconds (60.7 billionths of a second) sooner than light would have if traversing the same distance in a vacuum. After six months of cross checking, on , the researchers announced that neutrinos had been observed traveling at faster-than-light speed. Similar results were obtained using higher-energy (28 GeV) neutrinos, which were observed to check if neutrinos' velocity depended on their energy. The particles were measured arriving at the detector faster than light by approximately one part per 40,000, with a 0.2-in-a-million chance of the result being a false positive, \"assuming\" the error were entirely due to random effects (significance of six sigma). This measure included estimates for both errors in measuring and errors from the statistical procedure used. It was, however, a measure of precision, not accuracy, which could be influenced by elements such as incorrect computations or wrong readouts of instruments. For particle physics experiments involving collision data, the standard for a discovery announcement is a five-sigma error limit, looser than the observed six-sigma limit.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19649158",
"title": "OPERA experiment",
"section": "Section::::Time-of-flight measurements.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 1043,
"text": "In September 2011, OPERA researchers observed muon neutrinos apparently traveling faster than the speed of light. In February and March 2012, OPERA researchers blamed this result on a loose fibre optic cable connecting a GPS receiver to an electronic card in a computer. On 16 March 2012, a report announced that an independent experiment in the same laboratory, also using the CNGS neutrino beam, but this time the ICARUS detector, found no discernible difference between the speed of a neutrino and the speed of light. In May 2012, the Gran Sasso experiments BOREXINO, ICARUS, LVD and OPERA all measured neutrino velocity with a short-pulsed beam, and obtained agreement with the speed of light, showing that the original OPERA result was mistaken. Finally in July 2012, the OPERA collaboration updated their results. After the instrumental effects mentioned above were taken into account, it was shown that the speed of neutrinos is consistent with the speed of light. This was confirmed by a new, improved set of measurements in May 2013.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33338392",
"title": "Faster-than-light neutrino anomaly",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 354,
"text": "In 2011, the OPERA experiment mistakenly observed neutrinos appearing to travel faster than light. Even before the mistake was discovered, the result was considered anomalous because speeds higher than that of light in a vacuum are generally thought to violate special relativity, a cornerstone of the modern understanding of physics for over a century.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3935943",
"title": "Marcus du Sautoy",
"section": "Section::::Career and research.:Television work.\n",
"start_paragraph_id": 38,
"start_character": 0,
"end_paragraph_id": 38,
"end_character": 231,
"text": "BULLET::::- \"Faster Than the Speed of Light?\" (BBC 2, 2011). Marcus du Sautoy discusses the recent discovery, the faster-than-light neutrino anomaly, that neutrinos may travel faster than light. First broadcast on 19 October 2011.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "183089",
"title": "List of unsolved problems in physics",
"section": "Section::::Problems solved in recent decades.\n",
"start_paragraph_id": 113,
"start_character": 0,
"end_paragraph_id": 113,
"end_character": 328,
"text": "BULLET::::- Faster-than-light neutrino anomaly (2011–2012): In 2011, the OPERA experiment mistakenly observed neutrinos appearing to travel faster than light. On July 12, 2012 OPERA updated their paper by including the new sources of errors in their calculations. They found agreement of neutrino speed with the speed of light.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30485345",
"title": "2011 in science",
"section": "Section::::Events, discoveries and inventions.:September.\n",
"start_paragraph_id": 327,
"start_character": 0,
"end_paragraph_id": 327,
"end_character": 300,
"text": "BULLET::::- An international team of scientists at CERN records neutrino particles apparently traveling faster than the speed of light. If confirmed, the discovery would overturn Albert Einstein's 1905 special theory of relativity, which says that nothing can travel faster than light. (BBC) (ArXiv)\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
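The OPERA figures quoted in this record (a 60.7 ns early arrival, a speed excess of roughly one part in 40,000) are easy to sanity-check. A minimal sketch, assuming the commonly cited ~730 km CERN-Gran Sasso baseline, which is not stated in the record itself:

```python
# Rough check of the OPERA numbers quoted above.
C = 299_792_458.0          # speed of light in vacuum, m/s
BASELINE_M = 730e3         # assumed CERN -> Gran Sasso distance, m
EARLY_ARRIVAL_S = 60.7e-9  # reported early arrival, s

light_time = BASELINE_M / C                      # ~2.435 ms
fractional_excess = EARLY_ARRIVAL_S / light_time # ~2.5e-5, i.e. ~1 part in 40,000
print(f"light travel time : {light_time * 1e3:.3f} ms")
print(f"fractional excess : {fractional_excess:.2e}")

# Hypothetical latency 'saved' on a 20,000 km link if the effect had been real:
saved_ns = (20_000e3 / C) * fractional_excess * 1e9
print(f"latency saved     : {saved_ns:.0f} ns over 20,000 km")
```

The saving is on the order of a microsecond even over an intercontinental path, which is why the answers dismiss neutrino links as impractical; it also suggests the "0.0000248%" in one answer is the raw fraction rather than a percentage.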
96tf3v
|
During the time of the moon landing did people get upset about the governments space race and the mission to the moon?
|
[
{
"answer": "Absolutely. In fact, at no point prior to the first Moon landing did the program receive a majority of support by the public at large. Indeed, there were several notable very vocal public opponents of the program because they felt it drew funds away from or was a distraction from now important work (such as poverty reduction and anti-segregation).\n\nAnd in some regards they had a solid case to make. In the 1960s the per capita gdp of the US was considerably lower than today. There was a level of common poverty that existed then that today exists mostly in the developing world. Remember that in the 1960s the whole country didn't even have indoor plumbing, electricity, or phones. And, of course, this was also the peak of the struggle against Jim Crow. Many people, correctly, saw the moon race as a geopolitical struggle and lamented the waste of resources for what was effectively war making on another front. Even more so while the Vietnam War was raging.\n\nIt was only after the fact that the Apollo program became more closely associated with peace, science, and the inchoate environmental movement. And, of course, for the spending to become a sunk cost that couldn't be undone or diverted elsewhere.\n\nSources & further reading:\n\n* [Historical Studies in the Societal Impact of Spaceflight pgs. 12-17 particularly (25-30 in the pdf)](_URL_3_)\n* [Public opinion polls and perceptions of US human spaceflight](_URL_0_)\n* [Moondoggle: The Forgotten Opposition to the Apollo Program](_URL_2_)\n* Gil-Scott Heron (of \"the revolution will not be televised\" fame): [Whitey on the Moon](_URL_1_)\n* [The Apollo Disappointment Industry](_URL_4_)",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "80740",
"title": "Moon landing conspiracy theories",
"section": "Section::::Conspiracists and their contentions.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 598,
"text": "BULLET::::- Marcus Allen – British publisher of \"Nexus\", who said photographs of the lander would not prove that the United States put men on the Moon, and \"Getting to the Moon really isn't much of a problem – the Russians did that in 1959. The big problem is getting people there.\" He suggests that NASA sent robot missions because radiation levels in outer space would be deadly. A variant of this idea has it that NASA and its contractors did not recover quickly enough from the Apollo 1 fire, and so all the early Apollo missions were faked, with Apollos 14 or 15 being the first real mission.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6085216",
"title": "Apollo 11 in popular culture",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 208,
"text": "The Apollo 11 mission was the first human spaceflight mission to land on the Moon. The mission's wide effect on popular culture was anticipated and since then there have been a number of portrayals in media.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "662",
"title": "Apollo 11",
"section": "Section::::Legacy.:Cultural significance.\n",
"start_paragraph_id": 129,
"start_character": 0,
"end_paragraph_id": 129,
"end_character": 812,
"text": "After the Apollo 11 mission, officials from the Soviet Union said landing humans on the Moon was dangerous and unnecessary. At the time the Soviet Union was attempting to retrieve lunar samples robotically. The Soviets publicly denied there was a race to the Moon, and indicated they were not making an attempt. Mstislav Keldysh said in July 1969, \"We are concentrating wholly on the creation of large satellite systems\". It was revealed in 1989 that the Soviets had tried to send people to the Moon, but were unable due to technological difficulties. The public's reaction in the Soviet Union was mixed. The Soviet government limited the release of information about the lunar landing, which affected the reaction. A portion of the populace did not give it any attention, and another portion was angered by it.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "80740",
"title": "Moon landing conspiracy theories",
"section": "Section::::Claimed motives of the United States and NASA.:NASA funding and prestige.\n",
"start_paragraph_id": 39,
"start_character": 0,
"end_paragraph_id": 39,
"end_character": 680,
"text": "Mary Bennett and David Percy have claimed in \"Dark Moon: Apollo and the Whistle-Blowers\", that, with all the known and unknown hazards, NASA would not risk broadcasting an astronaut getting sick or dying on live television. The counter-argument generally given is that NASA in fact \"did\" incur a great deal of public humiliation and potential political opposition to the program by losing an entire crew in the Apollo 1 fire during a ground test, leading to its upper management team being questioned by Senate and House of Representatives space oversight committees. There was in fact no video broadcast during either the landing or takeoff because of technological limitations.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "80740",
"title": "Moon landing conspiracy theories",
"section": "Section::::Public opinion.\n",
"start_paragraph_id": 158,
"start_character": 0,
"end_paragraph_id": 158,
"end_character": 656,
"text": "In a 1994 poll by \"The Washington Post\", 9% of the respondents said that it was possible that astronauts did not go to the Moon and another 5% were unsure. A 1999 Gallup Poll found that 6% of the Americans surveyed doubted that the Moon landings happened and that 5% of those surveyed had no opinion, which roughly matches the findings of a similar 1995 \"Time/CNN\" poll. Officials of the Fox network said that such skepticism rose to about 20% after the February 2001 airing of their network's television special, \"Conspiracy Theory: Did We Land on the Moon?\", seen by about 15 million viewers. This Fox special is seen as having promoted the hoax claims.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3609096",
"title": "A Funny Thing Happened on the Way to the Moon",
"section": "Section::::Overview.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 610,
"text": "Sibrel's claims that the moon landing was a hoax making claims about supposed photographic anomalies; disasters such as the destruction of Apollo 1; technical difficulties experienced in the 1950s and 1960s; and the problems of traversing the Van Allen radiation belts. Sibrel proposes that the most condemning evidence is a piece of footage that he claims was secret, and inadvertently sent to him by NASA; he alleges that the footage shows Apollo 11 astronauts attempting to create the illusion that they were from Earth (or roughly halfway to the Moon) when, he claims, they were only in a low Earth orbit.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "80740",
"title": "Moon landing conspiracy theories",
"section": "Section::::Claimed motives of the United States and NASA.:The Space Race.\n",
"start_paragraph_id": 32,
"start_character": 0,
"end_paragraph_id": 32,
"end_character": 427,
"text": "Motivation for the United States to engage the Soviet Union in a Space Race can be traced to the then on-going Cold War. Landing on the Moon was viewed as a national and technological accomplishment that would generate world-wide acclaim. But going to the Moon would be risky and expensive, as exemplified by President John F. Kennedy famously stating in a 1962 speech that the United States chose to go \"because\" it was hard.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
99llav
|
Who discovered energy = force x distance, and how?
|
[
{
"answer": "The work done by a force was simply *defined* to be the line integral of the force field along the particle's path. There's nothing to discover. But this turns out to be *useful* because of the work-energy theorem and conservation of energy, which can be proven using Newton's laws and experimentally verified.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "23703",
"title": "Potential energy",
"section": "Section::::Gravitational potential energy.:General formula.\n",
"start_paragraph_id": 69,
"start_character": 0,
"end_paragraph_id": 69,
"end_character": 571,
"text": "However, over large variations in distance, the approximation that \"g\" is constant is no longer valid, and we have to use calculus and the general mathematical definition of work to determine gravitational potential energy. For the computation of the potential energy, we can integrate the gravitational force, whose magnitude is given by Newton's law of gravitation, with respect to the distance \"r\" between the two bodies. Using that definition, the gravitational potential energy of a system of masses \"m\" and \"M\" at a distance \"r\" using gravitational constant \"G\" is\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "33710707",
"title": "Planck units",
"section": "Section::::Base units.\n",
"start_paragraph_id": 17,
"start_character": 0,
"end_paragraph_id": 17,
"end_character": 401,
"text": "As can be seen above, the gravitational attractive force of two bodies of 1 Planck mass each, set apart by 1 Planck length is 1 Planck force. Likewise, the distance traveled by light during 1 Planck time is 1 Planck length. To determine, in terms of SI or another existing system of units, the quantitative values of the five base Planck units, those two equations and three others must be satisfied:\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "67088",
"title": "Conservation of energy",
"section": "Section::::History.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 1338,
"text": "Émilie du Châtelet (1706 – 1749) proposed and tested the hypothesis of the conservation of total energy, as distinct from momentum. Inspired by the theories of Gottfried Leibniz, she repeated and publicized an experiment originally devised by Willem 's Gravesande in 1722 in which balls were dropped from different heights into a sheet of soft clay. Each ball's kinetic energy - as indicated by the quantity of material displaced - was shown to be proportional to the square of the velocity. The deformation of the clay was found to be directly proportional to the height the balls were dropped from, equal to the initial potential energy. Earlier workers, including Newton and Voltaire, had all believed that \"energy\" (so far as they understood the concept at all) was not distinct from momentum and therefore proportional to velocity. According to this understanding, the deformation of the clay should have been proportional to the square root of the height from which the balls were dropped from. In classical physics the correct formula is formula_3, where formula_4 is the kinetic energy of an object, formula_5 its mass and formula_6 its speed. On this basis, Châtelet proposed that energy must always have the same dimensions in any form, which is necessary to be able to relate it in different forms (kinetic, potential, heat…).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8808602",
"title": "Kaufmann–Bucherer–Neumann experiments",
"section": "Section::::Historical context.\n",
"start_paragraph_id": 4,
"start_character": 0,
"end_paragraph_id": 4,
"end_character": 985,
"text": "This was connected with the theoretical prediction of the electromagnetic mass by J. J. Thomson in 1881, who showed that the electromagnetic energy contributes to the mass of a moving charged body. Thomson (1893) and George Frederick Charles Searle (1897) also calculated that this mass depends on velocity, and that it becomes infinitely great when the body moves at the speed of light with respect to the luminiferous aether. Also Hendrik Antoon Lorentz (1899, 1900) assumed such a velocity dependence as a consequence of his theory of electrons. At this time, the electromagnetic mass was separated into \"transverse\" and \"longitudinal\" mass, and was sometimes denoted as \"apparent mass\", while the invariant Newtonian mass was denoted as \"real mass\". On the other hand, it was the belief of the German theoretician Max Abraham that all mass would ultimately prove to be of electromagnetic origin, and that Newtonian mechanics would become subsumed into the laws of electrodynamics.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "833572",
"title": "Internal ballistics",
"section": "Section::::Pressure-velocity relationships.:Peak vs area.\n",
"start_paragraph_id": 55,
"start_character": 0,
"end_paragraph_id": 55,
"end_character": 826,
"text": "Energy is defined as the ability to do work on an object; for example, the work required to lift a one-pound weight, one foot against the pull of gravity defines a foot-pound of energy (One joule is equal to the energy needed to move a body over a distance of one meter using one newton of force). If we were to modify the graph to reflect force (the pressure exerted on the base of the bullet multiplied by the area of the base of the bullet) as a function of distance, the area under that curve would be the total energy imparted to the bullet. Increasing the energy of the bullet requires increasing the area under that curve, either by raising the average pressure, or increasing the distance the bullet travels under pressure. Pressure is limited by the strength of the firearm, and duration is limited by barrel length.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "16794316",
"title": "Geroch energy",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 404,
"text": "The Geroch energy or Geroch mass is one of the possible definitions of mass in general relativity. It can be derived from the Hawking energy, itself a measure of the bending of ingoing and outgoing rays of light that are orthogonal to a 2-sphere surrounding the region of space whose mass is to be defined, by leaving out certain (positive) terms related to the sphere's external and internal curvature.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5732212",
"title": "MUSCL scheme",
"section": "Section::::Example: 1D Euler equations.\n",
"start_paragraph_id": 46,
"start_character": 0,
"end_paragraph_id": 46,
"end_character": 257,
"text": "The equations above represent conservation of mass, momentum, and energy. There are thus three equations and four unknowns, formula_55 (density) formula_56 (fluid velocity), formula_57 (pressure) and formula_58 (total energy). The total energy is given by,\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
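The top answer defines work as a line integral of force along the path, and the potential-energy provenance describes integrating Newton's gravitational force. A compact sketch of both in standard textbook form (not quoted from the record):

```latex
% Work as a line integral; a constant force along a straight path gives W = F d
\[ W = \int_{\mathbf r_1}^{\mathbf r_2} \mathbf F \cdot \mathrm d\mathbf r
   \quad\Longrightarrow\quad W = F\,d \quad (\mathbf F \text{ constant, parallel to the path}) \]

% Work-energy theorem, using Newton's second law F = m dv/dt:
\[ W = \int m\,\frac{\mathrm d\mathbf v}{\mathrm dt} \cdot \mathbf v \,\mathrm dt
     = \tfrac{1}{2} m v_2^{2} - \tfrac{1}{2} m v_1^{2} \]

% Gravitational potential energy, integrating F(r) = -G M m / r^2 in from infinity:
\[ U(r) = -\int_{\infty}^{r} \left( -\frac{G M m}{r'^{2}} \right) \mathrm d r'
        = -\frac{G M m}{r} \]
```

The first line is the definition the answer refers to; the second is the work-energy theorem that makes that definition useful in practice.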
1ddrxr
|
Dinosaurs and the Square/Cube Law: How'd it all work?
|
[
{
"answer": "I think it's a shame that this site comes up as the 4th Google result for \"square cube law\". It sounds more like conspiracy quackery than a scientific critique. Is this guy just a crypto-creationist or what?",
"provenance": null
},
{
"answer": "The square/cube law applies to objects (or animals) that scale isometrically. In other words, the object exactly retains its shape and relative dimensions, it just scales in size. Think of a scale-model matchbox car relative to a real car. \n\nYou can imagine an isometrically scaled chicken as being a chicken that is longer, taller, and wider by a factor *n*, with *n^2* times the surface area, and *n^3* times the mass of a regular chicken.\n\nIn reality, species do not tend to scale isometrically, due to the problems it would create. For example, a chicken that is 10 times as tall as a regular chicken would weigh 10^3 = 1000 times as much, but the cross-sectional area of its leg bones would only be 10^2 = 100 times greater. This means the static pressure the bones would have to bear would be 1000/100 = 10 times greater than for a regular chicken.\n\nFor this reason, many aspects of physiology are found to scale [allometrically](_URL_0_). For example, larger animals tend to have much stockier legs, dinosaurs being no exception to this.\n\nThis also applies to metabolism. Larger animals tend to burn less energy per unit mass per unit time. Specifically, metabolic rate scales as approximately mass^{3/4}, so metabolic rate per unit mass scales as approximately mass^{3/4} /mass = mass^{-1/4}. This relationship is known as [Kleiber's Law](_URL_1_). While we cannot study metabolic rates of extinct species, the ubiquity of this law in living species suggests that dinosaurs too would have followed it.\n\nIn addition, a lot of early estimates of dinosaur masses are now thought to have been [too high](_URL_2_).",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "103791",
"title": "Power Mac G4 Cube",
"section": "Section::::Appearances.:In popular culture.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 803,
"text": "The Cube can be found in many publications related to design and some technology museums. In addition, the computer has been featured in other forms of media. The G4 Cube was used as a prop on shows such as \"Absolutely Fabulous\", \"The Drew Carey Show\", \"Curb Your Enthusiasm\", \"Dark Angel\" , \"The Gilmore Girls\" and \"24\". The computer was parodied in \"The Simpsons\" episode \"Mypods and Boomsticks.\" The Cube is also seen in films such as \"Jay and Silent Bob Strike Back\", \"40 Days and 40 Nights\", \"About a Boy\", \"August\" and \"The Royal Tenenbaums\". In William Gibson's 2003 novel \"Pattern Recognition\", the character Cayce uses her film producer friend's Cube while staying in his London flat. In the movie \"Big Fat Liar,\" a G4 Cube and a Studio Display can be seen in the background of Wolf's kitchen.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1215426",
"title": "Terrahawks",
"section": "Section::::Characters.:Aliens.\n",
"start_paragraph_id": 51,
"start_character": 0,
"end_paragraph_id": 51,
"end_character": 285,
"text": "BULLET::::- Cubes are the aliens' answer to the Zeroids. They can combine into large constructs such as guns and force field cubicles. Their different sides are marked differently, indicating their different functions, such as one serving as a gun. Cy-Star keeps one, Pluto, as a pet.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "48216780",
"title": "Innovision (festival)",
"section": "Section::::Events.:Smeaton’s Cube.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 402,
"text": "Smeaton's Cube (Named after John Smeaton who was an English civil engineer who was responsible for the design of canals, bridges, lighthouses and harbours) is a competition for Civil Engineering undergrads where they design a cube that would undergo successive compressive tests following which a presentation on concrete is to be given. It is organized by the ICE, UK : Student Chapter, NIT Rourkela.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "53713797",
"title": "Tomás Saraceno",
"section": "Section::::Selected Works and Projects.:On Space Time Foam.\n",
"start_paragraph_id": 44,
"start_character": 0,
"end_paragraph_id": 44,
"end_character": 856,
"text": "The cube, a geometric form often used by scientists to represent the concepts of space and time, inspired Saraceno to create an installation in which the visitors' movements enact the time variable, thereby introducing the concept of the fourth dimension within the three-dimensional space. The title of the work can be traced to quantum mechanics on the origins of the universe, distinguished by the idea of extremely fast-moving subatomic particles that can trigger changes in spatio-temporal matter. Freely inspired by these theories, Saraceno makes their movements metaphorically visible. The installation is a device that calls perceptual certainties into question; it is an element that modifies the architecture containing it, a structure that makes the interrelationships among people and visible space, an attempt to overcome the laws of gravity.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "58619229",
"title": "Dino Cube",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 279,
"text": "The Dino Cube is a cubic twisty puzzle in the style of the Rubik's Cube. It was invented in 1985 by Robert Webb, however it was not mass-produced until ten years later. It has a total of 12 external movable pieces to rearrange, compared to 20 movable pieces on the Rubik's Cube.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "901593",
"title": "Lambda cube",
"section": "Section::::Subtyping.\n",
"start_paragraph_id": 70,
"start_character": 0,
"end_paragraph_id": 70,
"end_character": 416,
"text": "The idea of the cube is due to the mathematician Henk Barendregt (1991). The framework of pure type systems generalizes the lambda cube in the sense that all corners of the cube, as well as many other systems can be represented as instances of this general framework. This framework predates the lambda cube by a couple of years. In his 1991 paper, Barendregt also defines the corners of the cube in this framework.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2205718",
"title": "The Cube (film)",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 443,
"text": "The Cube is an hour-long teleplay that aired on NBC's weekly anthology television show \"NBC Experiment in Television\" in 1969. The production was produced and directed by puppeteer and filmmaker Jim Henson, and was one of several experiments with the live-action film medium which he conducted in the 1960s, before focusing entirely on \"The Muppets\" and other puppet works. The screenplay was co-written by long-time Muppet writer Jerry Juhl.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
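The second answer works the square-cube argument numerically (a 10x chicken, bone stress, Kleiber's law). A minimal sketch of that arithmetic in code, with illustrative numbers only:

```python
# Isometric scaling: lengths scale by n, areas by n**2, volumes (and masses) by n**3.
def isometric_scaling(n: float) -> dict:
    area = n ** 2                 # e.g. bone cross-sectional area
    mass = n ** 3                 # volume at constant density, hence mass
    bone_stress = mass / area     # static load per unit bone area grows like n
    return {"area": area, "mass": mass, "relative_bone_stress": bone_stress}

print(isometric_scaling(10))      # {'area': 100, 'mass': 1000, 'relative_bone_stress': 10.0}

# Kleiber's law: metabolic rate ~ M**(3/4), so rate per unit mass ~ M**(-1/4).
mass_ratio = 1000
print(mass_ratio ** 0.75)         # ~178x total metabolic rate
print(mass_ratio ** -0.25)        # ~0.18x metabolic rate per unit mass
```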
1fnn41
|
Why when looking at a clear container holding water from a side the surface of the water looks like a mirror? Is it the container or the water?
|
[
{
"answer": "You mean the underside of the water? It's because of [total internal reflection](_URL_0_). That's the water. ",
"provenance": null
},
{
"answer": "Materials have a property called refractive index: this measures how fast light travels in the medium, and how much it bends the light when it enters the medium. For any point where there is a change of refractive index (going from water to air, for example), there is an angle at which the light bends so much that it never goes into the air, and is a reflection instead. This is called total internal reflection.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "1607411",
"title": "Republic (Plato)",
"section": "Section::::Structure.:By book.:Book X.\n",
"start_paragraph_id": 42,
"start_character": 0,
"end_paragraph_id": 42,
"end_character": 434,
"text": "\"And the same object appears straight when looked at out of the water, and crooked when in the water; and the concave becomes convex, owing to the illusion about colours to which the sight is liable. Thus every sort of confusion is revealed within us; and this is that weakness of the human mind on which the art of conjuring and deceiving by light and shadow and other ingenous devices imposes, having an effect upon us like magic.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "28412183",
"title": "Projector",
"section": "Section::::History.:1000 to 1500.:Concave mirrors.\n",
"start_paragraph_id": 42,
"start_character": 0,
"end_paragraph_id": 42,
"end_character": 376,
"text": "The inverted real image of an object reflected by a concave mirror can appear at the focal point in front of the mirror. In a construction with an object at the bottom of two opposing concave mirrors (parabolic reflectors) on top of each other, the top one with an opening in its center, the reflected image can appear at the opening as a very convincing 3D optical illusion.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "6446",
"title": "Camouflage",
"section": "Section::::Principles.:Crypsis.:Silvering.\n",
"start_paragraph_id": 52,
"start_character": 0,
"end_paragraph_id": 52,
"end_character": 650,
"text": "In fish such as the herring which live in shallower water, the mirrors must reflect a mixture of wavelengths, and the fish accordingly has crystal stacks with a range of different spacings. A further complication for fish with bodies that are rounded in cross-section is that the mirrors would be ineffective if laid flat on the skin, as they would fail to reflect horizontally. The overall mirror effect is achieved with many small reflectors, all oriented vertically. Silvering is found in other marine animals as well as fish. The cephalopods, including squid, octopus and cuttlefish, have multi-layer mirrors made of protein rather than guanine.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "27558974",
"title": "Underwater camouflage",
"section": "Section::::Methods.:Reflection.\n",
"start_paragraph_id": 16,
"start_character": 0,
"end_paragraph_id": 16,
"end_character": 650,
"text": "In fish such as the herring which live in shallower water, the mirrors must reflect a mixture of wavelengths, and the fish accordingly has crystal stacks with a range of different spacings. A further complication for fish with bodies that are rounded in cross-section is that the mirrors would be ineffective if laid flat on the skin, as they would fail to reflect horizontally. The overall mirror effect is achieved with many small reflectors, all oriented vertically. Silvering is found in other marine animals as well as fish. The cephalopods, including squid, octopus and cuttlefish, have multi-layer mirrors made of protein rather than guanine.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2636111",
"title": "Pelagic fish",
"section": "Section::::Epipelagic fish.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 447,
"text": "In the shallower epipelagic waters, the mirrors must reflect a mixture of wavelengths, and the fish accordingly has crystal stacks with a range of different spacings. A further complication for fish with bodies that are rounded in cross-section is that the mirrors would be ineffective if laid flat on the skin, as they would fail to reflect horizontally. The overall mirror effect is achieved with many small reflectors, all oriented vertically.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "148234",
"title": "Mirror image",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 369,
"text": "A mirror image (in a plane mirror) is a reflected duplication of an object that appears almost identical, but is reversed in the direction perpendicular to the mirror surface. As an optical effect it results from reflection off of substances such as a mirror or water. It is also a concept in geometry and can be used as a conceptualization process for 3-D structures.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30426",
"title": "Total internal reflection",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 735,
"text": "Total internal reflection (TIR) is the phenomenon that makes the water-to-air surface in a fish-tank look like a perfectly silvered mirror when viewed from below the water level (Fig.1). Technically, TIR is the total reflection of a wave incident at a sufficiently oblique angle on the interface between two media, of which the second (\"external\") medium is transparent to such waves but has a higher wave velocity than the first (\"internal\") medium. TIR occurs not only with electromagnetic waves such as light waves and microwaves, but also with other types of waves, including sound and water waves. In the case of a narrow train of waves, such as a laser beam, we tend to speak of the total internal reflection of a \"ray\" (Fig.2).\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
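Both answers attribute the mirror-like surface to total internal reflection at the water-air interface. A small sketch of the critical angle from Snell's law, assuming typical refractive indices of about 1.33 for water and 1.00 for air:

```python
import math

def critical_angle_deg(n_inside: float, n_outside: float) -> float:
    """Incidence angle beyond which total internal reflection occurs,
    valid only when going from the optically denser to the less dense medium."""
    if n_inside <= n_outside:
        raise ValueError("total internal reflection requires n_inside > n_outside")
    return math.degrees(math.asin(n_outside / n_inside))

# Looking up at the water surface from below (water -> air):
print(f"{critical_angle_deg(1.33, 1.00):.1f} degrees")  # ~48.8
```

Rays striking the underside of the surface at more than roughly 49 degrees from the vertical are reflected back down, which is why the surface looks silvered when viewed from below at a shallow angle.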
2uucpi
|
Is there a point between the earth and the moon where their gravitational forces cancel out?
|
[
{
"answer": "Yes, and it has a name. That's the Earth-moon L1 point.\n\nYou can float there but you are in unstable equilibrium; if you are nudged even slightly to one side then you will drift towards either the moon or Earth never to return.\n\nThe SOHO satellite is at the Earth/Sun L1 to monitor the sun and [continually take pictures of it](_URL_0_) without ever having an object get in the way of the sun. It starts drifting away every once in a while but moves itself back.\n\nThere are not one but five different points around any two objects where gravity and centrifugal force will cancel and you can just hang there with little or no effort. There are [lots of things](_URL_1_) at Lagrange points. L4 and L5 are stable; Jupiter has a whole collection of asteroids that have become caught in those points.\n\n",
"provenance": null
},
{
"answer": "I think what you're asking about is Lagrange points: YES there is a point between the Earth and moon where the gravitational forces balance (I wouldn't say \"cancel\" per se). Here's a [YouTube video](_URL_0_) of astronomer Phil Plait explaining Lagrange points. The point you're asking about is L1, but as other commenters mention, Lagrange points get more complicated than an in-between point when you consider that moon is in constant motion, orbiting the Earth. A star-planet system (eg the sun and Earth) will have Lagrange points as well.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "817677",
"title": "Lunar mare",
"section": "Section::::Distribution of mare basalts.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 663,
"text": "BULLET::::2. It is sometimes suggested that the gravity field of the Earth might preferentially allow eruptions to occur on the near side, but not on the far side. However, in a reference frame rotating with the Moon, the centrifugal acceleration the Moon is experiencing is exactly equal and opposite to the gravitational acceleration of the Earth. There is thus no net force directed towards the Earth. The Earth tides do act to deform the shape of the Moon, but this shape is that of an elongated ellipsoid with high points at both the sub- and anti-Earth points. As an analogy, one should remember that there are two high tides per day on Earth, and not one.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2931866",
"title": "Nordtvedt effect",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 853,
"text": "Nordtvedt then observed that if gravity did in fact violate the strong equivalence principle, then the more-massive Earth should fall towards the Sun at a slightly different rate than the Moon, resulting in a polarization of the lunar orbit. To test for the existence (or absence) of the Nordtvedt effect, scientists have used the Lunar Laser Ranging experiment, which is capable of measuring the distance between the Earth and the Moon with near-millimetre accuracy. Thus far, the results have failed to find any evidence of the Nordtvedt effect, demonstrating that if it exists, the effect is exceedingly weak. Subsequent measurements and analysis to even higher precision have improved constraints on the effect.. Measurements of Mercury's orbit by the MESSENGER Spacecraft have further refined the Nordvedt effect to be below of even smaller scale.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41044068",
"title": "Long period tide",
"section": "Section::::Formation mechanism.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 1576,
"text": "Gravitational Tides are caused by changes in the relative location of the Earth, sun, and moon, whose orbits are perturbed slightly by Jupiter. Newton's law of universal gravitation states that the gravitational force between a mass at a reference point on the surface of the Earth and another object such as the Moon is inversely proportional to the square of the distance between them. The declination of the Moon relative to the Earth means that as the Moon orbits the Earth during half the lunar cycle the Moon is closer to the Northern Hemisphere and during the other half the Moon is closer to the Southern Hemisphere. This periodic shift in distance gives rise to the lunar fortnightly tidal constituent. The ellipticity of the lunar orbit gives rise to a lunar monthly tidal constituent. Because of the nonlinear dependence of the force on distance additional tidal constituents exist with frequencies which are the sum and differences of these fundamental frequencies. Additional fundamental frequencies are introduced by the motion of the Sun and Jupiter, thus tidal constituents exist at all of these frequencies as well as all of the sums and differences of these frequencies, etc. The mathematical description of the tidal forces is greatly simplified by expressing the forces in terms of gravitational potentials. Because of the fact that the Earth is approximately a sphere and the orbits are approximately circular it also turns out to be very convenient to describe these gravitational potentials in spherical coordinates using spherical harmonic expansions.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30718",
"title": "Tide",
"section": "Section::::Tidal constituents.:Principal lunar semi-diurnal constituent.\n",
"start_paragraph_id": 29,
"start_character": 0,
"end_paragraph_id": 29,
"end_character": 854,
"text": "Because the gravitational field created by the Moon weakens with distance from the Moon, it exerts a slightly stronger than average force on the side of the Earth facing the Moon, and a slightly weaker force on the opposite side. The Moon thus tends to \"stretch\" the Earth slightly along the line connecting the two bodies. The solid Earth deforms a bit, but ocean water, being fluid, is free to move much more in response to the tidal force, particularly horizontally. As the Earth rotates, the magnitude and direction of the tidal force at any particular point on the Earth's surface change constantly; although the ocean never reaches equilibrium—there is never time for the fluid to \"catch up\" to the state it would eventually reach if the tidal force were constant—the changing tidal force nonetheless causes rhythmic changes in sea surface height.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "9228",
"title": "Earth",
"section": "Section::::Moon.\n",
"start_paragraph_id": 113,
"start_character": 0,
"end_paragraph_id": 113,
"end_character": 462,
"text": "The gravitational attraction between Earth and the Moon causes tides on Earth. The same effect on the Moon has led to its tidal locking: its rotation period is the same as the time it takes to orbit Earth. As a result, it always presents the same face to the planet. As the Moon orbits Earth, different parts of its face are illuminated by the Sun, leading to the lunar phases; the dark part of the face is separated from the light part by the solar terminator.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "19331",
"title": "Moon",
"section": "Section::::Earth-Moon system.:Tidal effects.\n",
"start_paragraph_id": 77,
"start_character": 0,
"end_paragraph_id": 77,
"end_character": 389,
"text": "The gravitational attraction that masses have for one another decreases inversely with the square of the distance of those masses from each other. As a result, the slightly greater attraction that the Moon has for the side of Earth closest to the Moon, as compared to the part of the Earth opposite the Moon, results in tidal forces. Tidal forces affect both the Earth's crust and oceans.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "30647",
"title": "Tidal acceleration",
"section": "Section::::Theory.:Size of the tidal bulge.\n",
"start_paragraph_id": 40,
"start_character": 0,
"end_paragraph_id": 40,
"end_character": 260,
"text": "Neglecting axial tilt, the tidal force a satellite (such as the moon) exerts on a planet (such as earth) can be described by the variation of its gravitational force over the distance from it, when this force is considered as applied to a unit mass formula_1:\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
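The first answer names the Earth-Moon L1 point. The simpler point the question literally asks about, where the two static gravitational pulls are equal, is straightforward to compute; a sketch using standard mass and distance values (assumptions, not taken from the record):

```python
import math

# Solve G*Me/x**2 == G*Mm/(d - x)**2 for x, the distance from Earth:
#   (d - x)/x = sqrt(Mm/Me)  ->  x = d / (1 + sqrt(Mm/Me))
M_EARTH = 5.972e24    # kg (assumed standard value)
M_MOON  = 7.346e22    # kg
D       = 384_400e3   # mean Earth-Moon distance, m

x = D / (1 + math.sqrt(M_MOON / M_EARTH))
print(f"gravity-balance point: {x / 1e3:,.0f} km from Earth ({x / D:.1%} of the way)")
# ~346,000 km, about 90% of the way to the Moon.
```

The true L1 point sits somewhat closer to Earth (roughly 60,000 km from the Moon) because, as the answers note, the balance in the rotating Earth-Moon frame also includes the centrifugal term.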
pc7ml
|
How is the suffix of an element determined?
|
[
{
"answer": "Well \"gen\" is ~~latin~~ greek for maker, or generator, the word \"Hydrogen\" literally means \"Water Maker\", whereas Oxygen (somewhat misnamed) means \"Sharp (Acid) Maker\"\n\nNot too sure what \"Ium\" means, but ^ is where the gen comes from!\n\nHope that helps :)",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "42209150",
"title": "Naming of chemical elements",
"section": "Section::::Chemical symbol.\n",
"start_paragraph_id": 43,
"start_character": 0,
"end_paragraph_id": 43,
"end_character": 607,
"text": "Once an element has been named, a one-, or two-letter symbol must be ascribed to it so it can be easily referred to in such contexts as the periodic table. The first letter is always capitalised. While the symbol is often a contraction of the element's name, it may sometimes not match the element's name when the symbol is based on non-English words; examples include \"Pb\" for lead (from \"plumbum\" in Latin) or \"W\" for tungsten (from \"Wolfram\" in German). Elements which have only temporary systematic names are given temporary three-letter symbols (e.g. Uue for ununennium, the undiscovered element 119).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5659",
"title": "Chemical element",
"section": "Section::::Nomenclature and symbols.:Chemical symbols.:General chemical symbols.\n",
"start_paragraph_id": 74,
"start_character": 0,
"end_paragraph_id": 74,
"end_character": 887,
"text": "There are also symbols in chemical equations for groups of chemical elements, for example in comparative formulas. These are often a single capital letter, and the letters are reserved and not used for names of specific elements. For example, an \"X\" indicates a variable group (usually a halogen) in a class of compounds, while \"R\" is a radical, meaning a compound structure such as a hydrocarbon chain. The letter \"Q\" is reserved for \"heat\" in a chemical reaction. \"Y\" is also often used as a general chemical symbol, although it is also the symbol of yttrium. \"Z\" is also frequently used as a general variable group. \"E\" is used in organic chemistry to denote an electron-withdrawing group or an electrophile; similarly \"Nu\" denotes a nucleophile. \"L\" is used to represent a general ligand in inorganic and organometallic chemistry. \"M\" is also often used in place of a general metal.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "67513",
"title": "Systematic element name",
"section": "Section::::IUPAC rules.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 522,
"text": "The suffix \"‑ium\" overrides traditional chemical-suffix rules; thus, elements 117 and 118 were \"ununseptium\" and \"ununoctium\", not *\"ununseptine\" and *\"ununocton\". This does not apply to the trivial names these elements receive once confirmed; thus, elements 117 and 118 are now \"tennessine\" and \"oganesson\", respectively. For these trivial names, all elements receive the suffix \"‑ium\" except those in group 17, which receive \"‑ine\" (like the halogens), and those in group 18, which receive \"‑on\" (like the noble gases).\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5659",
"title": "Chemical element",
"section": "Section::::Nomenclature and symbols.:Atomic numbers.\n",
"start_paragraph_id": 59,
"start_character": 0,
"end_paragraph_id": 59,
"end_character": 751,
"text": "The known elements have atomic numbers from 1 through 118, conventionally presented as Arabic numerals. Since the elements can be uniquely sequenced by atomic number, conventionally from lowest to highest (as in a periodic table), sets of elements are sometimes specified by such notation as \"through\", \"beyond\", or \"from ... through\", as in \"through iron\", \"beyond uranium\", or \"from lanthanum through lutetium\". The terms \"light\" and \"heavy\" are sometimes also used informally to indicate relative atomic numbers (not densities), as in \"lighter than carbon\" or \"heavier than lead\", although technically the weight or mass of atoms of an element (their atomic weights or atomic masses) do not always increase monotonically with their atomic numbers.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "673",
"title": "Atomic number",
"section": "Section::::Chemical properties.\n",
"start_paragraph_id": 25,
"start_character": 0,
"end_paragraph_id": 25,
"end_character": 635,
"text": "Each element has a specific set of chemical properties as a consequence of the number of electrons present in the neutral atom, which is \"Z\" (the atomic number). The configuration of these electrons follows from the principles of quantum mechanics. The number of electrons in each element's electron shells, particularly the outermost valence shell, is the primary factor in determining its chemical bonding behavior. Hence, it is the atomic number alone that determines the chemical properties of an element; and it is for this reason that an element can be defined as consisting of \"any\" mixture of atoms with a given atomic number.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "904",
"title": "Aluminium",
"section": "Section::::Etymology.:Spelling.\n",
"start_paragraph_id": 61,
"start_character": 0,
"end_paragraph_id": 61,
"end_character": 620,
"text": "The ' suffix followed the precedent set in other newly discovered elements of the time: potassium, sodium, magnesium, calcium, and strontium (all of which Davy isolated himself). Nevertheless, element names ending in ' were known at the time; for example, platinum (known to Europeans since the 16th century), molybdenum (discovered in 1778), and tantalum (discovered in 1802). The \"\" suffix is consistent with the universal spelling alumina for the oxide (as opposed to aluminia); compare to lanthana, the oxide of lanthanum, and magnesia, ceria, and thoria, the oxides of magnesium, cerium, and thorium, respectively.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "26808274",
"title": "HTML attribute",
"section": "Section::::Description.\n",
"start_paragraph_id": 8,
"start_character": 0,
"end_paragraph_id": 8,
"end_character": 203,
"text": "Although most attributes are provided as paired names and values, some affect the element simply by their presence in the start tag of the element (like the codice_5 attribute for the codice_6 element).\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
8so2ig
|
the contradiction of why you have to wait x amount of hours to report someone missing when they also say the first 24-48 hours are most important?
|
[
{
"answer": "You can report it whenever you want, but if the person's only been missing a short time, and they are old enough to make their own decisions, the police are going to suspect they just decided to go somewhere and are not in trouble.",
"provenance": null
},
{
"answer": "The 24 hour thing is a myth, an invention for police procedural dramas. In real Police stations they will ask why you suspect that a person has gone missing and will respond accordingly.\n\nFor example, a woman who comes to the Police station stating that her teenage daughter never came home from school and she was supposed to be home two hours ago, the Police might say that she's just running late.\n\nHowever if, in the same scenario, the woman explains that she's worried because she's seen suspicious behavior in the area then the Police may open an investigation immediately.",
"provenance": null
},
{
"answer": "You don't have to wait 24 hours, and law enforcement encourages you to report missing people immediately. The issue is that they having limited resources, and can't go chasing after ever able-bodied adult who forgot to call home and say they are going to be late. 24-48 hours is the approximate range where \"they are passed out drunk at a friends house\" to \"maybe something fishy is going on\". But make no mistake, if your 12-year-old or parent with dementia isn't where they are supposed to be, the police will be all over that.\n\nAlso, while that 48 hours to solve a crime might be statistically accurate, it is also misleading. It isn't like at 48 hours and 1 minute things suddenly get harder. With many crimes, there is nothing to solve, what happened and who did it is obvious, and that brings the average way down. In addition, a significant portion of missing person cases does not involve a crime. Someone goes on vacation without telling anyone, gets pissed of at their family and stops taking their calls, or an ex-spouse is an hour late bringing the kids back. Those don't factor into the statistics, because there was no crime to be solved.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "1034225",
"title": "Missing person",
"section": "Section::::Legal aspects.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 240,
"text": "A common misconception is that a person must be absent for at least 24 hours before being legally classed as missing, but this is rarely the case. Law enforcement agencies often stress that the case should be reported as early as possible.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41212",
"title": "Grade of service",
"section": "Section::::What is Grade of Service and how is it measured?\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 253,
"text": "BULLET::::- The probability that a user may be delayed longer than time \"t\" while waiting for a connection. Time \"t\" is chosen by the telecommunications service provider so that they can measure whether their services conform to a set Grade of Service.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "34269647",
"title": "Miss Peregrine's Home for Peculiar Children",
"section": "Section::::Peculiardom.:Aging Forward.\n",
"start_paragraph_id": 19,
"start_character": 0,
"end_paragraph_id": 19,
"end_character": 1063,
"text": "As a result of time loops, those who reside in them may not be able to return to the present day, depending on how long they've been there. In a mere matter of hours outside of the loop, the amount of time evaded will catch up. An example of this is Miss Peregrine's own former ward, a young girl named Charlotte who left the loop while Miss Peregrine was away. She was discovered by police in the mid-1980s and sent to a welfare agency. When Miss Peregrine found her just two days later, she'd already aged thirty-five years. Although she survived the ordeal, the unnatural aging process had caused Charlotte a great deal of mental disorder, and she was sent to live with Miss Nightjar, an ymbryne more suited for her care. The same process of deterioration applies to anything taken out of time loops as another instance was an apple Jacob took back to the inn where he and his father were staying in the present day. He left it on the nightstand next to his bed as he fell asleep that night, but by morning, found it had rotted to the point of disintegrating.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "321956",
"title": "List of common misconceptions",
"section": "Section::::Arts and culture.:Law, crime, and military.\n",
"start_paragraph_id": 13,
"start_character": 0,
"end_paragraph_id": 13,
"end_character": 390,
"text": "BULLET::::- It is rarely necessary to wait 24 hours before filing a missing person report. In instances where there is evidence of violence or of an unusual absence, law enforcement agencies in the United States often stress the importance of beginning an investigation promptly. The UK government website says in large type, \"You don't have to wait 24 hours before contacting the police.\"\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "4457999",
"title": "H-2A visa",
"section": "Section::::Validity.:Calculation of interrupted stay.\n",
"start_paragraph_id": 50,
"start_character": 0,
"end_paragraph_id": 50,
"end_character": 389,
"text": "BULLET::::- If the worker was in the United States for 18 months or less, then H-2 time is interrupted if the worker is outside the United States for at least 45 days but less than 3 months. This means that time spent outside the United States will not count toward the 3-year limit, but rather, upon return, the worker's clock will resume from where it left off at the time of departure.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "13266",
"title": "Histogram",
"section": "Section::::Examples.\n",
"start_paragraph_id": 18,
"start_character": 0,
"end_paragraph_id": 18,
"end_character": 539,
"text": "The U.S. Census Bureau found that there were 124 million people who work outside of their homes. Using their data on the time occupied by travel to work, the table below shows the absolute number of people who responded with travel times \"at least 30 but less than 35 minutes\" is higher than the numbers for the categories above and below it. This is likely due to people rounding their reported journey time. The problem of reporting values as somewhat arbitrarily rounded numbers is a common phenomenon when collecting data from people.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5193321",
"title": "Scheduled time",
"section": "Section::::Example (from a road rally).:Case 1.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 629,
"text": "If this is allowed in an event, then care must be taken not to break the 3/4 rule: this rule states that any time lost may be made back provided that no section between 2 consecutive Time Controls is done in less than 3/4 of the time allowed for that section unless the section is less than 4 miles in length in which case as much time as required may be made back. This sounds counter-intuitive, but in practice it is very difficult to make back much time on a 4 mile/8 minute section and this rule allows organisers to build in sections of, for example, 2 miles/30 minutes specifically to allow competitors to reduce lateness.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1vkeli
|
Can planets in the habitable zone have moons that also supports life?
|
[
{
"answer": "I think that the magnetic shielding would be a big issue. Even larger planets like Mars have their atmosphere stripped away by the solar winds, and Mars is much larger than our moon.",
"provenance": null
},
{
"answer": "Earth sized planets with earth sized moons would be problematic. In theory, a sufficiently large planet with very large moons that could sustain their own magnetic fields, and retain their own atmospheres would be possible. We've seen gas giants closer to the sun than earth, and if you hit the right mix of sizes of planet, size of the moon, size of star and all the right distances, it would definitely be possible. Most likely very rare though (as even more variables need to be in the right ranges than for a habitable planet), and hard as hell to detect from earth, so we may never find or know of an example. ",
"provenance": null
},
{
"answer": "While Jupiter isn't in the habitable zone, one of it's moons, [Europa](_URL_0_) is a target people are eager to explore for life/habitable conditions. The habitable zone (where liquid water is possible based on distance from the host star) isn't really a restrictive boundary, but more a best guess for where to look first.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "5671098",
"title": "Gliese 876 c",
"section": "Section::::Characteristics.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 459,
"text": "Gliese 876 c lies at the inner edge of the system's habitable zone. While the prospects for life on gas giants are unknown, it might be possible for a large moon of the planet to provide a habitable environment. Unfortunately tidal interactions between a hypothetical moon, the planet, and the star could destroy moons massive enough to be habitable over the lifetime of the system. In addition it is unclear whether such moons could form in the first place.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1072751",
"title": "Circumstellar habitable zone",
"section": "Section::::Determination.:Extrasolar extrapolation.:Other considerations.\n",
"start_paragraph_id": 34,
"start_character": 0,
"end_paragraph_id": 34,
"end_character": 932,
"text": "Planetary-mass natural satellites have the potential to be habitable as well. However, these bodies need to fulfill additional parameters, in particular being located within the circumplanetary habitable zones of their host planets. More specifically, moons need to be far enough from their host giant planets that they are not transformed by tidal heating into volcanic worlds like Io, but must still remain within the Hill radius of the planet so that they are not pulled out of orbit of their host planet. Red dwarfs that have masses less than 20% of that of the Sun cannot have habitable moons around giant planets, as the small size of the circumstellar habitable zone would put a habitable moon so close to the star that it would be stripped from its host planet. In such a system, a moon close enough to its host planet to maintain its orbit would have tidal heating so intense as to eliminate any prospects of habitability.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "50807996",
"title": "Kepler-1647b",
"section": "Section::::Habitability.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 428,
"text": "Kepler-1647b is in the habitable zone of the star system. Since the planet is a gas giant, it is unlikely to host life. However, hypothetical large moons could potentially be suitable for life. However, large moons are usually not created during accretion near a gas giant. Such moons would likely have to be captured separately, e.g., a passing protoplanet caught into orbit due to the gravitational field of the giant planet.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "83516",
"title": "Orphans of the Sky",
"section": "Section::::Scientific basis.\n",
"start_paragraph_id": 21,
"start_character": 0,
"end_paragraph_id": 21,
"end_character": 465,
"text": "The notion of a giant planet with a habitable moon went against theories of planetary formation as they stood before the discovery of \"hot Jupiter\" planets. It was thought that planets large enough to have an Earth-sized moon would only form above the \"snowline\", too far from the star for life. It is now believed that such worlds can migrate inwards, and habitable moons seem likely. The existence of exomoons has not been confirmed, though there are candidates.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "15155395",
"title": "Habitability of natural satellites",
"section": "",
"start_paragraph_id": 2,
"start_character": 0,
"end_paragraph_id": 2,
"end_character": 1052,
"text": "The strongest candidates for natural satellite habitability are currently icy satellites such as those of Jupiter and Saturn—Europa and Enceladus respectively, although if life exists in either place, it would probably be confined to subsurface habitats. Historically, life on Earth was thought to be strictly a surface phenomenon, but recent studies have shown that up to half of Earth's biomass could live below the surface. Europa and Enceladus exist outside the circumstellar habitable zone which has historically defined the limits of life within the Solar System as the zone in which water can exist as liquid at the surface. In the Solar System's habitable zone, there are only three natural satellites—the Moon, and Mars's moons Phobos and Deimos (although some estimates show Mars and its moons to be slightly outside the habitable zone) —none of which sustain an atmosphere or water in liquid form. Tidal forces are likely to play as significant a role providing heat as stellar radiation in the potential habitability of natural satellites.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3451267",
"title": "HD 28185 b",
"section": "Section::::Characteristics.\n",
"start_paragraph_id": 9,
"start_character": 0,
"end_paragraph_id": 9,
"end_character": 769,
"text": "Since HD 28185 b orbits in its star's habitable zone, some have speculated on the possibility of life on worlds in the HD 28185 system. While it is unknown whether gas giants can support life, simulations of tidal interactions suggest that HD 28185 b could harbor Earth-mass satellites in orbit around it for many billions of years. Such moons, if they exist, may be able to provide a habitable environment, though it is unclear whether such satellites would form in the first place. Additionally, a small planet in one of the gas giant's Trojan points could survive in a habitable orbit for long periods. The high mass of HD 28185 b, of over six Jupiter masses, actually makes either of these scenarios more likely than if the planet was about Jupiter's mass or less.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "42635006",
"title": "Habitability of binary star systems",
"section": "Section::::Circumbinary planet.\n",
"start_paragraph_id": 11,
"start_character": 0,
"end_paragraph_id": 11,
"end_character": 214,
"text": "If Earth-like planets form in or migrate into the circumbinary habitable zone they are capable of sustaining liquid water on their surface in spite of the dynamical and radiative interaction with the binary star. \n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
38ms08
|
Would it be possible to make a mirror that reflects the image back the right way around?
|
[
{
"answer": "Yes. This is called a [non-reversing mirror](_URL_0_). There were some articles about a new kind of such mirror invented a few years ago. The guy who did it also made a side view mirror with \"no blind spot\".",
"provenance": null
},
{
"answer": "The operation of \"mirror imaging\" is called a reflexion or a parity transformation. It's not continuously deformed from the non-reflected image: a reflexion is really a discrete operation. Therefore curving or deforming any single mirror would not be enough to delete this effect.\n\nWhat you can do, though, is to exploit the fact that doing a second reflection cancels the effect of the first. So basically you want to use any system where your image is reflected an even number of times in mirrors (most practically two). This is what the practical example given by /u/albasri exploits.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "21412510",
"title": "Non-reversing mirror",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 833,
"text": "A non-reversing mirror (sometimes referred to as a flip mirror) is a mirror that presents its subject as it would be seen from the mirror. A non-reversing mirror can be made by connecting two regular mirrors at their edges at a 90 degree angle. If the join is positioned so that it is vertical, an observer looking into the angle will see a non-reversed image. This can be seen in places such as public toilets when there are two mirrors mounted on walls which meet at right angles. Such an image is visible while looking towards the corner where the two mirrors meet. The problem with this type of non-reversing mirror is that there is usually a line down the middle interrupting the image. However, if first surface mirrors are used, and care is taken to set the angle to exactly 90 degrees, the join can be made almost invisible.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "21412510",
"title": "Non-reversing mirror",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 433,
"text": "A third type of non-reversing mirror was created by mathematics professor R. Andrew Hicks in 2009. It was created using computer algorithms to generate a \"disco ball\" like surface. The thousands of tiny mirrors are angled to create a surface which curves and bends in different directions. The curves direct rays from an object across the mirror's face before sending them back to the viewer, flipping the conventional mirror image.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "28412183",
"title": "Projector",
"section": "Section::::History.:1000 to 1500.:Concave mirrors.\n",
"start_paragraph_id": 42,
"start_character": 0,
"end_paragraph_id": 42,
"end_character": 376,
"text": "The inverted real image of an object reflected by a concave mirror can appear at the focal point in front of the mirror. In a construction with an object at the bottom of two opposing concave mirrors (parabolic reflectors) on top of each other, the top one with an opening in its center, the reflected image can appear at the opening as a very convincing 3D optical illusion.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "521267",
"title": "Reflection (physics)",
"section": "Section::::Reflection of light.\n",
"start_paragraph_id": 10,
"start_character": 0,
"end_paragraph_id": 10,
"end_character": 436,
"text": "Specular reflection forms images. Reflection from a flat surface forms a mirror image, which appears to be reversed from left to right because we compare the image we see to what we would see if we were rotated into the position of the image. Specular reflection at a curved surface forms an image which may be magnified or demagnified; curved mirrors have optical power. Such mirrors may have surfaces that are spherical or parabolic.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "5732433",
"title": "Curved mirror",
"section": "Section::::Convex mirrors.\n",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 585,
"text": "A convex mirror or diverging mirror is a curved mirror in which the reflective surface bulges towards the light source. Convex mirrors reflect light outwards, therefore they are not used to focus light. Such mirrors always form a virtual image, since the focal point (\"F\") and the centre of curvature (\"2F\") are both imaginary points \"inside\" the mirror, that cannot be reached. As a result, images formed by these mirrors cannot be projected on a screen, since the image is inside the mirror. The image is smaller than the object, but gets larger as the object approaches the mirror.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "47232601",
"title": "Flip mirror",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 540,
"text": "A flip mirror unit is used on astronomical Telescope and other optical instruments in order to send the light from an object in new directions using a small mirror which can be moved into the lightbeam. It is a mirror-diagonal that holds both a camera and an eyepiece and allows you to switch your view between them by flipping a Mirror up or down. It is used to center the object in your camera and to help you focus it. It can also be used in 35-mm photography if it is large enough to allow the entire field of view to reach the camera.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "40203",
"title": "Hubble Space Telescope",
"section": "Section::::Flawed mirror.\n",
"start_paragraph_id": 62,
"start_character": 0,
"end_paragraph_id": 62,
"end_character": 477,
"text": "Analysis of the flawed images showed that the cause of the problem was that the primary mirror had been polished to the wrong shape. Although it was probably the most precisely figured optical mirror ever made, smooth to about , at the perimeter it was too flat by about . This difference was catastrophic, introducing severe spherical aberration, a flaw in which light reflecting off the edge of a mirror focuses on a different point from the light reflecting off its center.\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
1szuoo
|
how will the porn ban in the uk affect ordinary internet browsing?
|
[
{
"answer": "Same way the torrent site \"ban\" affected torrents in the UK - > it won't.",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "23488014",
"title": "General Posts and Telecommunications Company",
"section": "Section::::Internet censorship.\n",
"start_paragraph_id": 5,
"start_character": 0,
"end_paragraph_id": 5,
"end_character": 225,
"text": "In 2013, an 'Official' Court order was called in to bar users from browsing pornographic material. While the rule applies on censoring pornographic sites, it has been found that Internet filters have blocked other websites, \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12832029",
"title": "Internet censorship in the United Kingdom",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 444,
"text": "Internet censorship in the United Kingdom is conducted under a variety of laws, judicial processes, administrative regulations and voluntary arrangements. It is achieved by blocking access to sites as well as the use of laws that criminalise publication or possession of certain types of material. These include English defamation law, the Copyright law of the United Kingdom, regulations against incitement to terrorism and child pornography.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41543290",
"title": "Pornography in Asia",
"section": "Section::::South Asia.:India.\n",
"start_paragraph_id": 67,
"start_character": 0,
"end_paragraph_id": 67,
"end_character": 791,
"text": "In July 2015 the Supreme Court of India refused to allow the blocking of pornographic websites and said that watching pornography indoors in the privacy of ones own home was not a crime. The court rejected an interim order blocking pornographic websites in the country. In August 2015 the Government of India issued an order to Indian ISPs to block at least 857 websites that it considered to be pornographic. In 2015 the Department of Telecommunications (DoT) had asked internet service providers to take down 857 websites in a bid to control cyber crime, but after receiving criticism from the authorities it partially rescinded the ban. The ban from the government came after a lawyer filed a petition in the Supreme Court arguing that online pornography encourages sex crimes and rapes.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "2584324",
"title": "Legal status of Internet pornography",
"section": "Section::::Internet pornography laws in various countries.:India.\n",
"start_paragraph_id": 70,
"start_character": 0,
"end_paragraph_id": 70,
"end_character": 767,
"text": "In July 2015, The Supreme Court of India denied to block pornographic websites sites and said, watching porn in the privacy of your own at indoors isn't a crime and declined to pass an interim order to block pornographic websites in the country. In August 2015, the Government of India issued an order to Indian ISPs to block at least 857 websites that it considered to be pornographic. The Department of Telecom(DoT), in the year 2015, had asked internet service providers to take down as many as 857 websites in a bid to control cyber crime but after receiving criticism from the authorities, it partially rescinded the ban. The ban from the government came after a lawyer filed a petition in Supreme Court arguing that online porn encourage sex crimes and rapes. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "22239137",
"title": "Adult film industry regulations",
"section": "Section::::Internet pornography.\n",
"start_paragraph_id": 31,
"start_character": 0,
"end_paragraph_id": 31,
"end_character": 970,
"text": "In jurisdictions that heavily restrict access or outright ban pornography, various attempts have been made to prevent access to pornographic content. The mandating of Internet filters to try preventing access to porn sites has been used in some nations such as China and Saudi Arabia. Banning porn sites within a nation's jurisdiction does not necessarily prevent access to that site, as it may simply relocate to a hosting server within another country that does not prohibit the content it offers. The United Kingdom's Digital Economy Act 2017 includes powers to require age-verification for pornographic Internet sites and the government accepted an amendment to allow the regulator to require ISPs to block access to non-compliant sites. As the BBFC are expected to become the regulator, this has caused discussion about ISPs being required to block content that is prohibited even under an R18 certificate, the prohibition of some of which is itself controversial.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "12438777",
"title": "Pornography in India",
"section": "Section::::Legality.\n",
"start_paragraph_id": 15,
"start_character": 0,
"end_paragraph_id": 15,
"end_character": 682,
"text": "In July 2015, The Supreme Court of India declined to pass an interim order to block pornographic websites and said that watching pornography in the privacy of one's own isn't a crime. In August 2015, the Government of India issued an order to Indian ISPs to block at least 857 websites that it considered to be pornographic. In 2015, the Department of Telecom(DoT) had asked internet service providers to take down as many as 857 websites in a bid to control cyber crime, but after receiving criticism from the authorities, it partially rescinded the ban. The ban came about after a lawyer filed a petition in Supreme Court arguing that online porn encourage sex crimes and rapes. \n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1133408",
"title": "Pornography laws by region",
"section": "Section::::Asia.:India.\n",
"start_paragraph_id": 60,
"start_character": 0,
"end_paragraph_id": 60,
"end_character": 748,
"text": "In July 2015, The Supreme Court of India denied to block pornographic websites sites and said, watching porn privately indoors is not a crime and declined to pass an interim order to block pornographic websites in the country. In August 2015, the Government of India issued an order to Indian ISPs to block at least 857 websites that it considered to be pornographic. The Department of Telecom(DoT), in the year 2015, had asked internet service providers to take down as many as 857 websites in a bid to control cyber crime but after receiving criticism from the authorities, it partially rescinded the ban. The ban from the government came after a lawyer filed a petition in Supreme Court arguing that online porn encourage sex crimes and rapes. \n",
"bleu_score": null,
"meta": null
}
]
}
] | null |
3ux7lu
|
When and how did Boston, Massachusetts first become so heavily associated with Ireland and Irish culture?
|
[
{
"answer": " > Does Boston simply have an extremely high proportion of Irish Americans?\n\nYes, historically Boston was a site of major Irish immigration, beginning as early as the beginning of the 1800s but massively increasing from 1840-1870 as the influence of the Great Famine was felt in Ireland. \n\nIn the earliest decades of the nineteenth century, the general contours of Irish immigration was to first stop off in the ports of Atlantic Canada (Halifax, Montreal, Quebec) because the shipping rates to those ports were cheapest^1. After a few years, they would then move south to American cities such as Boston, New York, and Philadelphia. In this period from 1800-1820, the majority of immigrants from Ireland tended to be Irish Protestants, owing to the fact that Protestants tended to have greater resources to manage the trip across the atlantic.\n\nIn the period from 1820-1840, the demographics and contours changed, with greater numbers of impoverished Catholics leaving Ireland and heading directly for Boston or New York. This immigration of Catholics did prompt hostility from a Protestant Yankee native population, and this period saw some notable anti-Catholic riots. The most notable event in Boston was the [burning of the Ursuline convent](_URL_3_) in 1834. Another famous confrontation was the [broad street riot](_URL_0_) in 1837.\n\nAs the Great Famine ravaged Ireland from 1845 to 1849, massive numbers of Irish people left the island, far greater in scale than previous migrations. 200,000 Irish immigrated to America in the decade of the 1830s. In the 1840s, 780,000 Irish came to America, mostly after 1846. At the outset of the Famine, Boston had a population of approximately 115,000 residents. In 1847, the first year of major migration due to the famine, 37,000 Irish arrived in Boston^2. \n\nLike the earlier immigration from 1820-1840, this influx of Irish Catholics into a Protestant Yankee majority led to anti-immigrant hostility in the 1850s. In that decade, Know-Nothing politicians filled the State Senate, State House of Representatives, served as Governor and as Mayor of Boston^3.\n\nAs others have said in this thread, migration tended to flow towards established communities of Irish-Americans, where a migrant might have family or friends already living there. Thus, in the post-Famine period, Boston continued to see heavy migration of Irish people into the 1870s, and lower levels of migration into the 20th century.\n\n > and why Boston, as opposed to any other major American city? \n\nAs I said above, other cities like New York, Philadelphia, Savanna and New Orleans all saw migrations of Irish into their cities in the 1840s and 1850s. The immigrant vote was quite important to the functioning of New York's Tammany Hall political machine in the later decades of the 1800s. Additionally, New York City saw severe Draft Riots in 1863, and newly naturalized Irish-Americans played a large part in these riots. They were partly motivated by resentment of exemption provisions, when working-class immigrants could never afford to pay the $300 required. Partially too, the Irish and German immigrant workers were driven by fears that abolition of slavery would result in labor competition from free Blacks.\n\nIn any case, New York City did not become as closely tied to Irish identity as Boston did for a variety of reasons. In the late 1840s and 1850s, New York saw German immigration at the same time and in similar numbers to Irish immigrants. 
Also, New York has hosted subsequent waves of Italian, Jewish, Balkan, Chinese, and other ethnicities, which made the city a patchwork and prevented New York from being associated with any one community. Boston, in contrast, did witness large Italian immigration into the North End, but not a similar scale and variety of immigration as New York did.\n\nOf course, cities like Philadelphia and Chicago continue to have notable Irish-American communities, and noteworthy St Patrick's day parades. \n\n----\n1) [Enclyopedia of American Immigration](_URL_1_) pp 154.\n\n2)_URL_4_\n\n3)[Hidden History of the Boston Irish](_URL_2_) pp 21",
"provenance": null
},
{
"answer": null,
"provenance": [
{
"wikipedia_id": "41875965",
"title": "History of Irish Americans in Boston",
"section": "Section::::Discrimination and stereotyping.\n",
"start_paragraph_id": 58,
"start_character": 0,
"end_paragraph_id": 58,
"end_character": 402,
"text": "Irish immigrants to the U.S. in the 19th century faced a combination of anti-immigrant, anti-Catholic, and specifically anti-Irish bigotry which were closely intertwined. This was especially true in Puritan-founded Boston, with its strongly Anglo-Saxon population. Generations of Bostonians celebrated Pope's Night on November 5 each year, holding anti-Catholic parades and burning the pope in effigy.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "41875965",
"title": "History of Irish Americans in Boston",
"section": "",
"start_paragraph_id": 1,
"start_character": 0,
"end_paragraph_id": 1,
"end_character": 811,
"text": "People of Irish descent form the largest single ethnic group in Boston, Massachusetts. Once a Puritan stronghold, Boston changed dramatically in the 19th century with the arrival of European immigrants. The Irish dominated the first wave of newcomers during this period, especially following the Great Irish Famine. Their arrival transformed Boston from an Anglo-Saxon, Protestant city into one that has become progressively more diverse. The Yankees hired Irish as workers and servants, but there was little social interaction. In the 1840s and 50s, the anti-Catholic, anti-immigrant Know-Nothing movement targeted Irish Catholics in Boston. In the 1860s, many Irish immigrants fought for the Union in the American Civil War, and that display of patriotism helped to dispel much of the prejudice against them.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "1965717",
"title": "History of Boston",
"section": "Section::::19th century.:Irish.\n",
"start_paragraph_id": 40,
"start_character": 0,
"end_paragraph_id": 40,
"end_character": 677,
"text": "Throughout the 19th century, Boston became a haven for Irish Catholic immigrants, especially following the potato famine of 1845–49. Their arrival transformed Boston from a singular, Anglo-Saxon, Protestant city to one that has progressively become more diverse. The Yankees hired Irish as workers and servants, but there was little social interaction. In the 1850s, an anti-Catholic, anti-immigrant movement was directed against the Irish, called the Know Nothing Party. But in the 1860s, many Irish immigrants joined the Union ranks to fight in the American Civil War, and that display of patriotism and valor began to soften the harsh sentiments of Yankees about the Irish.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3891915",
"title": "Irish Americans in New York City",
"section": "",
"start_paragraph_id": 3,
"start_character": 0,
"end_paragraph_id": 3,
"end_character": 257,
"text": "Boston today has the largest number of Irish-Americans of any city in the United States. During the Celtic Tiger years, when the Irish economy was booming, the city saw a buying spree of residences by native Irish as second homes or as investment property.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "3891915",
"title": "Irish Americans in New York City",
"section": "Section::::Background.\n",
"start_paragraph_id": 6,
"start_character": 0,
"end_paragraph_id": 6,
"end_character": 203,
"text": "In the \"early days\", the 19th century, the Irish formed a predominant part of the European immigrant population of New York City, a \"city of immigrants\", which added to the city's diversity to this day.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "8814319",
"title": "History of Maine",
"section": "Section::::Immigrants.:Irish.\n",
"start_paragraph_id": 67,
"start_character": 0,
"end_paragraph_id": 67,
"end_character": 510,
"text": "Maine experienced a wave of Irish immigration in the mid-19th century, though many came to the state via Canada and Massachusetts, and before the potato famine. There was a riot in Bangor between Irish and Yankee (nativist) sailors and lumbermen as early as 1834, and a number of early Catholic churches were burned or vandalized in coastal communities, where the Know-Nothing Party briefly flourished. After the Civil War, Maine's Irish-Catholic population began a process of integration and upward mobility.\n",
"bleu_score": null,
"meta": null
},
{
"wikipedia_id": "46284800",
"title": "Irish Americans",
"section": "Section::::Sense of heritage.:Cities.\n",
"start_paragraph_id": 114,
"start_character": 0,
"end_paragraph_id": 114,
"end_character": 1292,
"text": "The vast majority of Irish Catholic Americans settled in large and small cities across the North, particularly railroad centers and mill towns. They became perhaps the most urbanized group in America, as few became farmers. Areas that retain a significant Irish American population include the metropolitan areas of Boston, New York City, Philadelphia, Providence, Hartford, Pittsburgh, Buffalo, Albany, Syracuse, Baltimore, Chicago, Cleveland, San Francisco and Los Angeles, where most new arrivals of the 1830–1910 period settled. As a percentage of the population, Massachusetts is the most Irish state, with about a fifth, 21.2%, of the population claiming Irish descent. The most Irish American towns in the United States are Scituate, Massachusetts, with 47.5% of its residents being of Irish descent; Milton, Massachusetts, with 44.6% of its 26,000 being of Irish descent; and Braintree, Massachusetts with 46.5% of its 34,000 being of Irish descent. (Weymouth, Massachusetts, at 39% of its 54,000 citizens, and Quincy, Massachusetts, at 34% of its population of 90,000, are the two most Irish \"cities\" in the country. Squantum, a peninsula in the northern part of Quincy, is the most Irish neighborhood in the country, with close to 60% of its 2600 residents claiming Irish descent.)\n",
"bleu_score": null,
"meta": null
}
]
}
] | null |